Next time you try to send a message on Tinder, the dating app may ask you, “Are you sure you want to send?” Last week, the app announced the launch of its Are You Sure? (AYS?) feature to reduce harassment. The feature uses artificial intelligence (AI) to scan private messages and detect harmful language. It proactively intervenes to warn the sender that their message may be offensive, asking them to pause before hitting send.
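Tinder has not published how its model works, but the pre-send flow it describes — score the draft message, then prompt the sender before delivery — can be sketched in miniature. In this hypothetical Python example, a simple keyword check stands in for the AI classifier trained on past member reports; the term list, function names, and threshold are all illustrative assumptions, not Tinder's implementation.

```python
# Hypothetical sketch of a pre-send "Are You Sure?" style check.
# A keyword-overlap score stands in for the proprietary AI model.

OFFENSIVE_TERMS = {"idiot", "loser", "ugly"}  # illustrative placeholder list


def looks_offensive(message: str, threshold: int = 1) -> bool:
    """Return True if the message contains enough flagged terms."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & OFFENSIVE_TERMS) >= threshold


def pre_send_check(message: str) -> str:
    """Decide whether to show the confirmation prompt or send directly."""
    if looks_offensive(message):
        return "Are you sure you want to send?"
    return "send"
```

In a real system the keyword test would be replaced by a learned classifier, but the surrounding flow — intercept before send, prompt, let the sender reconsider — is the same pattern the article describes.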
“The AI was built based on what members have reported in the past, and it will continue to evolve and improve over time,” Tinder said.
Tinder has been using different harm-reduction tools to stop harassment on its app. Its “Does This Bother You?” feature provides proactive support to members when harmful language is detected in a message they received.
The dating app claims that the new features have contributed to more matches, longer conversations and a better environment.
The AYS? feature has already reduced inappropriate language in sent messages by more than 10 per cent in early testing, it said. “Members who saw the AYS? prompt were less likely to be reported for inappropriate messages over the next month, which indicates AYS? is changing longer-term behaviour, not just behaviour in one conversation.”
In addition, members who have seen Does This Bother You? are more empowered to report bad behaviour, with reports of inappropriate messages increasing by 46 per cent, Tinder said.
According to a report in Quartz, Tinder has been testing algorithms that scan private messages for inappropriate language since November last year. The Does This Bother You? feature was launched in January this year. If a user answers yes to the question, the app guides them through reporting the message.
While social media apps like Twitter, Instagram and TikTok use AI to moderate public posts, Tinder is among the first companies to use it to scan private messages. Because Tinder is a dating app, most conversations among its users are likely to take place through direct messages, hence the need to scan those messages for inappropriate language.
The success of these AI-based features could lead to wider use of the technology for content moderation and for preventing abuse and harassment.