Twitter updates its warnings for offensive tweets, acknowledging that you like swearing at your friends

Twitter is updating its "hold on, think twice" system that prompts users to reconsider before tweeting "potentially harmful or offensive" replies. The improved feature is now better at handling "strong language": it is more aware of terminology that has been "reclaimed by underrepresented communities" and used in non-harmful ways, and it now also takes into account your relationship with the person you are replying to.

In other words, if you are replying to someone you interact with regularly, Twitter will assume that "it's more likely [you] have a better understanding of the preferred tone of communication" and won't show you the prompt. So you can call your friend a **** or a ****-**** or even a ****-******* son of a ****-***-** and Twitter won't care. This is freedom, friends.

Twitter first began testing the system in May 2020, paused it a little later, then brought it back in February this year. It is one of several prompts the company is testing to try to shape user behavior, including its "read before you retweet" message.

A sample prompt shown to users before they send an offensive reply.
Image: Twitter

The improved prompts for offensive replies roll out today for English-language users of the Twitter iOS app, and in the coming days for Android users. The company says the feature is already making a difference in how people interact on the platform.

Twitter says internal tests showed that 34 percent of people shown such a prompt "revised their initial reply or decided to not send their reply at all." After being prompted once, people composed, on average, 11 percent fewer "offensive replies." And those who were prompted (and so had the chance to soften their language) were themselves "less likely to receive offensive and harmful replies back."

These stats are as opaque as you'd expect from any major internet platform (how did the company quantify "less likely" in that last example? How many people were involved in any of these tests? How do we know that the people who modified their replies actually made them less offensive, rather than simply using offensive language the system didn't recognize?). But the continued rollout suggests the feature is, at minimum, not making things actively worse on Twitter. That's probably the best we can hope for.
