Twitter has launched a new, limited test on iOS that will display warnings on tweet replies containing potentially harmful language that could cause unnecessary angst.
As Twitter explains, if your reply includes words or phrases that Twitter's system has identified as commonly appearing in aggressive, harmful, or otherwise offensive tweets, a prompt will now appear asking whether you want to revise your response to avoid causing offense.
Here’s an example:
The system mirrors Instagram's automated alerts on potentially offensive comments, which Instagram released in July last year. Instagram extended the same warnings to potentially offensive captions in December.
That expansion suggests the warnings have acted as an effective deterrent on Instagram, and you can imagine a similar positive effect on tweet interactions, especially given the difficulty of expressing nuance in tweets and the capacity for short responses to be read in ways you never intended.
Indeed, Facebook recently published a research report which found that misinterpretation is a key driver of angst and argument among Facebook users. Facebook plans to use those findings to build more tools like Instagram's comment warnings, which simply prompt users to rethink their wording in order to reduce the potential for offense.
Many online arguments could likely be avoided by getting people to pause and consider how their words might be taken the wrong way. The data suggests this could have a significant impact, and offense is often rooted in the specific language people use, which is exactly what Twitter's new test will highlight in order to prompt a rethink.
As noted, it's only a small-scale test at the moment, but it could have a significant impact. It also adds to Twitter's ongoing efforts to improve on-platform discussion and keep users safe from abuse.