Twitter has announced three changes designed to limit the effects of abusive messages. It’s clear the company is trying to strike a difficult balance between free speech and combating harassment.
The first change is about policy rather than process. Until now, Twitter only considered it a violation when people made “direct, specific threats of violence against others.” That’s now been widened in terms of both content and target, with the new rule banning posts that contain “threats of violence against others or promote violence against others.”
The second change involves Twitter’s options for dealing with abusive posts. At the moment it can force users to delete posts, make them verify a phone number (which theoretically makes them easier to identify if they make criminal threats) or, in the most extreme circumstances, permanently suspend an account (even if that is an oxymoron).
Under the new rules, Twitter will have the option to lock an account for a certain period of time. It’s not clear if this simply means a ban on posting, or whether it will also stop people accessing their customized timeline altogether. Twitter says it’s particularly likely to use this option in cases “where multiple users begin harassing a particular person or group of people.”
Finally, Twitter is working on tools to identify and hide specific abusive tweets rather than (or as well as) tackling a particular user. This will be based on a range of factors, including how old the posting account is (targeting people who create accounts specifically to attack somebody) and how closely the content matches tweets that staff have already manually labelled as abusive.
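Twitter hasn’t published how these signals are actually combined, but a filter of this kind might, very roughly, look something like the sketch below. Every name, threshold and example in it is an assumption for illustration only, not Twitter’s real logic.

    # Purely illustrative: Twitter hasn't published its detection logic,
    # so every name and threshold below is invented for demonstration.
    from datetime import datetime, timezone
    from difflib import SequenceMatcher

    LABELLED_ABUSIVE = [
        "an example tweet that staff have already marked as abusive",
    ]

    def looks_abusive(text, account_created,
                      min_account_age_days=7, similarity_threshold=0.8):
        """Combine two of the signals mentioned above: account age and
        similarity to manually labelled abusive tweets.
        account_created must be a timezone-aware datetime."""
        age_days = (datetime.now(timezone.utc) - account_created).days
        new_account = age_days < min_account_age_days  # throwaway-account signal
        similar_to_known_abuse = any(
            SequenceMatcher(None, text.lower(), example.lower()).ratio()
            >= similarity_threshold
            for example in LABELLED_ABUSIVE
        )
        # Flag only when a very new account posts content close to known abuse.
        return new_account and similar_to_known_abuse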
Abusive tweets highlighted in this way won’t be completely deleted. Instead they will be changed so they are only viewable by users who’ve actively chosen to follow the account concerned. That means that abusive posts which mention the subject’s Twitter name won’t necessarily be seen by the subject.
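In practice the visibility rule described here amounts to a simple check: a flagged tweet stays up, but only the author’s followers can see it. A hypothetical sketch (the function and data structures are invented, not Twitter’s actual API):

    # Hypothetical sketch only: not Twitter's real API or data model.
    def is_visible_to(viewer, flagged_abusive, followers_of_author):
        """Flagged tweets are shown only to people who already follow the
        author; unflagged tweets are visible as normal."""
        if not flagged_abusive:
            return True
        return viewer in followers_of_author

Under a rule like that, an abusive tweet that @-mentions its target only reaches the target if they happen to follow the abusive account.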