Periscope is to ask users to act as a “flash jury” to decide if comments are abusive. There’s no word on whether parent company Twitter will adopt the idea.
The move is designed to handle the full spectrum of reports: clear-cut cases of abusive content, more debatable cases, and incidents where the person reporting content is doing so maliciously.
Under the new system, when somebody flags a comment as either abusive or spam, it will immediately go to a group of randomly selected viewers of the relevant video for their verdict. They'll be able to say whether or not the report is valid, but will also have the option to say they are unsure. There's also an option to permanently opt out of reviewing comments.
The thinking is that this not only deals with the report quickly but also takes advantage of live viewers, who are better placed to judge the comment in the context of the broadcast than a company administrator coming to the broadcast fresh, or a filter matching specific words and phrases.
If the “jury” considers the report valid, the person who made the comment will have a 60-second “time-out” from commenting on the broadcast in question. If they then post another comment ruled as spam or abuse, they’ll be blocked from commenting on that specific broadcast at all. Regardless of the verdict, the person making the complaint will no longer see comments from the poster in question.
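The flow described above can be sketched in a few lines of Python. This is purely illustrative, assuming a simple majority of decided votes and a single strike before a broadcast-level block; all names and thresholds are hypothetical, not Periscope's actual implementation.

```python
from collections import Counter

# Illustrative threshold taken from the reported 60-second time-out.
TIMEOUT_SECONDS = 60

def jury_verdict(votes):
    """Tally 'abuse' / 'not_abuse' / 'unsure' votes from the flash jury.

    'unsure' votes are counted but don't sway the outcome; the report
    is upheld only if 'abuse' votes outnumber 'not_abuse' votes.
    """
    counts = Counter(votes)
    return counts["abuse"] > counts["not_abuse"]

def apply_sanction(prior_strikes):
    """Escalate per the reported rules: first upheld report earns a
    60-second time-out; a second earns a block from that broadcast."""
    if prior_strikes == 0:
        return f"time-out for {TIMEOUT_SECONDS}s"
    return "blocked from this broadcast"
```

For example, a jury returning `["abuse", "abuse", "unsure"]` would uphold the report, and a commenter with no prior strikes would receive the 60-second time-out rather than an outright block.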
A demo of the system reportedly showed one case took just 10 seconds from a comment being posted to the poster being put on a time-out.
The instant verdict system will enhance rather than replace the current system by which Periscope staff examine complaints of ongoing abuse and harassment in comments, as well as moderating and dealing with reports about the content of the video itself.