Google to help media find abusive comments on articles
Alphabet Inc's Google and its subsidiary Jigsaw launched a new technology on Thursday to help online publishers and platforms identify abusive comments on their websites.
The technology, called Perspective, will review comments and assign them a score based on how similar they are to comments that people have rated as "toxic" or said would make them leave a conversation.
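For publishers curious what integrating such a scoring service might look like, here is a minimal sketch in Python. The endpoint and field names follow Perspective's publicly documented Comment Analyzer API, but treat them as assumptions; the example only builds the request body and parses a sample response, since a real call would require a network connection and an API key.

```python
import json

# Hypothetical sketch of how a publisher might call a comment-scoring
# service like Perspective. Endpoint and field names are assumptions
# based on the publicly documented Comment Analyzer API.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build the JSON body asking for a toxicity score on one comment."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_score(response_body):
    """Pull the 0..1 toxicity score out of a response-shaped dict."""
    return response_body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    payload = build_request("You are a wonderful person.")
    print(json.dumps(payload, indent=2))

    # A response shaped like the documented API would carry the score here:
    sample_response = {
        "attributeScores": {
            "TOXICITY": {"summaryScore": {"value": 0.03, "type": "PROBABILITY"}}
        }
    }
    print(extract_score(sample_response))
```

A publisher could then compare the returned score against a threshold of its own choosing to decide which comments to route to human moderators.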
The tool was tested at The New York Times, and the companies hope to extend it to other outlets such as The Guardian and The Economist, as well as other websites.
"News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether," Jared Cohen, President of Jigsaw, wrote in a blog post.
Perspective examined hundreds of thousands of comments that human reviewers had labeled as offensive in order to learn how to spot potentially abusive language.
CJ Adams, Jigsaw product manager, said the company was open to bringing the technology to all platforms, without specifying whether that included networks such as Facebook and Twitter, where malicious and abusive comments can be a major headache.
Perspective will not decide what to do with comments that are potentially abusive; it will be up to each publisher to forward them to human moderators or to develop tools that help commenters understand the impact of what they are writing.
The initiative against such messages follows efforts by Google and Facebook to fight fake news in France, Germany and the United States, after the companies drew heavy criticism during the November U.S. presidential election when it became clear they had promoted false stories.