Bizarrely, a team at Microsoft has released work on an AI bot, called DeepCom, that can be used to generate fake comments about news articles.
The team says these fake comments can create engagement for new websites and news outlets, essentially encouraging the use of fake accounts to generate real engagement with fake discussion.
Do we need this?
The short answer: no.
The Internet is already teeming with fake accounts and bots, used to troll and misinform the general public.
The question that the team of researchers, from Microsoft and Beihang University in China, seems to be asking, however, is: what's the harm in using fake comments to stimulate discussion among real news readers?
The researchers argue that readers enjoy posting comments under news articles and sharing their opinions. For publishers, this increases engagement, so why not give it all an AI-powered boost?
“Such systems can enable commenting service for a news website from cold start, enhance the reading experience for less commented news articles, and enrich skill lists of other artificial intelligence applications, such as chatbots,” the paper says.
A paper by Beijing researchers presents a new machine learning technique whose main uses seem to be trolling and disinformation. It's been accepted for publication at EMLNP, one of the top 3 venues for Natural Language Processing research. Cool Cool Cool https://t.co/ZOcrhjKiEc pic.twitter.com/Y8U5AjENrh — Arvind Narayanan (@random_walker) September 30, 2019
The catch is that the new paper, released on arXiv, makes no mention of possible malicious uses, or of the dangers the technology could pose by helping to spread fake news. Several critics have already voiced their concerns.
News-reading neural networks
As The Register reports, DeepCom simultaneously employs two neural networks: a reading network and a generating network.
All of the words in a news article are encoded as vectors for the AI to analyze. The reading network picks up what it calculates are the most important aspects of the article (a person, event, or topic), before the generating network creates a comment based on that.
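The read-then-generate pipeline described above can be caricatured in a few lines of Python. This is a toy, non-neural sketch only: the frequency-based salience scorer and the comment template are invented stand-ins for illustration, not the paper's trained networks.

```python
# Toy sketch of DeepCom's two-stage pipeline: a "reading" step that
# picks out salient words from an article, and a "generating" step
# that produces a comment conditioned on them. The real system uses
# two trained neural networks; the scoring and template here are
# deliberately simplistic stand-ins.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "for"}

def read_network(article: str, top_k: int = 3) -> list[str]:
    """Pick the top_k most salient words (here: most frequent non-stopwords)."""
    words = [w.strip(".,!?").lower() for w in article.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_k)]

def generate_network(salient: list[str]) -> str:
    """Produce a short comment conditioned on the salient words."""
    topic = ", ".join(salient)
    return f"Interesting piece. I want to hear more about {topic}."

article = ("The rover landed on Mars. The rover sent photos of Mars "
           "back to mission control, and the photos amazed scientists.")
print(generate_network(read_network(article)))
```

In the real model both stages are learned jointly from millions of article-comment pairs, so the "salience" the reading network computes is whatever best predicts the comments humans actually wrote.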
The researchers trained DeepCom on a Chinese dataset made up of millions of human comments posted on news articles online, as well as an English language dataset from articles on Yahoo! News.
The example above shows important parts of a Yahoo! article highlighted in red (the reading network), and what the bot chooses to comment about in blue (the generating network).
Another controversial bot
It's not Microsoft's first brush with bot controversy. Three years ago, its AI bot "Tay" famously had to be taken down after only 16 hours, when it quickly learned to tweet racist, homophobic, and anti-feminist messages after being trained on a vast database of tweets. It seems that training an AI on online commenters, people who often lose all inhibitions thanks to their anonymity, isn't such a good idea after all.
It seems that more work is needed before DeepCom can truly drive engagement or create comments that could be used to disrupt society on a large scale. For the time being, the comments generated by the AI bot are short and simple.
The research, however, has been accepted to EMNLP-IJCNLP, an international natural language processing conference, which will be held in Hong Kong next month (November 3-7).
In the meantime, MIT researchers have been creating an AI tool, called the Giant Language Model Test Room (GLTR), that can detect AI-written text. They might need to start upping their game.
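For context, GLTR works by asking a language model how predictable each token of a text was: machine-generated text tends to consist almost entirely of the model's top-ranked guesses, while human writing contains more surprises. The sketch below illustrates that test with made-up token ranks standing in for a real model's output.

```python
# Minimal sketch of the GLTR idea: a language model assigns every
# token of a text a rank (1 = the model's top guess). Machine text
# tends to use mostly top-ranked tokens; human text contains more
# low-probability, high-rank ones. The ranks below are invented
# example inputs, not output from a real model.
def fraction_top_k(token_ranks: list[int], k: int = 10) -> float:
    """Share of tokens whose rank under the LM is within the top k."""
    return sum(r <= k for r in token_ranks) / len(token_ranks)

def looks_machine_generated(token_ranks: list[int], threshold: float = 0.8) -> bool:
    # If almost every token was among the model's top-10 guesses,
    # flag the text as likely machine-generated.
    return fraction_top_k(token_ranks) >= threshold

machine_like = [1, 1, 2, 1, 3, 1, 2, 4, 1, 1]        # very predictable
human_like = [1, 57, 3, 812, 9, 144, 2, 31, 6, 260]  # more surprises
print(looks_machine_generated(machine_like))
print(looks_machine_generated(human_like))
```

The actual tool visualizes these ranks per token and leaves the judgment to a human reader, rather than applying a hard threshold like the one assumed here.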