World Economic Forum pushes MIT program that automatically detects “disinformation narratives” online
More calls for censorship.
Misinformation and propaganda are as old and as pervasive as human communication and societies organized into states. The curiously intense focus on these issues over the last couple of years prompts many to believe they are now used as a justification and a smokescreen for suppressing legitimate but unwanted speech and free-expression-related civil rights.
Depending on where you stand on that, you may be reassured or dismayed that the World Economic Forum (WEF) is one of the entities pushing hard the narrative that online misinformation presents a grave danger today – and endorsing some “solutions.”
One comes from MIT and its “Reconnaissance of Influence Operations (RIO)” – now there’s a name George Orwell would probably not kick out of his novels.
Right out of the gate, RIO “establishes” that the power of misinformation on social networks and elsewhere online is great enough to sway elections – but also to allow different points of view to be expressed, or, as the program announcement put it, to “sow discord.”
Another danger the MIT program aims to address is disinformation feeding “conspiracy theories” – but in light of the recent embarrassing U-turn on the “Wuhan lab leak,” there is evidently no firm, consensus-based definition of what a “conspiracy theory” even is. Rather, the category seems prone to political whim.
Yet RIO appears to be designed to work under the false premise that these definitions are objective and agreed on by all stakeholders.
You’d think that, as far as conspiracies go, the one about Russians electing a US president by exploiting social media would by now be the most tired and thoroughly debunked of all.
Nevertheless, that is one of the claims to fame made by those behind RIO and MIT Lincoln Laboratory’s Artificial Intelligence Software Architectures and Algorithms Group, whose goal is not only to “automatically” detect what they deem to be disinformation, but also whatever their machine learning algorithms are programmed to identify as “disinformation narratives.”
That means the tool would gauge the influence of accounts not only by their size in terms of followers, but also by other elements that measure overall impact. Recognize them to what end? Most likely, to censor and remove.
“The team envisions RIO being used by both government and industry as well as beyond social media and in the realm of traditional media such as newspapers and television,” the WEF said on its website.