Facebook bug led to news feeds being populated with misinformation for roughly half a year
Facebook engineering staff outlined a ranking failure bug that exposed up to 50% of all news feed views on the social network to misinformation and other harmful content over roughly six months.
According to The Verge's reporting on an internal Facebook report, company engineers first discovered the issue in October 2021. The report, which covers the findings from then through the ranking bug's resolution on March 11th, 2022, was only shared inside the company last week.
The bug made it so that, rather than suppressing posts from sources repeatedly confirmed to spread misinformation and fake news, the news feed gave those posts further exposure, spiking their visibility by up to 30 percent. Engineers saw the initial surge subside, only for it to return in waves over the months they observed this behavior, until the March 11th fix.
Not only did this bug affect how misinformation was distributed, it also meant Facebook wasn't properly demoting content containing nudity, violence, and even Russian state media, according to The Verge's investigation. That said, the failure didn't affect the network's reporting and deletion systems for this content.
A spokesperson for Meta stated the following regarding the recently resolved bug:
"[The company] detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics. We traced the root cause to a software bug and applied needed fixes.”
"[The bug] has not had any meaningful, long-term impact on our metrics.” It was also stated that the issue did not apply to content that met Facebook's system’s threshold for deletion.
By Ian Dorfman
CONTINUED:
A FACEBOOK BUG LED TO INCREASED VIEWS OF HARMFUL CONTENT OVER SIX MONTHS
The social network touts downranking as a way to thwart problematic content, but what happens when that system breaks?
A group of Facebook engineers identified a “massive ranking failure” that exposed as much as half of all News Feed views to potential “integrity risks” over the past six months, according to an internal report on the incident obtained by The Verge.
The engineers first noticed the issue last October, when a sudden surge of misinformation began flowing through the News Feed, notes the report, which was shared inside the company last week. Instead of suppressing posts from repeat misinformation offenders that were reviewed by the company’s network of outside fact-checkers, the News Feed was instead giving the posts distribution, spiking views by as much as 30 percent globally. Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11th.
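The internal report doesn't say what the underlying defect was, so the Python sketch below is purely illustrative of the failure mode described above rather than a reconstruction of Facebook's code: a demotion step that silently stops being applied lets flagged posts rank on raw engagement instead of being suppressed. Every function name, field, and multiplier here is invented.

```python
# Purely illustrative: Facebook has not disclosed the actual defect, and
# these names, fields, and multipliers are invented for this sketch.

MISINFO_DEMOTION = 0.3  # hypothetical multiplier for fact-checker-flagged posts


def rank_posts(posts, demotions_applied=True):
    """Sort posts by score, demoting flagged ones when demotions are applied."""
    scored = []
    for post in posts:
        score = post["engagement_score"]
        if post["flagged_by_fact_checkers"] and demotions_applied:
            score *= MISINFO_DEMOTION  # suppress repeat-offender content
        scored.append((score, post["id"]))
    return sorted(scored, reverse=True)


feed = [
    {"id": "flagged", "engagement_score": 9.0, "flagged_by_fact_checkers": True},
    {"id": "normal", "engagement_score": 5.0, "flagged_by_fact_checkers": False},
]

print(rank_posts(feed))                           # demotion applied: flagged post ranks last
print(rank_posts(feed, demotions_applied=False))  # bug-like case: flagged post ranks first
```

In a sketch like this, a regression that effectively flips or skips the demotion branch would produce exactly the symptom the engineers described: flagged posts gaining distribution instead of losing it.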
In addition to posts flagged by fact-checkers, the internal investigation found that, during the bug period, Facebook’s systems failed to properly demote probable nudity, violence, and even Russian state media the social network recently pledged to stop recommending in response to the country’s invasion of Ukraine. The issue was internally designated a level-one SEV, or site event — a label reserved for high-priority technical crises, like Russia’s ongoing block of Facebook and Instagram.
THE TECHNICAL ISSUE WAS FIRST INTRODUCED IN 2019 BUT DIDN’T CREATE A NOTICEABLE IMPACT UNTIL OCTOBER 2021
Meta spokesperson Joe Osborne confirmed the incident in a statement to The Verge, saying the company “detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics.” The internal documents said the technical issue was first introduced in 2019 but didn’t create a noticeable impact until October 2021. “We traced the root cause to a software bug and applied needed fixes,” said Osborne, adding that the bug “has not had any meaningful, long-term impact on our metrics” and didn’t apply to content that met its system’s threshold for deletion.
For years, Facebook has touted downranking as a way to improve the quality of the News Feed and has steadily expanded the kinds of content that its automated system acts on. Downranking has been used in response to wars and controversial political stories, sparking concerns of shadow banning and calls for legislation. Despite its increasing importance, Facebook has yet to open up about its impact on what people see and, as this incident shows, what happens when the system goes awry.
In 2018, CEO Mark Zuckerberg explained that downranking fights the impulse people have to inherently engage with “more sensationalist and provocative” content. “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content,” he wrote in a Facebook post at the time.
“WE NEED REAL TRANSPARENCY TO BUILD A SUSTAINABLE SYSTEM OF ACCOUNTABILITY”
Downranking not only suppresses what Facebook calls “borderline” content that comes close to violating its rules but also content its AI systems suspect as violating but needs further human review. The company published a high-level list of what it demotes last September but hasn’t peeled back how exactly demotion impacts distribution of affected content. Officials have told me they hope to shed more light on how demotions work but are concerned that doing so would help adversaries game the system.
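Facebook hasn't disclosed how strongly each demotion reduces distribution, so the sketch below is only a hypothetical illustration of the general mechanism described above: each applicable demotion category multiplies a post's ranking score down rather than removing the post outright. The class, fields, and multipliers are all assumptions.

```python
# Hypothetical sketch of stacked demotions; Facebook has only published a
# high-level list of what it demotes, not the actual weights or mechanics.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    base_score: float             # e.g. predicted engagement
    borderline: bool = False      # close to violating policy
    pending_review: bool = False  # AI-flagged, awaiting human review


# Invented multipliers: each applicable demotion shrinks the ranking score.
DEMOTIONS = {
    "borderline": 0.5,
    "pending_review": 0.25,
}


def final_score(post: Post) -> float:
    score = post.base_score
    if post.borderline:
        score *= DEMOTIONS["borderline"]
    if post.pending_review:
        score *= DEMOTIONS["pending_review"]
    return score


posts = [
    Post("p1", base_score=8.0),
    Post("p2", base_score=8.0, borderline=True),
    Post("p3", base_score=8.0, borderline=True, pending_review=True),
]
for p in sorted(posts, key=final_score, reverse=True):
    print(p.post_id, round(final_score(p), 2))
```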
In the meantime, Facebook’s leaders regularly brag about how their AI systems are getting better each year at proactively detecting content like hate speech, placing greater importance on the technology as a way to moderate at scale. Last year, Facebook said it would start downranking all political content in the News Feed — part of CEO Mark Zuckerberg’s push to return the Facebook app back to its more lighthearted roots.
I’ve seen no indication that there was malicious intent behind this recent ranking bug that impacted up to half of News Feed views over a period of months, and thankfully, it didn’t break Facebook’s other moderation tools. But the incident shows why more transparency is needed in internet platforms and the algorithms they use, according to Sahar Massachi, a former member of Facebook’s Civic Integrity team.
“In a large complex system like this, bugs are inevitable and understandable,” Massachi, who is now co-founder of the nonprofit Integrity Institute, told The Verge. “But what happens when a powerful social platform has one of these accidental faults? How would we even know? We need real transparency to build a sustainable system of accountability, so we can help them catch these problems quickly.”
Clarification at 6:56 PM ET: Specified with confirmation from Facebook that accounts designated as repeat misinformation offenders saw their views spike by as much as 30%, and that the bug didn’t impact the company’s ability to delete content that explicitly violated its rules.
Correction at 7:25 PM ET: Story updated to note that “SEV” stands for “site event” and not “severe engineering vulnerability,” and that level-one is not the worst crisis level. There is a level-zero SEV used for the most dramatic emergencies, such as a global outage. We regret the error.
By Alex Heath
CONTINUED:
Facebook News Feed bug injected misinformation into users' feeds for months
The "massive ranking failure" affected "as much as half of all News Feed views."
A “bug” in Facebook’s News Feed ranking algorithm injected a “surge of misinformation” and other harmful content into users’ News Feeds between last October and March, according to an internal memo reported by The Verge. The unspecified bug, described by employees as a “massive ranking failure,” went unfixed for months and affected "as much as half of all News Feed views."
The problem affected Facebook’s News Feed algorithm, which is meant to down-rank debunked misinformation as well as other problematic and “borderline” content. But last fall, views on debunked misinformation began rising by “up to 30 percent,” according to the memo, while other content that was supposed to be demoted was not. “During the bug period, Facebook’s systems failed to properly demote nudity, violence, and even Russian state media the social network recently pledged to stop recommending in response to the country’s invasion of Ukraine,” according to the report.
More worrying is that Facebook engineers apparently realized something was very wrong (The Verge reports the problem was designated a level-one SEV, or site event, in October) but it went unfixed until March 11th because engineers were “unable to find the root cause.”
The incident underscores just how complex, and often opaque, Facebook’s ranking algorithms are even to its own employees. Whistleblower Frances Haugen has argued that issues like this one are evidence that the company needs to make its algorithms transparent to outside researchers or even move away from engagement-based ranking altogether.
A Facebook spokesperson confirmed to The Verge that the bug had been fixed, saying it “has not had any meaningful, long-term impact on our metrics.”
Still, the fact that it took Facebook so long to come up with a fix is likely to bolster calls for the company to change its approach to algorithmic ranking. The company recently brought back Instagram’s non-algorithmic feed partially in response to concerns about the impact its recommendations have on younger users. Meta is also facing the possibility of legislation that would regulate algorithms like the one used in News Feed.
By Karissa Bell
CONTINUED:
Protests over secret study involving 689,000 users in which friends' postings were moved to influence moods
It already knows whether you are single or dating, the first school you went to and whether you like or loathe Justin Bieber. But now Facebook, the world's biggest social networking site, is facing a storm of protest after it revealed it had discovered how to make users feel happier or sadder with a few computer key strokes.
It has published details of a vast experiment in which it manipulated information posted on 689,000 users' home pages and found it could make people feel more positive or negative through a process of "emotional contagion".
In a study with academics from Cornell and the University of California, Facebook filtered users' news feeds – the flow of comments, videos, pictures and web links posted by other people in their social network. One test reduced users' exposure to their friends' "positive emotional content", resulting in fewer positive posts of their own. Another test reduced exposure to "negative emotional content" and the opposite happened.
The study concluded: "Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks."
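As a rough sketch of the experimental design described above (randomly omitting emotional posts from a feed, then measuring the emotional tone of the user's own subsequent posts), the hypothetical Python below stands in toy word lists for the LIWC text analysis the actual study used; every function, threshold, and word list here is an assumption for illustration only.

```python
# Hypothetical reconstruction of the design described above; the real study
# used LIWC word counts and live News Feeds, not these toy word lists.
import random

POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}


def is_positive(post: str) -> bool:
    return any(word in post.lower().split() for word in POSITIVE)


def is_negative(post: str) -> bool:
    return any(word in post.lower().split() for word in NEGATIVE)


def filter_feed(feed, condition, omit_rate=0.5, seed=1):
    """Randomly withhold a share of positive or negative posts from a feed."""
    rng = random.Random(seed)
    kept = []
    for post in feed:
        emotional = is_positive(post) if condition == "reduce_positive" else is_negative(post)
        if emotional and rng.random() < omit_rate:
            continue  # this post never reaches the user's feed
        kept.append(post)
    return kept


def emotion_rate(posts, predicate):
    """Fraction of a user's own posts matching an emotion predicate."""
    return sum(predicate(p) for p in posts) / len(posts) if posts else 0.0


feed = ["such a great day", "feeling sad today", "love this", "awful traffic"]
print(filter_feed(feed, "reduce_positive"))

# The study then compared the emotional tone of users' own subsequent posts
# across the reduced-positive, reduced-negative and control conditions.
user_posts = ["hate this weather", "pretty great concert", "terrible commute"]
print(emotion_rate(user_posts, is_negative))
```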
Lawyers, internet activists and politicians said this weekend that the mass experiment in emotional manipulation was "scandalous", "spooky" and "disturbing".
On Sunday evening, a senior British MP called for a parliamentary investigation into how Facebook and other social networks manipulated emotional and psychological responses of users by editing information supplied to them.
Jim Sheridan, a member of the Commons media select committee, said the experiment was intrusive. "This is extraordinarily powerful stuff and if there is not already legislation on this, then there should be to protect people," he said. "They are manipulating material from people's personal lives and I am worried about the ability of Facebook and others to manipulate people's thoughts in politics or other areas. If people are being thought-controlled in this kind of way there needs to be protection and they at least need to know about it."
A Facebook spokeswoman said the research, published this month in the US journal Proceedings of the National Academy of Sciences, was carried out "to improve our services and to make the content people see on Facebook as relevant and engaging as possible".
She said: "A big part of this is understanding how people respond to different types of content, whether it's positive or negative in tone, news from friends, or information from pages they follow."
But other commentators voiced fears that the process could be used for political purposes in the runup to elections or to encourage people to stay on the site by feeding them happy thoughts and so boosting advertising revenues.
In a series of Twitter posts, Clay Johnson, the co-founder of Blue State Digital, the firm that built and managed Barack Obama's online campaign for the presidency in 2008, said: "The Facebook 'transmission of anger' experiment is terrifying."
He asked: "Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy [a website aggregating viral content] posts two weeks beforehand? Should that be legal?"
It was claimed that Facebook may have breached ethical and legal guidelines by not informing its users they were being manipulated in the experiment, which was carried out in 2012.
The study said altering the news feeds was "consistent with Facebook's data use policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research".
But Susan Fiske, the Princeton academic who edited the study, said she was concerned. "People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty."
James Grimmelmann, professor of law at the University of Maryland, said Facebook had failed to gain "informed consent" as defined by the US federal policy for the protection of human subjects, which demands explanation of the purposes of the research and the expected duration of the subject's participation, a description of any reasonably foreseeable risks and a statement that participation is voluntary. "This study is a scandal because it brought Facebook's troubling practices into a realm – academia – where we still have standards of treating people with dignity and serving the common good," he said on his blog.
It is not new for internet firms to use algorithms to select content to show to users and Jacob Silverman, author of Terms of Service: Social Media, Surveillance, and the Price of Constant Connection, told Wired magazine on Sunday the internet was already "a vast collection of market research studies; we're the subjects".
"What's disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission," he said. "Facebook cares most about two things: engagement and advertising. If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there's little reason to think that they won't do just that. As long as the platform remains such an important gatekeeper – and their algorithms utterly opaque – we should be wary about the amount of power and trust we delegate to it."
Robert Blackie, director of digital at Ogilvy One marketing agency, said the way internet companies filtered information they showed users was fundamental to their business models, which made them reluctant to be open about it.
"To guarantee continued public acceptance they will have to discuss this more openly in the future," he said. "There will have to be either independent reviewers of what they do or government regulation. If they don't get the value exchange right then people will be reluctant to use their services, which is potentially a big business problem."
By Robert Booth