Repeating Facebook History, Meta Is Courting An Election 2022 Disaster
With less than two weeks until the midterm elections, Meta and its platforms are careening in for a bumpy landing that could leave our fragile democracy in chaos. All signs point to an unwillingness to take the steps necessary to safeguard the election on its platforms.
Facebook and other top social media networks claim they’ve stepped up moderation and fact-checking in preparation for the midterms. But these efforts are disastrously insufficient.
To address this dangerous situation and safeguard the election, Meta and other platforms must make a rapid course correction, including investing more in content-moderation staff and rooting out accounts promoting conspiracies about poll rigging and false information about voting hours and locations.
Social media feeds vs. free and fair elections
Here’s how this massive company’s failure to fix its feed poses a fundamental threat to free and fair elections:
►Slashing election integrity. Meta has cut its election-integrity staff by 80% – from 300 people to 60, according to The New York Times. This means fewer experts on hand to detect election lies, voting misinformation and foreign interference in the midterms – and put in place the safeguards to stop them.
►Misinforming Spanish speakers. Many Spanish-language pages pushing former President Donald Trump’s Big Lie are still up on Meta platforms (even when their English-language counterparts have been taken down). The content ranges from baseless claims about voting machines to lies about dead people voting – all specifically designed to mislead or suppress the votes of the Latinx community.
►Stoking election violence. The Change the Terms coalition recently shared with Meta Facebook posts that target and harass election workers, as well as posts labeling the Jan. 6 insurrection a hoax. The Global Project Against Hate and Extremism has identified ongoing efforts by extremist groups and known election disinformers to use social media platforms to threaten election workers, threaten candidates, spread the Big Lie, demonize immigrants, attack Black and brown communities and intentionally confuse voters in six battleground states.
►De-platforming outside researchers. Researchers and academics face widespread problems accessing election information through CrowdTangle, the Meta-owned tool that measures the flow of content across its platforms, and Meta has fewer public-facing information guides on the election than it has in previous cycles. New York University data researcher Laura Edelson – whom Meta cut off from accessing this data – recently said the platform’s lack of data transparency presents very real problems to those trying to alert users to the ongoing creation and spread of anti-democratic content.
►Flirting with Trumpism. Meta President of Global Affairs Nick Clegg recently indicated that he was considering allowing Trump back on the company’s platforms in January, in spite of the former president’s use of his platform Truth Social to threaten violence, post QAnon conspiracies and, most recently, engage in blatant antisemitism.
►Failing to enforce. Meta continues to exempt politicians from its third-party fact-checking program, allowing them to run advertising with false claims. Recent research found that Meta’s ad library – which it supposedly uses to track all political ads on its platforms – can’t even correctly determine what constitutes a political ad and what doesn’t.
Don't let Facebook's history repeat
In 2020, Meta (then Facebook) acceded to pressure to stop election misinformation and implemented what whistleblower Frances Haugen later called “break-glass” measures designed to slow the spread of the worst misinformation during important elections and times of crisis. This included changing the platform’s news feed to “uprank” credible news sources in an effort to drown out misinformation. We know Meta can act now if it chooses to do so.
Meta must prioritize credible news and information in its feed by boosting the algorithm tied to “news ecosystem quality,” an internal news rating system that assigns a higher value to quality reporting and, by default, downranks hate and misinformation. (The company took this exact measure across its platforms following the 2020 election.)
Meta can reject any advertising that delegitimizes election outcomes at a national, state and local level – including from candidates themselves. It can ban violative accounts and remove all posts that incite election violence.
Meta should prioritize review of possible harassment of election workers, which is happening across its platforms. The company is well aware that this is happening, but many threatening posts remain up.
As Election Day nears, it's not too late for action
Even though voting is already underway in some states, and much damage is already done, Meta can still make an immediate, urgent investment in non-English-language moderation of election content, using qualified moderators rather than relying on artificial intelligence.
Finally, Meta can listen to itself – its own civic and integrity team recommendations for the 2022 election have yet to be fully implemented, but include crucial steps like dialing back reliance on what the company calls downstream “meaningful social interactions” that spark engagement but can also drive the spread of misinformation.
These changes shouldn’t be enacted just for the days immediately following the election. We learned the hard way on Jan. 6, 2021, what happens when platforms turn the torrent of extremist content back on, as Facebook did in late 2020. Violative content about the electoral process and democracy writ large does not have an “end” time after an election occurs. We urge Meta to keep election-integrity efforts in place indefinitely.
These demands should also be embraced by Meta’s shareholders, elected officials and the largely impotent Facebook Oversight Board, which has not engaged substantively on Meta’s failure to prepare for the 2022 election.
We’ve seen the corrosive impacts of social media on democracy these past few years. With just a couple weeks until Election Day, it will get much worse without immediate action.
Heidi Beirich, Ph.D., is chief strategy officer and co-founder of the Global Project Against Hate and Extremism. Jessica J. González is co-CEO of Free Press. Both are members of the Real Facebook Oversight Board.
This article originally appeared on USA TODAY: Repeating Facebook history, Meta is courting an election 2022 disaster
For The First Time In Years, Facebook, Instagram And Twitter Are 'Weak,' VCs Say, And They're Eager To Back Alternatives
With Elon Musk's turbulent acquisition of Twitter officially a done deal, some disenchanted users have already begun to talk about a great migration away from the platform. But where exactly would they go?
The hashtag #TwitterMigration was trending Thursday on the open-source social network Mastodon, which said it received thousands of new users in the hours after the deal was finalized.
Meanwhile, Bluesky, a decentralized social media protocol backed by former Twitter CEO Jack Dorsey, saw 30,000 new users sign up within 48 hours of opening its waitlist.
And there may be room for even more young social media companies to rise. With the expected exodus of not just users but also talent from Twitter, some venture capitalists see this as the perfect moment to back a Twitter alternative.
"I think the next six months is going to be a great time to launch whatever this next platform or platforms will be," said Anna Barber, a partner at venture fund M13 who's actively looking to fund a new social media entrant.
For her, identity verification and responsible content moderation are non-negotiable in any new social platform that she backs.
"We've got an opportunity to elevate public discourse and to make people feel more connected to each other and all of those, I would say, are social goods, but there's also an underlying massive economic opportunity here as well," she said. Not only is Barber looking at startups, but M13 is also considering using its in-house venture studio to build something from scratch, she said.
Of course, breaking into an industry dominated by behemoths like Facebook, Twitter, and TikTok is no easy feat. "It's a big swing," says Taryn Ward, a privacy consultant and the founder of pre-launch social media app Bright, which plans on using robust user authentication and an ad-free business model to build a less toxic online community.
She began working on Bright two years ago after seeing her 12-year-old daughter go down a rabbit hole of disordered eating content on TikTok. "It hit this vulnerability in this sweet spot between emotional insecurity and bad information," she said.
She raised £350,000 in friends-and-family financing and built a team consisting of a public health expert, a data privacy lawyer, a scientist, and a retired military intelligence officer. They built a demo product, opened a waitlist, and are currently looking to raise a £1.5 million pre-seed round ahead of a planned launch in early 2023.
Despite being a first-time founder without a launched product, she said interest in her company exploded back in April after Elon Musk announced his bid to buy Twitter.
"We started getting these really weird messages on our website from like tech bros who I think thought this is the cool thing to do now, is to buy a social media network," she said.
Ward turned down one offer from an angel investor who compared her platform to Parler, the right-wing social media app that's in the process of being acquired by rapper Ye, formerly known as Kanye West. "That was sort of a conversation stopper," says Ward, adding that she's hoping to partner with an investor who shares her vision for a more positive, inclusive community.
Despite the challenges, Ann Bordetsky, a partner at venture fund New Enterprise Associates and a former Twitter executive herself, said she views this as a boom time for founders looking to disrupt the social media landscape.
"For the first time in a long time, Facebook, Instagram, Twitter, they're weak, and they're just losing that edge and that relevance," she said, pointing to a generational shift, which has seen Gen Z move away from existing social networks.
In talking to founders, she said she's seen more innovation in consumer social media in the last six months than in the previous six years. NEA recently backed multiple companies in this space, she said, though she declined to name them.
With Musk reportedly considering massive layoffs at Twitter and Meta's declining stock price making stock options look less appealing, Bordetsky also said it's a great time to be in the market for social media talent.
"Those companies are bleeding talent," Bordetsky said, "and so if you are a company that's emerging right now in this space, I think you have this amazing unfair advantage. Whereas before, it would've been really hard to pull that engineer out of Meta, it should be a lot easier today."
Youri Lee, an investor at venture fund IVP, is also on the lookout for the next big social media startup.
"There is a sense of decline for some of these large social media platforms," Lee said. She hasn't made any investments yet but she's talking to founders and points to Farcaster, an invite-only blockchain-based social network, as an example of a promising company.
"I think people are really excited about building a new platform that can bring again that sense of community," she said.
Even with the growing interest from investors and founders, any startup looking to take on Meta or Twitter faces a serious uphill battle, investors warned.
"You do have to have a little bit of magical thinking, you know, an unrealistically optimistic outlook," says M13's Barber, given this is an industry that hasn't seen a major new competitor since TikTok and Snap. But, she says, doing the impossible is what venture capitalists are good at.
How Facebook And TikTok Are Helping Push Stop The Steal In Brazil
In the days leading into Brazil’s most consequential presidential election in decades, Meta and TikTok have steered millions of Brazilians toward baseless accusations, false claims of election fraud, and extremist content, according to a left-leaning advocacy group that researches disinformation.
Portuguese-language searches for basic election-related terms such as “fraud,” “intervention” and “ballots” on Facebook and Instagram, which are owned by Meta, have overwhelmingly directed people toward groups pushing claims questioning the integrity of the vote or openly agitating for a military coup, researchers from the advocacy group SumOfUs found. On TikTok, five out of eight top search results for the keyword “ballots” were for terms such as “rigged ballots” and “ballots being manipulated.”
The research is the latest in a growing body of evidence that social platforms are failing to prevent a flood of disinformation on their services ahead of the runoff election Sunday between President Jair Bolsonaro and former president Luiz Inácio Lula da Silva. Brazilian lawmakers last week granted the nation’s elections chief unilateral power to force tech companies to remove misinformation within two hours of the content being posted — one of the most aggressive legal measures against social media giants that any country has taken.
The right-wing Bolsonaro has repeatedly alleged without evidence that voting machines used for a quarter century in Brazil are prone to fraud. The rhetoric of Bolsonaro supporters has often appeared to echo that of Donald Trump’s supporters during the 2020 U.S. election, who questioned the results under the banner Stop the Steal.
Some of the top narratives that circulated in Brazil before the first-round vote on Oct. 2 included specific allegations of fraud, messages attacking the Supreme Electoral Court, and false calls for “inspectors” at the polls, according to Brazilian researchers and the left-leaning human rights group Avaaz. Viral audio and video on Telegram, WhatsApp, Facebook and TikTok alleged that ballot boxes were being pre-filled with votes for former president Luiz Inácio Lula da Silva.
Misinformation has also been spread by the left. The messages include false allegations that Bolsonaro has confessed to cannibalism and pedophilia.
Major social media companies allowed Stop the Steal content to spread virtually unchecked until the violent consequences of the rhetoric became clear on Jan. 6, 2021. Researchers have found that Groups on Facebook in particular were major vectors for organizing ahead of the Stop the Steal rally at the Capitol, and that Facebook’s own software algorithms played a significant role in helping such groups gain members.
Since then, companies including Meta and to a lesser extent TikTok have promised to do better, and in particular to clamp down on election-related content that could lead to violence. But the latest evidence shows that the companies are failing to keep their promises — particularly outside of the United States.
“What we are seeing is Meta and Google taking the protection of Brazilian voters less seriously than [that] of their American counterparts,” said Nell Greenberg, deputy director of Avaaz. Ahead of the U.S. midterm elections next month, she noted, the companies have been labeling, downgrading and removing content that incites violence and spreads false information about the election.
“There are still crucial actions they can take to help ensure a safe Election Day, and prevent a potential Brazilian ‘January 6th,’ ” she wrote in an email. “The question is, will they do any of them?”
Meta spokesman Tom Reynolds said the company has updated its search tools in recent weeks in preparation for the election. He said top search results now direct users to information from Brazilian authorities.
“We’ve worked to remove several keyword recommendations that may lead to misinformation and applied labels on election-related posts on both apps,” he said. “Around 30 million people in Brazil have clicked on those electoral labels on Facebook and were directed to the Electoral Justice’s website.”
TikTok spokeswoman Jamie Favazza said the company has invested in protecting the site ahead of Brazil’s elections.
“We take our responsibility to protect the integrity of our platform and elections with utmost seriousness and appreciate feedback from NGOs, academics and other experts,” she said in an email. “We continue to invest in our policy, safety, and security teams to counter election misinformation as we also provide access to authoritative information through our Election Guide.”
The Brazilian research institute NetLab found that both Meta and Google allowed political candidates to run advertisements on their platforms during the first round of voting on Oct. 2, even though such advertising is prohibited by Brazilian law during this period. The group also found evidence of paid advertising encouraging military interventions in the election as voters went to the polls.
A test of Meta and YouTube’s ad systems by the human rights group Global Witness revealed that the companies approved large numbers of misleading ads, including spots that encouraged people not to vote or gave false dates for when ballots could be posted. YouTube said it “reviewed the ads in question and removed those that violated our policies,” although the Global Witness report showed all the ads submitted were approved by the Google-owned site.
To study how platforms pushed people toward misinformation, the SumOfUs researchers created dummy accounts on Facebook, Instagram and TikTok. They then typed the terms “ballot,” “interventions” and “fraud” into the search bars of these social media services and counted up the results.
They found that five out of seven of the groups recommended by Facebook under searches for the term “intervention” were pushing for a military intervention in Brazil’s election, while five out of seven of the groups recommended under the search term “fraud” encouraged people to join groups that questioned the election’s integrity. The groups have names such as “Intervention to Save Brazil” and “Military intervention already.”
Overall, the group found, 60 percent of all content recommended by Facebook and Instagram pushed misinformation about the electoral process.