Facebook, Twitter, Google CEOs Testify Before Congress: 4 Things To Know

Google's Sundar Pichai, Facebook's Mark Zuckerberg and Twitter's Jack Dorsey face Congressional scrutiny over the spread of misinformation on their platforms.
Michael Reynolds-Pool / Getty Images / Composite by NPR

Support for the siege on the U.S. Capitol. Bogus promises of COVID-19 cures. Baseless rumors about vaccines.

Who should be held accountable for the spread of extremism and hoaxes online?

The CEOs of three influential tech companies — Facebook, Google and Twitter — will answer that question, and more about how their platforms handle misinformation and its most damaging consequences, when they appear before Congress on Thursday.

It will be the first time the executives testify since some of the same lawmakers in the hearing room were attacked at the Capitol by a pro-Trump mob on Jan. 6.

The House Energy and Commerce Committee will also discuss whether laws should be changed to hold tech companies more responsible for what their users post, especially when those posts are linked to real-world harm.

Appearing by video, the CEOs will give contrasting views of how they think Internet regulations should be rewritten, with Facebook's Mark Zuckerberg proposing reforms to a key legal shield and Google's Sundar Pichai warning any changes to that law could backfire.

But all sides seem to agree on one thing: new rules of the road are inevitable.

"There is a bipartisan agreement that the status quo is just not working," Rep. Jan Schakowsky, D-Ill., who chairs one of the subcommittees hosting the hearing, said on Monday. "Big tech has failed to respond to these grave challenges time and time again."

Here are four things to know ahead of the hearing:

What role did misinformation on Facebook, Twitter and Google's YouTube play in the Jan. 6 riot?

Debunked allegations that the presidential race was stolen and calls for violence swirled across social media platforms in the wake of November's election, culminating in the storming of the Capitol on the day President Joe Biden's victory was certified, according to researchers who track misinformation.

Many rioters shared plans and extensively documented the events of Jan. 6 on Facebook, Twitter and Google's YouTube, as well as smaller platforms such as Parler, Gab and the livestreaming service DLive.

Nearly half of rioters charged in federal court allegedly used social media to post photos, livestreams and other evidence of their involvement, according to a review of charging documents by George Washington University's Program on Extremism.

Facebook in particular has been criticized for not cracking down more quickly on "Stop the Steal" groups that were involved in promoting the Jan. 6 event, and, more broadly, for how its automated recommendation systems may steer people toward more extreme content and groups.

A report out this week from the nonprofit advocacy organization Avaaz found Facebook groups that repeatedly shared misinformation nearly tripled their interactions — meaning likes, comments, shares and other reactions on posts — between October 2019 and October 2020.

"We have over a year's worth of evidence that the platform helped drive billions of views to pages and content that confused voters, created division and chaos, and, in some instances, incited violence," said Fadi Quran, campaign director at Avaaz.

In response, Facebook said the report's methodology was flawed and that it "distorts the serious work we've been doing to fight violent extremism and misinformation on our platform."

Members of Congress are also expected to press the companies on what they are doing to stop the spread of false information and debunked claims about COVID-19 and vaccines.

On Wednesday, a group of 12 state attorneys general called on Facebook and Twitter to take "immediate steps" to root out vaccine misinformation, warning it is undermining efforts to end the pandemic and reopen the economy.

What do the companies say in their defense?

The CEOs plan to highlight the steps they've already taken to curb misinformation — while also emphasizing that their platforms mirror the problems of society at large.

"Conversations online will always reflect the conversations taking place in living rooms, on television, and in text messages and phone calls across the country," Zuckerberg plans to tell the hearing. "Our society is deeply divided, and we see that on our services too."

Twitter CEO Jack Dorsey will describe "a trust deficit" that has grown in the U.S. and around the world in recent years that "does not just impact the companies sitting at the table today but exists across the information ecosystem and, indeed, across many of our institutions."

All three executives' written statements discuss policy changes they've made, from sticking warning labels onto posts that may be false to connecting people with credible sources of information about the pandemic and elections.

For example, Facebook says it has removed more than 12 million debunked posts about COVID-19 and vaccines as it has expanded its list of banned claims. The company also says that when it places warning labels on posts its fact-checkers deem false, people don't click through to them 95% of the time.

Facebook has also rolled out changes to how it recommends groups. It's vowed to crack down on those that repeatedly break its rules, after long-running concerns over the role Facebook groups play in the spread of misinformation and radicalization.

Google-owned YouTube said earlier this month it has removed more than 30,000 videos with debunked claims about COVID-19 vaccines since October, and 800,000 videos with harmful or misleading information about the coronavirus since February 2020. It also says it banned 13,000 channels between October and December for promoting violence and violent extremism.

Twitter has also started to crack down on vaccine misinformation, removing false claims, such as tweets suggesting that vaccines are used to harm or control people. Tweets that fall short of the bar for removal — such as rumors, disputed claims and incomplete or out-of-context information — are now getting labeled as potentially misleading. Dorsey will also point to Birdwatch, the company's new effort to get users involved in fact-checking.

Do lawmakers really want to impose rules on social media?

Members of Congress have floated a wide range of bills aimed at reining in Big Tech, from beefing up antitrust laws to overhauling Section 230 of the Communications Decency Act, a key legal shield that protects tech platforms from being held liable for most of what their users post.

"The central message is that the performance that they've shown us to date is largely, much of it, unacceptable," said Schakowsky, who is proposing her own Section 230 reform effort focused on consumer protection. "We are moving ahead with legislation and with regulation."

Zuckerberg plans to tell the committee he supports limits to Section 230, specifically requiring platforms to have "systems in place for identifying unlawful content and removing it" to earn the law's protections.

Rep. Anna Eshoo, D-Calif., dismissed that idea as "a masterful distraction" at a press conference on Wednesday. Eshoo has introduced a bill to strip the biggest platforms of immunity if their algorithms promote content that leads to violence.

Facebook "want[s] us to focus on putting out fires and not on the fact that their product is flammable," said Rep. Tom Malinowski, D-N.J., who is co-sponsoring the bill with Eshoo.

"What we're trying to incentivize is a change in the design of the social networks," he said. "A change in how they use algorithms to amplify content so that we have less spread of extremism, conspiracy theories, inflammatory content that is designed solely to maximize engagement on the platforms."

But the tech CEOs are not united in their views on regulation. Google's Pichai will tell lawmakers changes to Section 230 could have "unintended consequences," by forcing platforms like YouTube to "either over-filter content or not be able to filter content at all."

Twitter's Dorsey has also warned about the risks of changing the law, pointing out that Twitter, with under 200 million daily users, is much smaller than Facebook and Google, which count billions of users. At a hearing in October, he cautioned that some of the proposed changes risk further entrenching the power of the largest platforms.

Republicans in Congress have also called for the overhaul or even repeal of Section 230, saying it's given the tech companies cover for alleged bias against conservatives — a claim that's been undercut by research showing far-right content gets some of the highest levels of social media engagement.

What about former President Donald Trump?

After the Jan. 6 riot, all three companies kicked then-President Trump off their platforms — moves his supporters and some Republican politicians seized on as further evidence of anti-conservative bias.

Twitter banned Trump permanently. Facebook asked its Oversight Board, a panel of outside experts, to decide whether to reinstate the former president. That decision is due by the end of April.

YouTube also suspended Trump, but CEO Susan Wojcicki said recently that the video site will let Trump return "when we determine that the risk of violence has decreased." She gave no timeline for when that would be, but said the company will look at various signals to decide, including government warnings, what law enforcement is doing around the country, and how much violent rhetoric is on YouTube.

This week, however, Trump teased that he may not return to any of the major platforms. Instead, he may launch his own social media network.

"I'm doing things having to do with putting our own platform out there that you'll be hearing about soon," Trumptold Fox News contributor Lisa Boothe in a podcast released on Monday.

During his presidency, Trump pushed hard to revoke Section 230 as part of his long-running feud with tech platforms. If he does launch his own site, however, a weakened or repealed legal shield could leave him on the hook for whatever users post.

Editor's note: Facebook and Google are among NPR's financial supporters.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Shannon Bond
Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.