
Video: At Hearing on Foreign Threats to the 2024 U.S. Elections, Sen. Mark Warner Calls Out “X, formerly known as Twitter, which wouldn’t even send a representative today”

Sen. Warner says Russia and Iran are "spreading credible-seeming narratives to influence voters’ perceptions of the candidates and stoke political and ethnic division"


See below for video and the text of Sen. Mark Warner’s remarks, as prepared for delivery, at today’s U.S. Senate Intelligence Committee hearing on “Foreign Threats to Elections in 2024 – Roles and Responsibilities of U.S. Tech Providers.”

I call this hearing to order. I want to welcome today’s witnesses: 

  • Mr. Kent Walker, President – Global Affairs and Chief Legal Officer, Alphabet
  • Mr. Nick Clegg, President – Global Affairs, Meta
  • Mr. Brad Smith, Vice Chair and President, Microsoft 

Today’s hearing builds on this Committee’s longstanding practice of educating the public about the intentions and practices of foreign adversaries seeking to manipulate our country’s democratic processes. We’ve come a long way since 2017, when, as folks may remember, there was some pronounced skepticism that our adversaries might have utilized American social media platforms for intelligence activities. 

It was almost exactly seven years ago that – in response to inquiries from this Committee – Facebook shared the first evidence of what would become an expansive discovery, documenting Russia’s use of tens of thousands of inauthentic accounts across Facebook, Instagram, YouTube, Twitter, Reddit, LinkedIn… and even smaller platforms such as Vine, Gab, Tumblr, LiveJournal, Medium and Pinterest, to try to divide Americans and influence their votes.

And through this Committee’s bipartisan investigation into Russian interference in the 2016 election, we learned that Russia had devoted millions to wide-ranging influence campaigns that generated hundreds of millions of online impressions; sowed political and racial division among Americans; and impersonated social, political and faith groups of all stripes to infiltrate and manipulate America’s political discourse.

Our Committee’s bipartisan effort also resulted in a set of recommendations – for government, for the private sector, and for political campaigns… recommendations for which I hope today’s hearing will serve as a status check. These recommendations included: 

  • Greater information sharing between the U.S. government and private sector, and among industry, concerning foreign malicious activity;
  • Greater transparency measures by platforms to inform users about malicious activity, as well as more information on the origin and authenticity of information presented to them;
  • And facilitation of open-source research by academics and civil society organizations to better assist platforms and the public in identifying malicious use of social media by foreign actors. 

On the USG side, we’ve seen some significant progress, with 2020 widely acknowledged by election experts to have been the most secure election in U.S. history, thanks in no small part to leadership from key parts of the last Administration. That progress has come through a combination of:

  • Bipartisan appropriation of funding for election upgrades, facilitating things like paper records for ballots and use of risk-limiting audits to verify results;
  • A better-postured national security community, tracking – and then exposing or disrupting – foreign adversary election threats; and
  • A successful effort to share threat information about foreign influence activity with the private sector. 

U.S. tech companies have made some significant, albeit uneven, progress since 2016. 

These include important commitments this past February from all three of the companies before us today and another 24 companies [among them “X, formerly known as Twitter, which wouldn’t even send a representative today”], memorialized in the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, agreed to at the Munich Security Conference earlier this year. And while I appreciate these commitments, I’m not sure we’ve seen much concrete action from them. In responses to letters I sent to the Accord’s signatories earlier this year, it was clear that cross-industry collaboration and information-sharing remain underdeveloped… and many companies have cut back on the personnel who enforce platform content policies.

More concerning, at least four new factors seriously jeopardize our collective ability to combat covert foreign influence attempts.

First, our adversaries are more incentivized than ever to intervene in our elections because they understand that the outcome can directly affect their national interests. In the case of Russia, Putin understands that influencing public opinion and shaping elections in the United States is a cheap way to erode Western support for Ukraine and undermine America’s standing in the world. Similarly, we’ve seen that since October 7th, the conflict between Israel and Hamas has been fertile ground for disinformation by a number of foreign actors. Relatedly, Iran increasingly sees this election as an opportunity to stoke social discord in the U.S., while potentially seeking to shape election outcomes.

The exposures and disruption efforts we’ve seen over the last eight weeks have put this on clear display, including:

  • A covert influence project led by RT to bankroll unwitting U.S. political influencers on YouTube;
  • A wide-ranging Russian campaign – one that thus far has not gotten much media coverage – impersonating major Western media institutions like the Washington Post and Fox News, with the goal of spreading credible-seeming narratives to influence voters’ perceptions of the candidates and stoke political and ethnic division; and
  • Efforts to infiltrate American protests over the conflict in Gaza by Iranian influence operatives, who hope to stoke division and may even seek to influence the election by denigrating former President Trump.

Second, the scale and sophistication of these sorts of attacks against our elections can be accelerated severalfold by cutting-edge AI tools, including deep-fake technology.

New text, image, audio, and video generation capabilities are not only at the fingertips of a wider variety of actors, but they’ve expanded the imagination of malicious actors in ways that IC officials, online intermediaries, and American policymakers are still grappling with.

I fear that Congress’s inability to establish new guardrails in the last 18 months leaves U.S. elections vulnerable to widespread, AI-enabled mischief… the first hints of which we saw conducted by domestic actors in this year’s primary season. While Congress hasn’t accomplished anything on this front, we’ve seen states across the ideological spectrum take the lead in passing some of the first guardrails around the use of AI in elections, including Alabama, Texas, Michigan, Florida and California. Unfortunately, none of these guardrails are likely robust enough to impact foreign influence actors.

Many of the witnesses before us today have stressed that they have not yet seen significant impacts from AI in the foreign influence campaigns they track. And they have warned against the risk of exaggerating these capabilities – and thereby inflating Americans’ perception of the efficacy and scope of foreign influence.

While I very much take this warning to heart, I remain convinced that these tools will materially shape the foreign influence environment in coming years.

Third, we’ve witnessed increasingly large numbers of Americans – of all political stripes – who simply do not trust key U.S. institutions, from federal agencies and local law enforcement to mainstream media institutions. This is coupled with an increased reliance on virality-driven, easily manipulated internet media platforms.

Repeated IC assessments have noted the extent to which many foreign adversaries – including Russia, China, and Iran – capitalize on and exacerbate this trust gap as a primary objective of their influence campaigns. Shattering the capacity for American social and political consensus is, ultimately, a long-term goal of foreign adversary efforts.

And fourth, since 2022, we’ve seen a concerted litigation campaign that has sought to undermine the Federal Government’s ability to share vital threat information with U.S. social media platforms… and frankly, to bully many of the open source and academic researchers working on these issues into silence. We’ve seen, for instance, the shuttering of the election disinformation work at Stanford’s Internet Observatory, as well as the termination of a key research project at Harvard’s Shorenstein Center. Many academic researchers have publicly stated that they’re ending their work on these topics after sustained legal intimidation.

Sadly, since 2022, we’ve also seen social media companies – including some of the companies before us today – stepping back from their public commitments to invest in platform integrity. And we’ve seen the rise of a dominant social media platform – TikTok – headquartered in a country assessed to conduct election influence campaigns.

In our last open hearing on election security, we heard about what the federal government is doing to detect and disrupt efforts by foreign actors and adversaries ahead of the election. This Committee will continue to hold additional classified sessions with government briefers before and after Election Day.

But as our Committee’s bipartisan report concluded, combatting these insidious foreign threats depends on a whole-of-society effort, which is why I hope we’ll hear that U.S. technology firms are taking all relevant steps… including details on concrete new initiatives they are applying in the 48 days ahead of Election Day. And – perhaps just as importantly, particularly given efforts by foreign adversaries to sow post-election doubts in 2020 – what they are doing to combat foreign election influence after Election Day.
