Social Media Execs Submit to Time-Honored Public Lashing Before Congress

Chris Cox, Chief Product Officer for Meta, Neal Mohan, Chief Product Officer for YouTube, Vanessa Pappas, Chief Operating Officer for TikTok and Jay Sullivan, General Manager of Bluebird, Twitter, testify during a Senate Homeland Security and Governmental Affairs committee hearing to examine social media’s impact on homeland security, Wednesday, Sept. 14, 2022, on Capitol Hill in Washington.
Photo: Alex Brandon (AP)

Executives at four social media giants appeared yesterday before a Senate committee to receive what’s become a traditional biannual walloping on C-SPAN over the host of ills their products routinely visit upon their users and the rest of the world.

Members of the Senate Homeland Security Committee laid into Meta over the deluge of child-sexual-abuse material traversing its platforms; into TikTok over the (potentially) unique risks posed by its Chinese ownership; and into each of the platforms over the various roles they’ve played in spreading QAnon conspiracies and misinformation about vaccines and elections more broadly.

“We know that social media has offered unprecedented connectivity, and that’s often very positive,” said Sen. Rob Portman, the committee’s ranking Republican. “But we also know it has raised serious concerns for our children, our civic culture, and our national security. Terrorists and violent extremists, drug cartels, criminals, authoritarian regimes, and other dangerous forces have used social media in furtherance of their goals. They’ve exploited your platforms.”

The lawmakers took their captive witnesses to task hours after the same committee heard separately from former vice presidents at Twitter and Facebook, who explained, in short, that placing trust in their erstwhile employers would be sheer folly: “Today you don’t know what’s happening with the companies. You have to trust them,” said Brian Boland, who until 2020 was one of Meta’s longest-tenured corporate officers, adding: “I lost my trust with the companies with what they were doing, and what Meta was doing.”

Chairman Gary Peters, a Democrat of Michigan, established a thread early on, building off the testimony of the former executives, who he said had portrayed the social networks as lacking any real financial incentive to prioritize user safety. Instead, “like any for-profit company,” he argued, “your incentives are to prioritize user engagement, grow your platforms, and generate revenue.”

In opening remarks, each of the companies — Meta, Twitter, YouTube, TikTok — would try challenging this narrative.

“I care deeply about the work we do to help people connect with things and the people they care the most about,” Chris Cox, chief product officer at Meta, told the committee, emphasizing that he was one of the first 15 coders at the company. Reading from prepared notes, he continued: “It’s important to us that we help people feel safe on our apps. And we stand firmly against the exploitation of social media by those committed to inciting violence and hate.”

“That’s why we prohibit hate speech, terrorism, and other harmful content,” he said.

Cox went on to describe methods by which Meta enforces its policies — the hiring of global content review teams, and the billions invested in moderation technology. Yet the thrust of his argument seemed to lie in those first few sentences, which revealed little but served to distance the company from the problem in subtle ways. Why does Meta oppose violence and hate? As Cox seems to understand it, violence and hate are something happening to social media, not because of it. Social media is itself, he explained, a victim. And why, specifically, is it “important” to Meta that Meta “help people feel safe”? (One could be forgiven for thinking Cox’s care simply runs that deep.)

A veritable word salad of explanations followed, as Cox further defined security and safety as “key to the product experience,” and “core to our ethos,” rules for which are enforced through use of “industry-leading technology.”

Neal Mohan, chief product officer at YouTube, leaned instead into portraying his Google-owned employer as a steward to an army of entrepreneurs contributing substantially to America’s GDP, quoting a report from a business forecaster with whom YouTube “worked closely.” YouTube’s “openness” — the impetus for its “creator economy,” he explained — works “hand in hand” with the company’s “responsibility” to safety, which he described as its “number one priority.”

Mohan’s testimony came a day after another report, by disinformation researchers at Bot Sentinel, which described, as Rolling Stone put it, “a pattern of unchecked hate speech, misogyny, racism, and targeted harassment singularly focused on famous and identifiable women.” In it, Bot Sentinel founder Christopher Bouzy is quoted saying: “YouTube is to blame. A lot of these folks would not do what they’re doing if YouTube was not rewarding them. And let’s be clear here, they are rewarding them.”

Mohan offered another, if purely anecdotal, response to the question of incentives: “The overwhelming majority of creators, viewers, and advertisers don’t want to be associated,” he said, with harmful and problematic content. “Meaning, it’s also bad for our business.” Therein lies the argument proffered by most, if not all, of the major social networks; one that runs counter to another popular narrative — that enragement drives engagement.

Whatever their desires, a Pew Research study in 2017 found that, on Facebook, “indignant rhetoric” and politically divisive posts were “far more likely to elicit user engagement than posts that did not.” That same year, Facebook’s ranking algorithm began to treat “emoji reactions” as five times more valuable than mere likes, on the theory that posts eliciting such reactions were far more likely to keep users engaged. These included “angry faces,” which the company learned — two years after the experiment began — were disproportionately associated with “misinformation, toxicity and low-quality news,” according to the Washington Post. Red flags were raised, but occasionally brushed aside, as employees engaged in Socratic debates over the virtues of fostering the full range of human emotions rather than focusing simply on the impact of the product on society and its politics.

Facebook researchers would note in since-leaked documents that the loudest, most active political groups on its platform were those dedicated to spreading hoaxes about vaccines and other health measures.

Sen. Peters cited Meta CEO Mark Zuckerberg on the subject at the top of the hearing. In a post titled “A Blueprint for Content Governance and Enforcement,” Zuckerberg wrote: “One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content.” He went on to say that research has shown that, no matter where Facebook draws the line, the closer a piece of content gets to it, the more engagement it receives. He offered no concrete solutions, except to say that it is “worth considering” that Facebook “should simply move the line defining what is acceptable.” He defined this work as the “most important” underway at the company.

Cox, in what has become a staple of testimony from Meta, vehemently denied that the company’s goal is to increase the amount of time users spend on the app. But such arguments seem to hinge largely on wordplay. As we’ve noted in the past, Meta’s own financial disclosures have for years warned investors that its revenue stream would face serious risk from users simply choosing to “decrease their level of engagement with our products.”

Yet on the flip side, Facebook, in particular, faces serious issues with retention, and the quality of information on its app — or lack thereof — is one reason that users are choosing to abandon the platform, as its own internal research has shown. The company is essentially walking a highwire, doing its best to keep users engaged with high-volume, revenue-generating content while avoiding the cost and fatigue that inevitably follows being embroiled in quotidian cage matches over divisive political issues.

Turning to TikTok, Sen. Portman led with questions about the company’s ties to China, saying that while subject to the laws of the United States, the company also remains subject “to the laws of other countries in which it operates.” More than half of America’s youth, he said, have joined the video-sharing platform. Vanessa Pappas, TikTok’s chief operating officer, fielded a range of questions replete with vague semantic distinctions:

“Does TikTok have an office and employees in Beijing?” Portman asked, to which Pappas replied, in part: “TikTok does not operate in China.” “Do you have employees in Beijing?” Portman pressed again. “Yes we do, as do many global tech companies,” Pappas replied. “And is your parent company, ByteDance, headquartered in China?” Portman asked. “No, they are not,” Pappas said. “ByteDance was founded in China, but we do not have an official headquarters. It’s a global company.” Asked a second time where ByteDance is headquartered, Pappas reiterated: “We are a distributed company, we have offices around the world.” “You have to be headquartered somewhere,” Portman said, “and I think it’s the Cayman Islands.”

TikTok, which is owned by ByteDance, which was founded in China but is incorporated in the Cayman Islands, continues to face a bevy of suspicion over its Beijing-based employees gaining access to information gathered about users in the United States, as first reported by BuzzFeed in June. China, like every major world player, has enormous interest in gathering intricate data on its geopolitical rivals. But the singular focus of the U.S. government on TikTok, given the relative ease with which data on Americans can be bought and sold on the global market, has left some experts questioning its motives.

Lily Hay Newman, writing for Wired this month, noted the peculiarity of the situation, as now multiple White House administrations have threatened to sanction or take even more stringent measures against China: “Huge quantities of sensitive data about people living in the US are already available in various forms for purchase or the taking through other public social media platforms, the digital marketing industry, data brokers, and leaked stolen data troves… So, is it protectionism? Xenophobia? Special insight into US national security?”

Indeed, if access to the personal data of Americans poses a unique national security risk, then why is Congress sitting on its hands while multinational companies buy and sell it daily as if it were a commodity — or worse, as Facebook has repeatedly done, treat the data itself no differently than currency?

“TikTok does not operate in China. The app is not available,” Pappas said Wednesday. “As it relates to our compliance with law, given we are incorporated in the United States, we comply with local law.” The question lawmakers might ask — rather than whether social media executives trust in the goodwill of the Chinese Communist Party — is whether their own governance of Americans’ privacy is sorely deficient in virtually every respect.
