TR Daily
News
Thursday, September 10, 2020

Sen. Lee Slams Big Tech on Content Moderation, Plans Hearing

Sen. Mike Lee (R., Utah), the chairman of the Senate Judiciary Committee’s antitrust, competition policy, and consumer rights subcommittee, said today that the responses he has received from Alphabet, Inc., Facebook, Inc., Squarespace, Inc., and Twitter, Inc., to his requests for an explanation of their content-moderation practices “were completely unpersuasive” in their denials of an anti-conservative bias.

He also announced the final witness list for a Sept. 15 subcommittee hearing on whether Alphabet’s subsidiary Google has harmed competition in online advertising (TR Daily, Sept. 8). Google President–global partnerships and corporate development Don Harrison will be the only witness on the first panel. A second panel will include Adam Heimlich, chief executive officer of Chalice Custom Algorithms; David Dinielli, senior advisor–beneficial technology for Omidyar Network; and Carl Szabo, vice president and general counsel of NetChoice.

The hearing is scheduled to begin at 2:30 p.m. in Room 562 of the Dirksen Senate Office Building and will be streamed online from the committee’s website.

In July, Chairman Lee asked the companies’ chief executive officers to account for content-moderation practices that he described as “wielding their power unilaterally to silence opinions they dislike” and to provide transparency into how they police their platforms. Among other things, he asked for their standards for removing content related to COVID-19; “violent riots, and how you distinguish them from peaceful protests”; “hate speech”; “protections for the unborn”; misinformation; and “terrorist influence” (TR Daily, July 31).

Today, he said, “I continue to be concerned about the ideological discrimination going on at these firms and I believe further oversight will be necessary in order to obtain the facts and answers that the American people deserve.”

In a letter dated Sept. 1 and released by Sen. Lee today, Google Vice President–government affairs and public policy Markham Erickson said, “To be clear, our content moderation standards are apolitical, unbiased and do not preference one point of view over another. We apply our policies to all content creators across the board and will not allow any form of political bias. Our platforms empower a wide range of people and organizations from across the political spectrum, giving them a voice and new ways to reach their audiences. Some of our biggest critics on the right and left have gotten millions of views and subscribers through our platforms.”

Mr. Erickson explained that content can be removed for violating laws in a country in which a platform operates or for violating content policies or community guidelines for a specific product or service, such as YouTube.

In response to a question about whether Google coordinates the removal of specific content with other platforms, Mr. Erickson said, “Through the Tech Coalition, we make cutting-edge technology available to qualifying industry and non-governmental organizations for free in order to help identify, remove, and report illegal CSAM [child sexual abuse material] more quickly and at a greater scale.”

In addition, he said that the Global Internet Forum to Counter Terrorism (GIFCT) “allows participating companies and organizations to submit hashes, or ‘digital fingerprints,’ of identified terrorist and violent extremist content to a database so that it can be swiftly removed from all participating platforms. By sharing best practices and collaborating on cross-platform tools we have been able to increase our hash-sharing database to 300,000 hashes.”
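The hash-sharing mechanism Mr. Erickson describes can be illustrated with a short sketch: a participating platform computes a digest, or “fingerprint,” of an identified file and checks new content against a shared set of digests contributed by other participants. The Python sketch below is a hypothetical illustration only; GIFCT’s actual system uses its own hashing methods and infrastructure, and the function names and the use of SHA-256 here are assumptions made for demonstration.

import hashlib

# Shared set of hex digests ("fingerprints") of previously identified content.
# In practice this would be the consortium-maintained database; here it is a
# plain in-memory set for illustration.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    # Compute a SHA-256 digest of the raw content bytes.
    return hashlib.sha256(content).hexdigest()

def report(content: bytes) -> None:
    # A participant adds a newly identified item so other platforms can match it.
    shared_hashes.add(fingerprint(content))

def is_known(content: bytes) -> bool:
    # Any participant can check an upload against the shared database.
    return fingerprint(content) in shared_hashes

Production systems of this kind often rely on perceptual hashes rather than cryptographic digests so that re-encoded or lightly edited copies still match; the exact-match logic above would catch only byte-identical files.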

In a Sept. 4 letter, Facebook Vice President–U.S. policy Kevin Martin said, “[W]e asked former Senator Jon Kyl to conduct a review of potential anticonservative bias at Facebook. He and his team at Covington & Burling met with more than 130 conservative politicians and organizations and produced a report outlining the key concerns they heard as well as the changes Facebook has already made to address them. These changes include making our decisions more transparent by providing more information as to why people are seeing specific posts on News Feed; helping Page managers see when enforcement action takes place; launching an appeals process; creating a new Oversight Board for content, made up of people with a diverse range of ideological views; and changes to how we label ads concerning social issues, elections, or politics. The Kyl report also addressed our policy on banning images of patients with medical tubes, which had been applied inconsistently in the past, and was inadvertently impacting pro life advertising. This was an issue you raised with us in a Senate Judiciary Subcommittee hearing last year, as well. We incorporated that feedback and adjusted our policies to try to prevent these unintended consequences. We take the concerns set forth in the report seriously, and we will continue to work with Senator Kyl and his team to examine and, where necessary, adjust our policies and practices going forward.”

In a letter dated Aug. 31, Twitter Head–U.S. federal policy Lauren Culbertson said, “Twitter does not use political viewpoints, perspectives, ideology or party affiliation to make any decisions, whether related to automatically ranking content or how we enforce our rules.”

Ms. Culbertson described a variety of false, misleading, or unverifiable claims regarding COVID-19 that Twitter does not allow. With respect to how Twitter differentiates between violent riots and peaceful protests, she said, “Twitter prohibits individuals to use Twitter to make violent threats. We define violent threats as statements of an intent to kill or inflict serious physical harm on a specific person or group of people. Under this policy, an individual cannot state an intention to inflict violence on a specific person or group of people.”

In a two-page Aug. 20 letter that did not respond specifically to each of the 11 questions posed by Sen. Lee in July, Squarespace Chief Executive Officer Anthony Casalena said, “Our share of the [content management system (CMS) platform] market has been estimated by third parties to be less than 3% among CMS platforms. Given that your letter suggests that some of the other recipients may exercise monopoly power, we want to be clear that Squarespace has neither monopoly power nor market power. We are also not in the same market as the other recipients of your letter.”

He added, “Moreover, we do not provide a free service, we are not an advertising platform, we do not provide advertising on the websites we host, and we do not promote or feed content to users. Rather, we operate a paid service for publishing websites and sell subscriptions to our customers.”

Mr. Casalena continued, “We understand your inclusion of Squarespace in the letter to be based on the removal of the website americasfrontlinedoctors.com which was hosted on our platform. In that case, we received multiple complaints about the website and a video accessible through the website. In reviewing those complaints, we found that the video claimed that there was a cure for COVID-19. The FDA has said that such content could lead to serious and life-threatening harm.”

He further said that many of the questions posed by Chairman Lee “do not relate to us because we do not have market power, we do not have a comments section, and we do not engage in content removal coordination with other online platforms or competitors. These questions also do not appear to be tailored to paid CMS platforms like Squarespace.” —Lynn Stanton, [email protected]

