The Justice Department today proposed a series of section 230 “reforms” to “realign the scope of Section 230 with the realities of the modern internet,” and Sen. Josh Hawley (R., Mo.) introduced a bill that would deny section 230 liability protections to “Big Tech companies” unless they include a pledge to “operate in good faith” in their terms and subject them to a fine if they violate those pledges.
The Justice Department said its proposals—which include carve-outs from the liability protection for content that involves child abuse, terrorism, and cyber-stalking and greater transparency about content moderation—are aimed at providing “stronger incentives for online platforms to address illicit material on their services, while continuing to foster innovation and free speech.”
These actions follow a recent executive order by President Trump aimed at limiting companies’ ability to claim the liability shield for third-party content in section 230 of the 1996 Communications Decency Act if they remove or limit access to content (TR Daily, May 28). Among the actions called for in the executive order were Justice Department recommendations for legislative changes to section 230.
Among the provisions of section 230 is protection from civil liability for third-party content when an interactive computer service provider or user voluntarily acts in good faith to restrict access or availability to material it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
Republicans have complained in recent years that social media platforms like Facebook and Twitter discriminate against conservative content when restricting access to user posts, but some Democrats have criticized social media platforms for not being proactive enough in restricting disinformation, violence-inciting posts, and hate speech.
Specifically, the Justice Department outlined four areas related to section 230 that it said are “ripe for reform.”
The first area would involve “incentivizing online platforms to address illicit content” through carving out from the section 230 “good Samaritan” protection “truly bad actors that purposefully facilitate or solicit third-party content or activity that would violate federal criminal law”; through “exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking”; and through case-specific carve-outs “where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.”
The second area would involve clarifying federal government enforcement capabilities to address unlawful content by making “clear that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government.” The statute already exempts criminal enforcement from its liability immunity.
The third area would be aimed at promoting competition by clarifying that section 230 does not apply to federal antitrust claims. “Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech,” the Justice Department said.
The fourth area would be aimed at promoting open discourse and greater transparency by replacing the phrase “otherwise objectionable” in section 230(c)(2) with “unlawful” and “promotes terrorism”; by providing a statutory definition of “good faith” that “would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others”; and by explicitly overruling the New York Supreme Court’s 1995 pre-CDA ruling in “Stratton Oakmont, Inc. v. Prodigy Services Co.,” which held that Prodigy became liable as a publisher of users’ content by moderating message boards. Instead, the department proposed “clarifying that a platform’s removal of content pursuant to Section 230(c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.”
Meanwhile, Sen. Hawley’s proposed Limiting Section 230 Immunity to Good Samaritans Act, which is co-sponsored by Sens. Marco Rubio (R., Fla.), Mike Braun (R., Ind.), and Tom Cotton (R., Ark.), would create a private right of action for users against “the major Big Tech companies for breaching their contractual duty of good faith,” the senator’s office explained in a press release.
The bill would establish a duty of good faith prohibiting the tech firms from discriminating in the enforcement of their terms of service and from failing to honor their commitments, the press release said. A breach of the duty of good faith would be subject to a payment of “$5,000 plus attorney’s fees to each user who prevails,” it said.
Sen. Richard Blumenthal (D., Conn.), who has called for restrictions on section 230 liability immunity and who authored legislation to carve out sex-trafficking content from section 230, said, “I’ve certainly been one of Congress’ loudest critics of Section 230, but I have no interest in being an agent of Bill Barr’s speech police. Whatever the criticisms I may have of the current law, using overblown claims of online ‘anti-conservative bias’ to suppress free speech is absolutely unacceptable. My bipartisan efforts to narrow the scope of Section 230 have been carefully crafted to address Big Tech’s most egregious failures to protect survivors of human trafficking and children who have been horrifically abused, while including ironclad protections for free expression. I’m deeply concerned that President Trump and Attorney General Barr are exploiting Big Tech’s complicity in human misery to advance their own political agenda.”
Much of the immediate reaction from tech-sector stakeholders to the Justice Department’s recommendations and Sen. Hawley’s bill was critical.
Regarding the Justice Department proposal, Computer & Communications Industry Association President Matt Schruers said, “This is a shockingly ill-conceived proposal. Amid a pandemic, pervasive racial injustice, in an election season, the Justice Department proposes to remove from this critical statute the language that provides legal certainty for the removal of everything from coronavirus misinformation to racism to disinformation by foreign intelligence operatives. Why would the Justice Department want to limit companies’ ability to fight these threats?”
Regarding the Hawley bill, Information Technology and Innovation Foundation Vice President Daniel Castro said, “Social media companies provide an important platform for users of many different political orientations to share ideas, and content moderation is necessary to limit the spread of misinformation, hate speech, and other objectionable material. Unfortunately, this legislation would expose popular social media services to a wave of new lawsuits that would undercut their ability and willingness to effectively moderate their platforms.”
Mr. Castro added, “While this legislation is unlikely to gain broad support, it represents yet another attempt to intimidate companies from enforcing fair and reasonable online content moderation practices by threatening to poke holes in the intermediary liability protections that have formed the legal foundation of the Internet economy. There is room for debate on reforms, but these reforms should be focused on reducing unlawful activities, not restricting lawful speech.”
Commenting earlier in the day on the Hawley bill and the then-still-expected Justice Department recommendations, Public Knowledge Legal Director John Bergmayer said, “While there is room to debate how platforms can better address illegal and harmful content, some details of these proposals seem intended to directly regulate the content moderation choices that platforms make in ways that are flatly unconstitutional. Platforms are, and should be, free to make editorial decisions about what content to allow on their services, and to apply their subjective judgment as to what content is ‘objectionable,’ and what content is not.”
Mr. Bergmayer added, “The government of course has a role in ensuring that unlawful content is taken down, and in limiting the harms caused by content that platforms distribute. But this cannot be a cover for overriding a platform's editorial choices, however much particular policymakers might disagree with them. We welcome proposals designed to address legitimate online harms and to give users more rights of redress, including the right of users to challenge take-downs they believe are mistaken. Measures that would increase transparency or, in some areas such as public health information, promote consistency across platforms, may also be valuable. But these efforts cannot be overshadowed by efforts to override the editorial decisions that platforms must make every day, which could subject platforms to lawsuits for taking down harmful, abusive, and misleading content.”
He said, “Speech regulations of this kind are, if anything, more likely to lead to platforms taking a ‘hands off’ approach to content moderation, stymying efforts to get platforms to do more to combat online misinformation, fraud, criminality, and abuse. People who find the editorial and content choices of major platforms objectionable should support measures that empower users, such as interoperability and competition rules that allow users, not the government, to decide what kind of platform they want to use.”
Regarding the Hawley bill, Internet Association interim President and Chief Executive Officer Jon Berroya said, “Section 230 is the law that empowers online platforms to moderate and delete harmful content that no reasonable individual would want online. Opening up those moderation decisions to second-guessing via a never ending slew of frivolous lawsuits would not make the internet better or safer. The First Amendment exists to protect individuals and entities from exactly this type of governmental intrusion into private activity, something courts have repeatedly affirmed.”
And, like Public Knowledge, commenting ahead of the expected Justice Department recommendations, Mr. Berroya said, “Rolling back Section 230 protections will make it harder, not easier, for online platforms to make their platforms safe. The world before Section 230 was one where platforms faced liability for removing things like spam or profanity. Weakening Section 230 brings us closer to that world. The threat of litigation for every content moderation decision would hamper IA member companies’ ability to set and enforce community guidelines and quickly respond to new challenges in order to make their services safe, enjoyable places for Americans.”
Commenting ahead of the expected Justice Department recommendations, Free Press Action Senior Policy Counsel Gaurav Laroia said, “The Department of Justice’s Section 230 proposal is unworkable, unconstitutional and would make the internet nearly unusable. There are perhaps workable fixes to Section 230 that would make companies more proactive in taking down harassing content, scams and other criminal activity occurring on their sites. Attempting this through an avalanche of civil suits is a mistake. We can combat activity that is already illegal without Attorney General Bill Barr’s radical changes to the statute.”
Mr. Laroia added, “The proposal would reportedly involve the government in determining whether platforms are acting in good faith when they attempt to enforce their terms of service and community guidelines. That would clearly violate the First Amendment, which protects the speech of private entities from government interference. No matter how much the administration would like to pretend otherwise, the First Amendment definitely doesn’t protect presidents from private parties.”
He continued, “The Barr proposal invites the government to evaluate whether websites are consistently applying their terms of service, and requires them to provide the government with explanations for content removals. This is conservative speech-policing cloaked in talk of fairness and transparency. The administration has repeatedly questioned the ability of sites like Facebook and Twitter to flag misinformation or violent rhetoric from the president. This plan is an unconstitutional attempt to give the Trump administration the power to stop critics and fact-checkers alike from commenting on this president’s violent and dangerous pronouncements.” —Lynn Stanton, [email protected]