On the day of the January 6 attack on the Capitol Building, Twitter suspended then-President Donald Trump’s account because of the risk of further incitement of violence. Some audiences saw the suspension as an appropriate action for violation of Twitter’s policies, while other groups called the suspension an attack on free speech. Twitter’s actions reignited longstanding debates about ways in which social media can promote or impede free speech and democratic engagement.
At a virtual NYU Law Forum sponsored by Latham & Watkins on April 15, Dean Trevor Morrison moderated a discussion on social media’s role in spreading misinformation and hate speech, liability protections granted to social media companies under Section 230, and possibilities for regulating social media platforms.
Selected remarks from the discussion:
Christopher Jon Sprigman, Murray and Kathleen Bring Professor of Law, NYU Law
“Section 230 gives online service providers like Facebook or Google immunity for third-party content, content that is posted to those platforms by third parties. Section 230 also protects online service providers when they act in good faith to remove content that they find to be obscene or threatening, harassing, or otherwise objectionable. Even if that content is constitutionally protected, the service provider is shielded from liability. Now, what Section 230 doesn’t do: so, Section 230 doesn’t apply to federal criminal liability, it doesn’t apply to electronic privacy claims, it doesn’t apply to violations of federal and state sex trafficking laws. And most importantly, it does not give online service providers immunity for content that they themselves produce.” (video: 21:38)
Faiza Patel ’91, director, Liberty & National Security Program, Brennan Center for Justice, NYU Law
"I am concerned [about companies’ decisions regarding content moderation and removal], again, you know, because we don’t have enough information about who is impacted by [social media] takedowns and content moderation generally. We have our suspicions, we have…anecdotes here and there, so I think that any sort of changes in the liability regime must be accompanied by some kind of mandatory disclosure requirement, at least the larger platforms." (video: 1:22:42)
Watch the full discussion on video: