James Goodale, the former vice chairman of the New York Times, published an article on Friday in the New York Law Journal (registration required) on CDA 230 and the highly publicized Doe v. Ciolli case. Goodale argues that CDA 230, the federal law that shields providers of "interactive computer service[s]" from liability for defamation and other torts for publishing the statements of third parties, should be amended to impose liability in cases where a website operator "knowingly causes defamation by refusing to take down libelous posts." Goodale, a distinguished media lawyer, is not alone in his concern that Congress and the courts have "gone too far" in the direction of protecting website operators at the expense of individuals whose reputations may have been damaged. The argument depends, to a large extent, on the claim that CDA 230 somehow leaves injured plaintiffs with no remedy or recourse for the harm done to them. (See, for instance, Ron Coleman's post on December 7.)
But the simple reality is that courts routinely order discovery of the identities of anonymous and pseudonymous posters. This is nothing new -- courts have been doing so since the early days of the Internet. True, some courts have adopted heightened standards before ordering discovery of an anonymous poster's identity (Doe v. Cahill, Dendrite Int'l v. Does, Mobilisa v. Doe, Greenbaum v. Google), but these courts are not creating a situation where no recourse is available for a legitimately injured plaintiff. They simply ask for some evidence to support the claim before going forward, and they don't require evidence on those elements that are nearly impossible to prove without knowledge of the defendant's identity, such as actual malice. All this is consistent with Goodale's own view that "[t]here are excellent First Amendment reasons, in the ordinary course, not to unmask [anonymous posters]." What's more, some court decisions still allow discovery based on "good faith" or facially sufficient allegations (Alvis Coatings v. Doe, In re Subpoena Duces Tecum to America Online). Just this fall, in Essent v. Doe, a Texas state court applied a good-faith standard and ordered discovery of the identity of an anonymous blogger who writes critically about a hospital in Paris, Texas. The blogger petitioned for a writ of mandamus, but there is no guarantee that the appellate court will be swayed by the reasoning in Cahill, Dendrite, et al.
To his credit, Goodale recognizes that the law is still developing in this area, and that even the more rigorous standards are not unreasonably high -- "if one can make out a pretty good case of libel, then the court will unmask the speaker." So, what's the problem? Why do we need to go after website operators when a plaintiff can pursue the wrongdoer directly? Coming up with sufficient evidence shouldn't be that difficult -- (1) identify the statement with specificity; (2) draft an affidavit outlining why it's not true; (3) document some economic harm if the statement is not libel per se. Remember, the hard part -- establishing the requisite degree of fault -- comes later, after the defendant's identity has been disclosed. Goodale emphasizes that the plaintiffs in Doe v. Ciolli are private figures. So much the better -- plaintiffs like them need only prove negligence (at a later stage in the litigation anyway), and courts like Mobilisa and Dendrite that require an independent balancing of the parties' interests can take this factor into account. In fact, Goodale recognizes it as a "good bet" that the court in the AutoAdmit case will order disclosure of the identities of the anonymous posters.
Instead, according to Goodale, the problem is Tor, a software program that disguises your IP address by relaying your web requests through a chain of proxy servers, so that the destination website sees only the last relay's address rather than yours. Goodale writes:
Anyone can download the instructions for Tor (I did), including terrorists. While apparently invented by the U.S. Navy and useful for legitimate purposes, U.S. Intelligence officials are reportedly panicked by potential (or actual) use of the system by terrorists. Tor's inventors boast in their materials there is no 'backdoor.' By that they mean there is no way to defeat the system. They even go further to say if intelligence officials ask them to take Tor off the Web, they will fight them in court.
And so, the Yale law students may never find out who defamed them.
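For readers unfamiliar with the mechanics Goodale is describing, a toy Python sketch of onion routing -- the layered-relay technique Tor is built on -- may help. Everything here is invented for illustration: the relay names, the `build_onion`/`peel` API, and the use of base64 as a stand-in for encryption. Real Tor encrypts each layer to a specific relay's key, so only that relay can remove it.

```python
# Toy sketch of onion routing (NOT real Tor, and no real cryptography).
# base64 stands in for the per-hop encryption layer; in real Tor each
# layer is encrypted so that only one relay can peel it.
import base64
import json

def build_onion(message, circuit, destination):
    """Wrap `message` in one layer per relay; each layer names the next hop."""
    payload = message
    hops = circuit[1:] + [destination]   # where each relay should forward to
    for next_hop in reversed(hops):      # innermost layer first, entry layer last
        layer = json.dumps({"next": next_hop, "data": payload})
        payload = base64.b64encode(layer.encode()).decode()
    return payload                        # this is what the client sends to circuit[0]

def peel(onion):
    """What a single relay does: remove one layer, learning only the next hop."""
    layer = json.loads(base64.b64decode(onion))
    return layer["next"], layer["data"]

# A three-relay circuit: the entry relay learns who sent the onion but not
# where it ends up; the exit relay learns the destination but not the sender.
circuit = ["relay-A", "relay-B", "relay-C"]
onion = build_onion("GET /forum/thread-42", circuit, "example.com")

hop1, inner1 = peel(onion)    # relay-A sees only "relay-B"
hop2, inner2 = peel(inner1)   # relay-B sees only "relay-C"
hop3, message = peel(inner2)  # relay-C sees the destination and payload
```

The point for the legal debate: because no single relay sees both the sender and the destination, a subpoena to any one operator in the chain yields very little -- which is what makes Tor-routed posts hard to trace back to a defendant.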
Tor is a pretty slim reed on which to urge dilution of a law that has played such a substantial role in the development of a vibrant and open Internet, and I'm not sure what terrorists have to do with it. Tor is pretty well known here at the Berkman Center. Geeks and bloggers operating under oppressive regimes also likely are fans. But this is hardly a widespread phenomenon -- somehow I doubt that "The Ayatollah of Rock-n-Rollah" and "Sleazy Z" knew about Tor (or had the foresight/patience required to use it correctly) when they made their vulgar postings on AutoAdmit, and there's certainly no indication that Tor was involved here -- nor in any other online defamation case. It strikes me as a speculative basis for an argument with such potentially huge ramifications.
What is not speculative is that CDA 230 has provided vital breathing space for the development and operation of the interactive Internet -- what is now known by the cliche Web 2.0. The benefits have been real and dramatic. As Duncan Riley wrote today on Techcrunch:
Whilst it may be easy to mock the utterances of hundreds of millions of bloggers and social networking site users, the 21st century will be remembered as the time that communication was democratized, a time where the power of a few was replaced by the power of many.
What CDA 230 does is free website operators from having to make constant judgment calls about whether or not particular statements are defamatory. Recall that they usually will have to make this call blind, with no information about the underlying facts and no special legal expertise. Sure, it's easy to see that statements like "She has herpes" and "she likes having group sex while family members watch" are probably defamatory, but Doe v. Ciolli is a particularly oddball case. How about "He's a traitor to his country"? How about calling someone a "Kahanist swine"? (See my post last week on Neuwirth v. Silverstein for the latter statement.) Is it reasonable to expect a website operator to know what's true and what's not? Does a website operator know what's protected opinion and what's not, or what is a public versus a private issue? Surely the unsupported specter of widespread use of Tor shouldn't be a sufficient boogeyman to threaten the interactivity of the Internet.
Moreover, if the law demanded that website operators make this kind of determination, they would simply take down third-party content at the first hint of trouble. There would be no way to limit this effect to "real" defamatory speech aimed at private parties. In fact, what we have observed in many defamation threats against websites is that the objects of legitimate, often important criticism seek to use the threat of legal action as a way to censor or chill speech they don't like. Subjecting websites to a requirement that they take down critical third-party speech upon demand or face liability would eliminate a great deal of important, socially valuable discussion of precisely the sort Goodale ordinarily would fight to protect.
(Disclaimer: James Goodale is on the CMLP Board of Advisors.)