
School district has to stop filtering web content

PFLAG v. Camdenton R–III School Dist., 2012 WL 510877 (W.D.Mo. Feb. 16, 2012)

Several website publishers that provide supportive resources directed at lesbian, gay, bisexual, and transgender (LGBT) youth filed a First Amendment lawsuit against a school district over the district’s use of internet filtering software. Plaintiffs asked the court for an injunction against the district’s alleged practice of preventing students’ access to websites that expressed a positive viewpoint toward LGBT individuals.

The court granted a preliminary injunction. It found that by using URL Blacklist software, the district (despite its assertions to the contrary) engaged in intentional viewpoint discrimination, in violation of the website publishers’ First Amendment rights. The URL Blacklist software — which relied in large part on dmoz.org — classified positive materials about LGBT issues within the software’s “sexuality” filter, and it put LGBT-negative materials under “religion,” which were not blocked.

The court found that the plaintiffs had a fair chance of success on the merits of their First Amendment claims. The school district had claimed it was simply trying to comply with a federal law that required the blocking of content harmful to minors. But the court found that the chosen method of filtering was not narrowly tailored to meet that interest.

One may wonder whether Section 230 of the Communications Decency Act could have protected the school district in this lawsuit. After all, 47 U.S.C. 230(c)(2)(A) provides that:

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected. . . . (Emphasis added.)

Section 230 would probably not have been much help, because the plaintiffs were seeking injunctive relief, not money damages. An old case called Mainstream Loudoun v. Bd. of Trustees of Loudoun, 24 F. Supp. 2d 552 (E.D. Va. 1998) tells us that:

[Section] 230 provides immunity from actions for damages; it does not, however, immunize [a] defendant from an action for declaratory and injunctive relief. . . . If Congress had intended the statute to insulate Internet providers from both liability and declaratory and injunctive relief, it would have said so.

One could understand the undesirability of applying Section 230 to protect filtering of this sort even without the Mainstream Loudoun holding. If Section 230 completely immunized government-operated interactive computer service providers, allowing them to engage freely in viewpoint-based filtering, free speech would suffer in obvious ways. And it would be unfortunate to subject Section 230 to that kind of analysis, under which it would face a severe risk of being held unconstitutional as applied.

Oregon media shield law did not protect blogger from having to reveal her sources

Obsidian Finance Group, LLC v. Cox, 2011 WL 5999334 (D.Or. November 30, 2011)

Plaintiff filed a defamation lawsuit against defendant, who self-identified as an “investigative blogger” and a member of the “media.” Defendant asked the court to protect her from having to turn over the identity of the sources she spoke with in connection with drafting the allegedly defamatory content. She claimed that she was covered under Oregon’s media shield law, which provides in relevant part that:

No person connected with, employed by or engaged in any medium of communication to the public shall be required by … a judicial officer … to disclose, by subpoena or otherwise … [t]he source of any published or unpublished information obtained by the person in the course of gathering, receiving or processing information for any medium of communication to the public[.]

The court gave two reasons for finding that defendant was not covered by the shield law. First, although defendant thought of herself as the “media,” the record failed to show that she was affiliated with any newspaper, magazine, periodical, book, pamphlet, news service, wire service, news or feature syndicate, broadcast station or network, or cable television system. Thus, according to the court, she was not entitled to the protections of the law in the first instance.

Second, even if defendant were otherwise entitled to those protections, another part of the statute specifically provides that “[t]he provisions of [the shield law] do not apply with respect to the content or source of allegedly defamatory information, in [a] civil action for defamation wherein the defendant asserts a defense based on the content or source of such information.” Because this case was a civil action for defamation, defendant could not rely on the media shield law.

Employee’s Facebook status update was protected by the First Amendment

Mattingly v. Milligan, 2011 WL 5184283 (E.D.Ark. November 1, 2011)

Plaintiff worked in the county clerk’s office. Her old boss, whom she had supported in the election, lost. Her new boss (the newly-elected county clerk) began cleaning house and laid off some of the staff. Plaintiff survived that round of cuts, but lamented those terminations in a Facebook status update. Empathetic comments from county residents ensued.

The new boss found out about the status update and the comments. So he fired plaintiff. She sued, alleging that the termination violated her right to free speech. The boss moved for summary judgment, but the court denied the motion, sending the case to trial.

Here is some of the relevant Facebook content:

Plaintiff’s status update: So this week not going so good bad stuff all around.

Friend’s comment: Will be praying. Speak over those bad things positively.

Plaintiff’s comment: I am trying my heart goes out to the ladies in my office that were told by letter they were no longer needed…. It’s sad.

* * *

Friend’s comment: He’s making a mistake, but I knew he would, too bad….

* * *

Friend’s comment: I can’t believe a letter would be the manner of delivering such a message! I’m with the others…they will find some thing better and tell them this is an opportunity and not a closed door. Prayers for you and friends.

* * *

Friend’s comment: How could you expect anything else from [defendant], he was an…well nevermind.

Courts addressing claims by public employees who contend that they have been discharged for exercising their right to free speech must employ a two-step inquiry: First, the court must determine whether the speech may be described as “speech on a matter of public concern.” If so, the second step involves balancing the employee’s right to free speech against the interests of the public employer.

In this case, the court found the speech to be on a matter of public concern because:

  • the statements were made in a “public domain”
  • those who saw the statements (many of whom were residents of the county) understood them to be about terminations in the clerk’s office
  • some of the comments contained criticism of the termination decision
  • six constituents of the new clerk called his office to complain
  • the press and media had covered the situation

As for the second step in the analysis, namely, balancing the employee’s right to free speech against the interests of the public employer, the court did not even undertake a balancing test, as there simply was no evidence that the status update and the comments disrupted the operations of the clerk’s office.

Online threats made by blogger were not protected by the First Amendment

State v. Turner, 2011 WL 4424754 (Conn. Super. September 6, 2011)

A Connecticut state court held that prosecuting a blogger for posting content online encouraging others to use violence did not violate the blogger’s First Amendment right to free speech.

Defendant was charged under a Connecticut statute prohibiting individuals from “inciting injury to persons or property.” Angry about a bill in the state General Assembly that would have removed financial oversight of Catholic parishes from priests and bishops, defendant posted the following statements to his blog:

  • [T]he Founding Fathers gave us the tools necessary to resolve [this] tyranny: The Second Amendment
  • [My organization] advocates Catholics in Connecticut take up arms and put down this tyranny by force. To that end, THIS WEDNESDAY NIGHT ON [my radio show], we will be releasing the home addresses of the Senator and Assemblyman who introduced bill 1098 as well as the home address of [a state ethics officer].
  • These beastly government officials should be made an example of as a warning to others in government: Obey the Constitution or die.
  • If any state attorney, police department or court thinks they’re going to get uppity with us about this, I suspect we have enough bullets to put them down too

Defendant challenged the application of the state statute as unconstitutional. The court disagreed, finding there to be “little dispute that the defendant’s message explicitly advocat[ed] using violence.” Moreover, the court found the threatened violence to be “imminent and likely.” The blog content said that the home addresses of the legislators and government officials would be released the following day.

Though the court did not find that a substantial number of persons would actually take up arms, it did note, in a nod to 9/11, “the devastation that religious fanaticism can produce in this country.” As such, there was a sufficient basis to say that defendant’s vitriolic language had a substantial capacity to propel action to kill or injure a person.

District judge stays magistrate’s order requiring identification of anonymous defendants

This is a post by Jonathan Rogers. Jon is a licensed attorney in California, with a focus on technology and entertainment law. You can reach him by email at jon@jonarogers.com or follow him on Twitter at @jonarogers.

Faconnable USA Corp. v. Doe, Slip Copy, 2011 WL 2173736 (D.Colo., Jun 2, 2011)

Faconnable issued a subpoena duces tecum to Skybeam, an Internet Service Provider, requesting identifying information about the users associated with two different IP addresses. A magistrate judge denied Skybeam’s motion for protective order, and required Skybeam to provide the requested information. Skybeam sought review of the denial of the protective order with the district court, asking for a stay of the magistrate’s order requiring the disclosure of the information. The court granted the motion to stay.

The court looked at four factors to determine whether it was appropriate to issue a stay against providing the information.

  • the likelihood of success on appeal (to the district judge)
  • the threat of irreparable harm if the stay or injunction is not granted
  • the absence of harm to opposing parties if the stay or injunction is granted
  • any risk of harm to the public interest

The court noted that if the last three factors are in a moving party’s favor, the first factor of likelihood of success is given less importance.

The court determined that if the stay were denied, the ISP would have to disclose the Does’ identities, which could impact their First Amendment interests to speak anonymously. However, if the stay were allowed, the ISP could preserve the information for production later, the only harm being a possible delay for Faconnable’s suit.

The court found that, on balance, the risk of losing First Amendment freedoms was a greater harm than delayed litigation.

Texas supreme court says identities of anonymous bloggers should not be disclosed

In re Does, — S.W.3d —, 2011 WL 1447544 (Texas, April 15, 2011)

The issue of anonymity is a hot topic in internet law. The question of whether an internet user known only by an IP address or username or website name should be identified arises fairly often in the early stages of internet defamation and certain copyright infringement cases. For example, the issue is a big one in the numerous copyright cases that have been brought recently against BitTorrent users who get subpoenas after being accused of trading copyrighted works online.

The Supreme Court of Texas has issued an opinion that protects the anonymity of a couple of bloggers who were accused of defamation, copyright infringement and invasion of privacy by another blogger. The court ordered that a subpoena served on Google (which hosted the Blogger accounts in question) be quashed.

Texas rules of procedure (Rule 202) allow a petitioner to take depositions before a lawsuit is filed in order to investigate a potential claim. The petitioner in this case filed such an action, and Google agreed to turn over the information about the anonymous Blogger users.

But the anonymous bloggers objected, and moved to quash the deposition subpoena, arguing that the findings required for the discovery to be taken had not been made.

The trial court was required to find that:

(1) allowing the petitioner to take the requested depositions may prevent a failure or delay of justice in an anticipated suit; or

(2) the likely benefit of allowing the petitioner to take the requested deposition to investigate a potential claim outweighs the burden or expense of the procedure.

Neither of these findings was made. Petitioner had tried to argue that the findings were not necessary because he had gotten the agreement of Google to turn over the information.

But the court saw how that missed the point. It held that without the required findings, the discovery could not be taken in the face of objections brought by other interested parties (the parties whose identities were at risk of being revealed).

While many courts have evaluated this kind of question using a First Amendment analysis (i.e., is the John Doe’s interest in speaking anonymously outweighed by the plaintiff’s right to seek redress?), the court in this case looked to more general concerns of avoiding litigation abuse. Citing a law review article by Professor Hoffman, the court observed that there is “cause for concern about insufficient judicial attention to petitions to take presuit discovery” and that “judges should maintain an active oversight role to ensure that [such discovery is] not misused.”

Facebook victorious in lawsuit brought by kicked-off user

Young v. Facebook, 2010 WL 4269304 (N.D. Cal. October 25, 2010)

Plaintiff took offense to a certain Facebook page critical of Barack Obama and spoke out on Facebook in opposition. In response, many other Facebook users allegedly poked fun at plaintiff, sometimes using offensive Photoshopped versions of her profile picture. She felt harassed.

But maybe that harassment went both ways. Plaintiff eventually got kicked off of Facebook because she allegedly harassed other users, doing things like sending friend requests to people she did not know.

When Facebook refused to reactivate plaintiff’s account (even after she drove from her home in Maryland to Facebook’s California offices twice), she sued.

Facebook moved to dismiss the lawsuit. The court granted the motion.

Constitutional claims

Plaintiff claimed that Facebook violated her First and Fourteenth Amendment rights. The court dismissed this claim because plaintiff failed to demonstrate that the complained-of conduct on Facebook’s part (kicking her off) “was fairly attributable to the government.” Plaintiff attempted to get around the problem of Facebook-as-private-actor by pointing to the various federal agencies that have Facebook pages. But the court was unmoved, finding that the termination of her account had nothing to do with these government-created pages.

Breach of contract

Plaintiff’s breach of contract claim was based on other users harassing her when she voiced her disapproval of the Facebook page critical of the president. She claimed that in failing to take action against this harassment, Facebook violated its own Statement of Rights and Responsibilities.

The court rejected this argument, finding that although the Statement of Rights and Responsibilities may place restrictions on users’ behavior, it does not create affirmative obligations on the part of Facebook. Moreover, Facebook expressly disclaims any responsibility in the Statement of Rights and Responsibilities for policing the safety of the network.

Good faith and fair dealing

Every contract (under California law and under the laws of most other states) has an implied duty of good faith and fair dealing, which means that there is an implied “covenant by each party not to do anything which will deprive the other parties . . . of the benefits of the contract.” Plaintiff claimed Facebook violated this implied duty in two ways: by failing to provide the safety services it advertised and violating the spirit of the terms of service by terminating her account.

Neither of these arguments worked. As for failing to provide the safety services, the court looked again to how Facebook disclaimed responsibility for such actions.

The court gave more intriguing treatment to plaintiff’s claim that Facebook violated the spirit of its terms of service. It looked to the contractual nature of the terms of service, and Facebook’s assertions that users’ accounts should not be terminated other than for reasons described in the Statement of Rights and Responsibilities. The court found that “it is at least conceivable that arbitrary or bad faith termination of user accounts, or even termination . . . with no explanation at all, could implicate the implied covenant of good faith and fair dealing.”

But plaintiff’s claim failed anyway, because of the way she had articulated it. She asserted that Facebook violated the implied duty by treating her coldly in the termination process, namely, by depriving her of human interaction. The court said the termination process was okay, given that the Statement of Rights and Responsibilities said that Facebook would simply notify users by email in the event their accounts are terminated. There was no implied obligation to provide a more touchy-feely way to terminate.

Negligence

Among other things, to be successful in a negligence claim, a plaintiff has to allege a duty on the part of the defendant. Plaintiff’s negligence claim failed in this case because she failed to establish that Facebook had any duty to “condemn all acts or statements that inspire, imply, incite, or directly threaten violence against anyone.” Finding that plaintiff provided no basis for such a broad duty, the court also looked to the policy behind Section 230 of the Communications Decency Act (47 U.S.C. 230) which immunizes website providers from content provided by third parties that may be lewd or harassing.

Fraud

The court dismissed plaintiff’s fraud claim, essentially finding that plaintiff’s allegations that Facebook’s “terms of agreement [were] deceptive in the sense of misrepresentation and false representation of company standards,” simply were not specific enough to give Facebook notice of the claim alleged.

Court orders anonymous accused Bittorrent defendants to be identified

West Bay One v. Does 1 – 1,653, — F.Supp.2d. —, 2010 WL 3522265 (D.D.C. September 10, 2010)

Achte/Neunte Boll Kino Beteiligungs v. Does 1 – 4,577, — F.Supp.2d —, 2010 WL 3522256 (D.D.C. September 10, 2010)

In mass copyright infringement cases against alleged traders of copyrighted movies via Bittorrent, unknown defendants had no reasonable expectation of privacy in their subscriber information held by internet service provider.

Several unknown “Doe” defendants who were sued for copyright infringement for trading movies via Bittorrent moved to quash the subpoenas that the plaintiff copyright owners served on the defendants’ internet service providers.

The subpoenas sought subscriber information such as the defendants’ names, addresses and MAC addresses, so that they could be named as defendants in the copyright litigation.

Defendants moved to quash the subpoenas, arguing that their subscriber information was private information that should not be disclosed pursuant to a Rule 45 subpoena. The court denied the motions and ordered the subscriber information produced.

The court held that the defendants did not have a reasonable expectation of privacy in their subscriber information held by the internet service providers. It cited to a number of cases that supported this holding, each of which had found that a person loses his or her expectation of privacy in information when that information is disclosed to a third party. See Guest v. Leis (6th Cir.), U.S. v. Hambrick (4th Cir.), and U.S. v. Kennedy (D. Kan.).

In footnotes, the court also addressed the potential First Amendment rights that the defendants would have to engage in anonymous file sharing. It quickly dispensed with any notion that such activities were protected in this case, as the pleadings on file set forth a prima facie case of infringement. “[C]ourts have routinely held that a defendant’s First Amendment privacy interests are exceedingly small where the ‘speech’ is the alleged infringement of copyrights.”

Ohio internet obscenity statute constitutional

American Booksellers Foundation for Free Expression v. Strickland, — F.3d —, 2010 WL 1488123 (6th Cir. April 15, 2010)

Court holds that statute prohibiting distribution of material harmful to minors directly via the internet is not overly broad and therefore not unconstitutional.

Ohio has a statute that criminalizes sending juveniles material that is harmful to those juveniles (ORC 2907.31). Section D of that statute specifically addresses communications “by means of an electronic method of remotely transmitting information.”

A group of booksellers and publishers challenged this statute on First Amendment grounds, arguing that its provisions are overly broad. After a complex procedural journey that began in 2002, the Sixth Circuit Court of Appeals has held that the statute is not unconstitutional.

The court held that the statute was not overly broad because it only applies to personally directed communications. For that reason, the plaintiffs were unable to demonstrate from the text of the statute that a “substantial number of instances exist in which the law cannot be applied constitutionally.”

Unlike a typical First Amendment case, the court did not apply the “strict scrutiny” test for constitutionality, because the statute does not affect protected speech among adults. But the court noted that even if that test applied, the statute would have survived strict scrutiny, given the compelling interest in protecting children from predators.


Is banning sex offenders from social networking sites constitutional?

Mashable and others are reporting on a law that the governor of Illinois signed earlier this week, banning use of social networking sites by convicted sex offenders. The big criticism of that law seems to be that it may be unconstitutional. That question is worth thinking about.

The most likely constitutional challenge will be that the law is too broad. For a law to prohibit certain speech and not run afoul of the First Amendment, it must be narrowly tailored to serve a compelling government interest. Clearly there is a compelling government interest in protecting children and other victims of sex crimes from perpetrators. So the real analysis comes from examining whether this restriction on the use of social networking sites is narrowly tailored to serve that purpose.

What the law says

Let’s back up and take a look at what the new law actually says. In short, it requires any sex offender that is on parole, supervised release, probation, conditional release or court supervision to “refrain from accessing or using a social networking website.” Note that the restriction is not a lifetime ban, but just a restriction to be in effect during the sentence.

There are a number of terms to unpack.

There is a prohibition on “accessing” and “using.” This is kind of redundant, because the statute defines “access” as “to use, instruct, communicate with, store data in, retrieve or intercept data from, or otherwise utilize any services of a computer.” (The redundant part comes from the fact that to “use” is part of the definition of “access”.)

The most important definition for our discussion is that of a “social networking website”:

“Social networking website” means an Internet website containing profile web pages of the members of the website that include the names or nicknames of such members, photographs placed on the profile web pages by such members, or any other personal or personally identifying information about such members and links to other profile web pages on social networking websites of friends or associates of such members that can be accessed by other members or visitors to the website. A social networking website provides members of or visitors to such website the ability to leave messages or comments on the profile web page that are visible to all or some visitors to the profile web page and may also include a form of electronic mail for members of the social networking website.

This is a tortured definition plagued by a couple of run-on sentences, but in essence, a social networking website, as defined under Illinois law, is any site that has:

  • profile pages that contain
  • identifying information such as names, usernames or photographs, and which are
  • linked to other profile pages of “friends or associates” that can be
  • accessed by other members or visitors to the website, and
  • provides the ability to leave messages or comments on the profile visible to others

In a rather strange style for legislative writing, the definition says that a social networking site “may also include” direct messaging. That’s weird to say in a statute — does it have to include direct messaging to be considered a social networking site? One could argue either way. So that part of the definition does nothing to assist.

How one can run afoul of the law

By merely accessing a social networking site, a sex offender violates this new law. He or she doesn’t have to actually use any of the social networking functionality; all that is necessary is to retrieve data from the computer on which the site is stored. Clearly it would be verboten to use MySpace and Facebook. But also off limits would be LinkedIn and Focus. Flickr? YouTube? No way, even if the offender is just going there to passively view content for completely benign purposes.

The constitutional problem

Remember, the law has to be narrowly tailored to meet the compelling state interest. That means that if there is some less restrictive alternative than the law as enacted to fix the problem, the law is too broad and therefore unconstitutional. It would certainly seem that there is something less restrictive than a prohibition on merely visiting a website with social media functionality. A good start would be more aggressively targeting the actual online conduct that might put people at risk — actual online interaction through social media.

But it is far from clear. The Seventh Circuit (which is the federal appellate court that would hear a constitutional challenge to an Illinois law) has held that a convicted sex offender can lawfully be prohibited from visiting a city park. See Doe v. City of Lafayette, 377 F.3d 757 (7th Cir. 2004). In a city park there is plenty of conduct one can undertake that is neither unlawful nor threatening to others. And the court held that restriction was not unconstitutional. Likewise, there is plenty of conduct one can engage in on a “social networking site” as defined by the statute that is not harmful.

Is the comparison between a city park and a social networking site justified?
