
Section 230 protects Snapchat against lawsuit brought by assault victim


A young girl named C.O. found much misfortune using Snapchat. Her parents (the plaintiffs in this lawsuit) alleged that the app’s features caused her to become addicted to the app, to be exposed to sexual content, and to eventually be victimized on two occasions, including once by a registered sex offender.

Suing Snapchat

Plaintiffs sued Snap and related entities, asserting claims including strict product liability, negligence, and invasion of privacy, emphasizing the platform’s failure to protect minors and address reported abuses. Defendants moved to strike the complaint.

The court granted the motion to strike. It held that the allegations of the complaint fell squarely within the ambit of immunity afforded under Section 230 to “an interactive computer service” that acts as a “publisher or speaker” of information provided by another “information content provider.” Plaintiffs “clearly allege[d] that the defendants failed to regulate content provided by third parties” when such third parties used Snapchat to harm plaintiff.

Publisher or speaker? How about those algorithms!

Plaintiffs had argued that their claims did not seek to treat defendants as publishers or speakers, and therefore Section 230 immunity did not apply. Instead, plaintiffs argued, they were asserting claims that defendants breached their duty as manufacturers to design a reasonably safe product.

Of particular interest was the plaintiffs’ claim concerning Snapchat’s algorithms, which recommended connections and which allegedly caused children to become addicted. But in line with Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), the court refused to find that the use of algorithms in this way fell outside the traditional role of a publisher. It was careful to distinguish the case from Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021), in which that court held Section 230 did not immunize Snapchat from products liability claims. In Lemmon, the harm to the plaintiffs did not result from third-party content but rather from the design of the platform, which tempted users to drive fast. In this case, the harm resulted from particular actions of third parties who transmitted content using Snapchat to lure C.O.

Sad facts, sad result

The court seemed to express some trepidation about its result, using the same language the First Circuit Court of Appeals used in Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 15 (1st Cir. 2016): “This is a hard case – hard not in the sense that the legal issues defy resolution, but hard in the sense that the law requires that [the court] … deny relief to plaintiffs whose circumstances evoke outrage.” And quoting Vazquez v. Buhl, 90 A.3d 331 (2014), the court observed that “[w]ithout further legislative action, however, there is little [this court can] do in [its] limited role but join with other courts and commentators in expressing [its] concern with the statute’s broad scope.”

V.V. v. Meta Platforms, Inc. et al., 2024 WL 678248 (Conn. Super. Ct., February 16, 2024)


Fourth Circuit overturns massive jury verdict in copyright case against internet service provider


Plaintiff copyright holders sued defendant internet service provider, alleging both vicarious and contributory copyright infringement arising from defendant’s customers downloading or distributing songs using BitTorrent. The jury found defendant liable and awarded $1 billion in statutory damages. Defendant sought review with the Fourth Circuit. On appeal, the court affirmed the jury’s finding of willful contributory infringement, but reversed on vicarious liability because plaintiffs failed to prove that defendant profited from its subscribers’ acts of infringement, and remanded the action for a new trial on damages.

No vicarious liability

Citing to Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005) and CoStar Grp., Inc. v. LoopNet, Inc., 373 F.3d 544 (4th Cir. 2004), the court observed that “[a] defendant may be held vicariously liable for a third party’s copyright infringement if the defendant ‘[1] profits directly from the infringement and [2] has a right and ability to supervise the direct infringer.’” In this case, the court found that plaintiffs failed to prove that defendant profited directly from its subscribers’ copyright infringement.

The crux of the financial benefit inquiry was whether a causal relationship existed between the subscribers’ infringing activity and defendant’s financial benefit. To prove vicarious liability, plaintiffs had to show that defendant profited from its subscribers’ infringing download and distribution of plaintiffs’ copyrighted songs. The court found that plaintiffs did not meet that burden.

The appellate court disagreed with the lower court’s determination that defendant’s repeated refusal to terminate infringing subscribers’ accounts was enough to show financial benefit for these purposes. Instead, the court found that continued payment of monthly fees for internet service, even by repeat infringers, was not a financial benefit flowing directly from the copyright infringement itself. “Indeed, Cox would receive the same monthly fees even if all of its subscribers stopped infringing.”

The court rejected plaintiffs’ alternative theories for financial benefit. Plaintiffs argued that the high volume of infringing activity on defendant’s network, with roughly 13% of traffic from peer-to-peer activity and over 99% of that being infringing, suggested that the ability to infringe attracted customers to defendant’s internet service. However, the evidence did not conclusively show that customers chose defendant’s service specifically for its potential to facilitate copyright infringement. The argument overlooked the fact that internet service is essential for many aspects of modern life, and there was no specific evidence that defendant’s internet service was selected over competitors due to a more lenient stance on copyright infringement.

Additionally, plaintiffs claimed that defendant’s subscribers were willing to pay more for internet services that allowed for copyright infringement, citing defendant’s tiered pricing and the correlation between peer-to-peer activity and higher data usage. However, there was no substantial evidence to support the claim that subscribers chose higher internet speeds with the intention of infringing copyright. Plaintiffs’ own expert acknowledged that increased data usage could be attributed to numerous legal activities like streaming and gaming. The argument failed to establish a direct link between the desire for higher internet speeds and the intent to infringe copyright, leaving plaintiffs’ assertion that defendant profited from copyright infringement unsubstantiated. Consequently, the court found no basis for vicarious liability on defendant’s part for its subscribers’ copyright infringements, making it necessary to overturn the lower court’s decision on this issue.

Contributory liability upheld

The court upheld the lower court’s determination that defendant was contributorily liable for its subscribers’ infringement, finding that defendant was aware of and materially contributed to the infringing activities. The court emphasized the need for defendant to have knowledge of specific instances of infringement and the substantial certainty of continued infringement by particular subscribers. Despite defendant’s tiered internet services and a variety of lawful uses, the evidence presented at trial demonstrated defendant’s knowledge of repeat infringements and its decision to continue providing service to infringing subscribers, primarily to avoid losing revenue. The court rejected defendant’s arguments against contributory liability, affirming that providing a service with knowledge of its use for infringement, especially when specific instances are known, constituted material contribution to infringement.

But what are the damages now?

Because the $1 billion damages award was not allocated between the two theories of liability, and the jury was instructed to consider various factors, including the profits defendant earned from the infringements, the court could not be sure that the now-reversed vicarious liability verdict did not affect the damages awarded. Given this uncertainty and the significant discretion granted to the jury in determining statutory damages, the court vacated the damages award and remanded for a new trial on the damages issue.

Sony Music Entertainment v. Cox Communications, Inc., 2024 WL 676432 (4th Cir., February 20, 2024)


Peloton did not infringe trademark rights of fitness app maker

Plaintiff used the mark BIKE+ in connection with a fitness tracking app and obtained a federal registration for the mark. It sued defendant Peloton for trademark infringement over Peloton’s adoption and use of the mark PELOTON BIKE+. Defendant moved for summary judgment arguing, among other things, that there was no likelihood of confusion. The court granted defendant’s motion for summary judgment.

Sitting in the Ninth Circuit, the court embarked on an analysis under AMF, Inc. v. Sleekcraft Boats, 599 F.2d 341 (9th Cir. 1979) to assess the likelihood of consumer confusion. It considered several factors, including the strength of the mark, the similarity of the products, marketing channels used, and the intent behind the choice of mark, among others. The court determined that plaintiff’s BIKE+ mark, being descriptive of the app’s functionality to enhance biking experiences, did not possess the inherent distinctiveness that warrants a broad scope of protection. This was compounded by the existence of similar marks in the app marketplace, further reducing the strength of plaintiff’s mark.

The court then looked to the relatedness of the goods offered by both parties, the similarity of the marks in appearance, sound, and meaning, and the absence of evidence of actual consumer confusion. Despite the complementary nature of the defendant’s physical product and the plaintiff’s app, and some similarities in the marks, the lack of actual confusion evidence, along with divergent marketing channels and the sophistication of the consumers, weighed against the likelihood of confusion. The defendant’s intent in selecting its mark did not suggest a deliberate attempt to create confusion, further diminishing the plaintiff’s stance.

Ultimately, the court concluded that the descriptive nature of the plaintiff’s mark, the lack of significant commercial strength, and the minimal impact of relatively recent development activity made confusion unlikely. Defendant’s commercial prominence and extensive marketing efforts did not overshadow the plaintiff’s app to a degree that would cause confusion among consumers. Given all the circumstances and the specific context of each factor considered, the court found confusion to be possible but not probable, leading to the grant of summary judgment in favor of defendant.

World Champ Tech LLC v. Peloton Interactive, Inc., 2024 WL 665181 (N.D. California, February 16, 2024)


Software reseller not entitled to preliminary injunction to protect customer relationships

Plaintiff CD appointed defendant SST to be the exclusive reseller to certain customers of CD’s software development platform. CD sued SST for breach, and SST likewise filed counterclaims for breach of contract and fraudulent inducement. SST sought a preliminary injunction against CD, asking that the court prohibit CD from unilaterally terminating the reseller agreement.

SST asserted, among other things, that it would suffer irreparable harm from this termination, citing potential loss of solicited clients and reputational damage. CD argued, however, that these asserted harms could be remedied monetarily, and thus did not qualify as irreparable.

The court agreed with CD, finding SST’s arguments regarding reputational damage and loss of client relationships to be speculative and unsupported by concrete evidence. As such, these claims did not meet the stringent criteria for irreparable harm, which requires a clear, immediate threat of injury that monetary compensation could not redress.

Further undermining SST’s claim of irreparable harm was the notion that any potential financial losses due to CD’s actions, including the costs associated with resolving issues with target accounts or transitioning to alternative software solutions, were quantifiable and thus recoverable in monetary terms. The court noted that SST’s reluctance to make additional payments to CD for resolving software access issues did not constitute irreparable harm, as those could be recouped in resolution of the contract dispute. Moreover, the court pointed out that SST’s concerns about CD not restoring access post-payment were speculative and lacked evidentiary support, given the record showing ongoing negotiations and concrete offers from CD.

Citizen Developer, LLC v. System Soft Tech., Inc., 2024 WL 554140 (M.D. Penn. February 12, 2024)


Kids Online Safety Act: Quick Facts

What is KOSA?

Senators Blackburn and Blumenthal have introduced a new version of KOSA – the Kids Online Safety Act, which seeks to protect minors from online harms by requiring social media companies to prioritize children’s safety in product design and offer more robust parental control tools. Garnering bipartisan support with 62 Senate cosponsors in the wake of a significant hearing with Big Tech CEOs, the bill emphasizes accountability for tech companies, transparency in algorithms, and enhanced safety measures. The legislation has been refined following extensive discussions with various stakeholders, including tech companies, advocacy groups, and parents, to ensure its effectiveness and alignment with the goal of safeguarding young internet users from bullying, harassment, and other online risks.

Critics of the bill argue that KOSA, despite amendments, remains a threat to constitutional rights, effectively censoring online content and empowering state officials to target undesirable services and speech. See, e.g., the EFF’s blog post about the legislation. They contend that KOSA mandates extensive filtering and blocking of legal speech across numerous websites, apps, and platforms, likely leading to age verification requirements. Concerns are raised about the potential harm to minors’ access to important information, particularly for groups such as LGBTQ+ youth, those seeking health and reproductive information, and activists. The modifications in the 2024 version, including the removal of the authority for state attorneys general to sue for non-compliance with the “duty of care” provision, are seen as insufficient to address the core issues related to free speech and censorship. Critics urge opposition to KOSA, highlighting its impact not just on minors but on all internet users who could be subjected to a “second-class internet” due to restricted access to information.

What does the proposed law actually say? Below are some key facts about the contents of the legislation:

Who would be subject to the law:

The statute would place various obligations on “covered platforms”:

  • A “covered platform” encompasses online platforms, video games, messaging applications, and video streaming services accessible via the internet and used or likely to be used by minors.
  • Exclusions from the definition of “covered platform” include common carrier services, broadband internet access services, email services, specific teleconferencing or video conferencing services, and direct wireless messaging services not linked to an online platform.
  • Entities not for profit, educational institutions, libraries, news or sports news websites/apps with specific criteria, business-to-business software, and cloud services not functioning as online platforms are also excluded.
  • Virtual private networks and similar services that solely route internet traffic are not considered “covered platforms.”

Design and Implementation Requirements

  • Covered platforms are required to exercise reasonable care in designing and implementing features to prevent and mitigate harms to minors, including mental health disorders, addiction-like behaviors, physical violence, bullying, harassment, sexual exploitation, and certain types of harmful marketing.
  • The prevention of harm includes addressing issues such as anxiety, depression, eating disorders, substance abuse, suicidal behaviors, online bullying, sexual abuse, and the promotion of narcotics, tobacco, gambling, and alcohol to minors.
  • Despite these protections, platforms are not required to block minors from intentionally seeking content or from accessing resources aimed at preventing or mitigating these harms, including providing evidence-informed information and clinical resources.

Required Safeguards for Minors

  • Covered platforms must provide minors with safeguards to limit communication from others, restrict access to their personal data, control compulsive platform usage features, manage personalized recommendation systems, and protect their geolocation data. (One has to consider whether these would pass First Amendment scrutiny, particularly in light of recent decisions such as the one in NetChoice v. Yost).
  • Platforms are required to offer options for minors to delete their accounts and personal data, and limit their time on the platform, with the most protective privacy and safety settings enabled by default for minors.
  • Parental tools must be accessible and easy-to-use, allowing parents to manage their child’s privacy, account settings, and platform usage, including the ability to restrict purchases and view and limit time spent on the platform.
  • A reporting mechanism for harms to minors must be established, with platforms required to respond substantively within specified time frames, and immediate action required for reports involving imminent threats to minors’ safety.
  • Advertising of illegal products such as narcotics, tobacco, gambling, and alcohol to minors is strictly prohibited.
  • Safeguards and parental tools must be clear, accessible, and designed without “dark patterns” that could impair user autonomy or choice, with considerations for uninterrupted gameplay and offline device or account updates.

Disclosure Requirements

  • Before a minor registers or purchases on a platform, clear notices about data policies, safeguards for minors, and risks associated with certain features must be provided.
  • Platforms must inform parents about safeguards and parental tools for their children and obtain verifiable parental consent before a child uses the platform.
  • Platforms may consolidate notice and consent processes with existing obligations under the Children’s Online Privacy Protection Act (COPPA). (Like COPPA, a “child” under the act is one under 13 years of age.)
  • Platforms using personalized recommendation systems must clearly explain their operation, including data usage, and offer opt-out options for minors or their parents.
  • Advertising targeted at minors must be clearly labeled, explaining why ads are shown to them and distinguishing between content and commercial endorsements.
  • Platforms are required to provide accessible information to minors and parents about data policies and access to safeguards, ensuring resources are available in relevant languages.

Reporting Requirements

  • Covered platforms must annually publish a report, based on an independent audit, detailing the risks of harm to minors and the effectiveness of prevention and mitigation measures. (Providing these audit services is no doubt a good business opportunity for firms with such capabilities; unfortunately this will increase the cost of operating a covered platform.)
  • This requirement applies to platforms with over 10 million active monthly users in the U.S. that primarily host user-generated content and discussions, such as social media and virtual environments.
  • Reports must assess platform accessibility by minors, describe commercial interests related to minor usage, and provide data on minor users’ engagement, including time spent and content accessed.
  • The reports should identify foreseeable risks of harm to minors, evaluate the platform’s design features that could affect minor usage, and detail the personal data of minors collected or processed.
  • Platforms are required to describe safeguards and parental tools, interventions for potential harms, and plans for addressing identified risks and circumvention of safeguards.
  • Independent auditors conducting the risk assessment must consult with parents, youth experts, and consider research and industry best practices, ensuring privacy safeguards are in place for the reported data.

Keep an eye out to see if Congress passes this legislation in the spirit of “for the children.”

How did Ohio’s efforts to regulate children’s access to social media violate the constitution?


Ohio passed a law called the Parental Notification by Social Media Operators Act which sought to require certain categories of online services to obtain parental consent before allowing any unemancipated child under the age of sixteen to register or create accounts with the service.

Plaintiff internet trade association – representing platforms including Google, Meta, X, Nextdoor, and Pinterest – sought a preliminary injunction that would prohibit the State’s attorney general from enforcing the law. Finding the law to be unconstitutional, the court granted the preliminary injunction.

Likelihood of success on the merits: First Amendment Free Speech

The court found that plaintiff was likely to succeed on its constitutional claims. Rejecting the State’s argument that the law sought only to regulate commerce (i.e., the contracts governing use of social media platforms) and not speech, it held that the statute was a restriction on speech, implicating the First Amendment. It further held that the law was a content-based restriction because the social media features the statute singled out in defining which platforms were subject to the law – e.g., the ability to interact socially with others – were “inextricable from the content produced by those features.” And the law violated the rights of minors living in Ohio because it infringed on minors’ rights to both access and produce First Amendment-protected speech.

Given these attributes of the law, the court applied strict scrutiny to the statute. The court held that the statute failed to pass strict scrutiny for several reasons. First, the Act was not narrowly tailored to address the specific harms identified by the State, such as protecting minors from oppressive contract terms with social media platforms. Instead of targeting the contract terms directly, the Act broadly regulated access to and dissemination of speech, making it under-inclusive in addressing the specific issue of contract terms and over-inclusive by imposing sweeping restrictions on speech. Second, while the State aimed to protect minors from mental health issues and sexual predation related to social media use, the Act’s approach of requiring parental consent for minors under sixteen to access all covered websites was an untargeted and blunt instrument, failing to directly address the nuanced risks posed by specific features of social media platforms. Finally, in attempting to bolster parental authority, the Act mirrored previously rejected arguments that imposing speech restrictions, subject to parental veto, was a legitimate means of aiding parental control, making it over-inclusive by enforcing broad speech restrictions rather than focusing on the interests of genuinely concerned parents.

Likelihood of success on the merits: Fourteenth Amendment Due Process

The statute violated the Due Process Clause of the Fourteenth Amendment because its vague language failed to provide clear notice to operators of online services about the conduct that was forbidden or required. The Act’s broad and undefined criteria for determining applicable websites, such as targeting children or being reasonably anticipated to be accessed by children, left operators uncertain about their legal obligations. The inclusion of an eleven-factor list intended to clarify applicability, which contained vague and subjective elements like “design elements” and “language,” further contributed to the lack of precise guidance. The Act’s exception for “established” and “widely recognized” media outlets without clear definitions for these terms introduced additional ambiguity, risking arbitrary enforcement. Despite the State highlighting less vague aspects of the Act and drawing parallels with the federal Children’s Online Privacy Protection Act of 1998 (COPPA), these did not alleviate the overall vagueness, particularly with the Act’s broad and subjective exceptions.

Irreparable harm and balancing of the equities

The court found that plaintiff’s members would face irreparable harm through non-recoverable compliance costs and the potential for civil liability if the Act were enforced, as these monetary harms could not be fully compensated. Moreover, the Act’s infringement on constitutional rights, including those protected under the First Amendment, constituted irreparable harm since the loss of such freedoms, even for short durations, is considered significant.

The balance of equities and the public interest did not favor enforcing a statute that potentially violated constitutional principles, as the enforcement of unconstitutional laws serves no legitimate public interest. The argument that the Act aimed to protect minors did not outweigh the importance of upholding constitutional rights, especially when the statute’s measures were not narrowly tailored to address specific harms. Therefore, the potential harm to plaintiff’s members and the broader implications for constitutional rights underscored the lack of public interest in enforcing this statute.

NetChoice, LLC v. Yost, 2024 WL 55904 (S.D. Ohio, February 12, 2024)


Using AI generated fake cases in court brief gets pro se litigant fined $10K


Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals. On appeal, the court dismissed the appeal and awarded damages to plaintiff/respondent because of the frivolousness of the appeal.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, had no required appendix, and contained an inadequate statement of facts. It also failed to provide points relied on and a detailed table of cases, statutes, and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)


GenAI and copyright: Court dismisses almost all claims against OpenAI in authors’ suit


Plaintiff authors sued large language model provider OpenAI and related entities for copyright infringement, alleging that plaintiffs’ books were used to train ChatGPT. Plaintiffs asserted six causes of action against various OpenAI entities: (1) direct copyright infringement, (2) vicarious infringement, (3) violation of Section 1202(b) of the Digital Millennium Copyright Act (“DMCA”), (4) unfair competition under Cal. Bus. & Prof. Code Section 17200, (5) negligence, and (6) unjust enrichment.

OpenAI moved to dismiss all of these claims except the direct copyright infringement claim. The court granted the motion as to almost all of the claims.

Vicarious liability claim dismissed

The court dismissed the claim for vicarious liability because plaintiffs did not successfully plead that direct copying occurs from use of the software. Citing to A&M Recs., Inc. v. Napster, Inc., 239 F.3d 1004, 1013 n.2 (9th Cir. 2001), aff’d, 284 F.3d 1091 (2002), the court noted that “[s]econdary liability for copyright infringement does not exist in the absence of direct infringement by a third party.” More specifically, the court dismissed the claim because plaintiffs had not alleged direct copying when the outputs are generated, nor had they alleged “substantial similarity” between the ChatGPT outputs and plaintiffs’ works.

DMCA claims dismissed

The DMCA – at 17 U.S.C. § 1202(b) – requires a defendant’s knowledge or “reasonable grounds to know” that the defendant’s removal of copyright management information (“CMI”) would “induce, enable, facilitate, or conceal an infringement.” Plaintiffs alleged that, “by design,” OpenAI removed CMI from the copyrighted books during the training process. But the court found that plaintiffs provided no factual support for that assertion. Moreover, the court found that even if plaintiffs had successfully asserted such facts, they had not provided any facts showing how the omitted CMI would induce, enable, facilitate, or conceal infringement.

The other portion of the DMCA relevant to the lawsuit – Section 1202(b)(3) – prohibits the distribution of a plaintiff’s work without the plaintiff’s CMI included. In rejecting plaintiffs’ assertions that defendants violated this provision, the court looked to the plain language of the statute. It noted that liability requires distributing the original “works” or “copies of [the] works.” Plaintiffs had not alleged that defendants distributed their books or copies of their books. Instead, they alleged that “every output from the OpenAI Language Models is an infringing derivative work” without providing any indication as to what such outputs entail – i.e., whether they were the copyrighted books or copies of the books.

Unfair competition claim survived

Plaintiffs asserted that defendants had violated California’s unfair competition statute through “unlawful,” “fraudulent,” and “unfair” practices. The unlawful and fraudulent theories relied on the DMCA claims, which the court had already dismissed, so the unfair competition claim could not move forward on those grounds. But the court did find that plaintiffs had alleged sufficient facts to support the claim that it was “unfair” to use plaintiffs’ works without compensation to train the ChatGPT model.

Negligence claim dismissed

Plaintiffs alleged that defendants owed them a duty of care based on the control of plaintiffs’ information in their possession and breached their duty by “negligently, carelessly, and recklessly collecting, maintaining, and controlling systems – including ChatGPT – which are trained on Plaintiffs’ [copyrighted] works.” The court dismissed this claim, finding that there were insufficient facts showing that defendants owed plaintiffs a duty in this situation.

Unjust enrichment claim dismissed

Plaintiffs alleged that defendants were unjustly enriched by using plaintiffs’ copyright-protected works to train the large language model. The court dismissed this claim because plaintiffs had not alleged sufficient facts to show that they conferred any benefit on OpenAI through “mistake, fraud, or coercion.”

Tremblay v. OpenAI, Inc., 2024 WL 557720 (N.D. Cal., February 12, 2024)


When can you serve a lawsuit by email?


One of the biggest challenges brand owners face in enforcing their intellectual property rights online is tracking down the infringer – often located in a foreign country – so that a lawsuit can be served on the infringer. Generally, for due process reasons, the Federal Rules of Civil Procedure require that a complaint and summons be served personally – that is, by handing the papers to the person directly. But there are exceptions, particularly in situations involving overseas defendants, in which “alternative service” may be available. A recent case from federal court in the state of Washington provides an example in which Amazon and certain sellers were permitted to serve overseas defendants by email.

Learning about the counterfeiters

Plaintiffs sued defendants, accusing defendants of selling counterfeit goods on Amazon. Plaintiffs alleged that defendants resided in Ukraine. Even after working with a private investigator and seeking third party discovery from defendants’ virtual bank account providers, plaintiffs could not find any valid physical addresses for defendants. So plaintiffs asked the court to permit service of the complaint and summons on defendants’ email addresses registered with their Amazon selling accounts. They knew those email addresses must be valid because test messages did not receive any error notices or bounce backs that would indicate the messages failed to deliver.

What is required

The court looked to Federal Rule of Civil Procedure 4(f), which allows for service of process on individuals in foreign countries through several methods, including (1) internationally agreed methods such as those authorized by the Hague Convention, (2) according to the foreign country’s law if no international agreement exists, or (3) by other means not prohibited by international agreements, as the court orders. A plaintiff must show that the specific circumstances require court intervention. Furthermore, any method of service must align with constitutional due process, meaning it must be designed to effectively inform interested parties of the ongoing action and give them a chance to object, ensuring fairness and the opportunity for defense.

The court said okay

The court found that plaintiffs had shown court intervention was necessary because plaintiffs could not find valid physical addresses but could show that the email addresses apparently were valid. As for the Hague Convention, no method it provided was available without valid physical addresses. Moreover, the court observed that whether or not the Hague Convention applied, email service on individuals in Ukraine was not prohibited by the Convention nor by any other international agreement.

And the court found email service comported with constitutional due process. Defendants conducted business through these email accounts and tests confirmed their functionality. Although defendants’ Amazon accounts were blocked, evidence suggested these email addresses were still active. The court thus concluded that email service met due process standards by being “reasonably calculated” to notify defendants, allowing them the chance to present objections.

Amazon.com Inc. v. Ananchenko, 2024 WL 492283 (W.D. Washington, February 7, 2024)


DMCA subpoena to “mere conduit” ISP was improper


Because the ISP acted as a mere conduit for the transmission of material that allegedly infringed copyright, it fell under the DMCA safe harbor in 17 U.S.C. § 512(a), and therefore § 512(h) did not authorize the subpoena issued in the case.

Some copyright owners needed to find out who was anonymously infringing their works, so they issued a subpoena to the users’ internet service provider (Cox Communications) under the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. § 512(h). After the ISP notified one of the anonymous users – referred to as John Doe in the case – of the subpoena, Doe filed a motion to quash. The magistrate judge recommended the subpoena be quashed, and the district judge accepted that recommendation.

Contours of the Safe Harbor

The court explained how the DMCA enables copyright owners to send subpoenas for the identification of alleged infringers, contingent upon providing a notification that meets specific criteria outlined in the DMCA. However, the DMCA also establishes safe harbors for Internet Service Providers (ISPs), notably exempting those acting as “mere conduits” of information, like in peer-to-peer (P2P) filesharing, from liability and thus from the obligations of the notice and takedown provisions found in other parts of the DMCA. This distinction has led courts, including the Eighth and D.C. Circuits, to conclude that subpoenas under § 512(h) cannot be used to compel ISPs, which do not store or directly handle the infringing material but merely transmit it, to reveal the identities of P2P infringers.

Who is in?

The copyright owners raised a number of objections to quashing the subpoena. Their primary concerns were with the court’s interpretation of the ISP’s role as merely a “conduit” in the alleged infringement, arguing that the ISP’s assignment of IP addresses constituted a form of linking to infringing material, thus meeting the DMCA’s notice requirements. They also disputed the court’s conclusion that the material in question could not be removed or access disabled by the ISP due to its nature of transmission, and they took issue with certain factual conclusions drawn without input from the parties involved. Additionally, the petitioners objected to the directive to return or destroy any information obtained through the subpoena, requesting that such measures apply only to the information related to the specific subscriber John Doe.

Conduits are.

Notwithstanding these various arguments, the court upheld the magistrate judge’s recommendation, agreeing that the subpoena issued to the ISP was invalid due to non-compliance with the notice provisions required by 17 U.S.C. § 512(c)(3)(A). The petitioners’ arguments, suggesting that the ISP’s assignment of IP addresses to users constituted a form of linking to infringing material under § 512(d), were rejected. The court clarified that in the context of P2P file sharing, IP addresses do not serve as “information location tools” as defined under § 512(d) and that the ISP’s role was limited to providing internet connectivity, aligning with the “mere conduit” provision under § 512(a). The court also dismissed the petitioners’ suggestion that the ISP could disable access to infringing material by null routing, emphasizing the distinction between disabling access to material and terminating a subscriber’s account, with the latter being a more severe action than what the DMCA authorizes. The court suggested that the petitioners could pursue the infringer’s identity through other legal avenues, such as a John Doe lawsuit, despite potential challenges highlighted by the petitioners.

In re: Subpoena of Internet Subscribers of Cox Communications, LLC and Coxcom LLC, 2024 WL 341069 (D. Hawaii, January 30, 2024)

 
