Author Archives: Evan Brown (@internetcases)

Is a DMCA subpoena to identify unknown infringers valid if the infringement has ended?

The Digital Millennium Copyright Act (“DMCA”) is well-known for its notice and takedown provisions. But the DMCA provides a number of other interesting mechanisms, including a procedure for potential copyright plaintiffs to send subpoenas to online service providers to learn the identity of users who posted infringing content to that service. A recent case involving some subpoenas that a copyright owner sent to eBay examines the relationship between the notice and takedown procedures on one hand, and the subpoena mechanism on the other. The question before the court was whether a DMCA subpoena is valid if, by the time it is served on the online service provider, that online service provider has already removed or has disabled access to that content.

Section 512(h) (17 U.S.C. 512(h)) spells out the DMCA subpoena process, and how it relates to the notice and takedown provisions. An online service provider must act expeditiously to identify the user who uploaded infringing content “[u]pon receipt of the issued subpoena, either accompanying or subsequent to the receipt of a [takedown request].” That plain language seems straightforward — an online service provider has to provide the identifying information in response to any subpoena it receives either with or subsequent to a takedown notice.

But it was not so straightforward in a 2011 case, where confusing facts made for confusing law. In Maximized Living, Inc. v. Google, Inc., 2011 WL 6749017 (N.D. Cal. December 22, 2011), the copyright holder sent a subpoena to the online service provider after it had sent a DMCA takedown notice. That would appear to comport with the statute — the subpoena came subsequent to the takedown notice. The problem was that the takedown notice itself was not valid: by the time it was sent, the alleged infringer had already removed the infringing content. From that, the court in Maximized Living pronounced that “the subpoena power of §512(h) is limited to currently infringing activity and does not reach former infringing activity that has ceased and thus can no longer be removed or disabled.”

In the recent case of In re DMCA Subpoena to eBay, Inc., eBay, as the recipient of subpoenas to identify some of its users, seized on the Maximized Living holding to argue that it did not have to answer the subpoenas because it had already taken down the offending content pursuant to previous takedown notices. Since the subpoenas did not relate to “currently infringing activity,” eBay argued, à la Maximized Living, that the subpoenas had not been issued under §512(h)’s power and were therefore invalid.

The court rejected eBay’s argument. The key distinction was that, unlike in Maximized Living, the takedown notices here related to content that was on eBay’s servers at the time the notices were issued. Granted, some of those takedown notices went all the way back to early 2012 (query whether a subpoena should be valid if it would only uncover the identity of an infringer against whom the three-year copyright statute of limitations had already run; but that question was not before the court).

So to simply state the rule of this case: for a DMCA subpoena to be valid, it must relate to a valid DMCA takedown notice, and a takedown notice is not valid unless it was served at a time when infringing content resided on the service. An online service provider cannot avoid its obligation to respond to a subpoena by taking down the content, thereby causing there to be no “currently infringing activity.” Such a rule would, as the court observed, cause the online service provider’s safe harbor protection to also shield the alleged infringer from being identified. That would indeed be an odd application of the DMCA’s protection, and the court in this case avoided that outcome.

In re DMCA Subpoena to eBay, Inc., 2015 WL 3555270 (S.D. Cal. June 5, 2015).

Evan Brown is a Chicago attorney helping clients in matters dealing with copyright, technology, the internet and new media. Call him at (630) 362-7237, send email to ebrown [at] internetcases dot com, or follow him on Twitter @internetcases


Casual website visitor who watched videos was not protected under the Video Privacy Protection Act

A recent federal court decision from the Southern District of New York sheds light on what is required to be considered a “consumer” who is protected under the Video Privacy Protection Act (VPPA). The court held that a website visitor who merely visited a website once in a while to watch videos — without establishing a more “deliberate and durable” affiliation with the website — was not a “subscriber” to the website’s services and thus the VPPA did not prohibit the alleged disclosure of information about the website visitor’s viewing habits.

Defendant was a television network that maintained a website offering video clips and episodes of many of its television shows. The website also incorporated Facebook’s software development kit, which, among other things, let visitors log into websites using their Facebook credentials. This mechanism relied on cookies: if a person had chosen to remain logged into Facebook by checking the “keep me logged in” button on Facebook’s homepage, the relevant cookie would continue to operate regardless of what the user did with the web browser. Plaintiff alleged that this mechanism caused AMC to transmit information to Facebook about the video clips she watched on the AMC site.

Plaintiff sued under the VPPA. Defendant moved to dismiss, arguing that plaintiff lacked standing under the statute and that she was not a protected “consumer” as required by the statute.

The court found that plaintiff had standing. It rejected defendant’s argument that a VPPA plaintiff must allege some injury in addition to asserting that defendant had violated the statute. “It is true . . . that Congress cannot erase Article III’s standing requirements by statutorily granting the right to sue to a plaintiff who would not otherwise have standing.” But Congress “can broaden the injuries that can support constitutional standing.”

The court next looked to whether plaintiff was a “consumer” protected under the statute. The VPPA defines the term “consumer” to include “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” Absent any assertion that plaintiff was a renter or purchaser of AMC’s goods, the parties and the court focused on whether she was a “subscriber” (a term not defined in the statute).

Because plaintiff’s allegations failed to establish a relationship with defendant sufficient to characterize her as a subscriber of defendant’s goods or services, the court dismissed the VPPA claim with leave to amend. It observed: “Conventionally, ‘subscription’ entails an exchange between subscriber and provider whereby the subscriber imparts money and/or personal information in order to receive a future and recurrent benefit, whether that benefit comprises, for instance, periodical magazines, club membership, cable services, or email updates.” In this case, “[s]uch casual consumption of web content, without any attempt to affiliate with or connect to the provider, exhibit[ed] none of the critical characteristics of ‘subscription’ and therefore [did] not suffice to render [plaintiff] a subscriber of [defendant’s] services.”

Austin-Spearman v. AMC Network Entertainment LLC, 2015 WL 1539052 (S.D.N.Y. April 7, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Complaint site does not have to identify its users

Petitioner filed an action in New York state court seeking to compel PissedConsumer.com to disclose the identity of the person or persons who posted certain statements to the site. These statements criticized petitioner for allegedly failing to fulfill an advertising promise to give the user a $500 gas card. The anonymous user went on to complain that petitioner “will forget about you and … all the promises they made to you” once “you sign on the dotted line.”

The trial court denied the petition to compel PissedConsumer.com to turn over the names of its users. Petitioner sought review with the Appellate Division. On appeal, the court affirmed.

It held that the lower court properly denied the petition since petitioner failed to demonstrate that it had a meritorious cause of action as required to obtain pre-action discovery:

Nothing in the petition identifies specific facts that are false and when the statements complained of are viewed in context, they suggest to a reasonable reader that the writer was a dissatisfied customer who utilized respondent’s consumers’ grievance website to express an opinion. Although some of the statements are based on undisclosed, unfavorable facts known to the writer, the disgruntled tone, anonymous posting, and predominant use of statements that cannot be definitively proven true or false, supports the finding that the challenged statements are only susceptible of a non-defamatory meaning, grounded in opinion.

The court seemed to recognize the importance of anonymous speech, and that one must not lightly cast aside its protections. If you’re going to go after an online critic, best have a cause of action that you can actually plead.

Woodbridge Structured Funding, LLC v. Pissed Consumer, — N.Y.S.2d —, 2015 WL 686383 (February 19, 2015)

Evan Brown is an attorney in Chicago helping clients with technology, intellectual property and new media issues.

internetcases turns 10 years old today

Ten years ago today, somewhat on a whim, yet to fulfill a need I saw for discussion about the law of the internet in the “blogosphere” (a term we loved dearly back then), I launched internetcases.

What started as a one-page handwritten pamphlet that I would mimeograph in the basement of my one-bedroom apartment and then foist upon unsuspecting people on street corners has in ten years turned into a billion dollar conglomerate and network. internetcases is now translated into 7 languages daily and employs a staff of thousands to do the Lord’s work fighting Ebola and terrorism on 4 continents. Or it’s a WordPress install on some cheap GoDaddy space and I write when I can.

All seriousness aside, on this 10th anniversary, I want to sincerely thank my loyal readers and followers. Writing this blog has been the single most satisfying thing I’ve done in my professional life, and I am immensely grateful for the knowledge it has helped me develop, the opportunities for personal brand development it has given (speaking, press, media opportunities), but most of all, I’m grateful for the hundreds of people it has enabled me to connect with and get to know.

Blogging (and the web in general) has changed a lot in 10 years. And the legal issues arising from the internet continue to challenge us to stretch our thinking and amp up our powers of analysis. It’s great to have a platform on the web from which to share news and thoughts about the role that technology plays in shaping our legal rules and our culture.

Thanks all.

Best practices for providers of goods and services on the Internet of Things

Today the United States Federal Trade Commission issued a report in which it detailed a number of consumer-focused issues arising from the growing Internet of Things (IoT). Companies should pay attention to the portion of the report containing the Commission’s recommendations on best practices to participants (such as device manufacturers and service providers) in the IoT space.

The Commission structured its recommendations around four of the “FIPPs” – the Fair Information Practice Principles – which first appeared in the 1970s and which inform much of the world’s regulation geared to protect personal data. The recommendations focused on data security, data minimization, notice, and choice.

DATA SECURITY

IoT participants should implement reasonable data security. The Commission noted that “[o]f course, what constitutes reasonable security for a given device will depend on a number of factors, including the amount and sensitivity of data collected and the costs of remedying the security vulnerabilities.” Nonetheless, companies should:

  • Implement “security by design”
  • Ensure their personnel practices promote good security
  • Retain and oversee service providers that provide reasonable security
  • Implement a “defense-in-depth” approach where appropriate
  • Implement reasonable access control measures
  • Monitor products in the marketplace and patch vulnerabilities

Security by Design

Companies should implement “security by design” into their devices at the outset, rather than as an afterthought, by:

  • Conducting a privacy or security risk assessment to consider the risks presented by the collection and retention of consumer information.
  • Incorporating the use of “smart defaults” such as requiring consumers to change default passwords during the set-up process.
  • Considering how to minimize the data collected and retained.
  • Testing security measures before launching products.
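The “smart defaults” idea above can be sketched in a few lines. This is purely illustrative — a hypothetical device provisioning flow, not any particular vendor’s API:

```python
# Hypothetical sketch of a "smart default": the device refuses to finish
# setup until the user replaces the factory password with something
# minimally stronger. Names and thresholds are illustrative assumptions.
FACTORY_DEFAULT = "admin"

def complete_setup(new_password: str) -> bool:
    """Return True only if the factory default has been replaced."""
    if new_password == FACTORY_DEFAULT:
        return False  # default password still in place
    if len(new_password) < 8:
        return False  # too short to be a meaningful replacement
    return True
```

A setup wizard built this way cannot be clicked through while the device still answers to the well-known factory credential.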

Personnel Practices and Good Security

Companies should ensure their personnel practices promote good security by making security an executive-level concern and training employees about good security practices. A company should not assume that the ability to write code is equivalent to an understanding of the security of an embedded device.

Retain and Oversee Service Providers That Provide Reasonable Security

The Commission urged IoT participants to retain service providers that are capable of maintaining reasonable security and to oversee those companies’ performance to ensure that they do so. On this point, the Commission specifically noted that failure to do so could result in FTC law enforcement action. It pointed to a recent (non-IoT) case in which a medical transcription company outsourced its services to independent typists in India who stored their notes in clear text on an unsecured server. Patients in the U.S. were shocked to find their confidential medical information showing up in web searches.

The “Defense-in-Depth” Approach

The Commission urged companies to take additional steps to protect particularly sensitive information (e.g., health information). For example, instead of relying on the user to ensure that data passing over his or her local wireless network is encrypted using the Wi-Fi password, companies should undertake additional efforts to ensure that data is not publicly available.
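One way a device maker might encrypt at the application layer, rather than relying on the user’s Wi-Fi, is to insist on TLS with certificate validation when talking to its backend. This sketch uses Python’s standard ssl module and is an assumption about how a hypothetical device client might be built, not a statement of what any vendor does:

```python
import ssl

# Sketch: encrypt at the application layer so data stays protected even on
# an open or misconfigured wireless network. ssl.create_default_context()
# enables certificate verification and hostname checking by default.
def make_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    # These are the library defaults; asserting them makes the intent explicit.
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    assert ctx.check_hostname is True
    return ctx
```

The point of the defense-in-depth recommendation is exactly this layering: the TLS channel protects the data whether or not the local network link is encrypted.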

Reasonable Access Control Measures

While tools such as strong authentication can be used to permit or restrict IoT devices from interacting with other devices or systems, the Commission noted that companies should ensure such measures do not unduly impede the usability of the device.

Monitoring of Products and Patching of Vulnerabilities

Companies may reasonably decide to limit the time during which they provide security updates and software patches, but must weigh these decisions carefully. IoT participants should also be forthright in their representations about providing ongoing security updates and software patches to consumers. Disclosing the length of time companies plan to support and release software updates for a given product line will help consumers better understand the safe “expiration dates” for their commodity internet-connected devices.

DATA MINIMIZATION

Data minimization refers to the concept that companies should limit the data they collect and retain, and dispose of it once they no longer need it. The Commission acknowledged the concern that requiring data minimization might curtail innovative uses of data. A new enterprise may not be able to reasonably foresee the types of uses it may have for information gathered in the course of providing a connected device or operating a service in conjunction with connected devices. Despite these concerns, the Commission recommended that companies consider reasonably limiting their collection and retention of consumer data.

The Commission observed how data minimization mitigates risk in two ways. First, the less information in a database, the less attractive the database is as a target for hackers. Second, having less data reduces the risk that the company providing the device or service will use the information in a way that the consumer does not expect.

The Commission provided a useful example of how data minimization might work in practice. It discussed a hypothetical startup that develops a wearable device, such as a patch, that can assess a consumer’s skin condition. The device does not need to collect precise geolocation information in order to work, but it has that capability. The device manufacturer believes that such information could be useful for a future product feature that would enable users to find treatment options in their area. The Commission observed that as part of a data minimization exercise, the company should consider whether it should wait to collect geolocation information until after it begins to offer the new product feature, at which time it could disclose the new collection and seek consent. The company should also consider whether it could offer the same feature while collecting less information, such as by collecting zip code rather than precise geolocation. If the company does decide it needs the precise geolocation information, the Commission would recommend that the company provide a prominent disclosure about its collection and use of this information, and obtain consumers’ affirmative express consent. And the company should establish reasonable retention limits for the data it does collect.

As an aspect of data minimization, the Commission also discussed de-identification as a “viable option in some contexts” to help minimize data and the risk of potential consumer harm. But as with any conversation about de-identification, the Commission addressed the risks associated with the chances of re-identification. On this note, the Commission referred to its 2012 Privacy Report in which it said that companies should:

  • take reasonable steps to de-identify the data, including by keeping up with technological developments;
  • publicly commit not to re-identify the data; and
  • have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.

This approach ensures that if the data is not reasonably de-identified and then is re-identified in the future, regulators can hold the company responsible.
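To make the de-identification discussion concrete, one common technique is keyed pseudonymization: a direct identifier is replaced with a stable token that cannot be reversed (or even recomputed) without a secret key. The function name and key handling here are illustrative assumptions, not anything prescribed by the Commission:

```python
import hashlib
import hmac

# Illustrative de-identification step: replace a direct identifier with a
# keyed pseudonym. The mapping is stable (useful for analytics) but cannot
# be recomputed without the secret key -- which is why the Commission's
# companion commitments not to re-identify still matter.
def pseudonymize(user_id: str, secret_key: bytes) -> str:
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()
```

Note that techniques like this reduce, but do not eliminate, re-identification risk — hence the Commission’s pairing of technical steps with enforceable commitments.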

NOTICE AND CHOICE

Giving consumers notice that information is being collected, and the ability to make choices about that collection, is problematic in many IoT contexts. Data is collected continuously, by many integrated devices and systems, and getting a consumer’s consent in each context might discourage use of the technology. Moreover, often there is no easy user interface through which to provide notice and offer choice.

With these concerns in mind, the Commission noted that “not every data collection requires choice.” As an alternative, the Commission acknowledged the efficacy of a use-based approach. Companies should not be compelled, for example, to provide choice before collecting and using consumer data for practices that are consistent with the context of a transaction or the company’s relationship with a consumer. By way of example, the Commission discussed a hypothetical purchaser of a “smart oven”. The company could use temperature data to recommend another of the company’s kitchen products. The consumer would expect that. But a consumer would not expect the company to disclose information to a data broker or an ad network without having been given notice of that sharing and the ability to choose whether it should occur.

Given the practical difficulty of notice and choice on the IoT, the Commission acknowledged there is no one-size-fits all approach. But it did suggest a number of mechanisms for communications of this sort, including:

  • Choices at point of sale
  • Tutorials (like the one Facebook uses)
  • QR codes on the device
  • Choices during setup
  • Management portals or dashboards
  • Icons
  • Out-of-band notifications (e.g., via email or text)
  • User-experience approach – “learning” what the user wants, and adjusting automatically

Conclusion

The Commission’s report does not have the force of law, but is useful in a couple of ways. From a practical standpoint, it serves as a guide for how to avoid engaging in flagrant privacy and security abuses on the IoT. But it also serves to frame a larger discussion about how providers of goods and services can and should approach the innovation process for the development of the Internet of Things.

Facebook wins against alleged advertising fraudster

Defendant set up more than 70 bogus Facebook accounts and impersonated online advertising companies (including by sending Facebook falsified bank records) to obtain an advertising credit line from Facebook. He ran more than $340,000 worth of ads for which he never paid. Facebook sued, among other things, for breach of contract, fraud, and violation of the Computer Fraud and Abuse Act (CFAA). Despite the court giving defendant several opportunities to be heard, defendant failed to answer the claims and the court entered a default.

The court found that Facebook had successfully pled a CFAA claim. After Facebook implemented technological measures to block defendant’s access, and after it sent him two cease-and-desist letters, defendant continued to intentionally access Facebook’s “computers and servers to obtain account credentials, Facebook credit lines, Facebook ads, and other information.” The court entered an injunction against defendant accessing or using any Facebook website or service in the future, and set the matter over for Facebook to prove up its $340,000 in damages. It also notified the U.S. Attorney’s Office.

Facebook, Inc. v. Grunin, 2015 WL 124781 (N.D. Cal. January 8, 2015)

Court orders Twitter to identify anonymous users

Defamation plaintiffs’ need for requested information outweighed any impact on Doe defendants’ free speech right to tweet anonymously.

Plaintiff company and its CEO sued several unknown defendants who tweeted that plaintiff company encouraged domestic violence and misogyny and that the CEO visited prostitutes. The court allowed plaintiffs to serve subpoenas on Twitter to seek the identity of the unknown Twitter users. Twitter would not comply with the subpoenas unless and until the court ruled on whether the production of information would violate the users’ First Amendment rights.

The court ruled in favor of the plaintiffs and ordered Twitter to turn over identifying information about the unknown users. In reaching this decision, the court applied the Ninth Circuit analysis for unmasking anonymous internet speakers set out in Perry v. Schwarzenegger, 591 F.3d 1126 (9th Cir. 2009). The court found that the requested discovery raised the possibility of “arguable first amendment infringement,” so it proceeded to weigh the aggrieved plaintiffs’ interests against the anonymous defendants’ free speech rights.

The Perry balancing test places a burden on the party seeking discovery to show that the information sought is rationally related to a compelling governmental interest and that the requested discovery is the least restrictive means of obtaining the desired information.

In this case, the court found that the subpoenas were narrowly tailored to plaintiffs’ need to uncover the identities of the anonymous defendants so that plaintiffs could serve process. It also found that the “nature” of defendants’ speech weighed in favor of enforcing the subpoena. The challenged speech went “beyond criticism into what appear[ed] to be pure defamation, ostensibly unrelated to normal corporate activity.”

Music Group Macao Commercial Offshore Ltd. v. Does I-IX, 2015 WL 75073 (N.D. Cal., January 6, 2015).

Domain name case under ACPA failed because trademark was not distinctive

Federal appeals court holds that plaintiff failed to satisfy all elements of the Anticybersquatting Consumer Protection Act in action against competing airline

The federal Anticybersquatting Consumer Protection Act (ACPA) [15 U.S.C. 1125(d)] is a provision in U.S. law that gives trademark owners a cause of action against one who has wrongfully registered a domain name. In general, the ACPA gives rights to owners of trademarks that are either distinctive or famous at the time the defendant registered the offending domain name.

The Eleventh Circuit Court of Appeals recently affirmed the decision of a lower court that dismissed an ACPA claim, holding that the plaintiff failed to plead that its mark was distinctive at the time of the domain name registration.

Plaintiff sued its competitor, who registered the domain name tropicoceanairways.com. Defendant moved to dismiss, and the lower court granted the motion, finding that plaintiff failed to plead that its mark TROPIC OCEAN AIRWAYS was distinctive and thus protected under the ACPA. On appeal, the Eleventh Circuit affirmed the dismissal, holding that plaintiff’s complaint failed to allege that the mark was either suggestive or had acquired secondary meaning as an indicator of source for plaintiff’s services.

Suggestive marks are considered distinctive because they require “a leap of the imagination to get from the mark to the product.” (The court provided the example of a penguin used as a mark for refrigerators.) In this case, the court found the term “tropic ocean airways” was not suggestive, as it merely “inform[ed] consumers about the service [plaintiff provided]: flying planes across the ocean to tropical locations.”

The court rejected plaintiff’s argument that a pending application at the United States Patent and Trademark Office to register the mark proved that it was suggestive. While a certificate of registration may establish a rebuttable presumption that a mark is distinctive, the court held plaintiff was not entitled to such a presumption here, where the application remained pending. Moreover, the court observed in a footnote that the presumption of distinctiveness will generally only go back to the date the application was filed. In this case, the trademark application was not filed until about a year after the domain name was registered.

As for the argument the mark had acquired secondary meaning, the court found plaintiff’s allegations to be insufficient. The complaint instead made conclusory allegations about secondary meaning that were insufficient to survive a motion to dismiss. The court held that plaintiff failed to allege the nature and extent of its advertising and promotion, and, more importantly, did not allege any facts about the extent to which the public identified the mark with plaintiff’s services.

Tropic Ocean Airways, Inc. v. Floyd, — Fed.Appx. —, 2014 WL 7373625 (11th Cir., Dec. 30, 2014)

Evan Brown is an attorney in Chicago helping clients with domain name, trademark, and other matters involving technology and intellectual property.

Forum selection clause in browsewrap agreement did not bind parties in bitcoin fraud case

We all know that clickwrap agreements are preferable to browsewrap agreements, assuming, of course, the objective is to establish binding contracts between participants in online transactions. Nonetheless, some online platforms still (try to) rely on browsewrap agreements to establish terms of service. That avoidance of best practices gives us situations like the recent case of Hussein v. Coinabul, LLC, in which a federal court in Illinois refused to enforce a forum selection clause in a “bitcoin to gold marketplace” browsewrap agreement.

Plaintiff alleged that he sent about $175,000 worth of bitcoins to defendants in June 2013, expecting to get gold in return. (Plaintiff alleged he transferred 1,644.54 BTC. The average exchange value in June 2013 was $107.82/BTC. Historical bitcoin price data is available at http://www.coindesk.com/price.) When the gold never arrived, plaintiff sued for fraud.
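For readers checking the math, the complaint’s figures multiply out to roughly the alleged amount:

```python
# Back-of-the-envelope check of the complaint's valuation, using the
# figures cited above (1,644.54 BTC at an average of $107.82/BTC).
btc_transferred = 1644.54
avg_price_june_2013 = 107.82  # USD per BTC

usd_value = btc_transferred * avg_price_june_2013
print(round(usd_value, 2))  # roughly $177,314, consistent with "about $175,000"
```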

Defendants moved to dismiss, citing a forum selection clause contained in a browsewrap agreement found on its website. That purported agreement required all disputes to be heard in the state courts of Wyoming, and for Wyoming law to apply. The court denied the motion to dismiss, finding that the browsewrap agreement afforded plaintiff neither actual nor constructive knowledge of its terms and conditions.

The court observed that the hyperlink that directed users to defendants’ Terms of Service was listed among ten other hyperlinks at the bottom of each page. (See this Wayback Machine capture of the website from June 2013).

As for lack of actual knowledge, the court credited plaintiff’s allegations that he did not review or even know of defendants’ Terms of Service when he entered the bitcoin transaction. And there was no evidence to the contrary in the record.

And as for lack of constructive knowledge, the court found that the hyperlink, “buried at the bottom of the webpage – [was] without some additional act of notification, insufficient for the purpose of providing reasonable notice.”

Hussein v. Coinabul, LLC, No. 14-5735, 2014 WL 7261240 (N.D. Ill. December 19, 2014)

Court allows class action plaintiffs to set up social media accounts to draw in other plaintiffs

Some former interns sued Gawker Media under the Fair Labor Standards Act. The court ordered the parties to meet and confer about the content and dissemination of the proposed notice to other potential class members. Plaintiffs suggested, among other things, that they establish social media accounts (Facebook, Twitter, LinkedIn) titled “Gawker Intern Lawsuit” or “Gawker Class Action”. Gawker objected.

The court permitted the establishment of the social media accounts. It rejected Gawker’s argument that the lack of evidence that any former intern used social media would make the notice ineffective. The court found it “unrealistic” that the former interns did not maintain social media accounts.

Gawker also argued that using social media to give notice would take control of the dissemination out of the court’s hands. Since users could comment on the posted content, Gawker argued, the court would be “deprived” of its ability to oversee the message. The court likewise rejected this argument, holding that its “role [was] to ensure the fairness and accuracy of the parties’ communications with potential plaintiffs – not to be the arbiter of all discussions not involving the parties that may take place thereafter.”

Mark v. Gawker Media LLC, No. 13-4347, 2014 WL 5557489 (S.D.N.Y. November 3, 2014)