Is the Sixth Circuit willing to recognize a right to be forgotten under U.S. law?

Recent FOIA decision questions the 20-year-old notion that defendants have no interest in preventing release of booking photographs during ongoing criminal proceedings.

The Freedom of Information Act (“FOIA”) implements “a general philosophy of full agency disclosure” of government records. Since the mid-1990s, the Sixth Circuit has required law enforcement to turn over booking photographs of defendants during ongoing criminal proceedings.

Plaintiff sought the booking photos of four criminal defendants from the U.S. Marshals Service. When the Marshals Service refused to turn the photos over, plaintiff filed suit. The district court found in plaintiff’s favor, citing the Sixth Circuit case of Detroit Free Press v. United States Department of Justice, 73 F.3d 93 (6th Cir. 1996). Defendant sought review with the Sixth Circuit and, bound by the 1996 decision, a panel of the Sixth Circuit affirmed, ordering that the photos be turned over.

But the panel was far from comfortable with its holding. Although it was bound to follow the earlier Sixth Circuit precedent, it urged the full court to consider en banc whether an exception to FOIA applies to booking photographs. “In particular, we question the panel’s conclusion that defendants have no interest in preventing the public release of their booking photographs during ongoing criminal proceedings.”

The general theory behind the current requirement that booking photos be released is that the suspects have already appeared publicly in court, and the release of the photos and their names conveys no further information to implicate a protectible privacy interest. But this panel of the court noted that “[s]uch images convey an ‘unmistakable badge of criminality’ and, therefore, provide more information to the public than a person’s mere appearance.”

Something like a right to be forgotten appears in the court’s discussion of how photos can linger online: “[B]ooking photographs often remain publicly available on the Internet long after a case ends, undermining the temporal limitations presumed” by Sixth Circuit case law that calls for the release of photos during ongoing proceedings.

Detroit Free Press v. U.S. Dept. of Justice, — F.3d —, 2015 WL 4745284 (6th Cir. August 12, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Facebook hacking victim’s CFAA and SCA claims not barred by statutes of limitation

Knowledge that email account had been hacked did not start the statutes of limitation clock ticking for Computer Fraud and Abuse Act and Stored Communications Act claims based on alleged related hacking of Facebook account occurring several months later.

Plaintiff sued her ex-boyfriend in federal court for allegedly accessing her Facebook and Aol email accounts. She brought claims under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030 (“CFAA”), and the Stored Communications Act, 18 U.S.C. § 2701, et seq. (“SCA”).

Both the CFAA and the SCA have two-year statutes of limitation. Defendant moved to dismiss, arguing that the limitation periods had expired.

The district court granted the motion to dismiss, but plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal as to the email account, but reversed and remanded as to the Facebook account.

In August 2011, plaintiff discovered that someone had altered her Aol email account password. Later that month, someone used her email account to send lewd and derogatory sexually-themed messages about her to people in her contact list. A few months later, similar things happened with her Facebook account: she discovered in February 2012 that she could not log in, and in March 2012 someone publicly posted sexually-themed messages using her account. She figured out it was her (now married) ex-boyfriend and filed suit.

The district court dismissed the claims because it found plaintiff first discovered facts giving rise to the claims in August 2011, but did not file suit until more than two years later, in January 2014. The Court of Appeals agreed with the district court as to the email account: she had enough facts in 2011 to know her Aol account had been compromised, and waited too long to file suit over that.

But that was not the case with the Facebook account. The district court had concluded plaintiff knew in 2011 that her “computer” had been compromised. The Court of Appeals observed that the lower court failed to recognize the nuance concerning which computer systems were being accessed without authorization. It was unauthorized access to the Facebook server that gave rise to the claims relating to the Facebook account, and plaintiff’s 2011 knowledge that her email had been hacked did not mean she should have known her Facebook account would later be compromised. The court observed:

We take judicial notice of the fact that it is not uncommon for one person to hold several or many Internet accounts, possibly with several or many different usernames and passwords, less than all of which may be compromised at any one time. At least on the facts as alleged by the plaintiff, it does not follow from the fact that the plaintiff discovered that one such account — AOL e-mail — had been compromised that she thereby had a reasonable opportunity to discover, or should be expected to have discovered, that another of her accounts — Facebook — might similarly have become compromised.
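The limitations math that drove the split outcome can be sketched in a few lines. The dates below are approximations drawn from the opinion’s month-level facts (the precise days would come from the record), and the two-year period is applied as a simple calendar calculation:

```python
from datetime import date

def time_barred(discovered: date, filed: date, years: int = 2) -> bool:
    """True if suit was filed after the limitations period ran from discovery."""
    deadline = discovered.replace(year=discovered.year + years)
    return filed > deadline

aol_discovered = date(2011, 8, 1)       # approximate: Aol hack discovered August 2011
facebook_discovered = date(2012, 2, 1)  # approximate: Facebook lockout discovered February 2012
suit_filed = date(2014, 1, 1)           # approximate: complaint filed January 2014

print(time_barred(aol_discovered, suit_filed))       # True: more than 2 years elapsed
print(time_barred(facebook_discovered, suit_filed))  # False: within the 2-year window
```

The same filing date is timely as to one claim and late as to the other, which is why the accrual question (which account’s compromise was discovered when) mattered so much.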

The decision gives us an opportunity to think about how users’ interests in having their data kept secure from third party access attaches to devices and systems that may be quite remote from where the user is located. The typical victim of a hack or data breach these days is not going to be the owner of the server that is compromised. Instead, the incident will typically involve the compromising of a system somewhere else that is hosting the user’s information or communications. This decision from the Second Circuit recognizes that reality, and contributes to the reasonable opportunity for redress in those situations.

Sewell v. Bernardin, — F.3d —, 2015 WL 4619519 (2d Cir. August 4, 2015)


Casual website visitor who watched videos was not protected under the Video Privacy Protection Act

A recent federal court decision from the Southern District of New York sheds light on what it takes to be a “consumer” protected under the Video Privacy Protection Act (VPPA). The court held that a website visitor who merely visited a website once in a while to watch videos, without establishing a more “deliberate and durable” affiliation with the website, was not a “subscriber” to the website’s services, and thus the VPPA did not prohibit the alleged disclosure of information about the visitor’s viewing habits.

Defendant AMC is a television network that maintained a website offering video clips and episodes of many of its television shows. The website incorporated Facebook’s software development kit which, among other things, let visitors log into websites using their Facebook credentials. This mechanism relied on cookies. If a person had chosen to remain logged into Facebook by checking the “keep me logged in” button on Facebook’s homepage, the relevant cookie would continue to operate regardless of what the user did with the web browser. Plaintiff alleged that this mechanism caused AMC to transmit information to Facebook about the video clips she watched on the AMC site.

Plaintiff sued under the VPPA. Defendant moved to dismiss, arguing that plaintiff lacked standing under the statute and that she was not a protected “consumer” as required by the statute.

The court found that plaintiff had standing. It rejected defendant’s argument that a VPPA plaintiff must allege some injury in addition to asserting that defendant had violated the statute. “It is true . . . that Congress cannot erase Article III’s standing requirements by statutorily granting the right to sue to a plaintiff who would not otherwise have standing.” But Congress “can broaden the injuries that can support constitutional standing.”

The court next looked to whether plaintiff was a “consumer” protected under the statute. The VPPA defines the term “consumer” to include “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” Absent any assertion that plaintiff was a renter or purchaser of AMC’s goods, the parties and the court focused on whether she was a “subscriber” (a term not defined in the statute).

Because plaintiff’s allegations failed to establish a relationship with defendant sufficient to characterize her as a subscriber of defendant’s goods or services, the court dismissed the VPPA claim with leave to amend. It observed: “Conventionally, ‘subscription’ entails an exchange between subscriber and provider whereby the subscriber imparts money and/or personal information in order to receive a future and recurrent benefit, whether that benefit comprises, for instance, periodical magazines, club membership, cable services, or email updates.” In this case, “[s]uch casual consumption of web content, without any attempt to affiliate with or connect to the provider, exhibit[ed] none of the critical characteristics of ‘subscription’ and therefore [did] not suffice to render [plaintiff] a subscriber of [defendant’s] services.”

Austin-Spearman v. AMC Network Entertainment LLC, 2015 WL 1539052 (S.D.N.Y. April 7, 2015)


Best practices for providers of goods and services on the Internet of Things

Today the United States Federal Trade Commission issued a report detailing a number of consumer-focused issues arising from the growing Internet of Things (IoT). Companies should pay attention to the portion of the report containing the Commission’s recommendations on best practices for participants (such as device manufacturers and service providers) in the IoT space.

The Commission structured its recommendations around four of the “FIPPs” – the Fair Information Practice Principles – which first appeared in the 1970s and which inform much of the world’s regulation geared to protect personal data. The recommendations focused on data security, data minimization, notice, and choice.

DATA SECURITY

IoT participants should implement reasonable data security. The Commission noted that “[o]f course, what constitutes reasonable security for a given device will depend on a number of factors, including the amount and sensitivity of data collected and the costs of remedying the security vulnerabilities.” Nonetheless, companies should:

  • Implement “security by design”
  • Ensure their personnel practices promote good security
  • Retain and oversee service providers that provide reasonable security
  • Implement a “defense-in-depth” approach where appropriate
  • Implement reasonable access control measures
  • Monitor products in the marketplace and patch vulnerabilities

Security by Design

Companies should implement “security by design” into their devices at the outset, rather than as an afterthought, by:

  • Conducting a privacy or security risk assessment to consider the risks presented by the collection and retention of consumer information.
  • Incorporating the use of “smart defaults” such as requiring consumers to change default passwords during the set-up process.
  • Considering how to minimize the data collected and retained.
  • Testing security measures before launching products.
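The “smart defaults” bullet above can be made concrete with a small sketch: a setup flow that refuses to complete while the factory default password is still in place. The function name, default value, and length policy here are hypothetical illustrations, not requirements from the report:

```python
# Hypothetical setup-flow check: block device activation until the
# shipped factory password has been replaced with something minimally strong.
FACTORY_DEFAULT = "admin"
MIN_LENGTH = 8

def complete_setup(new_password: str) -> bool:
    """Return True only if the user replaced the default with an acceptable password."""
    if new_password == FACTORY_DEFAULT:
        return False  # still on the shipped default; setup cannot finish
    if len(new_password) < MIN_LENGTH:
        return False  # too short to provide meaningful protection
    return True

print(complete_setup("admin"))        # False: default unchanged
print(complete_setup("s3cure-oven"))  # True: default replaced
```

A real product would enforce a richer password policy, but the design point is the same: the secure state is the only state the setup process allows.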

Personnel Practices and Good Security

Companies should ensure their personnel practices promote good security by making security an executive-level concern and training employees about good security practices. A company should not assume that the ability to write code is equivalent to an understanding of the security of an embedded device.

Retain and Oversee Service Providers That Provide Reasonable Security

The Commission urged IoT participants to retain service providers that are capable of maintaining reasonable security and to oversee those companies’ performance to ensure that they do so. On this point, the Commission specifically noted that failure to do so could result in FTC law enforcement action. It pointed to a recent (non IoT) case in which a medical transcription company outsourced its services to independent typists in India who stored their notes in clear text on an unsecured server. Patients in the U.S. were shocked to find their confidential medical information showing up in web searches.

The “Defense-in-Depth” Approach

The Commission urged companies to take additional steps to protect particularly sensitive information (e.g., health information). For example, instead of relying on the user to ensure that data passing over his or her local wireless network is encrypted using the Wi-Fi password, companies should undertake additional efforts to ensure that data is not publicly available.

Reasonable Access Control Measures

While tools such as strong authentication can be used to permit or restrict IoT devices from interacting with other devices or systems, the Commission noted that companies should ensure such measures do not unduly impede the usability of the device.

Monitoring of Products and Patching of Vulnerabilities

Companies may reasonably decide to limit the time during which they provide security updates and software patches, but must weigh these decisions carefully. IoT participants should also be forthright in their representations about providing ongoing security updates and software patches to consumers. Disclosing the length of time companies plan to support and release software updates for a given product line will help consumers better understand the safe “expiration dates” for their commodity internet-connected devices.

DATA MINIMIZATION

Data minimization refers to the concept that companies should limit the data they collect and retain, and dispose of it once they no longer need it. The Commission acknowledged the concern that requiring data minimization might curtail innovative uses of data. A new enterprise may not be able to reasonably foresee the types of uses it may have for information gathered in the course of providing a connected device or operating a service in conjunction with connected devices. Despite these concerns, the Commission recommended that companies consider reasonably limiting their collection and retention of consumer data.

The Commission observed how data minimization mitigates risk in two ways. First, the less information in a database, the less attractive the database is as a target for hackers. Second, having less data reduces the risk that the company providing the device or service will use the information in a way that the consumer does not expect.

The Commission provided a useful example of how data minimization might work in practice. It discussed a hypothetical startup that develops a wearable device, such as a patch, that can assess a consumer’s skin condition. The device does not need to collect precise geolocation information in order to work, but it has that capability. The device manufacturer believes that such information could be useful for a future product feature that would enable users to find treatment options in their area. The Commission observed that as part of a data minimization exercise, the company should consider whether it should wait to collect geolocation information until after it begins to offer the new product feature, at which time it could disclose the new collection and seek consent. The company should also consider whether it could offer the same feature while collecting less information, such as by collecting zip code rather than precise geolocation. If the company does decide it needs the precise geolocation information, the Commission would recommend that the company provide a prominent disclosure about its collection and use of this information, and obtain consumers’ affirmative express consent. And the company should establish reasonable retention limits for the data it does collect.
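The Commission’s geolocation example suggests a simple engineering pattern: collect only as much location precision as the feature actually needs. A minimal sketch, where the one-decimal rounding granularity is an illustrative choice rather than anything prescribed by the report:

```python
def coarsen_location(lat: float, lon: float, decimals: int = 1) -> tuple:
    """Round coordinates so only neighborhood-scale location is retained.

    One decimal degree of latitude is roughly 11 km, so the precise
    device fix is discarded before the value is ever stored.
    """
    return (round(lat, decimals), round(lon, decimals))

# A precise fix near downtown Chicago becomes a coarse area that still
# supports a "treatment options near you" feature.
print(coarsen_location(41.87811, -87.62980))  # (41.9, -87.6)
```

Collecting a zip code instead of coordinates, as the Commission suggests, achieves the same end; the point is that the coarsening happens at collection time, not after the precise data has already been retained.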

As an aspect of data minimization, the Commission also discussed de-identification as a “viable option in some contexts” to help minimize data and the risk of potential consumer harm. But as with any conversation about de-identification, the Commission addressed the risks associated with the chances of re-identification. On this note, the Commission referred to its 2012 Privacy Report in which it said that companies should:

  • take reasonable steps to de-identify the data, including by keeping up with technological developments;
  • publicly commit not to re-identify the data; and
  • have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.

This approach ensures that if the data is not reasonably de-identified and then is re-identified in the future, regulators can hold the company responsible.
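One common way to implement the first of those steps is keyed pseudonymization: replacing a direct identifier with an HMAC so the raw value never appears in a shared dataset. This is a sketch under the assumption that the key stays with the data owner and is never given to recipients; the key value and field names are hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret held only by the data owner, never shared with recipients.
SECRET_KEY = b"keep-this-out-of-shared-datasets"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash before sharing the record."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user": pseudonymize("alice@example.com"), "skin_reading": 0.42}
# The same input always maps to the same token, so records remain linkable
# across a dataset, but without the key a recipient cannot feasibly reverse
# the token back to the email address.
print(len(record["user"]))  # 64 hex characters
```

Technical measures like this complement, rather than replace, the contractual commitments in the second and third bullets: hashing alone does not prevent re-identification from the other fields in a record.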

NOTICE AND CHOICE

Giving consumers notice that information is being collected, and the ability to make choices about that collection, is problematic in many IoT contexts. Data is collected continuously by many integrated devices and systems, and getting a consumer’s consent in each context might discourage use of the technology. Moreover, there is often no easy user interface through which to provide notice and offer choice.

With these concerns in mind, the Commission noted that “not every data collection requires choice.” As an alternative, the Commission acknowledged the efficacy of a use-based approach. Companies should not be compelled, for example, to provide choice before collecting and using consumer data for practices that are consistent with the context of a transaction or the company’s relationship with a consumer. By way of example, the Commission discussed a hypothetical purchaser of a “smart oven”. The company could use temperature data to recommend another of the company’s kitchen products. The consumer would expect that. But a consumer would not expect the company to disclose information to a data broker or an ad network without having been given notice of that sharing and the ability to choose whether it should occur.

Given the practical difficulty of notice and choice on the IoT, the Commission acknowledged there is no one-size-fits all approach. But it did suggest a number of mechanisms for communications of this sort, including:

  • Choices at point of sale
  • Tutorials (like the one Facebook uses)
  • QR codes on the device
  • Choices during setup
  • Management portals or dashboards
  • Icons
  • Out-of-band notifications (e.g., via email or text)
  • User-experience approach – “learning” what the user wants, and adjusting automatically

Conclusion

The Commission’s report does not have the force of law, but is useful in a couple of ways. From a practical standpoint, it serves as a guide for how to avoid engaging in flagrant privacy and security abuses on the IoT. But it also serves to frame a larger discussion about how providers of goods and services can and should approach the innovation process for the development of the Internet of Things.
