Puzzling privacy analysis in decision to unmask anonymous accused copyright infringers


Plaintiff porn company sued an unknown BitTorrent user (identified as John Doe) alleging that defendant had downloaded and distributed more than 20 of plaintiff’s films. Plaintiff asked the court for leave to serve a subpoena on Optimum Online – the ISP associated with defendant’s IP address – prior to the Rule 26(f) conference. (As we have recently discussed, leave of court is required to start discovery before the Rule 26(f) conference, but a plaintiff cannot have that conference unless it knows who the defendant is.) Plaintiff already knew defendant’s IP address. It needed to serve the subpoena on the ISP to learn defendant’s real name and physical address so it could serve him with the complaint.

The court went through a well-established test to determine that good cause existed for allowing the expedited discovery. Drawing heavily on the case of Sony Music Entm’t, Inc. v. Does 1-40, 326 F. Supp. 2d 556 (S.D.N.Y. 2004), the court evaluated:

(1) the concreteness of the plaintiff’s showing of a prima facie claim of copyright infringement,

(2) the specificity of the discovery request,

(3) the absence of alternative means to obtain the subpoenaed information,

(4) the need for the subpoenaed information to advance the claim, and

(5) the objecting party’s expectation of privacy.

The court’s conclusions were not surprising on any of these elements. But its discussion under the fifth point, namely, the defendant’s expectation of privacy, was puzzling, and the court may have missed an important point.

It looked to the recent case involving Dread Pirate Roberts and Silk Road, namely, United States v. Ulbricht, 858 F.3d 71 (2d Cir. 2017). Leaning on the Ulbricht case, the court concluded that defendant had no reasonable expectation of privacy in the sought-after information (name and physical address) because there is no expectation of privacy in “subscriber information provided to an internet provider,” such as an IP address, and such information has been “voluntarily conveyed to third parties.”

While the court does not misquote the Ulbricht case, one is left to wonder why it would use that case to support discovery of the unknown subscriber’s name and physical address. At issue in Ulbricht was whether the government violated Dread Pirate Roberts’s Fourth Amendment rights when it obtained the IP address he was using. In this case, however, the plaintiff already knew the IP address from its forensic investigations. The sought-after information here was the name and physical address, not the IP address he used.

So looking to Ulbricht to say that the Doe defendant had no expectation of privacy in his IP address does nothing to shed light on the kind of expectation of privacy, if any, he should have had in his real name and physical address.

The court’s decision ultimately is not incorrect, but it did not need to consult Ulbricht. As in the Sony Music case from which it drew the five-factor analysis, and in many other similar expedited discovery cases, the court could have simply found there was no reasonable expectation of privacy in the sought-after information, because the ISP’s terms of service put the subscriber on notice that it will turn over the information to third parties in certain circumstances like the ones arising in this case.

Strike 3 Holdings, LLC v. Doe, 2017 WL 5001474 (D.Conn., November 1, 2017)

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown [at] internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.

No liability for cable company that retained customer information in violation of law

Court essentially holds “no harm, no foul” in case involving violation of federal privacy statute. The case fails to provide an incentive for “privacy by design”.

Can a company that is obligated by law to destroy information about its former customers be held liable under that law if, after the contract with the customer ends, the company does not destroy the information as required? A recent decision from the United States Court of Appeals for the Eighth Circuit (which is located in St. Louis) gives some insight into that issue. The case is called Braitberg v. Charter Communications, Inc., — F.3d —, 2016 WL 4698283 (8th Cir., Sep. 8, 2016).


Plaintiff filed a lawsuit against his former cable company after he learned that the company held on to his personally identifiable information, including his social security number, years after he had terminated his cable service. The cable company was obligated under the federal Cable Communications Policy Act to “destroy personally identifiable information if the information is no longer necessary for the purpose for which it was collected.”

The lower court dismissed the lawsuit on the basis that plaintiff had not properly demonstrated that he had standing to bring the lawsuit. Plaintiff appealed to the Eighth Circuit. On review, the court of appeals affirmed the dismissal of the lawsuit.

The appellate court’s decision was informed by the recent Supreme Court decision in Spokeo, Inc. v. Robins, 136 S.Ct. 1540 (2016), which addressed, among other things, the question of whether a plaintiff asserting violation of a privacy statute has standing to sue.

As a general matter, Article III of the Constitution limits the jurisdiction of the federal courts to actual “cases or controversies”. A party invoking federal jurisdiction must show, among other things, that the alleged injury is both “concrete and particularized” and “actual or imminent, not conjectural or hypothetical”.

In this case, the Court of Appeals found that plaintiff had not alleged an injury in fact as required under Article III and the Spokeo decision. His complaint asserted merely “a bare procedural violation, divorced from any concrete harm.”

The court’s opinion goes on to provide some examples of when the violation of a privacy statute would give rise to standing. It does this by noting certain things that plaintiff did not allege. He did not, for example, allege that defendant had disclosed information to a third party, that any other party accessed the data, or that defendant used the information in any way after the termination of the agreement. Simply stated, he identified no material risk of harm from the retention. This speculative or hypothetical risk was insufficient for him to bring the lawsuit.

One unfortunate side effect of this decision is that it does little to encourage the implementation of “privacy by design” in the development of online platforms. As we have discussed before, various interests, including the federal government, have encouraged companies to develop systems in a way that only keeps data around for as long as it is needed. The federal courts’ unwillingness to recognize liability in situations where data is indeed kept around longer than necessary, even in violation of the law, does not provide an incentive for the utilization of privacy by design practices.

Braitberg v. Charter Communications, Inc., — F.3d —, 2016 WL 4698283 (8th Cir., Sep. 8, 2016)

Photo courtesy Flickr user Justin Hall under this Creative Commons license.

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown [at] internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.

Facebook’s Terms of Service protect it from liability for offensive fake account

Someone set up a bogus Facebook account and posted, without consent, images and video of Plaintiff engaged in a lewd act. Facebook finally deleted the account, but not until two days had passed and Plaintiff had threatened legal action.

Plaintiff sued anyway, alleging, among other things, intrusion upon seclusion, public disclosure of private facts, and infliction of emotional distress. In his complaint, Plaintiff emphasized language from Facebook’s Terms of Service that prohibited users from posting content or taking any action that “infringes or violates someone else’s rights or otherwise would violate the law.”

Facebook moved to dismiss the claims, making two arguments: (1) that the claims contradicted Facebook’s Terms of Service, and (2) that the claims were barred by the Communications Decency Act at 47 U.S.C. 230. The court granted the motion to dismiss.

It looked to the following provision from Facebook’s Terms of Service:

Although we provide rules for user conduct, we do not control or direct users’ actions on Facebook and are not responsible for the content or information users transmit or share on Facebook. We are not responsible for any offensive, inappropriate, obscene, unlawful or otherwise objectionable content or information you may encounter on Facebook. We are not responsible for the conduct, whether online or offline, of any user of Facebook.

The court also examined the following language from the Terms of Service:

We try to keep Facebook up, bug-free, and safe, but you use it at your own risk. We are providing Facebook as is without any express or implied warranties including, but not limited to, implied warranties of merchantability, fitness for a particular purpose, and non-infringement. We do not guarantee that Facebook will always be safe, secure or error-free or that Facebook will always function without disruptions, delays or imperfections. Facebook is not responsible for the actions, content, information, or data of third parties, and you release us, our directors, officers, employees, and agents from any claims and damages, known and unknown, arising out of or in any way connected with any claims you have against any such third parties.

The court found that by looking to the Terms of Service to support his claims against Facebook, Plaintiff could not likewise disavow those portions of the Terms of Service which did not support his case. Because the Terms of Service said, among other things, that Facebook was not responsible for the content of what its users post, and that a user uses the service at his or her own risk, the court could not place the responsibility onto Facebook for the offensive content.

Moreover, the court held that the Communications Decency Act shielded Facebook from liability. The CDA immunizes providers of interactive computer services against liability arising from content created by third parties. The court found that Facebook was an interactive computer service as contemplated under the CDA, that the information for which Plaintiff sought to hold Facebook liable was information provided by another information content provider, and that the complaint sought to treat Facebook as the publisher or speaker of that information.

Caraccioli v. Facebook, 2016 WL 859863 (N.D. Cal., March 7, 2016)

About the Author: Evan Brown is a Chicago attorney advising enterprises on important aspects of technology law, including software development, technology and content licensing, and general privacy issues.

California court okays lawsuit against mugshot posting website

The Court of Appeal of California has held that defendant website operator – who posted arrestees’ mugshots and names, and generated revenue from advertisements using arrestees’ names and by accepting money to take the photos down – was not entitled to have the lawsuit against it dismissed. Defendant’s profiting from the photos and their takedown was not in connection with an issue of public interest, and therefore did not entitle defendant to the relief afforded by an anti-SLAPP motion.

Plaintiff filed a class action lawsuit against defendant website operator, arguing that the website’s practice of accepting money to take down mugshots it posted violated California laws against misappropriation of likeness, and constituted unfair and unlawful business practices.

Defendant moved to dismiss, arguing plaintiff’s claims comprised a “strategic lawsuit against public participation” (or “SLAPP”). California has an anti-SLAPP statute that allows defendants to move to strike any cause of action “arising from any act of that person in furtherance of the person’s right of petition or free speech under the United States Constitution or the California Constitution in connection with a public issue …, unless the court determines that the plaintiff has established that there is a probability that the plaintiff will prevail on the claim.”

The court held that the posting of mugshots was in furtherance of defendant’s free speech rights and was in connection with a public issue. But the actual complained-of conduct – the generating of revenue through advertisements, and from fees generated for taking the photos down – was not protected activity under the anti-SLAPP statute.

Because the claims did not arise from the part of defendant’s conduct that would be considered “protected activity” under the anti-SLAPP statute, but instead arose from other, non-protected activity (making money off of people’s names and photos), the anti-SLAPP statute did not protect defendant. Unless the parties settle, the case will proceed.

Rogers v. Justmugshots.Com, Corp., 2015 WL 5838403 (Not Reported in Cal.Rptr.3d) (October 7, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Is the Sixth Circuit willing to recognize a right to be forgotten under U.S. law?

Recent FOIA decision questions the 20-year-old notion that defendants have no interest in preventing release of booking photographs during ongoing criminal proceedings.

The Freedom of Information Act (“FOIA”) implements “a general philosophy of full agency disclosure” of government records. Since the mid-90s, the Sixth Circuit has required law enforcement to turn over booking photographs of defendants while ongoing criminal proceedings are occurring.

Plaintiff sought the booking photos of four criminal defendants from the U.S. Marshal’s office. When the U.S. Marshal refused to turn the photos over, plaintiff filed suit. The district court found in plaintiff’s favor, citing the Sixth Circuit case of Detroit Free Press v. United States Department of Justice, 73 F.3d 93 (6th Cir. 1996). Defendant sought review with the Sixth Circuit and, bound by the 1996 decision, a panel of the Sixth Circuit affirmed, ordering that the photos be turned over.

But the panel was far from comfortable in its holding. Although it was bound to follow the earlier Sixth Circuit precedent, it urged the court to consider en banc whether an exception to FOIA applies to booking photographs. “In particular, we question the panel’s conclusion that defendants have no interest in preventing the public release of their booking photographs during ongoing criminal proceedings.”

The general theory behind the current requirement that booking photos be released is that the suspects have already appeared publicly in court, and the release of the photos and their names conveys no further information to implicate a protectible privacy interest. But this panel of the court noted that “[s]uch images convey an ‘unmistakable badge of criminality’ and, therefore, provide more information to the public than a person’s mere appearance.”

Something like a right to be forgotten appears in the court’s discussion of how photos can linger online: “[B]ooking photographs often remain publicly available on the Internet long after a case ends, undermining the temporal limitations presumed” by Sixth Circuit case law that calls for the release of photos during ongoing proceedings.

Detroit Free Press v. U.S. Dept. of Justice, — F.3d —, 2015 WL 4745284 (6th Cir. August 12, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Facebook hacking victim’s CFAA and SCA claims not barred by statutes of limitation

Knowledge that email account had been hacked did not start the statutes of limitation clock ticking for Computer Fraud and Abuse Act and Stored Communications Act claims based on alleged related hacking of Facebook account occurring several months later.

Plaintiff sued her ex-boyfriend in federal court for allegedly accessing her Facebook and Aol email accounts. She brought claims under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030 (“CFAA”), and the Stored Communications Act, 18 U.S.C. § 2701, et seq. (“SCA”).

Both the CFAA and the SCA have two-year statutes of limitation. Defendant moved to dismiss, arguing that the limitation periods had expired.

The district court granted the motion to dismiss, but plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal as to the email account, but reversed and remanded as to the Facebook account.

In August 2011, plaintiff discovered that someone had altered her Aol email account password. Later that month someone used her email account to send lewd and derogatory sexually-themed messages about her to people in her contact list. A few months later, similar things happened with her Facebook account — she discovered she could not log in in February 2012, and in March 2012 someone publicly posted sexually-themed messages using her account. She figured out it was her (now married) ex-boyfriend and filed suit.

The district court dismissed the claims because it found plaintiff first discovered facts giving rise to the claims in August 2011, but did not file suit until more than two years later, in January 2014. The Court of Appeals agreed with the district court as to the email account. She had enough facts in 2011 to know her Aol account had been compromised, and waited too long to file suit over that. But that was not the case with the Facebook account. The district court had concluded plaintiff knew in 2011 that her “computer” had been compromised. The Court of Appeals observed that the lower court failed to properly recognize the nuance concerning which computer systems were being accessed without authorization. Unauthorized access to the Facebook server gave rise to the claims relating to the Facebook account. The 2011 knowledge about her email being hacked did not bear on whether she knew her Facebook account would be compromised. The court observed:

We take judicial notice of the fact that it is not uncommon for one person to hold several or many Internet accounts, possibly with several or many different usernames and passwords, less than all of which may be compromised at any one time. At least on the facts as alleged by the plaintiff, it does not follow from the fact that the plaintiff discovered that one such account — AOL e-mail — had been compromised that she thereby had a reasonable opportunity to discover, or should be expected to have discovered, that another of her accounts — Facebook — might similarly have become compromised.

The decision gives us an opportunity to think about how users’ interests in keeping their data secure from third-party access attach to devices and systems that may be quite remote from where the user is located. The typical victim of a hack or data breach these days is not the owner of the server that is compromised. Instead, the incident will typically involve the compromise of a system somewhere else that hosts the user’s information or communications. This decision from the Second Circuit recognizes that reality, and helps preserve a reasonable opportunity for redress in those situations.

Sewell v. Bernardin, — F.3d —, 2015 WL 4619519 (2nd Cir. August 4, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Casual website visitor who watched videos was not protected under the Video Privacy Protection Act

A recent federal court decision from the Southern District of New York sheds light on what is required to be considered a “consumer” who is protected under the Video Privacy Protection Act (VPPA). The court held that a website visitor who merely visited a website once in a while to watch videos — without establishing a more “deliberate and durable” affiliation with the website — was not a “subscriber” to the website’s services and thus the VPPA did not prohibit the alleged disclosure of information about the website visitor’s viewing habits.

Defendant was a television network that maintained a website offering video clips and episodes of many of its television shows. The website also incorporated Facebook’s software development kit which, among other things, let visitors log into websites using their Facebook credentials. This mechanism relied on cookies. If a person had chosen to remain logged into Facebook by checking the “keep me logged in” button on Facebook’s homepage, the relevant cookie would continue to operate, regardless of what the user did with the web browser. Plaintiff alleged that this mechanism caused AMC to transmit information to Facebook about the video clips she watched on the AMC site.
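For readers curious about the mechanics, the tracking described above can be sketched in a few lines of Python. This is a simplified simulation, not Facebook’s actual SDK: the domain names, cookie name, and endpoint are all hypothetical. The point is that a persistent login cookie set by a third party travels with every request an embedded widget makes, so the third party can pair the viewer’s identity with the page (and thus the video) being viewed.

```python
# Illustrative simulation (hypothetical domains and cookie names) of the
# mechanism described above: a persistent login cookie set by a social
# network is attached to requests triggered by an embedded SDK, letting
# the third party tie a video page view to a known, logged-in user.

def build_sdk_request(cookie_jar, page_url):
    """Simulate the browser fetching an embedded SDK resource from the
    third party while the user visits a video page on the first-party site."""
    return {
        "url": "https://social.example/sdk/track",  # hypothetical endpoint
        "referer": page_url,  # reveals which video page triggered the request
        # Browsers attach any unexpired cookies previously set by that domain.
        "cookies": dict(cookie_jar.get("social.example", {})),
    }

# The user earlier checked "keep me logged in", so a persistent session
# cookie remains in the jar even after the browser is closed and reopened.
jar = {"social.example": {"session_id": "user-12345"}}

req = build_sdk_request(jar, "https://video.example/shows/episode-1")

# The third party now sees both who the viewer is and what was watched.
print(req["cookies"]["session_id"])  # → user-12345
print(req["referer"])                # → https://video.example/shows/episode-1
```

Nothing in this flow requires the user to interact with the embedded widget at all; merely loading the page is enough, which is why the “keep me logged in” choice mattered to the plaintiff’s theory.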

Plaintiff sued under the VPPA. Defendant moved to dismiss, arguing that plaintiff lacked standing under the statute and that she was not a protected “consumer” as required by the statute.

The court found that plaintiff had standing. It rejected defendant’s argument that a VPPA plaintiff must allege some injury in addition to asserting that defendant had violated the statute. “It is true . . . that Congress cannot erase Article III’s standing requirements by statutorily granting the right to sue to a plaintiff who would not otherwise have standing.” But Congress “can broaden the injuries that can support constitutional standing.”

The court next looked to whether plaintiff was a “consumer” protected under the statute. The VPPA defines the term “consumer” to include “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” Absent any assertion that plaintiff was a renter or purchaser of AMC’s goods, the parties and the court focused on whether she was a “subscriber” (a term not defined in the statute).

Because plaintiff’s allegations failed to establish a relationship with defendant sufficient to characterize her as a subscriber of defendant’s goods or services, the court dismissed the VPPA claim with leave to amend. It observed: “Conventionally, ‘subscription’ entails an exchange between subscriber and provider whereby the subscriber imparts money and/or personal information in order to receive a future and recurrent benefit, whether that benefit comprises, for instance, periodical magazines, club membership, cable services, or email updates.” In this case, “[s]uch casual consumption of web content, without any attempt to affiliate with or connect to the provider, exhibit[ed] none of the critical characteristics of ‘subscription’ and therefore [did] not suffice to render [plaintiff] a subscriber of [defendant’s] services.”

Austin-Spearman v. AMC Network Entertainment LLC, 2015 WL 1539052 (S.D.N.Y. April 7, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Best practices for providers of goods and services on the Internet of Things

Today the United States Federal Trade Commission issued a report in which it detailed a number of consumer-focused issues arising from the growing Internet of Things (IoT). Companies should pay attention to the portion of the report containing the Commission’s recommendations on best practices to participants (such as device manufacturers and service providers) in the IoT space.

The Commission structured its recommendations around four of the “FIPPs” – the Fair Information Practice Principles – which first appeared in the 1970s and which inform much of the world’s regulation geared to protect personal data. The recommendations focused on data security, data minimization, notice and choice.

DATA SECURITY

IoT participants should implement reasonable data security. The Commission noted that “[o]f course, what constitutes reasonable security for a given device will depend on a number of factors, including the amount and sensitivity of data collected and the costs of remedying the security vulnerabilities.” Nonetheless, companies should:

  • Implement “security by design”
  • Ensure their personnel practices promote good security
  • Retain and oversee service providers that provide reasonable security
  • Implement a “defense-in-depth” approach where appropriate
  • Implement reasonable access control measures
  • Monitor products in the marketplace and patch vulnerabilities

Security by Design

Companies should implement “security by design” into their devices at the outset, rather than as an afterthought by:

  • Conducting a privacy or security risk assessment to consider the risks presented by the collection and retention of consumer information.
  • Incorporating the use of “smart defaults” such as requiring consumers to change default passwords during the set-up process.
  • Considering how to minimize the data collected and retained.
  • Testing security measures before launching products.
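The “smart defaults” idea above — requiring consumers to change default passwords during set-up — is concrete enough to sketch. The following is a minimal, hypothetical illustration (the factory credential and the minimum-length policy are assumptions, not anything the report prescribes): a device setup flow that simply refuses to complete while the shipped default password is still in place.

```python
# Hypothetical sketch of the "smart default" the report recommends:
# a device setup routine that cannot finish until the factory-default
# password has been replaced with something minimally acceptable.

FACTORY_DEFAULT = "admin"  # assumed shipped credential

def complete_setup(new_password):
    """Return True only when the user has replaced the factory default."""
    if new_password == FACTORY_DEFAULT:
        return False  # setup blocked: default password still in place
    if len(new_password) < 8:
        return False  # assumed minimum-length policy
    return True

print(complete_setup("admin"))        # → False
print(complete_setup("s3cure-pass"))  # → True
```

A default of this kind shifts the burden off the consumer: security holds even for users who would never have thought to change the password on their own.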

Personnel Practices and Good Security

Companies should ensure their personnel practices promote good security by making security an executive-level concern and training employees about good security practices. A company should not assume that the ability to write code is equivalent to an understanding of the security of an embedded device.

Retain and Oversee Service Providers That Provide Reasonable Security

The Commission urged IoT participants to retain service providers that are capable of maintaining reasonable security and to oversee those companies’ performance to ensure that they do so. On this point, the Commission specifically noted that failure to do so could result in FTC law enforcement action. It pointed to a recent (non-IoT) case in which a medical transcription company outsourced its services to independent typists in India who stored their notes in clear text on an unsecured server. Patients in the U.S. were shocked to find their confidential medical information showing up in web searches.

The “Defense-in-Depth” Approach

The Commission urged companies to take additional steps to protect particularly sensitive information (e.g., health information). For example, instead of relying on the user to ensure that data passing over his or her local wireless network is encrypted using the Wi-Fi password, companies should undertake additional efforts to ensure that data is not publicly available.

Reasonable Access Control Measures

While tools such as strong authentication could be used to permit or restrict IoT devices from interacting with other devices or systems, the Commission noted companies should ensure that they do not unduly impede the usability of the device.

Monitoring of Products and Patching of Vulnerabilities

Companies may reasonably decide to limit the time during which they provide security updates and software patches, but must weigh these decisions carefully. IoT participants should also be forthright in their representations about providing ongoing security updates and software patches to consumers. Disclosing the length of time companies plan to support and release software updates for a given product line will help consumers better understand the safe “expiration dates” for their commodity internet-connected devices.

DATA MINIMIZATION

Data minimization refers to the concept that companies should limit the data they collect and retain, and dispose of it once they no longer need it. The Commission acknowledged the concern that requiring data minimization might curtail innovative uses of data. A new enterprise may not be able to reasonably foresee the types of uses it may have for information gathered in the course of providing a connected device or operating a service in conjunction with connected devices. Despite certain concerns against data minimization, the Commission recommended that companies should consider reasonably limiting their collection and retention of consumer data.

The Commission observed how data minimization mitigates risk in two ways. First, the less information in a database, the less attractive the database is as a target for hackers. Second, having less data reduces the risk that the company providing the device or service will use the information in a way that the consumer does not expect.

The Commission provided a useful example of how data minimization might work in practice. It discussed a hypothetical startup that develops a wearable device, such as a patch, that can assess a consumer’s skin condition. The device does not need to collect precise geolocation information in order to work, but it has that capability. The device manufacturer believes that such information could be useful for a future product feature that would enable users to find treatment options in their area. The Commission observed that as part of a data minimization exercise, the company should consider whether it should wait to collect geolocation information until after it begins to offer the new product feature, at which time it could disclose the new collection and seek consent. The company should also consider whether it could offer the same feature while collecting less information, such as by collecting zip code rather than precise geolocation. If the company does decide it needs the precise geolocation information, the Commission would recommend that the company provide a prominent disclosure about its collection and use of this information, and obtain consumers’ affirmative express consent. And the company should establish reasonable retention limits for the data it does collect.
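The Commission’s skin-patch hypothetical reduces to two design choices that can be shown in code. The sketch below is illustrative only — the field names, the zip-code stand-in for precise geolocation, and the 90-day retention window are my assumptions, not figures from the report — but it captures the pattern: collect the coarsest data the feature actually needs, and attach a deletion deadline to whatever is collected.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the data-minimization choices discussed above:
# collect coarse location (zip code) rather than precise coordinates when
# coarse data suffices, and give every record a retention limit.

RETENTION = timedelta(days=90)  # assumed retention window, not from the report

def minimize_location(precise_lat, precise_lon, zip_code, needs_precise=False):
    """Return only the location data the current feature actually requires."""
    if needs_precise:
        # Precise collection would call for prominent disclosure and
        # affirmative express consent under the Commission's approach.
        return {"lat": precise_lat, "lon": precise_lon}
    return {"zip": zip_code}  # area-level features need nothing finer

def with_retention(record, collected_at):
    record["delete_after"] = collected_at + RETENTION
    return record

rec = with_retention(minimize_location(41.88, -87.63, "60601"), datetime(2015, 1, 27))
print(sorted(rec))  # → ['delete_after', 'zip']  — no precise coordinates stored
```

When the future treatment-finder feature launches and precise location becomes genuinely necessary, the `needs_precise` branch is where the new disclosure-and-consent step would slot in.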

As an aspect of data minimization, the Commission also discussed de-identification as a “viable option in some contexts” to help minimize data and the risk of potential consumer harm. But as with any conversation about de-identification, the Commission addressed the risks associated with the chances of re-identification. On this note, the Commission referred to its 2012 Privacy Report in which it said that companies should:

  • take reasonable steps to de-identify the data, including by keeping up with technological developments;
  • publicly commit not to re-identify the data; and
  • have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.

This approach ensures that if the data is not reasonably de-identified and then is re-identified in the future, regulators can hold the company responsible.
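The first of the three steps above — taking reasonable technical measures to de-identify — might look something like the following sketch. The field names and the keyed-hash approach are my illustration, not the Commission’s prescription; note too that keyed hashing is really pseudonymization, which is exactly why the report pairs the technical step with the contractual commitments not to re-identify.

```python
import hashlib

# Minimal de-identification sketch (illustrative field names): drop direct
# identifiers and replace them with a keyed hash, so records can still be
# linked internally without storing who the user is. Re-identification
# would require the secret key, which is why the accompanying commitments
# not to re-identify still matter.

SECRET_KEY = b"rotate-me"  # assumed secret held by the company

def de_identify(record):
    token = hashlib.sha256(SECRET_KEY + record["email"].encode()).hexdigest()
    return {
        "user_token": token,                     # stable pseudonym, not an identity
        "skin_reading": record["skin_reading"],  # the data actually needed
    }

raw = {"email": "user@example.com", "name": "Jane Doe", "skin_reading": 0.42}
clean = de_identify(raw)
print("email" in clean, "name" in clean)  # → False False
```

The same token always maps to the same (unknown) user, so longitudinal analysis still works on the de-identified records.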

NOTICE AND CHOICE

Giving consumers notice that information is being collected, and the ability to make choices about that collection, is problematic in many IoT contexts. Data is collected continuously, by many integrated devices and systems, and getting a consumer’s consent in each context might discourage use of the technology. Moreover, often there is no easy user interface through which to provide notice and offer choice.

With these concerns in mind, the Commission noted that “not every data collection requires choice.” As an alternative, the Commission acknowledged the efficacy of a use-based approach. Companies should not be compelled, for example, to provide choice before collecting and using consumer data for practices that are consistent with the context of a transaction or the company’s relationship with a consumer. By way of example, the Commission discussed a hypothetical purchaser of a “smart oven”. The company could use temperature data to recommend another of the company’s kitchen products. The consumer would expect that. But a consumer would not expect the company to disclose information to a data broker or an ad network without having been given notice of that sharing and the ability to choose whether it should occur.

Given the practical difficulty of notice and choice on the IoT, the Commission acknowledged there is no one-size-fits-all approach. But it did suggest a number of mechanisms for communications of this sort, including:

  • Choices at point of sale
  • Tutorials (like the one Facebook uses)
  • QR codes on the device
  • Choices during setup
  • Management portals or dashboards
  • Icons
  • Out-of-band notifications (e.g., via email or text)
  • User-experience approach – “learning” what the user wants, and adjusting automatically

Conclusion

The Commission’s report does not have the force of law, but is useful in a couple of ways. From a practical standpoint, it serves as a guide for how to avoid engaging in flagrant privacy and security abuses on the IoT. But it also serves to frame a larger discussion about how providers of goods and services can and should approach the innovation process for the development of the Internet of Things.

When is it okay to use social media to make fun of people?

News out of California concerns a Facebook page called 530 Fatties that was created to collect photos of and poke fun at obese people. It is a rude project, and it sets the context for discussing some intriguing legal and normative issues.

Apparently the site collects photos that are taken in public. One generally has no privacy interest in being photographed while in public places. And that seems pretty straightforward if you stop and think about it: you are in public, after all. But should technology change that legal analysis? Mobile devices with good cameras connected to high speed broadband networks make creation, sharing and shaming much easier than they used to be. A population equipped with these means essentially turns all public space into a panopticon. Does that mean the individual should be given more of something-like-privacy when in public? If you think that’s crazy, consider it in light of what Justice Sotomayor wrote in her concurrence in the 2012 case of U.S. v. Jones: “I would ask whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables [one] to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.”

Apart from privacy harms, what else is at play here? For the same reasons that mobile cameras plus social media jeopardize traditional privacy assurances, the combination can magnify the emotional harms inflicted on a person. The public shaming that modern technology occasions can inflict deeper wounds because of the greater spatial and temporal reach of the medium. One can now easily distribute a photo or other content to countless individuals, and because the web means the end of forgetting, that content may persist far longer than the typical human memory.

Against these concerns are the free speech interests of the speaking parties. In the U.S. especially, it’s hardwired into our sensibilities that each of us has great freedom to speak and otherwise express ourselves. The traditional First Amendment analysis will protect speech — even if it offends — unless there is something truly unlawful about it. For example, there is no free speech right to defame, to distribute obscene materials, or to use “fighting words.” Certain forms of harassment fall into the category of unprotected speech. How should we examine the role that technology plays in moving what would otherwise be playground-like bullying (like calling someone a fatty) to unlawful speech that can subject one to civil or even criminal liability? Is the impact that technology’s use makes even a valid issue to discuss?

Finally, we should examine the responsibility of the intermediaries here. A social media platform generally is going to be protected from liability for third party content by the Communications Decency Act at 47 USC 230. But we should also consider the intermediary’s role in other than purely legal terms. Many social media platforms are proactive in taking down otherwise lawful content that has the tendency to offend. The pervasiveness of social media underscores the power these platforms have to shape normative values around what is appropriate behavior among individuals. This power is potentially greater than any legal or governmental power to constrain the generation and distribution of content.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

Company facing liability for accessing employee’s Twitter and Facebook accounts

While plaintiff was away from the office recovering from a serious brain injury she suffered in a work-related auto accident, some of her co-workers accessed her Twitter and Facebook accounts and posted from them, allegedly without authorization. (There was some dispute as to whether those accounts were personal to plaintiff or whether they were intended to promote the company.) Plaintiff sued, alleging several theories, including violations of the Lanham Act and the Stored Communications Act. Defendants moved for summary judgment. The court dismissed the Lanham Act claim but did not dismiss the Stored Communications Act claim.

Plaintiff had asserted a Lanham Act “false endorsement” claim, which occurs when a person’s identity is connected with a product or service in such a way that consumers are likely to be misled about that person’s sponsorship or approval of the product or service. The court found that although plaintiff had a protectable interest in her “personal brand,” she had not properly put evidence before the court that she suffered the economic harm necessary for a Lanham Act violation. The record showed that plaintiff’s alleged damages related to her mental suffering, something not recoverable under the Lanham Act.

As for the Stored Communications Act claim, the court found that the question of whether defendants were authorized to access and post using plaintiff’s social media accounts should be left up to the jury (and not determined on summary judgment). Defendants had also argued that plaintiff’s Stored Communications Act claim should be thrown out because she had not shown any actual damages. But the court held plaintiff could be entitled to the $1,000 minimum statutory damages under the act even without a showing of actual harm.

Maremont v. Susan Fredman Design Group, Ltd., 2014 WL 812401 (N.D.Ill. March 3, 2014)
