Court denies request of plaintiffs in right of publicity suit to exhume the body of Aunt Jemima

The great-grandsons of Anna S. Harrington, whose image formed the basis for Aunt Jemima, sued Quaker Oats Company and others for $2 billion, claiming that defendants failed to pay royalties to Harrington’s estate after her death in 1955. One of the allegations in the case is that defendants played a role in Harrington’s death. Apparently in an effort to support those allegations, plaintiffs sought an order from the US District Court for the Northern District of Illinois (where the matter is pending) allowing them to exhume the body of their great-grandmother to look for evidence of this malfeasance.

The court denied the request. Apart from its being just a bizarre ask, it turns out the “evidence” of the defendants’ role in Aunt Jemima’s death came from a parody article on Uncyclopedia. In denying the motion, the court found the following:

The motion is primarily based on statements purportedly made by Quaker Oats executives about the death of the woman who had been identified as “Aunt Jemima.” But the source of the information is an uncyclopedia.wikia.com article, which is a parody website of Wikipedia. Uncyclopedia proudly bills itself as “an encyclopedia full of misinformation and utter lies.” See uncyclopedia.wikia.com/wiki/Uncyclopedia:About.

The court also threatened the pro se plaintiffs: “Plaintiffs must take greater care in their submissions to the Court, or else face sanctions and, if litigation abuse continues, outright dismissal of the case.”

Hunter et al. v. PepsiCo Inc. et al., No. 1:14-cv-06011 (N.D. Ill. October 21, 2014)

BTW: Some info about Anna Harrington’s grave.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

GitHub jeopardizes its DMCA safe harbor status by launching its new policy

GitHub has baked some feelgood into its new DMCA takedown policy. The new setup features clearer language, a refusal to automatically disable all forks of an allegedly infringing repository, and a 24-hour window in which the target of a takedown notice may make changes. The mechanics of this third point ought to cause one to consider whether GitHub is risking the protections of the DMCA safe harbor.

If a DMCA takedown notice alleges that only certain files (as opposed to the whole repository) infringe, then under the new policy GitHub “will contact the user who created the repository and give them approximately 24 hours to delete or modify the content specified in the notice.” If the user makes changes to the repository, the burden shifts back to the sender of the DMCA notice. This shifting-the-burden-back seems problematic under the DMCA.

GitHub’s policy says:

If the user makes changes, the copyright owner must review them and renew or revise their takedown notice if the changes are insufficient. GitHub will not take any further action unless the copyright owner contacts us to either renew the original takedown notice or submit a revised one. If the copyright owner is satisfied with the changes, they may either submit a formal retraction or else do nothing. GitHub will interpret silence longer than two weeks as an implied retraction of the takedown notice.
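To make the policy’s timing rules concrete, here is a minimal sketch of the decision path it describes, written in Python with invented names (GitHub has published no such code; this models only the 24-hour edit window and the silence-as-retraction rule quoted above):

```python
def takedown_outcome(targets_whole_repo: bool,
                     user_edited_in_time: bool,
                     owner_response: str) -> str:
    """Hypothetical model of the policy's decision path, not GitHub's actual code.

    user_edited_in_time: did the user delete/modify within ~24 hours?
    owner_response: "renewed", "revised", "retracted", or "silent"
                    (silence longer than two weeks = implied retraction).
    """
    if targets_whole_repo:
        return "repository disabled"      # whole-repo notices come straight down
    if not user_edited_in_time:
        return "repository disabled"      # no changes within the ~24-hour window
    # The burden now shifts back: GitHub waits on the copyright owner.
    if owner_response in ("renewed", "revised"):
        return "content disabled"
    return "notice treated as retracted"  # formal retraction, or two weeks of silence

# Example: the user edits and the owner never follows up; the content stays up.
print(takedown_outcome(False, True, "silent"))
```

Note what never happens on that bottom path: a removal. That gap is the crux of the safe harbor problem.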

The DMCA protects a party in GitHub’s position so long as the party “responds expeditiously to remove, or disable access to, the material that is claimed to be infringing” upon notification of claimed infringement. 17 U.S.C. § 512(c)(1)(C). Read that provision carefully: the response must be to take down, not merely to take steps to work with the alleged infringer to make things right. GitHub’s new mechanism of interpreting silence as a retraction is not an expeditious removal of, or disabling of access to, allegedly infringing material. Nothing in the DMCA requires the sender of a takedown notice to ask twice.

You’ve got to hand it to GitHub for trying to make the world a better place through this new policy. The intended net effect is to reduce the number of instances in which entire repositories are taken down simply because of a few allegedly infringing files. But GitHub is putting something of great value, namely, its DMCA safe harbor protection, at risk.

Many copyright plaintiffs look for every possible angle to pin liability. You can be almost certain that a copyright owner will challenge GitHub’s safe harbor status on the ground that GitHub did not respond expeditiously. It seems odd that GitHub would be willing to toss a perfectly good affirmative defense. One would think the better approach would be to go ahead and take the repository down after 24 hours, rather than leaving it up and risking a finding of “non-expeditiousness.”

Related:

Microsoft letter to GitHub over DRM-free music software is not the first copyright-ironic action against an intermediary

Evan Brown is an attorney in Chicago advising clients on matters dealing with copyright, technology, the internet and new media.

YouTube has been a billion dollar boon to big media

This NBC News piece reports that since 2007, YouTube’s Content ID program has enabled copyright holders to monetize content posted to the service and get paid a billion dollars in the process. (Also included in the report is the staggering statistic that Content ID scans 400 years’ worth of content every day; we live in a content-producing world of crazy proportions!)
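For perspective, here is the back-of-the-envelope arithmetic on that statistic (assuming, as seems natural, that the figure refers to total running time of video):

```python
# Back-of-the-envelope math on "400 years of content scanned every day."
years_scanned_per_day = 400
hours_per_year = 365.25 * 24                      # about 8,766 hours in a year

hours_per_day = years_scanned_per_day * hours_per_year
print(f"{hours_per_day:,.0f} hours of video scanned per day")     # ~3,506,400
print(f"{hours_per_day / 24:,.0f} days' worth of video per day")  # ~146,100
```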

So we see that with this kind of cash rolling in, it’s no wonder that Viacom finally came to its senses earlier this year when it decided to discontinue its litigation against YouTube. The billion dollar figure is also interesting: that is the very amount Viacom sought when it filed suit in March 2007.

Copyright, not privacy, motivated Reddit to take down photos of nude celebrities

This VentureBeat piece featuring Reddit CEO Yishan Wong brings up a number of interesting facts about Reddit in the wake of its receiving an additional $50 million in funding. One of them concerns Reddit’s decision to take down a subreddit devoted to sharing recently leaked celebrity nude photos.

Says Wong:

If there’s any confusion: [Reddit] did not shut down /r/TheFappening due to content linking to nude celebrity photos. The subreddit was shut down because users were reposting content already taken down due to valid DMCA requests, and because spammers began posting links to the images hosted on their own pay-per-click sites, or sites intended to spread malware.

We can’t read too much into this comment, but it does suggest that the dignitary interests of the celebrities involved were not what motivated Reddit to do the right thing. Instead, the risk of copyright liability (or, more precisely, the risk that DMCA safe harbor protection might be lost) was the stronger motivation.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

Microsoft letter to GitHub over DRM-free music software is not the first copyright-ironic action against an intermediary

TorrentFreak has reported that Microsoft demanded that GitHub take down the code repository of an app that provides access to unprotected Xbox Music tracks. Some are calling it ironic, given that Microsoft itself offers access to DRM-free music through its API.

The situation is reminiscent of (though not legally identical to) the weirdness we observed way back in 2006, when YouTube asked TechCrunch to take down a tool that allowed people to download video clips. We recognized early on that YouTube was a copyright renegade. So it was surprising that it would take such an aggressive stance toward purveyors of software that made copyrighted works easier to use.

The Microsoft of today is certainly not the YouTube of 2006, so naturally its interests are different. But comparing the two scenarios reveals a common conundrum: how does a company that wants to make content available more smoothly deal with other technologies and platforms that do the same thing, but cut out its main opportunity to monetize?

It could be a symptom of copyright’s outdatedness. Both YouTube and Microsoft took action against others who were distributing technologies that touched on infringement through the making of copies of the works. The reproduction right will likely remain an important protection under copyright law even after meaningful reform. But what is really at stake here is the right to access content. If that were a meaningful right under the Copyright Act, companies would be less likely to take enforcement actions that appear on the surface to be ironic.

Evan Brown is an attorney in Chicago advising clients on matters dealing with copyright, technology, the internet and new media.

One must conscientiously and systematically perform the abstraction-filtration-comparison test in software copyright infringement matters

In all copyright infringement cases, a plaintiff must prove, among other things, that the defendant copied elements of plaintiff’s work that are protected by copyright. This is key because not all copying is infringement: some of what is copied may be mere ideas, processes, or facts, material in the public domain, or scenes a faire. It is not illegal to copy those things. So a successful plaintiff has to show more than “copying in fact”; it must show “illegal copying.”

Software infringement cases present some nuance for this analysis. A computer program has different levels of abstraction (i.e. from main purpose down to object code), and when the only similarities are at higher levels of abstraction, there is less chance that infringement has occurred. Some courts employ the “abstraction-filtration-comparison test” to evaluate whether a defendant accused of infringing the copyright in software has indeed illegally copied protected elements of the plaintiff’s work:

At the abstraction step, we separate the ideas (and basic utilitarian functions), which are not protectable, from the particular expression of the work. Then, we filter out the nonprotectable components of the product from the original expression. Finally, we compare the remaining protected elements to the allegedly copied work to determine if the two works are substantially similar.
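For readers who think in code, the shape of the test can be sketched as a pipeline. This is a loose illustration only: courts apply each step with legal judgment rather than mechanical rules, and every name, flag, and data structure below is invented for the example.

```python
# Loose illustration of abstraction-filtration-comparison; the flags and the
# set comparison below stand in for legal judgment calls, not computable tests.

UNPROTECTABLE_FLAGS = ("idea", "process", "fact", "public_domain", "scenes_a_faire")

def filtration(elements):
    """Step 2: filter out elements that copyright does not protect."""
    return [e for e in elements if not any(e.get(f) for f in UNPROTECTABLE_FLAGS)]

def afc_test(plaintiff_levels, accused_levels):
    """Step 1 (abstraction) is assumed already done: each argument maps an
    abstraction level (e.g. "architecture", "source code") to element dicts."""
    for level, elements in plaintiff_levels.items():
        protected = {e["text"] for e in filtration(elements)}          # step 2
        accused = {e["text"] for e in accused_levels.get(level, [])}
        if protected and protected <= accused:      # step 3: a crude stand-in
            return True                             # for "substantial similarity"
    return False

# Toy example: the quicksort is unprotected scenes a faire and gets filtered
# out, but the custom payroll logic is protected expression that also appears
# in the accused work, so the toy test flags infringement.
plaintiff = {"source code": [{"text": "custom payroll routine"},
                             {"text": "standard quicksort", "scenes_a_faire": True}]}
accused = {"source code": [{"text": "custom payroll routine"},
                           {"text": "standard quicksort"}]}
print(afc_test(plaintiff, accused))   # True
```

The point of the structure is that the comparison step is meaningless unless abstraction and filtration have actually been performed first, which is exactly where the special master in the case below went wrong.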

Plaintiff sued one of its founders for copyright infringement after that founder had moved to another company and developed software allegedly similar to software he had created while at the plaintiff company. The parties agreed to have a special master evaluate the parties’ software and opine on whether defendant had infringed. The special master found there to be infringement, and the district court agreed, ordering that copies of defendant’s software be destroyed.

Defendant sought review in the Tenth Circuit. On appeal, the court vacated and remanded. It held that the special master failed to properly document the steps involved in conducting the abstraction-filtration-comparison test.

The court found there was little evidence the special master performed the abstraction step. Although “[a]pplication of the abstractions test will necessarily vary from case-to-case and program-to-program,” a “computer program can often be parsed into at least six levels of generally declining abstraction: (i) the main purpose, (ii) the program structure or architecture, (iii) modules, (iv) algorithms and data structures, (v) source code, and (vi) object code.” In the court’s mind, “[a]bstraction is important, and it cannot be neglected here.”

The failure to “conscientiously and systematically” perform the abstraction step tainted the remainder of the three-part test. The court criticized the special master’s application of the filtration and comparison steps, observing that the special master apparently proceeded from the false premise that an infringement analysis begins and ends with “copying in fact.” The special master went to great lengths to show that defendant took steps to conceal his copying of source code (e.g., by omitting comments). But having not first properly separated out (by filtering) the unprotected elements after abstraction, the special master’s report was not sturdy enough to support a finding in the district court that infringement had occurred.

Paycom Payroll, LLC v. Richison, — F.3d —, 2014 WL 3377679 (10th Cir. July 11, 2014)

What should we do when trademarks offend?

Trademarks are symbols that convey meaning, and ostensibly that meaning is ontologically linked to the purveyor of the goods or services with which the trademark is connected. But those symbols can relate to different ontologies as well, be they freighted with racism/prejudice, religious offense, or plain old poor taste. Take for example the ongoing Redskins dispute, Muslims protesting a sacred symbol on perfume, and the weird attempt by a Malaysian company to get an Australian trademark for MH17.

The law and social advocacy step in to critique these brand owners’ selection of marks. For example, the USPTO found the Redskins marks to so disparage Native Americans that the football team should not enjoy the protections of a federal trademark registration. Ticked-off Sufis protested their holy symbol being used in a concupiscent manner. And we all sort of scratch our heads at why a company would think it should capitalize commercially on the tragedy of an airliner downed in a war zone.

But do the law and social advocacy really have any role to play here? Of course. So perhaps the more critical question is whether those roles should be primary ones. Trademarks exist to regulate commerce. More specifically, trademark law seeks primarily to ensure that a purchaser’s decision making process will be unmessed-with by others seeking to muddy that purchaser’s picture of who is providing the goods or services. If trademarks can have multiple meanings, which of course they sometimes do, shouldn’t we just let the marketplace sort that out? While trademark law guides a purchaser’s decision in an environment hopefully free of confusion, why not let the sensibilities of the purchasing majority decide which products (some branded with offensive symbols, others not) are sustained?

Evan Brown is an attorney in Chicago advising clients on matters dealing with trademarks, copyright, technology, the internet and new media.

Lawsuit against Yelp over how it marketed its review filters can move forward

Plaintiff restaurant owner sued Yelp under California unfair competition law, claiming that certain statements Yelp made about the filters it uses to ascertain the unreliability or bias of user reviews were misleading and untrue. For example, plaintiff alleged that Yelp advertised that its filtering process “takes the reviews that are the most trustworthy and from the most established sources and displays them on the business page.” But, according to plaintiff, the filter did not give consumers the most trusted reviews, excluded legitimate reviews, and included reviews that were demonstrably false and biased.

Yelp filed an Anti-SLAPP motion to strike plaintiff’s complaint under California Code of Civil Procedure section 425.16, arguing that the complaint sought to interfere with Yelp’s free speech rights, and targeted speech that appeared in a public forum and was a matter of public interest. The trial court granted the motion, and plaintiff sought review with the Court of Appeal of California. On appeal, the court reversed.

It held that a motion to strike under California’s Anti-SLAPP statute was unavailable because of section 425.17(c), which prohibits Anti-SLAPP motions against “any cause of action brought against a person primarily engaged in the business of selling or leasing goods or services,” where certain other conditions are met, including that the statement was made for purposes of promoting the speaker’s goods or services.

The appellate court disagreed with the lower court, which had found that Yelp’s statements about its filters were mere “puffery.” Instead, the court held that these statements disqualified the Anti-SLAPP motion under the very language of the statute pertaining to commercial speech.

Demetriades v. Yelp, Inc., 2014 WL 3661491 (Cal. Ct. App. July 24, 2014)

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

When is news reporting fair use under copyright law?

Blogger claims fair use supports his challenge to DMCA takedown of YouTube video. But “news reporting” aspect of fair use can be tricky.

An embattled California pastor sent a DMCA takedown notice to YouTube over a video clip that a blogger used “to report accurately the relationship” between two organizations. The blogger sent a counternotification and explained that he believes copyright fair use protects him against the takedown (and apparently against infringement as well).

The blogger invokes, among other things, the news reporting aspect of fair use, which one finds set forth in Section 107 of the Copyright Act. A recent fair use case, Swatch Group Management Services Ltd. v. Bloomberg, 742 F.3d 17 (2d Cir. 2014), might shed some interesting light on how news reporting plays into the analysis. In that case, the court found that defendant was protected by fair use when it distributed an audio recording of a company’s earnings call. Unlike in many fair use cases, in which the analysis under the first factor (purpose and character of the use) becomes a question of whether the subsequent use is “transformative,” the court observed the following:

In the context of news reporting and analogous activities … the need to convey information to the public accurately may in some instances make it desirable and consonant with copyright law for a defendant to faithfully reproduce an original work rather than transform it.

A defendant may in some circumstances provide transformative material along with the faithful reproduction of an original work. But the absence of that transformative material will not disqualify a defendant from showing fair use:

[B]y disseminating not just a written transcript or article but an actual sound recording, [defendant] was able to convey with precision not only what [plaintiff's] executives said, but also how they said it. This latter type of information may be just as valuable … as the former, since a speaker’s demeanor, tone, and cadence can often elucidate his or her true beliefs far beyond what a stale transcript or summary can show.

So we see that the news reporting aspect of fair use can be conceptually separated from transformative use.

There is a slippery slope risk here, and the court recognized that. It cited to the Supreme Court’s Harper & Row decision to observe that “[t]he promise of copyright would be an empty one if it could be avoided merely by dubbing the infringement a fair use ‘news report’”. In this case, however, the “independent informational value inherent in a faithful recording” carried the day. From this we see a rule or guide: use of a piece of content is more likely to be newsworthy if the piece of content itself, and not just the raw information within the content, is a news event.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

When is it okay to use social media to make fun of people?

There is news from California about a Facebook page called 530 Fatties, created to collect photos of and poke fun at obese people. It’s a rude project, and it sets the context for discussing some intriguing legal and normative issues.

Apparently the site collects photos that are taken in public. One generally doesn’t have a privacy interest in being photographed while in public places. And that seems pretty straightforward if you stop and think about it: you’re in public, after all. But should technology change that legal analysis? Mobile devices with good cameras connected to high-speed broadband networks make creation, sharing, and shaming much easier than they used to be. A population equipped with these means essentially turns all public space into a panopticon. Does that mean the individual should be given more of something-like-privacy when in public? If you think that’s crazy, consider it in light of what Justice Sotomayor wrote in her concurrence in the 2012 case of U.S. v. Jones: “I would ask whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables [one] to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.”

Apart from privacy harms, what else is at play here? For the same reasons that mobile cameras plus social media jeopardize traditional privacy assurances, the combination can magnify the emotional harm done to a person. The public shaming that modern technology occasions can inflict deeper wounds because of the greater spatial and temporal reach of the medium. One can now easily distribute a photo or other content to countless individuals, and since the web means the end of forgetting, that content may stick around far longer than the typical human memory.

Against these concerns are the free speech interests of the speaking parties. In the U.S. especially, it’s hardwired into our sensibilities that each of us has great freedom to speak and otherwise express ourselves. The traditional First Amendment analysis will protect speech, even if it offends, unless there is something truly unlawful about it. For example, there is no free speech right to defame, to distribute obscene materials, or to use “fighting words.” Certain forms of harassment also fall into the category of unprotected speech. How should we examine the role that technology plays in moving what would otherwise be playground-like bullying (like calling someone a fatty) into the realm of unlawful speech that can subject one to civil or even criminal liability? Is the impact of technology’s use even a valid issue to discuss?

Finally, we should examine the responsibility of the intermediaries here. A social media platform generally is going to be protected by the Communications Decency Act, 47 USC 230, from liability for third party content. But we should discuss the role of the intermediary in terms other than purely legal ones. Many social media platforms are proactive in taking down otherwise lawful content that has the tendency to offend. The pervasiveness of social media underscores the power these platforms have to shape normative values around what constitutes appropriate behavior among individuals. That power is potentially greater than any legal or governmental power to constrain the generation and distribution of content.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.