Microsoft letter to GitHub over DRM-free music software is not the first copyright-ironic action against an intermediary

TorrentFreak has reported that Microsoft demanded that GitHub take down the code repository of an app that provides access to unprotected Xbox Music tracks. Some are calling the move ironic, given that Microsoft itself offers access to DRM-free music through its API.

The situation is reminiscent of (though not legally identical to) the weirdness we observed way back in 2006, when YouTube asked TechCrunch to take down a tool that allowed people to download video clips. We recognized early on that YouTube was a copyright renegade. So it was surprising that it would take such an aggressive tactic toward purveyors of software that would make use of copyrighted works easier.

The Microsoft of today is certainly not the YouTube of 2006, so naturally its interests are different. But comparing the two scenarios yields a common conundrum: how does a company that wants to make content available more smoothly deal with other technologies and platforms that do the same thing, but cut out the main monetization opportunity?

It could be a phenomenon of copyright’s outdatedness. Both YouTube and Microsoft took action against others who were distributing technologies that touched on infringement by means of making copies of the works. That will likely remain an important protection under copyright law even after meaningful reform. But what is really at stake is the right to access content. If that were a meaningful right under the Copyright Act, companies would be less likely to take enforcement actions that appear on the surface to be ironic.

Evan Brown is an attorney in Chicago advising clients on matters dealing with copyright, technology, the internet and new media.

One must conscientiously and systematically perform the abstraction-filtration-comparison test in software copyright infringement matters

In all copyright infringement cases, a plaintiff must prove, among other things, that the defendant copied elements of plaintiff’s work that are protected by copyright. This is key because not all copying is infringement – some of what is copied may be merely ideas, processes, facts, in the public domain, or scenes a faire material. It’s not illegal to copy those things. So a successful plaintiff has to show more than “copying in fact”. It must show “illegal copying”.

Software infringement cases present some nuance for this analysis. A computer program has different levels of abstraction (i.e. from main purpose down to object code), and when the only similarities are at higher levels of abstraction, there is less chance that infringement has occurred. Some courts employ the “abstraction-filtration-comparison test” to evaluate whether a defendant accused of infringing the copyright in software has indeed illegally copied protected elements of the plaintiff’s work:

At the abstraction step, we separate the ideas (and basic utilitarian functions), which are not protectable, from the particular expression of the work. Then, we filter out the nonprotectable components of the product from the original expression. Finally, we compare the remaining protected elements to the allegedly copied work to determine if the two works are substantially similar.

In this case, plaintiff sued one of its founders for copyright infringement after that founder had moved to another company and developed software allegedly similar to software he had created while at plaintiff-company. The parties agreed to have a special master evaluate the parties’ software to opine on whether defendant had infringed. The special master found there to be infringement, and the district court agreed, ordering that copies of defendant’s software be destroyed.

Defendant sought review in the Tenth Circuit. On appeal, the court vacated and remanded. It held that the special master failed to properly document the steps involved in conducting the abstraction-filtration-comparison test.

The court found there was little evidence the special master performed the abstraction step. Although “[a]pplication of the abstractions test will necessarily vary from case-to-case and program-to-program,” a “computer program can often be parsed into at least six levels of generally declining abstraction: (i) the main purpose, (ii) the program structure or architecture, (iii) modules, (iv) algorithms and data structures, (v) source code, and (vi) object code.” In the court’s mind, “[a]bstraction is important, and it cannot be neglected here.”

The failure to “conscientiously and systematically” perform the abstraction step tainted the remainder of the three-part test. The court criticized the special master’s application of the filtration and comparison steps, observing that the special master apparently proceeded from the false premise that an infringement analysis begins and ends with “copying in fact.” The special master went to great lengths to show that defendant took steps to conceal his copying of source code (e.g., by omitting comments). But having not first properly separated out (by filtering) the unprotected elements after abstraction, the special master’s report was not sturdy enough to support a finding in the district court that infringement had occurred.

Paycom Payroll, LLC v. Richison, — F.3d —, 2014 WL 3377679 (10th Cir. July 11, 2014)

What should we do when trademarks offend?

Trademarks are symbols that convey meaning, and ostensibly that meaning is ontologically linked to the purveyor of the goods or services with which the trademark is connected. But those symbols can relate to different ontologies as well, be they freighted with racism/prejudice, religious offense, or plain old poor taste. Take for example the ongoing Redskins dispute, Muslims protesting a sacred symbol on perfume, and the weird attempt by a Malaysian company to get an Australian trademark for MH17.

The law and social advocacy step in to critique these brand owners’ selection of marks. For example, the USPTO found the Redskins marks to so disparage Native Americans that the football team should not enjoy the protections of a federal trademark registration. Ticked-off Sufis protested their holy symbol being used in a concupiscent manner. And we all sort of scratch our heads at why a company would think it should capitalize commercially on the tragedy of an airliner downed in a war zone.

But do the law and social advocacy really have any role to play here? Of course. So perhaps the more critical question is whether those roles should be primary ones. Trademarks exist to regulate commerce. More specifically, trademark law seeks primarily to ensure that a purchaser’s decision making process will be unmessed-with by others seeking to muddy that purchaser’s picture of who is providing the goods or services. If trademarks can have multiple meanings, which of course they sometimes will, shouldn’t we just let the marketplace sort that out? At the same time that trademark law is guiding a purchaser’s decision in an environment hopefully free of confusion, why not just let the sensibilities of the purchasing majority decide which products – some branded with offensive symbols, others not – are sustained?

Evan Brown is an attorney in Chicago advising clients on matters dealing with trademarks, copyright, technology, the internet and new media.

Lawsuit against Yelp over how it marketed its review filters can move forward

Plaintiff restaurant owner sued Yelp under California unfair competition law, claiming that certain statements Yelp made about the filters it uses to ascertain the unreliability or bias of user reviews were misleading and untrue. For example, plaintiff alleged that Yelp advertised that its filtering process “takes the reviews that are the most trustworthy and from the most established sources and displays them on the business page.” But, according to plaintiff, the filter did not give consumers the most trusted reviews, excluded legitimate reviews, and included reviews that were demonstrably false and biased.

Yelp filed an Anti-SLAPP motion to strike plaintiff’s complaint under California Code of Civil Procedure section 425.16, arguing that the complaint sought to interfere with Yelp’s free speech rights, and targeted speech that appeared in a public forum and was a matter of public interest. The trial court granted the motion, and plaintiff sought review with the Court of Appeal of California. On appeal, the court reversed.

It held that a motion to strike under the mechanism of California’s Anti-SLAPP statute was unavailable because of section 425.17(c), which exempts from Anti-SLAPP motions “any cause of action brought against a person primarily engaged in the business of selling or leasing goods or services,” where certain other conditions are met, including that the statement was made for purposes of promoting the speaker’s goods or services.

The appellate court disagreed with the lower court, which had found that Yelp’s statements about its filters were mere “puffery”. Instead, the court held that the statements were commercial speech that disqualified the Anti-SLAPP motion under the very language of the statute.

Demetriades v. Yelp, Inc., 2014 WL 3661491 (Cal. Ct. App. July 24, 2014)

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

When is news reporting fair use under copyright law?

Blogger claims fair use supports his challenge to DMCA takedown of YouTube video. But the “news reporting” aspect of fair use can be tricky.

An embattled California pastor sent a DMCA takedown notice to YouTube over a video clip that a blogger used “to report accurately the relationship” between two organizations. The blogger sent a counternotification and explained that he believes copyright fair use protects him against the takedown (and apparently against infringement as well).

The blogger invokes, among other things, the news reporting aspect of fair use, which is set forth in Section 107 of the Copyright Act. A recent fair use case, Swatch Group Management Services Ltd. v. Bloomberg, 742 F.3d 17 (2d Cir. 2014), might shed some interesting light on how news reporting plays into the analysis. In that case, the court found that defendant was protected by fair use when it distributed an audio recording of a company’s earnings call. Unlike many fair use cases, in which the analysis under the first factor (purpose and character of the use) becomes a question of whether the subsequent use is “transformative,” the court observed the following:

In the context of news reporting and analogous activities … the need to convey information to the public accurately may in some instances make it desirable and consonant with copyright law for a defendant to faithfully reproduce an original work rather than transform it.

A defendant may in some circumstances provide transformative material along with the faithful reproduction of an original work. But the absence of that transformative material will not disqualify a defendant from showing fair use:

[B]y disseminating not just a written transcript or article but an actual sound recording, [defendant] was able to convey with precision not only what [plaintiff's] executives said, but also how they said it. This latter type of information may be just as valuable … as the former, since a speaker’s demeanor, tone, and cadence can often elucidate his or her true beliefs far beyond what a stale transcript or summary can show.

So we see that the news reporting aspect of fair use can be conceptually separated from transformative use.

There is a slippery slope risk here, and the court recognized that. It cited the Supreme Court’s Harper & Row decision to observe that “[t]he promise of copyright would be an empty one if it could be avoided merely by dubbing the infringement a fair use ‘news report’”. In this case, however, the “independent informational value inherent in a faithful recording” carried the day. From this we can draw a guide: use of a piece of content is more likely to qualify as news reporting if the piece of content itself, and not just the raw information within it, is a news event.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

When is it okay to use social media to make fun of people?

There is news from California that discusses a Facebook page called 530 Fatties that was created to collect photos of and poke fun at obese people. It’s a rude project, and sets the context for discussing some intriguing legal and normative issues.

Apparently the site collects photos that are taken in public. One generally doesn’t have a privacy interest in being photographed while in public places. And that seems pretty straightforward if you stop and think about it — you’re in public after all. But should technology change that legal analysis? Mobile devices with good cameras connected to high speed broadband networks make creation, sharing and shaming much easier than it used to be. A population equipped with these means essentially turns all public space into a panopticon. Does that mean the individual should be given more of something-like-privacy when in public? If you think that’s crazy, consider it in light of what Justice Sotomayor wrote in her concurrence in the 2012 case of U.S. v. Jones: “I would ask whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables [one] to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.”

Apart from privacy harms, what else is at play here? For the same reasons that the combination of mobile cameras and social media jeopardizes traditional privacy assurances, it can magnify the emotional harms against a person. The public shaming that modern technology occasions can inflict deeper wounds because of the greater spatial and temporal reach of the medium. One can now easily distribute a photo or other content to countless individuals, and since the web means the end of forgetting, that content may be around for much longer than the typical human memory.

Against these concerns are the free speech interests of the speaking parties. In the U.S. especially, it’s hardwired into our sensibilities that each of us has great freedom to speak and otherwise express ourselves. The traditional First Amendment analysis will protect speech — even if it offends — unless there is something truly unlawful about it. For example, there is no free speech right to defame, to distribute obscene materials, or to use “fighting words.” Certain forms of harassment fall into the category of unprotected speech. How should we examine the role that technology plays in transforming what would otherwise be playground-like bullying (like calling someone a fatty) into unlawful speech that can subject one to civil or even criminal liability? Is the impact of technology’s use even a valid issue to discuss?

Finally, we should examine the responsibility of the intermediaries here. A social media platform generally is going to be protected by the Communications Decency Act at 47 USC 230 from liability for third party content. But we should discuss the roles of the intermediary in terms other than pure legal ones. Many social media platforms are proactive in taking down otherwise lawful content that has the tendency to offend. The pervasiveness of social media underscores the power that these platforms have to shape normative values around what is appropriate behavior among individuals. This power is indeed potentially greater than any legal or governmental power to constrain the generation and distribution of content.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

DMCA’s protection of copyright management information applied to non-electronic works

The Digital Millennium Copyright Act (DMCA) provides safe harbors from copyright infringement liability for online service providers (17 U.S.C. 512) and makes it unlawful to circumvent technological measures that effectively control access to copyrighted works (17 U.S.C. 1201). A lesser-known (and lesser-litigated) provision of the DMCA (17 U.S.C. 1202) makes it illegal to intentionally remove or alter any copyright management information or to distribute copies of works knowing that copyright management information has been removed or altered without authority of the copyright owner or the law. “Copyright management information” includes information conveyed in connection with copies of the work, such as the title and the name of the author.

A recent case from federal court in Florida considered whether this regulation of copyright management information in the DMCA applies only to electronic works intended for distribution over the internet, or whether it applies to more traditional works such as hard copy technical drawings. The court interpreted the DMCA broadly to apply to all kinds of works, whether on the internet or not.

Plaintiff alleged that defendant violated the DMCA by distributing copies of plaintiff’s drawings “knowing that [plaintiff’s] name had been removed therefrom and/or that another entity’s name had been added thereto.” Defendant argued that plaintiff failed to state a claim for a violation of the DMCA because the DMCA only applies to “technological” infringement. Defendant cited Textile Secrets Int’l, Inc. v. Ya–Ya Brand, Inc., 524 F.Supp.2d 1184 (C.D. Cal. 2007), which found that § 1202(b) “was [not] intended to apply to circumstances that have no relation to the Internet, electronic commerce, automated copyright protections, or management systems, public registers, or other technological measures or processes as contemplated in the DMCA as a whole.” To read it otherwise, the court in that case reasoned, would contradict the “legislative intent behind the DMCA to facilitate electronic and Internet commerce.”

In this case, however, the court noted that other courts, focusing on the plain language of the DMCA, have held differently and approved the DMCA’s application to non-technological contexts. See Murphy v. Millennium Radio Group LLC, 650 F.3d 295 (3d Cir. 2011); Agence France Presse v. Morel, 769 F.Supp.2d 295 (S.D.N.Y. 2011). Although the legislative history of the DMCA in this case, as in Murphy, was consistent with defendant’s interpretation, it did not actually contradict the statute’s plain language. Instead, Section 1202(b) simply established a cause of action for the removal of, among other things, the name of the author of a work when it has been conveyed in connection with copies of the work.

So the court, as it found it was required to do, considered the statute’s plain meaning before it considered its legislative history. Under that analysis, it held that plaintiff’s allegations were sufficient to state a claim for violation of the DMCA.

Roof & Rack Products, Inc. v. GYB Investors, LLC, 2014 WL 3183278 (S.D. Fla. July 8, 2014)

Evan Brown is an attorney in Chicago advising clients on matters dealing with copyright, technology, the internet and new media.

Is the Aereo decision a setback for innovation?

I have written about last week’s Aereo decision over on my firm’s blog:

One of the big questions preceding the Supreme Court’s decision in the Aereo case … was whether a holding against Aereo would put cloud services into such a legally precarious position that the innovation and investment climate would chill. While the decision clearly makes Aereo’s use of its technology illegal, one should not be too quick to foretell a drastic impact on all hosted services. Here are some reasons why. [Read the entire post]

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

No Section 230 immunity for healthcare software provider

Company could be liable for modifications made to its software that provided abbreviated third-party warnings for prescription drugs.

Cases dealing with the Communications Decency Act often involve websites. See, for example, the recent decision from the Sixth Circuit involving thedirty.com, and earlier cases about Roommates.com and Amazon. But this case considered a rather unusual proposed application of Section 230 immunity. The question was whether a provider of software that facilitated the delivery of prescription monographs (including warning information) could claim immunity. It’s unusual for Section 230 to show up in a products liability/personal injury action, but that is how it happened here.

Plaintiff suffered blindness and other injuries allegedly from taking medication she says she would not have taken had it been accompanied with certain warnings. She sued several defendants, including a software company that provided the technology whereby warnings drafted by third parties were provided to pharmacy retailers.

Defendant software company moved to dismiss on several grounds, including immunity under the Communications Decency Act, 47 U.S.C. 230. The trial court denied the motion to dismiss and defendant sought review. On appeal, the court affirmed the denial of the motion to dismiss, holding that Section 230 immunity did not apply.

At the request of the retailer that sold plaintiff her medicine, defendant software company modified its software to provide only abbreviated product warnings. Plaintiff’s claims against defendant arose from that modification.

Defendant argued that Section 230 immunity should protect it because defendant did not play any role in the decisions behind the product warning. Instead, defendant was an independent provider of software that distributed drug information to pharmacy customers. Its software enabled pharmacies to access a third party’s database of product warnings. Defendant did not author the warnings but instead provided the information under an authorization in a data license agreement. Defendant thus functioned as a pass-through entity, distributing warnings that were prepared by third parties to retailers selling prescription drugs and were printed and distributed to the individual customer when a prescription was filled.

The court found unpersuasive defendant’s claim that Section 230 immunized it from liability for providing electronic access to third party warnings. Section 230 provides, in relevant part, that (1) “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” and (2) “[n]o cause of action may be brought and no liability may be imposed under any State or local rule that is inconsistent with this section.”

It held that plaintiff’s claim against defendant did not arise from defendant’s role as the software or service provider that enabled the retailer to access the third-party drafted warnings. Instead, the court found that plaintiff’s claim arose from defendant’s modification of its software to allow the retailer to distribute abbreviated drug monographs that automatically omitted warnings of serious risks. The appellate court agreed with the trial court, which found that “this is not a case in which a defendant merely distributed information from a third party author or publisher.” Instead, in the court’s view, by modifying the software so that only abbreviated warnings would appear, defendant participated in creating or modifying the content.

Hardin v. PDX, Inc., 2014 WL 2768863 (Cal. Ct. App. June 19, 2014)

Sixth Circuit holds thedirty.com entitled to Section 230 immunity

Plaintiff Jones (a high school teacher and Cincinnati Bengals cheerleader) sued the website thedirty.com and its operator for defamation over a number of third party posts that said mean things about plaintiff. Defendants moved for summary judgment, arguing that the Communications Decency Act — 47 USC § 230(c)(1) — afforded them immunity from liability for the content created by third parties. Articulating a “goofy legal standard,” the district court denied the motion, and the case was tried twice. The first trial ended in a mistrial, and the second time the jury found in favor of plaintiff.

Defendants sought review with the Sixth Circuit Court of Appeals on the issue of whether the district court erred in denying defendants’ motion for judgment as a matter of law by holding that the CDA did not bar plaintiff’s state tort claims. On appeal, the court reversed the district court and ordered that judgment as a matter of law be entered in defendants’ favor.

Section 230(c)(1) provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” At its core, § 230 grants immunity to defendant service providers in lawsuits seeking to hold the service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.

But the grant of immunity is not without limits. It applies only to the extent that an interactive computer service provider is not also the information content provider of the content at issue. A defendant is not entitled to protection from claims based on the publication of information if the defendant is responsible, in whole or in part, for the creation or development of the information.

The district court held that “a website owner who intentionally encourages illegal or actionable third-party postings to which he adds his own comments ratifying or adopting the posts becomes a ‘creator’ or ‘developer’ of that content and is not entitled to immunity.” Thus, the district court concluded that “[d]efendants, when they re-published the matters in evidence, had the same duties and liabilities for re-publishing libelous material as the author of such materials.”

The appellate court held that the district court’s test for what constitutes “creation” or “development” was too broad. Instead, the court looked to the Ninth Circuit’s decision in Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) and adopted the material contribution test from that opinion:

[W]e interpret the term “development” as referring not merely to augmenting the content generally, but to materially contributing to its alleged unlawfulness. In other words, a website helps to develop unlawful content, and thus falls within the exception to section 230, if it contributes materially to the alleged illegality of the conduct.

In the Sixth Circuit’s language, “[A] material contribution to the alleged illegality of the content does not mean merely taking action that is necessary to the display of allegedly illegal content. Rather, it means being responsible for what makes the displayed content allegedly unlawful.”

In this case, the defendants did not author the statements at issue. But they did select the statements for publication. The court held that defendants did not materially contribute to the defamatory content of the statements simply because those posts were selected for publication. Moreover, the website did not require users to post illegal or actionable content as a condition of use. The website’s content submission form simply instructed users generally to submit content. The court found the tool to be neutral (both in orientation and design) as to what third parties submit. Accordingly, the website design did not constitute a material contribution to any defamatory speech that was uploaded.

Jones v. Dirty World, No. 13-5946 (6th Cir. June 16, 2014)

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media. Contact him.