Why be concerned with social media estate planning?

The headline of this recent blog post by the U.S. government promises to answer the question of why you should do some social media estate planning. But the post falls short of providing a compelling reason to plan for how your social media accounts and other digital assets should be handled in the event of your demise. So I’ve come up with my own list of reasons why this might be good both for the individual and for our culture:

Security. Identity thieves prey on both the living and the dead. (See, for example, the story of the Tennessee woman who collected her dead aunt’s Social Security checks for 22 years.) While the living can run credit checks and otherwise monitor the use of their personal information, the deceased are not so diligent. Ensuring that the dataset comprising a person’s social media identity is accounted for and monitored should reduce the risk of that information being used nefariously.

Avoiding sad reminders. Spammers have no qualms about commandeering a dead person’s email account. As one Virginia family knows, putting a stop to that form of “harassment” can be painful and inconvenient.

Keeping social media uncluttered. This reason lies more in the public interest than in the interest of the deceased and his or her relatives. The advertising model for social media revenue generation relies on the accuracy and effectiveness of information about the user base. The presence of a bunch of dead people’s accounts, which are orphaned, so to speak, dilutes the effectiveness of the other data points in the social graph. So it is a good thing to prune the accounts of the deceased, or otherwise see that they are properly curated.

Preserving our heritage for posterity. Think of the ways you know about the family members who came before you. Stories and oral tradition are generally annotated by photo albums, personal correspondence and other snippets of everyday life. Social media is becoming a preferred substrate for the collection of those snippets. To have that information wander off into the digital ether unaccounted for is to forsake a means of knowing about the past.

How big a deal is this, anyway? This Mashable article commenting on the U.S. government post says that last year about 500,000 Facebook users died. That’s about 0.06% of the user base. (Incidentally, Facebook users seem much less likely to die than the general population, as roughly 0.7% of the world’s entire population died last year. Go here if you want to do the math yourself.)
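
If you want to do that math, here is a back-of-the-envelope sketch in Python. The Facebook figures come from the S-1 and the Mashable article discussed in this post; the world population and annual death counts are rough assumptions used only for illustration, not exact statistics.

```python
# Back-of-the-envelope comparison of annual death rates:
# Facebook users vs. the world population. Rough figures, not exact statistics.

facebook_users = 845_000_000       # user count cited in Facebook's S-1 (discussed below)
facebook_deaths = 500_000          # deaths last year, per the Mashable article

world_population = 7_000_000_000   # rough 2011 world population (assumption)
world_deaths = 55_000_000          # rough annual deaths worldwide (assumption)

fb_rate = facebook_deaths / facebook_users
world_rate = world_deaths / world_population

print(f"Share of Facebook users who died last year: {fb_rate:.2%}")        # about 0.06%
print(f"Share of world population that died last year: {world_rate:.2%}")  # about 0.8%, same ballpark as the figure above
```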

I say it’s kind of a big deal, but a deal that’s almost certain to get bigger.

No restraining order against uncle posting family photos on Facebook

Court refuses to consider common law invasion of privacy tort to support restraining order under Minnesota statute.

Olson v. LaBrie, 2012 WL 426585 (Minn. App. February 13, 2012)

Appellant sought a restraining order against his uncle, saying that his uncle engaged in harassment by posting family photos of appellant (including one of him in front of a Christmas tree) and mean commentary on Facebook. The trial court denied the restraining order. Appellant sought review with the state appellate court. On appeal, the court affirmed the denial of the restraining order.

It found that the photos and the commentary were mean and disrespectful, but that they could not form the basis for harassment. The court held that whether harassment occurred depended only on a reading of the statute (which provides, among other things, that a restraining order is appropriate to guard against “substantial adverse effects” on the privacy of another). It was not appropriate, the court held, to look to tort law on privacy to determine whether the statute called for a restraining order.

Teacher fired over Facebook post gets her job back

Court invokes notion of “contextual integrity” to evaluate social media user’s online behavior.

Rubino v. City of New York, 2012 WL 373101 (N.Y. Sup. February 1, 2012)

The day after a student drowned at the beach while on a field trip, a fifth grade teacher updated her Facebook status to say:

After today, I am thinking the beach sounds like a wonderful idea for my 5th graders! I HATE THEIR GUTS! They are the devils (sic) spawn!

Three days later, she regretted saying that enough to delete the post. But the school had already found out about it and fired her. After going through the administrative channels, the teacher went to court to challenge her termination.

The court agreed that getting fired was too stiff a penalty. It found that the termination was so disproportionate to the offense, in the light of all the circumstances, that it was “shocking to one’s sense of fairness.” The teacher had an unblemished record before this incident, and what’s more, she posted the content outside of school and after school hours. And there was no evidence it affected her ability to teach.

But the court said some things about the teacher’s use of social media that were even more interesting. It drew on a notion of what scholars have called “contextual integrity” to evaluate the teacher’s online behavior:

[E]ven though petitioner should have known that her postings could become public more easily than if she had uttered them during a telephone call or over dinner, given the illusion that Facebook postings reach only Facebook friends and the fleeting nature of social media, her expectation that only her friends, all of whom are adults, would see the postings is not only apparent, but reasonable.

So while the court found the teacher’s online comments to be “repulsive,” having her lose her job over them went too far.

Six interesting technology law issues raised in the Facebook IPO

Patent trolls, open source, do not track, SOPA, PIPA and much, much more: Facebook’s IPO filing has a real zoo of issues.

The securities laws require that companies going public identify risk factors that could adversely affect the company’s stock. Facebook’s S-1 filing, which it sent to the SEC today, identified almost 40 such factors. A number of these risks are examples of technology law issues that almost any internet company would face, particularly companies whose product is the users.

(1) Advertising regulation. In providing detail about the nature of this risk, Facebook mentions “adverse legal developments relating to advertising, including legislative and regulatory developments” and “the impact of new technologies that could block or obscure the display of our ads and other commercial content.” Facebook is likely concerned about the various technological and legal restrictions on online behavioral advertising, whether in the form of mandatory opportunities for users to opt out of data collection or the more aggressive “do not track” idea. The value of the advertising is of course tied to its effectiveness, and any technological, regulatory or legislative measure to enhance user privacy is a risk to Facebook’s revenue.

(2) Data security. No one knows exactly how much information Facebook has about its users. Not only does it have all the content uploaded by its 845 million users, it also has the information that could be gleaned from the staggering 100 billion friendships among those users. A data breach puts Facebook at risk of a PR backlash, regulatory investigations from the FTC, and civil liability to its users for negligence and other causes of action. But Facebook would not be left without remedy, having in its arsenal civil actions under the Computer Fraud and Abuse Act and the Stored Communications Act (among other laws) against the perpetrators. It is also likely that the federal government would step in to enforce the criminal provisions of these acts.

(3) Changing laws. The section of the S-1 discussing this risk factor provides a laundry list of the various issues that online businesses face. Among them: user privacy, rights of publicity, data protection, intellectual property, electronic contracts, competition, protection of minors, consumer protection, taxation, and online payment services. Facebook is understandably concerned that changes to any of these areas of the law, anywhere in the world, could make doing business more expensive or, even worse, make parts of the service unlawful. Though not mentioned by name here, SOPA, PIPA, and do-not-track legislation are clearly on Facebook’s mind when it notes that “there have been a number of recent legislative proposals in the United States . . . that would impose new obligations in areas such as privacy and liability for copyright infringement by third parties.”

(4) Intellectual property protection. The company begins its discussion of this risk with a few obvious observations, namely, how the company may be adversely affected if it is unable to secure trademark, copyright or patent registration for its various intellectual property assets. Later in the disclosure, though, Facebook says some really interesting things about open source:

As a result of our open source contributions and the use of open source in our products, we may license or be required to license innovations that turn out to be material to our business and may also be exposed to increased litigation risk. If the protection of our proprietary rights is inadequate to prevent unauthorized use or appropriation by third parties, the value of our brand and other intangible assets may be diminished and competitors may be able to more effectively mimic our service and methods of operations.

(5) Patent troll lawsuits. Facebook notes that internet and technology companies “frequently enter into litigation based on allegations of infringement, misappropriation, or other violations of intellectual property or other rights.” But it goes on to give special attention to those “non-practicing entities” (read: patent trolls) “that own patents and other intellectual property rights,” which “often attempt to aggressively assert their rights in order to extract value from technology companies.” Facebook believes that as its profile continues to rise, especially in the glory of its IPO, it will increasingly become the target of patent trolls. For now it does not seem worried: “[W]e do not believe that the final outcome of intellectual property claims that we currently face will have a material adverse effect on our business.” Instead, those endeavors are a drain on resources: “[D]efending patent and other intellectual property claims is costly and can impose a significant burden on management and employees….” And there is also the risk that these lawsuits might turn out badly, and Facebook would have to pay judgments, get licenses, or develop workarounds.

(6) Tort liability for user-generated content. Facebook acknowledges that it faces, and will face, claims relating to information that is published or made available on the site by its users, including claims concerning defamation, intellectual property rights, rights of publicity and privacy, and personal injury torts. Though it does not specifically mention the robust immunity from liability over third party content provided by 47 U.S.C. 230, Facebook indicates a certain confidence in the protections afforded by U.S. law from tort liability. It is the international scene that gives Facebook concern here: “This risk is enhanced in certain jurisdictions outside the United States where our protection from liability for third-party actions may be unclear and where we may be less protected under local laws than we are in the United States.”

You have to hand it to the teams of professionals who have put together Facebook’s IPO filing. I suppose the billions of dollars at stake can serve as a motivation for thoroughness. In any event, the well-articulated discussion of these risks in the S-1 is an interesting read, and can serve to guide the many lesser-valued companies out there.
