Google has captured your mind

Searches reveal who we are and how we think. True intellectual privacy requires safeguarding these records


The Justice Department’s subpoena was straightforward enough. It directed Google to disclose to the U.S. government every search query that had been entered into its search engine for a two-month period, and to disclose every Internet address that could be accessed from the search engine. Google refused to comply. And so on Wednesday January 18, 2006, the Department of Justice filed a court motion in California, seeking an order that would force Google to comply with a similar request—a random sample of a million URLs from its search engine database, along with the text of every “search string entered onto Google’s search engine over a one-week period.” The Justice Department was interested in how many Internet users were looking for pornography, and it thought that analyzing the search queries of ordinary Internet users was the best way to figure this out. Google, which had a 45-percent market share at the time, was not the only search engine to receive the subpoena. The Justice Department also requested search records from AOL, Yahoo!, and Microsoft. Only Google declined the initial request and opposed it, which is the only reason we are aware that the secret request was ever made in the first place.

The government’s request for massive amounts of search history from ordinary users requires some explanation. It has to do with the federal government’s interest in online pornography, which has a long history, at least in Internet time. In 1995 Time Magazine ran its famous “Cyberporn” cover, depicting a shocked young boy staring into a computer monitor, his eyes wide, his mouth agape, and his skin illuminated by the eerie glow of the screen. The cover was part of a national panic about online pornography, to which Congress responded by passing the federal Communications Decency Act (CDA) the following year. This infamous law prohibited websites from publishing “patently offensive” content without first verifying the age and identity of their readers, and barred the sending of indecent communications to anyone under eighteen. It tried to transform the Internet into a public space that was always fit for children by default.


The CDA prompted massive protests (and litigation) charging the government with censorship. The Supreme Court agreed in the landmark case of Reno v. ACLU (1997), which struck down the CDA’s decency provisions. In his opinion for the Court, Justice John Paul Stevens explained that regulating the content of Internet expression is no different from regulating the content of newspapers. The case is arguably the most significant free speech decision of the past half century, since it extended the full protection of the First Amendment to Internet expression rather than treating the Internet like television or radio, whose content may be regulated more extensively. In language that might sound dated, Justice Stevens announced a principle that has endured: “Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Internet, in other words, was now an essential forum for free speech.

In the aftermath of Reno, Congress gave up on policing Internet indecency, but continued to focus on child protection. In 1998 it passed the Child Online Protection Act, also known as COPA. COPA punished those who engaged in web communications made “for commercial purposes” that were accessible to minors and “harmful to minors,” with a $50,000 fine and prison terms of up to six months. After extensive litigation, the Supreme Court in Ashcroft v. ACLU (2004) upheld a preliminary injunction preventing the government from enforcing the law. The Court reasoned that the government hadn’t proved that an outright ban of “harmful to minors” material was necessary. It suggested that Congress could have instead required the use of blocking or filtering software, which would have had less of an impact on free speech than a ban, and it remanded the case for further proceedings. Back in the lower court, the government wanted to produce a study showing that filtering would be ineffective, which is why it sought the search queries from Google and the other search engine companies in 2006.

Judge James Ware ruled on the subpoena on March 17, 2006, and denied most of the government’s demands. He granted the release of only 5 percent of the requested randomly selected anonymous search results and none of the actual search queries. Much of the reason for approving only a tiny sample of the de-identified search requests had to do with privacy. Google had not made a direct privacy argument, on the grounds that de-identified search queries were not “personal information,” but it argued that disclosure of the records would expose its trade secrets and harm its goodwill from users who believed that their searches were confidential. Judge Ware accepted this oddly phrased privacy claim, and added one of his own that Google had missed. The judge explained that Google users have a privacy interest in the confidentiality of their searches because a user’s identity could be reconstructed from their queries and because disclosure of such queries could lead to embarrassment (searches for, e.g., pornography or abortions) or criminal liability (searches for, e.g., “bomb placement white house”). He also placed the list of disclosed website addresses under a protective order to safeguard Google’s trade secrets.

Two facets of Judge Ware’s short opinion in the “Search Subpoena Case” are noteworthy. First, the judge was quite correct that even search requests that have had their users’ identities removed are not anonymous, as it is surprisingly easy to re-identify this kind of data. The queries we enter into search engines like Google often unwittingly reveal our identities. Most commonly, we search our own names, out of vanity, curiosity, or to discover if there are false or embarrassing facts or images of us online. But other parts of our searches can reveal our identities as well. A few months after the Search Subpoena Case, AOL made public twenty million search queries from 650,000 users of its search engine. AOL was hoping this disclosure would help researchers and had replaced its users’ names with numerical IDs to protect their privacy. But two New York Times reporters showed just how easy it could be to re-identify them. They tracked down AOL user number 4417749 and identified her as Thelma Arnold, a sixty-two-year-old widow in Lilburn, Georgia. Thelma had made hundreds of searches including “numb fingers,” “60 single men,” and “dog that urinates on everything.” The New York Times reporters used old-fashioned investigative techniques, but modern sophisticated computer science tools make re-identification of such information even easier. One such technique allowed computer scientists to re-identify users in the Netflix movie-watching database, which that company made public to researchers in 2006.
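
To make the re-identification point concrete, here is a minimal, purely illustrative sketch in Python. It is not the reporters’ method or the Netflix researchers’ algorithm; apart from the AOL details reported above, every query, name, and directory entry in it is hypothetical. It simply shows how queries themselves act as quasi-identifiers that can be matched against public records.

```python
# Toy illustration only: how "anonymized" search logs can be linked to public
# records. All data below is hypothetical except the AOL details cited above.

# An "anonymized" log: user names replaced with numeric IDs, queries left intact.
search_log = {
    4417749: ["landscapers in lilburn ga", "60 single men",
              "dog that urinates on everything"],
    1515830: ["best pizza brooklyn", "yankees tickets", "knee pain running"],
}

# A hypothetical public directory (voter rolls, phone books, social profiles).
public_directory = [
    {"name": "Thelma Arnold", "town": "lilburn",
     "attributes": ["widow", "dog owner", "gardening"]},
    {"name": "Sam Rivera", "town": "brooklyn",
     "attributes": ["runner", "baseball fan"]},
]

def reidentify(user_id, log, directory):
    """Score each directory entry by how many of its tokens appear in the user's queries."""
    text = " ".join(log[user_id]).lower()
    best = None
    for person in directory:
        tokens = [person["town"]] + person["attributes"]
        score = sum(
            1 for tok in tokens
            if any(word in text for word in tok.lower().split())
        )
        if best is None or score > best[0]:
            best = (score, person["name"])
    return best  # (overlap score, best-guess identity)

if __name__ == "__main__":
    score, name = reidentify(4417749, search_log, public_directory)
    print(f"User 4417749 most resembles: {name} (overlap score {score})")
```

On this toy data, the numeric ID 4417749 links straight back to the Lilburn entry; real re-identification attacks work the same way, only with far larger logs and far richer auxiliary datasets.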

The second interesting facet of the Search Subpoena Case is its theory of privacy. Google won because the disclosure threatened its trade secrets (a commercial privacy, of sorts) and its business goodwill (which relied on its users believing that their searches were private). Judge Ware suggested that a more direct kind of user privacy was at stake, but was not specific beyond some generalized fear of embarrassment (echoing the old theory of tort privacy) or criminal prosecution (evoking the “reasonable expectation of privacy” theme from criminal law). Most people no doubt have an intuitive sense that their Internet searches are “private,” but neither our intuitions nor the Search Subpoena Case tell us why. This is a common problem in discussions of privacy. We often use the word “privacy” without being clear about what we mean or why it matters. We can do better.

Internet searches implicate our intellectual privacy. We use tools like Google Search to make sense of the world, and intellectual privacy is needed when we are making sense of the world. Our curiosity is essential, and it should be unfettered. As I’ll show in this chapter, search queries implicate a special kind of intellectual privacy, which is the freedom of thought.

Freedom of thought and belief is the core of our intellectual privacy. This freedom is the defining characteristic of a free society and our most cherished civil liberty. This right encompasses the range of thoughts and beliefs that a person might hold or develop, dealing with matters that are trivial and important, secular and profane. And it protects the individual’s thoughts from scrutiny or coercion by anyone, whether a government official or a private actor such as an employer, a friend, or a spouse. At the level of law, if there is any constitutional right that is absolute, it is this one, which is the precondition for other political and religious rights guaranteed by the Western tradition. Yet curiously, although freedom of thought is widely regarded as our most important civil liberty, it has not been protected in our law as much as other rights, in part because it has been very difficult for the state or others to monitor thoughts and beliefs even if they wanted to.

Freedom of Thought and Intellectual Privacy

In 1913 the eminent Anglo-Irish historian J. B. Bury published A History of Freedom of Thought, in which he surveyed the importance of freedom of thought in the Western tradition, from the ancient Greeks to the twentieth century. According to Bury, the conclusion that individuals should have an absolute right to their beliefs free of state or other forms of coercion “is the most important ever reached by men.” Bury was not the only scholar to have observed that freedom of thought (or belief, or conscience) is at the core of Western civil liberties. Recognitions of this sort are commonplace and have been made by many of our greatest minds. René Descartes’s maxim, “I think, therefore I am,” identifies the power of individual thought at the core of our existence. John Milton praised in Areopagitica “the liberty to know, to utter, and to argue freely according to conscience, above all [other] liberties.”

In the nineteenth century, John Stuart Mill developed a broad notion of freedom of thought as an essential element of his theory of human liberty, which comprised “the inward domain of consciousness; demanding liberty of conscience, in the most comprehensive sense; liberty of thought and feeling; absolute freedom of opinion and sentiment on all subjects, practical or speculative, scientific, moral, or theological.” In Mill’s view, free thought was inextricably linked to and mutually dependent upon free speech, with the two concepts being a part of a broader idea of political liberty. Moreover, Mill recognized that private parties as well as the state could chill free expression and thought.

Law in Britain and America has embraced the central importance of free thought as the civil liberty on which all others depend. People who cannot think for themselves, after all, are incapable of self-government. But it was not always so. In the Middle Ages, the crime of “constructive treason” made “imagining the death of the king” an offense punishable by death. Thomas Jefferson later reflected that this crime “had drawn the Blood of the best and honestest Men in the Kingdom.” The impulse for political uniformity was related to the impulse for religious uniformity, whose story is one of martyrdom and burnings at the stake. As Supreme Court Justice William O. Douglas put it in 1963:

While kings were fearful of treason, theologians were bent on stamping out heresy. . . . The Reformation is associated with Martin Luther. But prior to him it broke out many times only to be crushed. When in time the Protestants gained control, they tried to crush the Catholics; and when the Catholics gained the upper hand, they ferreted out the Protestants. Many devices were used. Heretical books were destroyed and heretics were burned at the stake or banished. The rack, the thumbscrew, the wheel on which men were stretched, these were part of the paraphernalia.

Thankfully, the excesses of such a dangerous government power were recognized over the centuries, and thought crimes were abolished. Thus, William Blackstone’s influential Commentaries stressed the importance of the common law protection for the freedom of thought and inquiry, even under a system that allowed subsequent punishment for seditious and other kinds of dangerous speech. Blackstone explained that:

Neither is any restraint hereby laid upon freedom of thought or inquiry: liberty of private sentiment is still left; the disseminating, or making public, of bad sentiments, destructive of the ends of society, is the crime which society corrects. A man (says a fine writer on this subject) may be allowed to keep poisons in his closet, but not publicly to vend them as cordials.

Even during a time when English law allowed civil and criminal punishment for many kinds of speech that would be protected today, including blasphemy, obscenity, seditious libel, and vocal criticism of the government, jurists recognized the importance of free thought and gave it special, separate protection in both the legal and cultural traditions.

The poisons metaphor Blackstone used, for example, was adapted from Jonathan Swift’s Gulliver’s Travels, from a line that the King of Brobdingnag delivers to Gulliver. Blackstone’s treatment of freedom of thought was itself adopted by Joseph Story in his own Commentaries, the leading American treatise on constitutional law in the early Republic. Thomas Jefferson and James Madison also embraced freedom of thought. Jefferson’s famous Virginia Statute for Religious Freedom enshrined religious liberty around the declaration that “Almighty God hath created the mind free,” and James Madison forcefully called for freedom of thought and conscience in his Memorial and Remonstrance Against Religious Assessments.

Freedom of thought thus came to be protected directly as a prohibition on state coercion of truth or belief. It was one of a handful of rights protected by the original Constitution even before the ratification of the Bill of Rights. Article VI provides that “state and federal legislators, as well as officers of the United States, shall be bound by oath or affirmation, to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States.” This provision, known as the “religious test clause,” ensured that religious orthodoxy could not be imposed as a requirement for governance, a further protection of the freedom of thought (or, in this case, its closely related cousin, the freedom of conscience). The Constitution also gives special protection against the crime of treason, by defining it to exclude thought crimes and providing special evidentiary protections:

Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.

By eliminating religious tests and by defining the crime of treason as one of guilty actions rather than merely guilty minds, the Constitution was thus steadfastly part of the tradition giving exceptional protection to the freedom of thought.

Nevertheless, even when governments could not directly coerce the uniformity of beliefs, a person’s thoughts remained relevant to both law and social control. A person’s thoughts could reveal political or religious disloyalty, or they could be relevant to a defendant’s mental state in committing a crime or other legal wrong. And while thoughts could not be revealed directly, they could be discovered by indirect means. For example, thoughts could be inferred either from a person’s testimony or confessions, or by access to their papers and diaries. But both the English common law and the American Bill of Rights came to protect against these intrusions into the freedom of the mind as well.

The most direct way to obtain knowledge about a person’s thoughts would be to haul him before a magistrate as a witness and ask him under penalty of law. The English ecclesiastical courts used the “oath ex officio” for precisely this purpose. But as historian Leonard Levy has explained, this practice came under assault in Britain as invading the freedom of thought and belief. As the eminent jurist Lord Coke later declared, “no free man should be compelled to answer for his secret thoughts and opinions.” The practice of the oath was ultimately abolished in England in the cases of John Lilburne and John Entick, men who were political dissidents rather than religious heretics.

In the new United States, the Fifth Amendment guarantee that “No person . . . shall be compelled in any criminal case to be a witness against himself ” can also be seen as a resounding rejection of this sort of practice in favor of the freedom of thought. Law of course evolves, and current Fifth Amendment doctrine focuses on the consequences of a confession rather than on mental privacy, but the origins of the Fifth Amendment are part of a broad commitment to freedom of thought that runs through our law. The late criminal law scholar William Stuntz has shown that this tradition was not merely a procedural protection for all, but a substantive limitation on the power of the state to force its enemies to reveal their unpopular or heretical thoughts. As he put the point colorfully, “[i]t is no coincidence that the privilege’s origins read like a catalogue of religious and political persecution.”

Another way to obtain a person’s thoughts would be by reading their diaries or other papers. Consider the Fourth Amendment, which protects a person from unreasonable searches and seizures by the police:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Today we think about the Fourth Amendment as providing protection for the home and the person chiefly against unreasonable searches for contraband like guns or drugs. But the Fourth Amendment’s origins come not from drug cases but as a bulwark against intellectual surveillance by the state. In the eighteenth century, the English Crown had sought to quash political and religious dissent through the use of “general warrants,” legal documents that gave agents of the Crown the authority to search the homes of suspected dissidents for incriminating papers.

Perhaps the most infamous dissident of the time was John Wilkes. Wilkes was a progressive critic of Crown policy and a political rogue whose public tribulations, wit, and famed personal ugliness made him a celebrity throughout the English-speaking world. Wilkes was the editor of a progressive newspaper, the North Briton, a member of Parliament, and an outspoken critic of government policy. He was deeply critical of the 1763 Treaty of Paris ending the Seven Years War with France, a conflict known in North America as the French and Indian War. Wilkes’s damning articles angered King George, who ordered the arrest of Wilkes and his co-publishers of the North Briton, authorizing general warrants to search their papers for evidence of treason and sedition. The government ransacked numerous private homes and printers’ shops, scrutinizing personal papers for any signs of incriminating evidence. In all, forty-nine people were arrested, and Wilkes himself was charged with seditious libel, prompting a long and inconclusive legal battle of suits and countersuits.

By taking a stand against the king and intrusive searches, Wilkes became a cause célèbre among Britons at home and in the colonies. This was particularly true for many American colonists, whose own objections to British tax policy following the Treaty of Paris culminated in the American Revolution. The rebellious colonists drew from the Wilkes case the importance of political dissent as well as the need to protect dissenting citizens from unreasonable (and politically motivated) searches and seizures.

The Fourth Amendment was intended to address this problem by inscribing legal protection for “persons, houses, papers, and effects” into the Bill of Rights. A government that could not search the homes and read the papers of its citizens would be less able to engage in intellectual tyranny and enforce intellectual orthodoxy. In a pre-electronic world, the Fourth Amendment kept out the state, while trespass and other property laws kept private parties out of our homes, papers, and effects.

The Fourth and Fifth Amendments thus protect the freedom of thought at their core. As Stuntz explains, the early English cases establishing these principles were “classic First Amendment cases in a system with no First Amendment.” Even in a legal regime without protection for dissidents who expressed unpopular political or religious opinions, the English system protected those dissidents in their private beliefs, as well as the papers and other documents that might reveal those beliefs.

In American law, an even stronger protection for freedom of thought can be found in the First Amendment. Although the First Amendment text speaks of free speech, press, and assembly, the freedom of thought is unquestionably at the core of these guarantees, and courts and scholars have consistently recognized this fact. In fact, the freedom of thought and belief is the closest thing to an absolute right guaranteed by the Constitution. The Supreme Court first recognized it in the 1878 Mormon polygamy case of Reynolds v. United States, which ruled that although law could regulate religiously inspired actions such as polygamy, it was powerless to control “mere religious belief and opinions.” Freedom of thought in secular matters was identified by Justices Holmes and Brandeis as part of their dissenting tradition in free speech cases in the 1910s and 1920s. Holmes declared crisply in United States v. Schwimmer that “if there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.” And in his dissent in the Fourth Amendment wiretapping case of Olmstead v. United States, Brandeis argued that the framers of the Constitution “sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations.” Brandeis’s dissent in Olmstead adapted his theory of tort privacy into federal constitutional law around the principle of freedom of thought.

Freedom of thought became permanently enshrined in constitutional law during a series of mid-twentieth century cases that charted the contours of the modern First Amendment. In Palko v. Connecticut, Justice Cardozo characterized freedom of thought as “the matrix, the indispensable condition, of nearly every other form of freedom.” And in a series of cases involving Jehovah’s Witnesses, the Court developed a theory of the First Amendment under which the rights of free thought, speech, press, and exercise of religion were placed in a “preferred position.” Freedom of thought was central to this new theory of the First Amendment, exemplified by Justice Jackson’s opinion in West Virginia State Board of Education v. Barnette, which invalidated a state regulation requiring that public school children salute the flag each morning. Jackson declared that:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. . . .

[The flag-salute statute] transcends constitutional limitations on [legislative] power and invades the sphere of intellect and spirit which it is the purpose of the First Amendment to our Constitution to reserve from all official control.

Modern cases continue to reflect this legacy. The Court has repeatedly declared that the constitutional guarantee of freedom of thought is at the foundation of what it means to have a free society. In particular, freedom of thought has been invoked as a principal justification for preventing punishment based upon possessing or reading dangerous media. Thus, the government cannot punish a person for merely possessing unpopular or dangerous books or images based upon their content. As Alexander Meiklejohn put it succinctly, the First Amendment protects, first and foremost, “the thinking process of the community.”

Freedom of thought thus remains, as it has for centuries, the foundation of the Anglo-American tradition of civil liberties. It is also the core of intellectual privacy.

“The New Home of Mind”

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind.” So began “A Declaration of the Independence of Cyberspace,” a 1996 manifesto responding to the Communications Decency Act and other attempts by government to regulate the online world and stamp out indecency. The Declaration’s author was John Perry Barlow, a founder of the influential Electronic Frontier Foundation and a former lyricist for the Grateful Dead. Barlow argued that “[c]yberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.” This definition of the Internet as a realm of pure thought was quickly followed by an affirmation of the importance of the freedom of thought. Barlow insisted that in Cyberspace “anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” The Declaration concluded on the same theme: “We will spread ourselves across the Planet so that no one can arrest our thoughts. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.”

In his Declaration, Barlow joined a tradition of many (including many of the most important thinkers and creators of the digital world) who have expressed the idea that networked computing can be a place of “thought itself.” As early as 1960, the great computing visionary J. C. R. Licklider imagined that “in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.” Tim Berners-Lee, the architect of the World Wide Web, envisioned his creation as one that would bring “the workings of society closer to the workings of our minds.”

Barlow’s utopian demand that governments leave the electronic realm alone was only partially successful. The Communications Decency Act was, as we have seen, struck down by the Supreme Court, but today many laws regulate the Internet, such as the U.S. Digital Millennium Copyright Act and the EU Data Retention Directive. The Internet has become more (and less) than Barlow’s utopian vision—a place of business as well as of thinking. But Barlow’s description of the Internet as a world of the mind remains resonant today.

It is undeniable that today millions of people use computers as aids to their thinking. In the digital age, computers are an essential and intertwined supplement to our thoughts and our memories. Discussing Licklider’s prophecy from half a century ago, legal scholar Tim Wu notes that virtually every computer “program we use is a type of thinking aid—whether the task is to remember things (an address book), to organize prose (a word processor), or to keep track of friends (social network software).” These technologies have become not just aids to thought but also part of the thinking process itself. In the past, we invented paper and books, and then sound and video recordings to preserve knowledge and make it easier for us as individuals and societies to remember information. Digital technologies have made remembering even easier, by providing cheap storage, inexpensive retrieval, and global reach. Consider the Kindle, a cheap electronic reader that can hold 1,100 books, or even cheaper external hard drives that can hold hundreds of hours of high-definition video in a box the size of a paperback novel.

Even the words we use to describe our digital products and experiences reflect our understanding that computers and cyberspace are devices and places of the mind. IBM has famously called its laptops “ThinkPads,” and many of us use “smartphones.” Other technologies have been named in ways that affirm their status as tools of the mind—notebooks, ultrabooks, tablets, and browsers. Apple Computer produces iPads and MacBooks and has long sold its products under the slogan, “Think Different.” Google historian John Battelle has called Google’s search records a “database of intentions.” Google’s own slogan for its web browser Chrome is “browse the web as fast as you think,” revealing how web browsing itself is not just a form of reading, but a kind of thinking. My point here is not just that common usage or marketing slogans connect Internet use to thinking, but a more important one: Our use of these words reflects a reality. We are increasingly using digital technologies not just as aids to our memories but also as an essential part of the ways we think.

Search engines in particular bear a special connection to the processes of thought. How many of us have asked a factual question among friends, only for smartphones to appear as our friends race to see who can look up the answer the fastest? In private, we use search engines to learn about the world. If you have a moment, pull up your own search history on your phone, tablet, or computer, and recall your past queries. It usually makes for interesting reading—a history of your thoughts and wonderings.

But the ease with which we can pull up such a transcript reveals another fundamental feature of digital technologies—they are designed to create records of their use. Think again about the profile a search engine like Google has for you. A transcript of search queries and links followed is a close approximation to a transcript of the operation of your mind. In the logs of search engine companies are vast repositories of intellectual wonderings, questions asked, and mental whims followed. Similar logs exist for Internet service providers and other new technology companies. And the data contained in such logs is eagerly sought by government and private entities interested in monitoring intellectual activity, whether for behavioral advertising, for crime and terrorism prevention, or for other, possibly more sinister purposes.

Searching Is Thinking

With these two points in mind—the importance of freedom of thought and the idea of the Internet as a place where thought occurs—we can now return to the Google Search Subpoena with which this chapter opened. Judge Ware’s opinion revealed an intuitive understanding that the disclosure of search records was threatening to privacy, but was not clear about what kind of privacy was involved or why it matters.

Intellectual privacy, in particular the freedom of thought, supplies the answer to this problem. We use search engines to learn about and make sense of the world, to answer our questions, and as aids to our thinking. Searching, then, in a very real sense is a kind of thinking. And we have a long tradition of protecting the privacy and confidentiality of our thoughts from the scrutiny of others. It is precisely because of the importance of search records to human thought that the Justice Department wanted to access the records. But if our search records were more public, we wouldn’t merely be exposed to embarrassment like Thelma Arnold of Lilburn, Georgia. We would be less likely to search for unpopular or deviant or dangerous topics. Yet in a free society, we need to be able to think freely about any ideas, no matter how dangerous or unpopular. If we care about freedom of thought—and our political institutions are built on the assumption that we do—we should care about the privacy of electronic records that reveal our thoughts. Search records illustrate the point well, but this idea is not just limited to that one important technology. My argument about freedom of thought in the digital age is this: Any technology that we use in our thinking implicates our intellectual privacy, and if we want to preserve our ability to think fearlessly, free of monitoring, interference, or repercussion, we should imbue these technologies with a meaningful measure of intellectual privacy.

Excerpted from “Intellectual Privacy: Rethinking Civil Liberties in the Digital Age” by Neil Richards. Published by Oxford University Press. Copyright 2015 by Neil Richards. Reprinted with permission of the publisher. All rights reserved.

Neil Richards is a Professor of Law at Washington University, where he teaches and writes about privacy, free speech, and the digital revolution.

The poor fetish: commodifying working class culture

By Joseph Todd On February 25, 2015

Bullshit jobs and a pointless existence are increasingly driving London’s spiritually dead middle class towards a fetishization of working class culture.


Literally, he paints her portrait, then he can fuck off  —  he can leave. When Leonardo DiCaprio is freezing in water, she notices that he’s dead, and starts to shout, ‘I will never let you go,’ but while she is shouting this, she is pushing him away. It’s not even a love story. Again, Captains Courageous: upper classes lose their life, passion, vitality and act like a vampire to suck vitality from a lower-class guy. Once they replenish their energy, he can fuck off.

– Slavoj Zizek on Titanic

London’s middle class are in crisis — they feel empty and clamor for vitality. Their work is alienating and meaningless, many of them in “bullshit jobs” that are either socially useless, overly bureaucratic or divorced from any traditional notion of labor.

Financial services exist to grow the fortunes of capitalists, advertising to exploit our insecurities and public relations to manage the reputations of companies that do wrong. Society would not collapse without these industries. We could cope without the nexus of lobbyists, corporate lawyers and big firm accountants whose sole purpose is to protect the interests of capital. How empty it must feel to work a job that could be abolished tomorrow. One that at best makes no tangible difference to society and at worst encourages poverty, hunger and ecological collapse.

At the same time our doctors, teachers, university professors, architects, lawyers, solicitors and probation officers are rendered impotent. Desperate to just do their jobs yet besieged by bureaucracy and box-ticking. Their energies are focused not on helping the sick, teaching the young or building hospitals but on creating and maintaining the trail of paperwork that is a prerequisite of any meaningful action in late capitalist society. Talk to anybody in these professions, from the public or private sector, and the frustration that comes up again and again is that they spend the majority of their time writing reports, filling in forms and navigating bureaucratic labyrinths that serve only to justify themselves.

This inaction hurts the middle-class man. He feels impotent in the blue glare of his computer screen. Unable to do anything useful, alienated from physical labor and plagued by the knowledge that his father could use his hands, and the lower classes still do. Escape, however, is impossible. Ever since the advent of the smartphone, the traditional working day has been abolished. Office workers are at the constant mercy of email, a culture of overwork and a digitalization of work. Your job can be done anytime, anywhere and this is exactly what capital demands. Refuge can only be found in sleep, another domain which capital is determined to control.

And when the middle classes are awake and working, they cannot even show contempt for their jobs. Affective (or emotional) labor has always been a part of nursing and prostitution, be it fluffing pillows or faking orgasms, but now it has infected both the shop floor of corporate consumer chains and the offices of middle-management above. Staff working at Pret-à-Manger are encouraged to touch each other, “have presence” and “be happy to be themselves.” In the same way the open plan, hyper-extroverted modern office environment enforces positivity. Offering a systemic critique of the very nature of your work does not make you a ‘team player.’ In such an environment, bringing up the pointlessness of your job is akin to taking a shit on the boss’s desk.

This culture is symptomatic of neoliberal contradiction, one which tells us to be true to ourselves and follow our passions in a system that makes it nearly impossible to do so. A system where we work longer hours, for less money and are taught to consume instead of create. Where fulfilling vocations such as teaching, caring or the arts are either vilified, badly paid or not paid at all. Where the only work that will enable you to have a comfortable life is meaningless, bureaucratic or evil. In such a system you are left with only one option: to embrace the myth that your job is your passion while on a deeper level recognizing that it is actually bullshit.

This is London’s middle class crisis.

But thankfully capital has an antidote. Just as in Titanic, when Kate Winslet saps the life from the visceral, working class Leonardo DiCaprio, middle-class Londoners flock to bars and clubs that sell a pre-packaged, commodified experience of working class and immigrant culture. Pitched as a way to re-connect with reality, experience life on the edge and escape the bureaucratic, meaningless, alienated dissonance that pervades their working lives.

The problem, however, is that the symbols, aesthetics and identities that populate these experiences have been ripped from their original contexts and re-positioned in a way that is acceptable to the middle class. In the process, they are stripped of their culture and assigned an economic value. In this way, they are emptied of all possible meaning.

Visit any bar in the hip districts of Brixton, Dalston or Peckham and you will invariably end up in a warehouse, on the top floor of a car park or under a railway arch. Signage will be minimal and white bobbing faces will be crammed close, a Stockholm syndrome recreation of the twice-daily commute, enjoying their two hours of planned hedonism before the work/sleep cycle grinds back into gear.

Expect gritty, urban aesthetics. Railway sleepers grouped around fire pits, scuffed tables and chairs reclaimed from the last generation’s secondary schools and hastily erected toilets with clattering wooden doors and graffitied mixed sex washrooms. Notice the lack of anything meaningful. Anything with politics or soul. Notice the ubiquity of Red Stripe, once an emblem of Jamaican culture, now sold to white ‘creatives’ at £4 a can.

The warehouse, once a site of industry, has trudged down this path of appropriation. At first it was squatters and free parties, the disadvantaged of a different kind, transforming a space of labor into one of hedonistic illegality and sound system counter-culture. Now the warehouse resides in the middle-class consciousness as the go-to space for every art exhibition or party. Any meaning it may once have had is dead. Its industrial identity has been destroyed and the transgressive thrill the warehouse once represented has been neutered by money, legality and middle-class civility.

Nonetheless many still function as clubs across Southeast London, pumping out reggae and soul music appropriated from the long-established Afro-Caribbean communities to white middle-class twenty-somethings who can afford the £15 entrance. Eventually the warehouse aesthetic will make its way to the top of the pay scale and, as the areas in which they reside reach an acceptable level of gentrification, they will become blocks of luxury flats. Because what else does London need but more kitsch, high ceiling hideaways to shield capital from tax?

The ‘street food revolution’ was not a revolution but a middle-class realization that they could abandon their faux bourgeois restaurants and reach down the socioeconomic ladder instead of up. Markets that once sold fruit and vegetables for a pound a bowl to working class and immigrant communities became venues that commodified and sold the culture of their former clientèle. Vendors with new cute names but the same gritty aesthetics serve over-priced ethnic food and craft beer to a bustling metropolitan crowd, paying not for the cuisine or the cold but for the opportunity to bathe in the edgy cool aesthetic of a former working class space.

This is the romantic illusion that these bars, clubs and street food markets construct; that their customers are the ones on the edge of life, running the gauntlet of Zola’s Les Halles, eating local on makeshift benches whilst drinking beer from the can. Yet this zest is vicarious. Only experienced secondhand through objects and spaces appropriated from below. Spaces which are duly sanitized of any edge and rendered un-intimidating enough for the middle classes to inhabit. Appealing enough for them to trek to parts of London in which they’d never dare live in search of something meaningful. In the hope that some semblance of reality will slip back into view.

The illusion is delicate and fleeting. In part it explains the roving zeitgeist of the metropolitan hipster whose anatomy Douglas Haddow so brilliantly managed to pin down. Because as soon as a place becomes inhabited with too many white, middle-class faces it becomes difficult to keep playing penniless. The braying accents crowd in and the illusion shatters. Those who aren’t committed to the working class aesthetic, yuppies dressed in loafers and shirts rather than scruffy plimsolls and vintage wool coats, begin to dominate and it all becomes just a bit too West London. And in no time at all the zeitgeist rolls on to the next market, pool hall or dive bar ripe for discovery, colonization and commodification.

Not all businesses understand this delicacy. Champagne and Fromage waded into the hipster darling food market of Brixton Village, upsetting locals and regulars alike. This explicitly bourgeois restaurant, attracted by the hip kudos and ready spending of the area, inadvertently pointed out that the emperor had no clothes. That the commodified working class experience the other restaurants had been peddling was nothing more than an illusion.

The same anxiety that fuels this cultural appropriation also drives first wave gentrifiers to ‘discover’ new areas that have been populated by working class or immigrant communities for decades. Cheap rents beckon but so does the chance of emancipation from the bourgeois culture of their previous North London existence. The chance to live in an area that is gritty, genuine and real. But this reality is always kept at arm’s length. Gentrifiers have the income to insulate themselves from how locals live. They plump for spacious Georgian semi-detached houses on a quiet street away from the tower blocks. They socialize in gastro-pubs and artisan cafés. They can do without Sure Start centers, food banks and the local comprehensive.

Their experience will always be confined to dancing in a warehouse, drinking cocktails from jam jars or climbing the stairs of a multi-story car park in search of a new pop-up restaurant. Never will they face the grinding monotony of mindless work, the inability to pay bills or feed their children, nor the feeling of guilt and hopelessness that comes from being at the bottom of a system that blames the individual but offers no legitimate means by which they can escape.

This partial experience is deliberate. Because with intimate knowledge of how the other half live comes an ugly truth: that middle-class privilege is in many ways premised on working class exploitation. That the rising house prices and cheap mortgages from which they have benefited create a rental market shot with misery. That the money inherited from their parents goes largely untaxed while benefits for both the unemployed and working poor are slashed. That the unpaid internships they can afford to take sustain a culture that excludes the majority from comfortable, white collar jobs. That their accent, speech patterns and knowledge of institutions, by their very deployment in the job market, perpetuate norms that exclude those who were born outside of the cultural elite.

Effie Trinket of The Hunger Games is the ideal manifestation of this contradiction. She is Katniss and Peeta’s flamboyant chaperone who goes from being a necessary annoyance in the first film to nominal acceptance in the second. The relationship climaxes when, just as Katniss and Peeta are about to re-enter the arena, Effie presents Haymitch and Peeta with a gold band and necklace, a consumerist expression of their heightened intimacy. And in that very moment, her practiced façade of enthusiastic positivity finally breaks. Through her sobs she cries “I’m sorry, I’m so sorry” and backs away, absent for the rest of the film.

For Effie, the contradiction surfaced and was too much to bear. She realized that the misery and oppression of those in the districts was in some way caused by her privilege. But her tears were shed for a more fundamental truth — that although she recognizes the horror of the world, she enjoys the material comfort exploitation brings. That if given the choice between the status quo and revolution, she wouldn’t change a thing.

Joseph Todd is a writer and activist who has been published in The Baffler, Salon and CounterFire, among others. For more writings, visit his website.

Curing the fear of death

How “tripping out” could change everything

A chemical called “psilocybin” shows remarkable therapeutic promise. Only problem? It comes from magic mushrooms

 


The second time I ate psychedelic mushrooms I was at a log cabin on a lake in northern Maine, and afterwards I sat in a grove of spruce trees for three and a half hours, saying over and over, “There’s so much to see!”

The mushrooms converted my worldview from an uninspired blur to childlike wonderment at everything I glimpsed. And now, according to recent news, certain cancer patients are having the same experience. The active ingredient in psychedelic mushrooms, psilocybin, is being administered on a trial basis to certain participating cancer patients to help them cope with their terminal diagnosis and enjoy the final months of their lives. The provisional results show remarkable success, with implications that may be much, much bigger.

As Michael Pollan notes in a recent New Yorker piece, this research is still in its early stages. Psychedelic mushrooms are presently classified as a Schedule I drug, meaning, from the perspective of our federal government, they have no medical use and are prohibited. But the scientific community is taking some steps that – over time, and after much deliberation – could eventually change that.

Here’s how it works: In a controlled setting, cancer patients receive psilocybin plus coaching to help them make the most of the experience. Then they trip, an experience that puts ordinary life, including their cancer, in a new perspective. And that changed outlook stays with them over time. This last part might seem surprising, but at my desk I keep a picture of the spot where I had my own transcendental experience several years ago; it reminds me that my daily tribulations are not all there is to existence, nor are they what actually matter.

The preliminary research findings are convincing. You could even call them awe-inspiring. In one experiment, an astounding two-thirds of participants said the trip was “among the top five most spiritually significant experiences of their lives.” Pollan describes one cancer patient in detail, a man whose psilocybin session was followed by months that were “the happiest in his life” — even though they were also his last. Said the man’s wife: “[After his trip] it was about being with people, enjoying his sandwich and the walk on the promenade. It was as if we lived a lifetime in a year.”



Which made me do a fist pump for science: Great work, folks. Keep this up! Researchers point out that these studies are small and there’s plenty they don’t know. They also stress the difference between taking psilocybin in a clinical setting — one that’s structured and facilitated by experts — and taking the drug recreationally. (By a lake in Maine, say.) Pollan suggests that the only commonality between the two is the molecules being ingested. My (admittedly anecdotal) experience suggests matters aren’t quite that clear-cut. But even that distinction misses a larger point, which is the potential for this research to help a great many people, with cancer or without, to access a deeper sense of joy in their lives. The awe I felt by that lake in Maine — and the satisfaction and peacefulness that Pollan’s cancer patient felt while eating his sandwich and walking on the promenade — is typically absent from regular life. But that doesn’t mean it has to be.

The growing popularity of mindfulness and meditation suggests that many of us would like to inject a bit more wonder into our lives. As well we should. Not to be a damp towel or anything, but we’re all going to die. “We’re all terminal,” as one researcher said to Pollan. While it’s possible that you’ll live to be 100, and hit every item on your bucket list, life is and always will be uncertain. On any given day, disaster could strike. You could go out for some vigorous exercise and suffer a fatal heart attack, like my dad did. There’s just no way to know.

In the meantime, most of us are caught in the drudgery of to-do lists and unread emails. Responsibility makes us focus on the practical side of things — the rent isn’t going to pay itself, after all — while the force of routine makes it seem like there isn’t anything dazzling to experience anyhow. Even if we’d like to call carpe diem our motto, what we actually do is more along the lines of the quotidian: Work, commute, eat, and nod off to sleep.

With that for a backdrop, it’s not surprising that many of us experience angst about our life’s purpose, not to mention a deep-seated dread over the unavoidable fact of our mortality. It can be a wrenching experience, one that sometimes results in panic attacks or depression. We seek out remedies to ease the discomfort: Some people meditate, others drink. If you seek formal treatment, though, you’ll find that the medical establishment doesn’t necessarily consider existential dread to be a disorder. That’s because it’s normal for us to question our existence and fear our demise. In the case of debilitating angst, though, a doctor is likely to recommend the regimen for generalized anxiety — some combo of therapy and meds.

Both of these can be essential in certain cases, of course; meds tend to facilitate acceptance of the way things are, while therapy can help us, over a long stretch of time, change the things that we can to some degree control. But psychedelics are different from either of these. They seem to open a door to a different way of experiencing life. Pollan quotes one source, a longtime advocate for the therapeutic use of psilocybin, who identifies the drug’s potential for “the betterment of well people.” Psychedelics may help ordinary people, who are wrestling with ordinary angst about death and the meaning of life, to really key into, and treasure, the various experiences of their finite existence.

In other words, psychedelics could possibly help us to be more like kids.

Small children often view the world around them with mystic wonder — pushing aside blades of grass to inspect a tiny bug that’s hidden underneath, or perhaps looking wide-eyed at a bright yellow flower poking through a crack in the sidewalk. (Nothing but a common dandelion, says the adult.) Maybe the best description of psilocybin’s effect is a reversion to that childlike awe at the complexity of the world around us, to the point that we can actually relish our lives.

What’s just as remarkable is that we’re not talking about a drug that needs to be administered on a daily or weekly or even monthly basis in order to be effective. These studies gave psilocybin to cancer patients a single time. Then, for months afterward, or longer, the patients reaped enormous benefit.

(The fact that psychedelics only need to be administered once could actually make it less likely that the research will receive ample funding, because pharmaceutical companies don’t see dollar signs in a drug that’s dispensed so sparingly. But that’s another matter.)

Of course, some skepticism may be warranted. Recreational use of psychedelics has been associated with psychotic episodes. That’s a good reason for caution. And a potential criticism here is that psilocybin is doing nothing more than playing a hoax on the brain — a hoax that conjures up a mystical experience and converts us into spellbound kids. You might reasonably ask, “do I even want to wander around awe-struck at a dandelion the same way a 3-year-old might?”

So caution is reasonably advised. But what the research demonstrates is nonetheless remarkable: the way the experience seems to shake something loose in participants’ consciousness, something that lets them see beyond the dull gray of routine, or the grimness of cancer, to the joy in being with loved ones, the sensory pleasure of a good meal, or the astounding pink visuals of the sunset.

 

Ten Years After Hunter S. Thompson’s Death, the Debate Over Suicide Rages On


February 20, 2015

Today, February 20, marks the tenth anniversary of Hunter S. Thompson killing himself with a .45-caliber handgun in his home in Woody Creek, Colorado. Since his suicide, the right-to-die movement has gained a stronger foothold in American consciousness—even if the country is just as divided as ever on whether doctors should be assisting patients in ending their own lives.

“Polling has always shown a majority of people believing that someone has a moral right to commit suicide under some circumstances, but that majority has been increasing over time,” says Matthew Wynia, director of the Center for Bioethics and Humanities at the University of Colorado Anschutz Medical Campus. Wynia believes a chief factor in that change has been that “more and more people say they’ve given a good deal of thought on this issue. And the more people tend to give thought to this issue, the more likely they are to say they are in favor of people having a moral right to commit suicide, under certain circumstances.”

The sticking point is what constitutes a justifiable reason to kill yourself or have a doctor do so for you. In Thompson’s case, he was suffering from intense physical discomfort due to a back injury, broken leg, hip replacement surgery, and a lung infection. But his widow, Anita, says that while the injuries were significant, they did not justify his suicide.

“His pain was unbearable at times, but was by no means terminal,” Anita tells me via email. “That is the rub. If it were a terminal illness, the horrible aftermath would have been different for me and his loved ones. None of us minded caring for him.”

A mix of popular culture and legislative initiatives has shifted the terrain since then. When Thompson made his big exit in 2005, Jack Kevorkian was still incarcerated for helping his patients shuffle off their mortal coil. He was released in 2007, and shortly before his death a few years later, HBO chronicled his struggles to change public opinion of physician-assisted suicide in the film You Don’t Know Jack, starring Al Pacino.

Last year, suicide seemed to cross a threshold of legitimacy in America. When terminally ill 29-year-old Brittany Maynard appeared on the cover of People magazine next to the headline, “My Decision to Die,” the issue was thrust into the faces of every supermarket shopper in the US. Earlier in the year, the season finale of Girls closed with one of the main characters agreeing to help her geriatric employer end her life, only to have the woman back out after swallowing a fistful of pills, shouting, “I don’t want to die!”

After the self-inflicted death of Robin Williams last summer, those with strong moral opposition to suicide used the tragedy as an illustration of how much taking your life hurts those around you. “I simply cannot understand how any parent could kill themselves,” Henry Rollins wrote in an editorial for LA Weekly. “I don’t care how well adjusted your kid might be—choosing to kill yourself, rather than to be there for that child, is every shade of awful, traumatic and confusing. I think as soon as you have children, you waive your right to take your own life… I no longer take this person seriously. Their life wasn’t cut short—it was purposely abandoned.”

A decade earlier, Rollins’s comments might have gone unnoticed. As might have Fox News’ Shepard Smith when he referred to Williams as “such a coward” for abandoning his children. Of course, both received a good lashing in the court of public opinion for being so dismissive toward someone suffering from depression. “To the core of my being, I regret it,” Smith apologized in a statement. Rollins followed suit, saying, “I should have known better, but I obviously did not.”

A 2013 Pew Research Poll found that 38 percent of Americans believed that a person has a moral right to commit suicide if “living has become a burden.” But if the person is described as “suffering great pain and having no hope of improvement,” the number increased to 62 percent, a seven-point jump from the way Americans felt about the issue in 1990.


Still, only 47 percent of Americans in a Pew poll last October said that a doctor should be allowed to facilitate a suicide, barely different from numbers at the time of Thompson’s death. Wynia believes an enduring factor here is the public’s fear that assisted suicide will be applied as a cost-cutting measure to an already overburdened healthcare system.

“There is worry that insurance companies will cover medication to end your life, but they won’t cover treatments that allow you to extend your life,” he says. “And then the family is stuck with either ponying up the money to extend that person’s life, or they could commit suicide. That puts a lot of pressure on both the family and the individual. Also, there is the issue of the doctor being seen as a double agent who isn’t solely looking out for their best interest.”

As with abortion before Roe v. Wade, when determined citizens are denied medical assistance and left to their own devices, the results can sometimes be disastrous. “There are people who try and fail at suicide, and sometimes they end up in much worse positions than they started,” Wynia adds. “I’ve cared for someone who tried to commit suicide by drinking Drano; that’s a good way to burn out your entire esophagus, and if you survive it, you’re in very bad shape afterward.”

A 2014 Gallup poll showed considerably more support for doctors’ involvement in ending a patient’s life. When asked if physicians should be allowed to “legally end a patient’s life by some painless means,” 69 percent of Americans said they were in favor of such a procedure. But when the question was whether physicians should be able to “assist the patient to commit suicide,” support dropped to 58 percent. This has led many advocacy groups to adopt the term “aid in dying” as opposed to “assisted suicide.”

A statement on the Compassion and Choices website reads: “It is wrong to equate ‘suicide,’ which about 30,000 Americans, suffering from mental illness, tragically resort to each year, with the death-with-dignity option utilized by only 160 terminally ill, but mentally competent, patients in Oregon and Washington last year.”

According to Oregon’s Death With Dignity Act—which permitted Brittany Maynard to be prescribed a lethal dose of drugs from her physician—a patient must be over 18 years old, of sound mind, and diagnosed with a terminal illness with less than six months to live in order to be given life-ending care. Currently, four other states have bills similar to Oregon’s, while 39 states have laws banning physician-assisted suicide. Earlier this month, legislators in Colorado attempted to pass their own version of an assisted suicide bill, but it failed in committee.

In 1995, Australia’s Northern Territory briefly legalized euthanasia through the Rights of the Terminally Ill Act. Dr. Philip Nitschke was the first doctor to administer a voluntary lethal injection to a patient, followed by three more before the law was overturned by the Australian Parliament in 1997. Nitschke retired from medicine that year and began working to educate the public on how to administer their own life-ending procedure without medical supervision or assistance. Last summer, the Australian Medical Board suspended his medical registration, a decision which he is appealing.

Nitschke says two states in Australia currently offer life in prison as a penalty for anyone assisting in another’s suicide, and that he’s been contacted by the British police, who say he may be in violation of the United Kingdom’s assisted suicide laws for hosting workshops educating Brits on how to kill themselves. Unlike more moderate groups like Compassion and Choices, Nitschke’s Exit International doesn’t shy away from words like “suicide,” and feels that the right to die should be expanded dramatically.


Laws in most countries that allow physician-assisted suicide under specific circumstances do not consider psychological ailments like depression a justifiable reason for ending your life. Nitschke sees a circular hypocrisy in this, arguing that everyone should be granted the right to end their own life regardless of health, and that those suffering a mental illness are still able to give informed consent.

“Psychic suffering is as important as physical suffering when determining if a person should have help to die,” Nitschke tells me. “The prevailing medical board [in Australia] views almost any psychiatric illness as a reason why one cannot give consent—but the catch-22 is that anyone contemplating suicide, for whatever reason, must be suffering psychiatric illness.”

These days, Nitschke is avoiding criminal prosecution by merely providing information on effective suicide techniques. So long as he doesn’t physically administer a death agent to anyone—the crime that resulted in Kevorkian being hit with a second-degree murder conviction and eight years in prison—he’ll most likely steer clear of jail time.

Philip Nitschke’s euthanasia machine. Photo via Wikimedia Commons

“I think our society is very confused about liberty,” Andrew Solomon, author of The Noonday Demon: An Atlas of Depression, wrote in 2012. “I don’t think it makes sense to force women to carry children they don’t want, and I don’t think it makes sense to prevent people who wish to die from doing so. Just as my marrying my husband doesn’t damage the marriages of straight people, so people who end their lives with assistance do not threaten the lives or decisions of other people.”

While support for laws banning physician-assisted suicide typically comes from conservative religious groups and those mistrustful of government-run healthcare, the idea that the government has a role in deciding your end-of-life care is rooted in a left-leaning philosophy.

“The theory used to be that the state has an interest in the health and wellbeing of its citizens,” according to Wynia, “and therefore you as a citizen do not have a right to kill yourself, because you are, in essence, a property of the state.”

This conflicted greatly with the philosophy of Hunter S. Thompson. A proponent of both left-wing social justice and right-wing rhetoric about personal freedoms, Thompson had very strong feelings about the role of government in our daily lives, particularly when it came to what we were allowed to do with our own bodies.

“He once said to me, ‘I’d feel real trapped in this life, Ralph, if I didn’t know I could commit suicide at any moment,'” remembered friend and longtime collaborator Ralph Steadman in a recent interview with Esquire.

Sitting in a New York hotel room while writing the introduction to The Great Shark Hunt, a collection of his essays and journalism published in 1979, Thompson described feeling an existential angst when reflecting on the body of work. “I feel like I might as well be sitting up here carving the words for my own tombstone… and when I finish, the only fitting exit will be right straight off this fucking terrace and into The Fountain, 28 stories below and at least 200 yards out into the air and across Fifth Avenue… The only way I can deal with this eerie situation at all is to make a conscious decision that I have already lived and finished the life I planned to live—(13 years longer, in fact).”

Thompson’s widow, Anita, was on the phone with her husband when he took his life. To this day, she feels that the situation was far from hopeless, that his injuries weren’t beyond repair, and that he still had plenty of years left in him.

“He was about to have back surgery again, which meant that the problem would soon be fixed and he could commence his recovery,” she tells me. “My belief is that supporting somebody’s ‘freedom’ to commit suicide because he or she is in some pain or depressed is much different than a chronic or terminal illness. Although I’ve healed from the tragedy, the fact that his personal decision was actually hurried by a series of events and people that later admitted they supported his decision, still haunts me today.”

In September 2005, Rolling Stone published what has come to be known as Hunter Thompson’s suicide note. Despite being written four days beforehand, the brief message does contain the weighty despair of a man unable to inspire in himself the will to go on:

No More Games. No More Bombs. No More Walking. No More Fun. No More Swimming. 67. That is 17 years past 50. 17 more than I needed or wanted. Boring. I am always bitchy. No Fun — for anybody. 67. You are getting Greedy. Act your old age. Relax—This won’t hurt.

Seeing as he lived his life as an undefinable political anomaly—he was an icon of the hedonism of the 60s and 70s, and also a card-carrying member of the NRA—it’s only fitting that Thompson’s exit from this earth was through the most divisive and controversial doorway possible.

“The fundamental beliefs that underlie our nation are sometimes in conflict with each other—and these issues get at some of the basic tensions in what we value as Americans,” says Wynia. “We value our individual liberties, we value our right to make decisions for ourselves, but we also are a religious community, and we are mistrustful of authority. When you talk about giving the power to doctors or anyone else to help you commit suicide, it makes a lot of people nervous. Even though we also have a libertarian streak that believes, ‘I should be allowed to do this, and I should be allowed to ask my doctor to help me.’ I think this is bound to be a contentious issue for some time to come.”

If you are feeling hopeless or suicidal, there are people you can talk to. Please call the Suicide Prevention Lifeline at 1-800-273-8255.

Follow Josiah M. Hesse on Twitter.

 

http://www.vice.com/read/ten-years-after-hunter-s-thompsons-death-the-debate-over-suicide-rages-on-220?utm_source=vicefbus

Are America’s High Rates of Mental Illness Actually Based on Sham Science?


The real purpose behind many of these statistics is to change our attitudes and political positions.

About one in five American adults (18.6%) has a mental illness in any given year, according to recent statistics from the National Institute of Mental Health. This statistic has been widely reported with alarm and concern. It’s been used to back up demands for more mental health screening in schools, more legislation to forcibly treat the unwilling, more workplace psychiatric interventions, and more funding for the mental health system. And of course, personally, whenever we or someone we know is having an emotional or psychological problem, we now wonder, is it a mental illness requiring treatment? If one in five of us has one….

But what NIMH quietly made disappear from its website is the fact that this number actually represented a dramatic drop. “An estimated 26.2 percent of Americans ages 18 and older — about one in four adults — suffer from a diagnosable mental disorder in a given year,” the NIMH website can still be found to say in Archive.org’s Wayback Machine. Way back, that is, in 2013.

A reduction in the prevalence of an illness by eight percent of America’s population—25 million fewer victims in one year—is extremely significant. So isn’t that the real story? And isn’t it also important that India recently reported that mental illnesses affect 6.5% of its population, a mere one-third the US rate?

And that would be the real story, if any of these statistics were even remotely scientifically accurate or valid. But they aren’t. They’re nothing more than manipulative political propaganda.

Pharmaceutical companies fund the tests

First, that 18.6% comprises a smaller group who have “serious” mental illness and are functionally impaired (4.1%), and a much larger group who are “mildly to moderately” mentally ill and not functionally impaired by it. Already, we have to wonder how significant a lot of these “mental illnesses” are, if they don’t at all impair someone’s functioning.

NIMH also doesn’t say how long these illnesses last. We only know that, sometime in the year, 18.6% of us met criteria for a mental illness of some duration. But if some depressions or anxieties last only a week or month, then it’s possible that at any time as few as 1-2% of the population are mentally ill. That’s a much less eye-popping number that, critics like Australian psychiatrist Jon Jureidini argue, is more accurate.

But even that number may be overblown. That’s because these national-level statistics come from surveys of the general population using mental health screening questionnaires that produce extremely high “false positive” rates.

Virtually all of the screening tools have been designed by people, institutions or companies that profit from providing mental health treatments. The Kutcher Adolescent Depression Scale, for example, will “find” mental illnesses wrongly about seven times as often as it finds them correctly. The screening tool’s author, psychiatrist Stan Kutcher, has taken money from over a dozen pharmaceutical companies. He also co-authored the massively influential study that promoted the antidepressant Paxil as safe and effective for depression in children – a study which, according to a $3 billion US Justice Department settlement with GlaxoSmithKline, had actually secretly found that Paxil was ineffective and unsafe for children. Similarly, the widely used PHQ-9 and GAD-7 adult mental health questionnaires were created by the pharmaceutical company Pfizer.

This year’s NIMH numbers came from population surveys conducted by the Substance Abuse and Mental Health Services Administration (SAMHSA) for its National Survey on Drug Use and Health, which included the Kessler-6 screening tool as a central component — the author of which, Ronald C. Kessler, has received funding from numerous pharmaceutical companies. How misleading is the Kessler-6? It has just six questions. “During the past 30 days, about how often did you feel: 1) nervous, 2) worthless, 3) hopeless, 4) restless or fidgety, 5) that nothing could cheer you up, or 6) that everything was an effort?” For each, responses range from “none of the time” to “all of the time.” If you answer that for “some of the time” over the past month you felt five of those six emotions, then that’s typically enough for a score of 15 and a diagnosis of mild to moderate mental illness. That may sound like the Kessler-6 is a fast way to diagnose as “mentally ill” a lot of ordinary people who are really just occasionally restless, nervous, despairing about the state of the world, and somewhat loose in how they define “some of the time” in a phone survey.

And indeed, that’s exactly what it is.
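
To make the arithmetic behind a screener like this concrete, here is a minimal sketch in Python. It is only an illustration: the response-to-points mapping and the treatment of 15 as the cutoff are assumptions drawn from the description above, not the official Kessler-6 scoring, which varies by survey.

# A hypothetical six-question screener in the style described above.
# ASSUMPTION: responses are scored 1-5 and a total of 15 or more counts as
# "mild to moderate mental illness," per the article's description.
POINTS = {
    "none of the time": 1,
    "a little of the time": 2,
    "some of the time": 3,
    "most of the time": 4,
    "all of the time": 5,
}
CUTOFF = 15

def screener_score(responses):
    # Sum the points for the six answers, one per question.
    return sum(POINTS[r] for r in responses)

# Someone who felt five of the six emotions only "some of the time"
# (and one "none of the time") already clears the cutoff.
answers = ["some of the time"] * 5 + ["none of the time"]
score = screener_score(answers)
print(score, score >= CUTOFF)  # prints: 16 True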

How 80% accuracy leads to 20 times as much mental illness

Under optimal conditions, the best mental health screening tools like the Kessler-6 have sometimes been rated at a sensitivity of 90% and specificity of 80%. Sensitivity is the rate at which people who have a disease are correctly identified as ill. Specificity is the rate at which people who don’t have a disease are correctly identified as disease-free. Many people assume 90% sensitivity and 80% specificity mean that a test will be wrong around 10-20% of the time. But the accuracy depends on the prevalence of the illness being screened for. So for example if you’re trying to find a few needles in a big haystack, and you can distinguish needles from hay with 90% accuracy, how many stalks of hay will you wrongly identify as needles?

The answer is: A lot of hay. With a 10% prevalence rate of mental illnesses among 1,000 people, any online screening tool calculator can be used to help show that of the 100 who are mentally ill, we will identify 90 of them. Not too bad. However, at 80% specificity, of the 900 who are well, 180 will be wrongly identified as mentally ill. Ultimately, then, our test will determine that 270 people out of 1,000 are mentally ill, nearly tripling the mental illness rates we started with to 27%. And if mental illnesses are less prevalent, the performance of the test is mathematically worse: ­When only 10 in 1,000 are mentally ill, our test will determine that over twenty times that many are.
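
The same point can be checked in a few lines of Python; this sketch simply reproduces the arithmetic of the two examples above (90% sensitivity, 80% specificity) and shows how the apparent rate of illness balloons as the true prevalence shrinks.

# How many people a screening test flags as "ill," given its sensitivity,
# its specificity, and the true prevalence in the population.
def screening_counts(population, prevalence, sensitivity, specificity):
    ill = population * prevalence
    well = population - ill
    true_positives = ill * sensitivity           # ill people correctly flagged
    false_positives = well * (1 - specificity)   # well people wrongly flagged
    return true_positives, false_positives, true_positives + false_positives

# First example above: 10% true prevalence among 1,000 people.
tp, fp, flagged = screening_counts(1000, 0.10, sensitivity=0.90, specificity=0.80)
print(tp, fp, flagged)   # 90.0 180.0 270.0 -> an apparent rate of 27%

# Second example: only 10 in 1,000 are truly ill.
tp, fp, flagged = screening_counts(1000, 0.01, sensitivity=0.90, specificity=0.80)
print(flagged)           # 207.0 -> over twenty times the true number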

Mental illness diagnosing is a scientific bottomless pit

This is a common problem with most medical screening tests. They are typically calibrated to miss as few ill people as possible, but consequently they also scoop up a lot of healthy people, who become anxious or depressed while getting subjected to lots of increasingly invasive follow-up tests or unnecessary, dangerous treatments. That’s why even comparatively much more reliable tests like mammography, cholesterol measuring, annual “physicals,” and many other screening programs are coming under increasing criticism.

The designers of mental health screening tools acknowledge all this in the scientific literature, if not often openly to the general public. As explained deep in their report, SAMHSA tried to compensate for the Kessler-6’s false positive rates; however, the main method they used was to give a sub-sample of their participants a Structured Clinical Interview for DSM Disorders (SCID).

SCID is the “gold standard” for diagnosing mental illnesses in accordance with the Diagnostic and Statistical Manual of Mental Disorders, SAMHSA stated. In fact, SCID simply employs a much larger number of highly subjective questions designed to divide people into more specific diagnoses. For example, the SCID asks if there’s ever been “anything that you have been afraid to do or felt uncomfortable doing in front of other people, like speaking, eating, or writing.” Answering “yes” puts you on a fast path to having anxiety disorder with social phobia. Have you ever felt “like checking something several times to make sure that you’d done it right?” You’re on your way to an obsessive compulsive disorder diagnosis.

That’s why SCID actually isn’t any more reliable than the Kessler-6, according to Ronald Kessler. He should know; Harvard University’s Kessler is author of the Kessler-6 and co-author of the World Health Organization’s popular screening survey, the World Mental Health Composite International Diagnostic Interview (WMH-CIDI). In their scientific report on the development of the WMH-CIDI, Kessler’s team explained that they simply abandoned the whole idea of trying to create a mental health screening tool that was “valid” or “accurate.”

The underlying problem, they wrote, is that, unlike with cancer, there’s no scientific way to definitively determine the absence of any mental illnesses and thereby verify the accuracy of a screening tool. “As no clinical gold standard assessment is available,” Kessler et al wrote, “we adopted the goal of calibration rather than validation; that is, we asked whether WMH-CIDI diagnoses are ‘consistent’ with diagnoses based on a state-of-the-art clinical research diagnostic interview [the SCID], rather than whether they are ‘correct’.” Essentially, creating an impression of scientific consensus between common screening and diagnostic tools was considered to be more important than achieving scientific accuracy with any one of them.

And where that “consensus” lies has shifted over time. Until the 1950s, it wasn’t uncommon to see studies finding that up to 80% of Americans were mentally ill. Throughout the ’90s, NIMH routinely assessed that 10% of Americans were mildly to seriously mentally ill. In 2000, the US Surgeon General’s report declared that the number was 20%, and the NIMH that year doubled its reported prevalence rates, too. In recent years, NIMH was steadily pushing its rate up to a high of 26.2%, but changed it several months ago to 18.6% to match the latest SAMHSA rate.

Suicide and mental illness and other influential sham statistics

Yet as a society we don’t seem to care that there’s a scientific bottomless pit at the heart of all mental illness statistics and diagnosing. One example which highlights how ridiculously overblown and yet influential such epidemiological statistics have become is the claim that, “Over 90% of people who commit suicide are mentally ill.” This number is frequently pumped by the National Alliance on Mental Illness, American Foundation for Suicide Prevention, American Psychiatric Association, and the National Institute of Mental Health, and it has dominated public policy discussions about suicide prevention for years.

The statistic comes from “psychological autopsy” studies. Psychological autopsies involve getting friends or relatives of people who committed suicide to complete common mental health screening questionnaires on behalf of the dead people.

As researchers in the journal Death Studies in 2012 exhaustively detailed, psychological autopsies are even less reliable than mental health screening tests administered under normal conditions. Researchers doing psychological autopsies typically don’t factor in false positive rates. They don’t account for the fact that the questions about someone’s feelings and thoughts in the weeks leading up to suicide couldn’t possibly be reliably answered by someone else, and they ignore the extreme biases that would certainly exist in such answers coming from grieving friends and family. Finally, the studies often include suicidal thinking as itself a heavily weighted sign of mental illness—making these studies’ conclusions rarely more than tautology: “Suicidal thinking is a strong sign of mental illness, therefore people who committed suicide have a strong likelihood of having been mentally ill.”

Unfortunately, there is immense political significance to framing suicidal feelings and other psychological challenges this way, if not any substantive scientific significance. These alleged high rates of mental illness are becoming increasingly influential when we discuss policy questions with respect to issues as diverse as prison populations, troubled kids, pregnant and postpartum women, the homeless, gun violence, and the supposed vast numbers of untreated mentally ill. They draw attention, funding and resources into mental health services and treatments at the expense of many other, arguably more important factors in people’s overall psychological wellness that we could be working on, such as poverty, social services, fragmented communities, and declining opportunities for involvement with nature, the arts, or self-actualizing work. At the individual level, we all become more inclined to suspect we might need a therapist or pill for our troubles, where before we might have organized with others for political change.

And that reveals the real purpose behind many of these statistics: to change our attitudes and political positions. They are public relations efforts coming from extremely biased sources.

The politics of “mental illness”

Why is 18.6% the going rate of mental illnesses in America? SAMHSA’s report takes many pages to explain all the adjustments they made to arrive at the numbers they did. However, it’s easy to imagine why they’d avoid going much higher or lower. If SAMHSA scored 90% of us as mentally ill, how seriously would we take them? Conversely, imagine if they went with a cut-off score that determined only 0.3% were mentally ill, while the rest of us were just sometimes really, really upset. How would that affect public narratives on America’s mental health “crisis” and debates about the importance of expanding mental health programs?

However well-meaning, the professional mental health sector develops such statistics to create public concern and support for their positions, to steer people towards their services, and to coax money out of public coffers. These statistics are bluffs in a national game of political poker. The major players are always pushing the rates as high as possible, while being careful not to push them so high that others skeptically demand to see the cards they’re holding. This year, 18.6% is the bet.

Addiction to Truth: David Carr, the Measure of a Person, and the Uncommon Art of Elevating the Common Record


“People remember what they can live with more often than how they lived.”

We spend our lives pulled asunder by the two poles of our potentiality — our basest nature and our most expansive goodness. To elevate oneself from the lowest end of that spectrum to the highest is the great accomplishment of the human spirit. To do this for another person is to give them an invaluable gift. To do it for a group of people — a community, an industry, a culture — is the ultimate act of generosity and grace.

This is what David Carr (September 8, 1956–February 12, 2015) did for us.

He called out what he saw as the product of our lesser selves. He celebrated that which he deemed reflective of our highest potential. And by doing so over and over, with passion and integrity and unrelenting idealism, he nudged us closer to the latter.

He wrote to me once, in his characteristic lowercase: “am missing you. how to fix?” Such was his unaffected sweetness. But, more than that, such was the spirit in which he approached the world — seeing what is missing, seeing what is lacking, and pointing it out, but only for the sake of fixing it. He was a critic but not a cynic in a culture where the difference between the two is increasingly endangered and thus increasingly precious. The caring bluntness of his criticism was driven by the rare give-a-shitness of knowing that we can do better and believing, unflinchingly, that we must.

This is what David Carr did for us — but only because he did it for himself first.

David Carr (Photograph: Chester Higgins Jr., courtesy of The New York Times)

The test of one’s decency — the measure of a person — is the honesty one can attain with oneself, the depth to which one is willing to go to debunk one’s own myth and excavate the imperfect, uncomfortable, but absolutely necessary truth beneath. That’s precisely what Carr did in The Night of the Gun (public library) — an exquisitely rigorous, utterly harrowing and utterly heartening memoir of his journey from the vilest depths of crack addiction to his job at The New York Times, where he became the finest and most revered media reporter of our century, and how between these two poles he managed to raise his twin daughters as a single father. It’s the story of how he went from “That Guy, a dynamo of hilarity and then misery” to “This Guy, the one with a family, a house, and a good job.” It’s also a larger story reminding us that we each carry both capacities within us and must face the choice, daily, of which one to let manifest.

The story begins with Carr’s point of reluctant awakening upon being fired from his job as a newspaper reporter in Minneapolis:

For an addict the choice between sanity and chaos is sometimes a riddle, but my mind was suddenly epically clear.

“I’m not done yet.”

With his flair for the unsensationalist drama of real life, he recalls the aftermath of one particularly bad trip, which precipitated his journey out of the abyss:

Every hangover begins with an inventory. The next morning mine began with my mouth. I had been baking all night, and it was as dry as a two-year-old chicken bone. My head was a small prison, all yelps of pain and alarm, each movement seeming to shift bits of broken glass in my skull. My right arm came into view for inspection, caked in blood, and then I saw it had a few actual pieces of glass still embedded in it. So much for metaphor. My legs both hurt, but in remarkably different ways.

[…]

It was a daylight waterfall of regret known to all addicts. It can’t get worse, but it does. When the bottom arrives, the cold fact of it all, it is always a surprise. Over fifteen years, I had made a seemingly organic journey from pothead to party boy, from knockaround guy to friendless thug. At thirty-one, I was washed out of my profession, morally and physically corrupt, but I still had almost a year left in the Life. I wasn’t done yet.

It isn’t hard to see the parallels between that experience and the counterpoint upon which Carr eventually built his career and his reputation. His work as a journalist was very much about taking inventory of our cultural hangovers — the things we let ourselves get away with, the stories we tell ourselves and are told by the media about why it’s okay to do so, and the addiction to untruth that we sustain in the process.

David Carr with his daughter Erin

In fact, this dance between mythmaking and truth is baked into the book’s title — a reference to an incident that took place the night of that bad trip, during which Carr had behaved so badly that his best friend had to point a gun at him to keep him at bay. At least that’s the story Carr told himself for years, only to realize later upon revisiting the incident with a journalist’s scrutiny that the memory — like all memory — was woven of more myth than truth. He writes:

Recollection is often just self-fashioning. Some of it is reflexive, designed to bury truths that cannot be swallowed, but other “memories” are just redemption myths writ small. Personal narrative is not simply opening up a vein and letting the blood flow toward anyone willing to stare. The historical self is created to keep dissonance at bay and render the subject palatable in the present.

We are most concerned, he suggests, with making ourselves palatable to ourselves. (One need only look at Salinger’s architecture of personal mythology and the story of how Freud engineered his own myth for evidence.) But nowhere do we warp our personal narratives more than in our mythologies of conquering adversity — perhaps because to magnify the gap between who we were and who we are is to magnify our achievement of personal growth. Carr admonishes against this tendency:

The meme of abasement followed by salvation is a durable device in literature, but does it abide the complexity of how things really happened? Everyone is told just as much as he needs to know, including the self. In Notes from Underground, Fyodor Dostoevsky explains that recollection — memory, even — is fungible, and often leaves out unspeakable truths, saying, “Man is bound to lie about himself.”

I am not an enthusiastic or adept liar. Even so, can I tell you a true story about the worst day of my life? No. To begin with, it was far from the worst day of my life. And those who were there swear it did not happen the way I recall, on that day and on many others. And if I can’t tell a true story about one of the worst days of my life, what about the rest of those days, that life, this story?

[…]

The power of a memory can be built through repetition, but it is the memory we are recalling when we speak, not the event. And stories are annealed in the telling, edited by turns each time they are recalled until they become little more than chimeras. People remember what they can live with more often than how they lived.

In this experience one finds the seed of Carr’s zero-tolerance policy for untruth — not only in his own life, but in journalism and the media world on which he reported. If anything, the mind-boggling archive of 1,776 articles he wrote for the Times was his way of keeping our collective memory accurate and accountable — an active antidote to the self-interested amnesia of cultural and personal mythmaking. He toiled tirelessly to keep truthful and honorable what Vannevar Bush — another patron saint of media from a different era — poetically called “the common record.”

David Carr with his daughter Meagan

Carr writes of the moment he chose sanity over chaos:

Slowly, I remembered who I was. Hope floats. The small pleasures of being a man, of being a drunk who doesn’t drink, an addict who doesn’t use, buoyed me.

So much of Carr’s character lives in this honest yet deeply poetic sentiment. He was, above all, an idealist. He understood that our addiction to untruths and mythologies spells the death of our ideals, and ideals are the material of the human spirit. He floated us by his hope. He was the E.B. White of twenty-first-century journalism — like White, who believed that “writers do not merely reflect and interpret life, they inform and shape life,” Carr shaped for a living; like White, who believed that a writer should “lift people up, not lower them down,” Carr buoyed us with his writing.

In the remainder of The Night of the Gun, Carr goes on to chronicle how he raised his daughters “in the vapor trail of adults who had a lot of growing up to do themselves,” why he relapsed into alcoholism after fourteen years of sobriety and “had to spin out again to remember those very basic lessons” before climbing back out, and what it really means to be “normal” for any person in any life.

Toward the end, he writes:

You are always told to recover for yourself, but the only way I got my head out of my own ass was to remember that there were other asses to consider.

I now inhabit a life I don’t deserve, but we all walk this earth feeling we are frauds. The trick is to be grateful and hope the caper doesn’t end any time soon.

David Carr by Wendy MacNaughton

Am missing you now, David — we all are. How to fix?

Perhaps some breakages can’t be fixed, but I suppose the trick is indeed to be grateful — even when, and especially when, the caper does end; to be grateful that it had begun in the first place.

 

http://www.brainpickings.org/2015/02/13/david-carr-night-of-the-gun/?mc_cid=62a64daceb&mc_eid=e0931c81b0