Google has captured your mind

Searches reveal who we are and how we think. True intellectual privacy requires safeguarding these records

Google has captured your mind
(Credit: Kuzma via iStock/Salon)

The Justice Department’s subpoena was straightforward enough. It directed Google to disclose to the U.S. government every search query that had been entered into its search engine for a two-month period, and to disclose every Internet address that could be accessed from the search engine. Google refused to comply. And so on Wednesday, January 18, 2006, the Department of Justice filed a court motion in California, seeking an order that would force Google to comply with a similar request—a random sample of a million URLs from its search engine database, along with the text of every “search string entered onto Google’s search engine over a one-week period.” The Justice Department was interested in how many Internet users were looking for pornography, and it thought that analyzing the search queries of ordinary Internet users was the best way to figure this out. Google, which had a 45-percent market share at the time, was not the only search engine to receive the subpoena; the Justice Department also requested search records from AOL, Yahoo!, and Microsoft. But only Google declined and opposed the request, which is the only reason we know the secret request was ever made in the first place.

The government’s request for massive amounts of search history from ordinary users requires some explanation. It has to do with the federal government’s interest in online pornography, which has a long history, at least in Internet time. In 1995 Time magazine ran its famous “Cyberporn” cover, depicting a shocked young boy staring into a computer monitor, his eyes wide, his mouth agape, and his skin illuminated by the eerie glow of the screen. The cover was part of a national panic about online pornography, to which Congress responded by passing the federal Communications Decency Act (CDA) the following year. This infamous law barred websites from publishing “patently offensive” content without first verifying the age and identity of their readers, and prohibited the sending of indecent communications to anyone under eighteen. It tried to transform the Internet into a public space that was always fit for children by default.


The CDA prompted massive protests (and litigation) charging the government with censorship. The Supreme Court agreed in the landmark case of Reno v. ACLU (1997), which struck down the CDA’s decency provisions. In his opinion for the Court, Justice John Paul Stevens explained that regulating the content of Internet expression is no different from regulating the content of newspapers. The case is arguably the most significant free speech decision of the past half century, since it extended the full protection of the First Amendment to Internet expression, rather than treating the Internet like television or radio, whose content may be regulated more extensively. In language that might sound dated, Justice Stevens announced a principle that has endured: “Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Internet, in other words, was now an essential forum for free speech.

In the aftermath of Reno, Congress gave up on policing Internet indecency, but continued to focus on child protection. In 1998 it passed the Child Online Protection Act, also known as COPA. COPA punished those who engaged in web communications made “for commercial purposes” that were accessible and “harmful to minors” with a $50,000 fine and prison terms of up to six months. After extensive litigation, the Supreme Court in Ashcroft v. ACLU (2004) upheld a preliminary injunction preventing the government from enforcing the law. The Court reasoned that the government hadn’t proved that an outright ban of “harmful to minors” material was necessary. It suggested that Congress could have instead required the use of blocking or filtering software, which would have had less of an impact on free speech than a ban, and it remanded the case for further proceedings. Back in the lower court, the government wanted to produce a study showing that filtering would be ineffective, which is why it wanted the search queries from Google and the other search engine companies in 2006.

Judge James Ware ruled on the subpoena on March 17, 2006, and denied most of the government’s demands. He granted the release of only 5 percent of the requested randomly selected anonymous search results and none of the actual search queries. Much of the reason for approving only a tiny sample of the de-identified search requests had to do with privacy. Google had not made a direct privacy argument, on the grounds that de-identified search queries were not “personal information,” but it argued that disclosure of the records would expose its trade secrets and harm its goodwill from users who believed that their searches were confidential. Judge Ware accepted this oddly phrased privacy claim, and added one of his own that Google had missed. The judge explained that Google users have a privacy interest in the confidentiality of their searches because a user’s identity could be reconstructed from their queries and because disclosure of such queries could lead to embarrassment (searches for, e.g., pornography or abortions) or criminal liability (searches for, e.g., “bomb placement white house”). He also placed the list of disclosed website addresses under a protective order to safeguard Google’s trade secrets.

Two facets of Judge Ware’s short opinion in the “Search Subpoena Case” are noteworthy. First, the judge was quite correct that even search requests that have had their users’ identities removed are not anonymous, as it is surprisingly easy to re-identify this kind of data. The queries we enter into search engines like Google often unwittingly reveal our identities. Most commonly, we search our own names, out of vanity, curiosity, or to discover if there are false or embarrassing facts or images of us online. But other parts of our searches can reveal our identities as well. A few months after the Search Subpoena Case, AOL made public twenty million search queries from 650,000 users of its search engine. AOL was hoping this disclosure would help researchers, and it had replaced its users’ names with numerical IDs to protect their privacy. But two New York Times reporters showed just how easy it could be to re-identify them. They tracked down AOL user number 4417749 and identified her as Thelma Arnold, a sixty-two-year-old widow in Lilburn, Georgia. Thelma had made hundreds of searches including “numb fingers,” “60 single men,” and “dog that urinates on everything.” The New York Times reporters used old-fashioned investigative techniques, but modern sophisticated computer science tools make re-identification of such information even easier. One such technique allowed computer scientists to re-identify users in the Netflix movie-watching database, which that company made public to researchers in 2006.
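
To see why “de-identified” query logs unravel so easily, here is a minimal sketch of the kind of linking the Times reporters did by hand. The log format, IDs, sample queries, and hint list below are invented for illustration; they are not AOL’s or Google’s actual data or methods. The point is only that grouping an “anonymous” ID’s queries together and scanning them for quasi-identifiers, such as place names, ages, or vanity searches, can narrow a user down to a handful of real people.

```python
# Hypothetical sketch: why "anonymized" search logs can be re-identified.
# The IDs, queries, and hint list are invented for illustration only.

from collections import defaultdict

# Each record pairs a pseudonymous user ID with a query string.
query_log = [
    ("4417749", "numb fingers"),
    ("4417749", "landscapers in lilburn ga"),
    ("4417749", "60 single men"),
    ("4417749", "dog that urinates on everything"),
    ("8355602", "cheap flights to boston"),
]

# Step 1: group the queries by pseudonymous ID, reconstructing per-user profiles.
profiles = defaultdict(list)
for user_id, query in query_log:
    profiles[user_id].append(query)

# Step 2: flag profiles containing quasi-identifiers (places, ages, names)
# that an investigator could cross-reference with public records.
QUASI_IDENTIFIER_HINTS = ("lilburn", " ga", "60 single")

for user_id, queries in profiles.items():
    clues = [q for q in queries if any(h in q for h in QUASI_IDENTIFIER_HINTS)]
    if clues:
        print(f"User {user_id}: {len(queries)} queries, "
              f"{len(clues)} contain identifying clues -> re-identification candidate")
```

The Netflix re-identification worked on the same principle, except the quasi-identifiers were movie ratings and dates rather than place names.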

The second interesting facet of the Search Subpoena Case is its theory of privacy. Google won because the disclosure threatened its trade secrets (a commercial privacy, of sorts) and its business goodwill (which relied on its users believing that their searches were private). Judge Ware suggested that a more direct kind of user privacy was at stake, but was not specific beyond some generalized fear of embarrassment (echoing the old theory of tort privacy) or criminal prosecution (evoking the “reasonable expectation of privacy” theme from criminal law). Most people no doubt have an intuitive sense that their Internet searches are “private,” but neither our intuitions nor the Search Subpoena Case tell us why. This is a common problem in discussions of privacy. We often use the word “privacy” without being clear about what we mean or why it matters. We can do better.

Internet searches implicate our intellectual privacy. We use tools like Google Search to make sense of the world, and intellectual privacy is needed when we are making sense of the world. Our curiosity is essential, and it should be unfettered. As I’ll show in this chapter, search queries implicate a special kind of intellectual privacy, which is the freedom of thought.

Freedom of thought and belief is the core of our intellectual privacy. This freedom is the defining characteristic of a free society and our most cherished civil liberty. This right encompasses the range of thoughts and beliefs that a person might hold or develop, dealing with matters that are trivial and important, secular and profane. And it protects the individual’s thoughts from scrutiny or coercion by anyone, whether a government official or a private actor such as an employer, a friend, or a spouse. At the level of law, if there is any constitutional right that is absolute, it is this one, which is the precondition for other political and religious rights guaranteed by the Western tradition. Yet curiously, although freedom of thought is widely regarded as our most important civil liberty, it has not been protected in our law as much as other rights, in part because it has been very difficult for the state or others to monitor thoughts and beliefs even if they wanted to.

Freedom of Thought and Intellectual Privacy

In 1913 the eminent Anglo-Irish historian J. B. Bury published A History of Freedom of Thought, in which he surveyed the importance of freedom of thought in the Western tradition, from the ancient Greeks to the twentieth century. According to Bury, the conclusion that individuals should have an absolute right to their beliefs free of state or other forms of coercion “is the most important ever reached by men.” Bury was not the only scholar to have observed that freedom of thought (or belief, or conscience) is at the core of Western civil liberties. Recognitions of this sort are commonplace and have been made by many of our greatest minds. René Descartes’s maxim, “I think, therefore I am,” identifies the power of individual thought at the core of our existence. John Milton praised in Areopagitica “the liberty to know, to utter, and to argue freely according to conscience, above all [other] liberties.”

In the nineteenth century, John Stuart Mill developed a broad notion of freedom of thought as an essential element of his theory of human liberty, which comprised “the inward domain of consciousness; demanding liberty of conscience, in the most comprehensive sense; liberty of thought and feeling; absolute freedom of opinion and sentiment on all subjects, practical or speculative, scientific, moral, or theological.” In Mill’s view, free thought was inextricably linked to and mutually dependent upon free speech, with the two concepts being a part of a broader idea of political liberty. Moreover, Mill recognized that private parties as well as the state could chill free expression and thought.

Law in Britain and America has embraced the central importance of free thought as the civil liberty on which all others depend. But it was not always so. People who cannot think for themselves, after all, are incapable of self-government. In the Middle Ages, the crime of “constructive treason” made “imagining the death of the king” an offense punishable by death. Thomas Jefferson later reflected that this crime “had drawn the Blood of the best and honestest Men in the Kingdom.” The impulse for political uniformity was related to the impulse for religious uniformity, whose story is one of martyrdom and burnings at the stake. As Supreme Court Justice William O. Douglas put it in 1963:

While kings were fearful of treason, theologians were bent on stamping out heresy. . . . The Reformation is associated with Martin Luther. But prior to him it broke out many times only to be crushed. When in time the Protestants gained control, they tried to crush the Catholics; and when the Catholics gained the upper hand, they ferreted out the Protestants. Many devices were used. Heretical books were destroyed and heretics were burned at the stake or banished. The rack, the thumbscrew, the wheel on which men were stretched, these were part of the paraphernalia.

Thankfully, the excesses of such a dangerous government power were recognized over the centuries, and thought crimes were abolished. Thus, William Blackstone’s influential Commentaries stressed the importance of the common law protection for the freedom of thought and inquiry, even under a system that allowed subsequent punishment for seditious and other kinds of dangerous speech. Blackstone explained that:

Neither is any restraint hereby laid upon freedom of thought or inquiry: liberty of private sentiment is still left; the disseminating, or making public, of bad sentiments, destructive of the ends of society, is the crime which society corrects. A man (says a fine writer on this subject) may be allowed to keep poisons in his closet, but not publicly to vend them as cordials.

Even during a time when English law allowed civil and criminal punishment for many kinds of speech that would be protected today, including blasphemy, obscenity, seditious libel, and vocal criticism of the government, jurists recognized the importance of free thought and gave it special, separate protection in both the legal and cultural traditions.

The poisons metaphor Blackstone used, for example, was adapted from Jonathan Swift’s Gulliver’s Travels, from a line that the King of Brobdingnag delivers to Gulliver. Blackstone’s treatment of freedom of thought was itself adopted by Joseph Story in his own Commentaries, the leading American treatise on constitutional law in the early Republic. Thomas Jefferson and James Madison also embraced freedom of thought. Jefferson’s famous Virginia Statute for Religious Freedom enshrined religious liberty around the declaration that “Almighty God hath created the mind free,” and James Madison forcefully called for freedom of thought and conscience in his Memorial and Remonstrance Against Religious Assessments.

Freedom of thought thus came to be protected directly as a prohibition on state coercion of truth or belief. It was one of a handful of rights protected by the original Constitution even before the ratification of the Bill of Rights. Article VI provides that “state and federal legislators, as well as officers of the United States, shall be bound by oath or affirmation, to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States.” This provision, known as the “religious test clause,” ensured that religious orthodoxy could not be imposed as a requirement for governance, a further protection of the freedom of thought (or, in this case, its closely related cousin, the freedom of conscience). The Constitution also gives special protection against the crime of treason, by defining it to exclude thought crimes and providing special evidentiary protections:

Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.

By eliminating religious tests and by defining the crime of treason as one of guilty actions rather than merely guilty minds, the Constitution was thus steadfastly part of the tradition giving exceptional protection to the freedom of thought.

Nevertheless, even when governments could not directly coerce the uniformity of beliefs, a person’s thoughts remained relevant to both law and social control. A person’s thoughts could reveal political or religious disloyalty, or they could be relevant to a defendant’s mental state in committing a crime or other legal wrong. And while thoughts could not be revealed directly, they could be discovered by indirect means. For example, thoughts could be inferred either from a person’s testimony or confessions, or by access to their papers and diaries. But both the English common law and the American Bill of Rights came to protect against these intrusions into the freedom of the mind as well.

The most direct way to obtain knowledge about a person’s thoughts would be to haul him before a magistrate as a witness and ask him under penalty of law. The English ecclesiastical courts used the “oath ex officio” for precisely this purpose. But as historian Leonard Levy has explained, this practice came under assault in Britain as invading the freedom of thought and belief. As the eminent jurist Lord Coke later declared, “no free man should be compelled to answer for his secret thoughts and opinions.” The practice of the oath was ultimately abolished in England in the cases of John Lilburne and John Entick, men who were political dissidents rather than religious heretics.

In the new United States, the Fifth Amendment guarantee that “No person . . . shall be compelled in any criminal case to be a witness against himself” can also be seen as a resounding rejection of this sort of practice in favor of the freedom of thought. Law of course evolves, and current Fifth Amendment doctrine focuses on the consequences of a confession rather than on mental privacy, but the origins of the Fifth Amendment are part of a broad commitment to freedom of thought that runs through our law. The late criminal law scholar William Stuntz has shown that this tradition was not merely a procedural protection for all, but a substantive limitation on the power of the state to force its enemies to reveal their unpopular or heretical thoughts. As he put the point colorfully, “[i]t is no coincidence that the privilege’s origins read like a catalogue of religious and political persecution.”

Another way to obtain a person’s thoughts would be by reading their diaries or other papers. Consider the Fourth Amendment, which protects a person from unreasonable searches and seizures by the police:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Today we think about the Fourth Amendment as providing protection for the home and the person chiefly against unreasonable searches for contraband like guns or drugs. But the Fourth Amendment’s origins come not from drug cases but as a bulwark against intellectual surveillance by the state. In the eighteenth century, the English Crown had sought to quash political and religious dissent through the use of “general warrants,” legal documents that gave agents of the Crown the authority to search the homes of suspected dissidents for incriminating papers.

Perhaps the most infamous dissident of the time was John Wilkes. Wilkes was a progressive critic of Crown policy and a political rogue whose public tribulations, wit, and famed personal ugliness made him a celebrity throughout the English-speaking world. Wilkes was the editor of a progressive newspaper, the North Briton, a member of Parliament, and an outspoken critic of government policy. He was deeply critical of the 1763 Treaty of Paris ending the Seven Years War with France, a conflict known in North America as the French and Indian War. Wilkes’s damning articles angered King George, who ordered the arrest of Wilkes and his co-publishers of the North Briton, authorizing general warrants to search their papers for evidence of treason and sedition. The government ransacked numerous private homes and printers’ shops, scrutinizing personal papers for any signs of incriminating evidence. In all, forty-nine people were arrested, and Wilkes himself was charged with seditious libel, prompting a long and inconclusive legal battle of suits and countersuits.

By taking a stand against the king and intrusive searches, Wilkes became a cause célèbre among Britons at home and in the colonies. This was particularly true for many American colonists, whose own objections to British tax policy following the Treaty of Paris culminated in the American Revolution. The rebellious colonists drew from the Wilkes case the importance of political dissent as well as the need to protect dissenting citizens from unreasonable (and politically motivated) searches and seizures.

The Fourth Amendment was intended to address this problem by inscribing legal protection for “persons, houses, papers, and effects” into the Bill of Rights. A government that could not search the homes and read the papers of its citizens would be less able to engage in intellectual tyranny and enforce intellectual orthodoxy. In a pre-electronic world, the Fourth Amendment kept out the state, while trespass and other property laws kept private parties out of our homes, papers, and effects.

The Fourth and Fifth Amendments thus protect the freedom of thought at their core. As Stuntz explains, the early English cases establishing these principles were “classic First Amendment cases in a system with no First Amendment.” Even in a legal regime without protection for dissidents who expressed unpopular political or religious opinions, the English system protected those dissidents in their private beliefs, as well as the papers and other documents that might reveal those beliefs.

In American law, an even stronger protection for freedom of thought can be found in the First Amendment. Although the First Amendment text speaks of free speech, press, and assembly, the freedom of thought is unquestionably at the core of these guarantees, and courts and scholars have consistently recognized this fact. In fact, the freedom of thought and belief is the closest thing to an absolute right guaranteed by the Constitution. The Supreme Court first recognized it in the 1878 Mormon polygamy case of Reynolds v. United States, which ruled that although law could regulate religiously inspired actions such as polygamy, it was powerless to control “mere religious belief and opinions.” Freedom of thought in secular matters was identified by Justices Holmes and Brandeis as part of their dissenting tradition in free speech cases in the 1910s and 1920s. Holmes declared crisply in United States v. Schwimmer that “if there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.” And in his dissent in the Fourth Amendment wiretapping case of Olmstead v. United States, Brandeis argued that the framers of the Constitution “sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations.” Brandeis’s dissent in Olmstead adapted his theory of tort privacy into federal constitutional law around the principle of freedom of thought.

Freedom of thought became permanently enshrined in constitutional law during a series of mid-twentieth century cases that charted the contours of the modern First Amendment. In Palko v. Connecticut, Justice Cardozo characterized freedom of thought as “the matrix, the indispensable condition, of nearly every other form of freedom.” And in a series of cases involving Jehovah’s Witnesses, the Court developed a theory of the First Amendment under which the rights of free thought, speech, press, and exercise of religion were placed in a “preferred position.” Freedom of thought was central to this new theory of the First Amendment, exemplified by Justice Jackson’s opinion in West Virginia State Board of Education v. Barnette, which invalidated a state regulation requiring that public school children salute the flag each morning. Jackson declared that:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. . . .

[The flag-salute statute] transcends constitutional limitations on [legislative] power and invades the sphere of intellect and spirit which it is the purpose of the First Amendment to our Constitution to reserve from all official control.

Modern cases continue to reflect this legacy. The Court has repeatedly declared that the constitutional guarantee of freedom of thought is at the foundation of what it means to have a free society. In particular, freedom of thought has been invoked as a principal justification for preventing punishment based upon possessing or reading dangerous media. Thus, the government cannot punish a person for merely possessing unpopular or dangerous books or images based upon their content. As Alexander Meiklejohn put it succinctly, the First Amendment protects, first and foremost, “the thinking process of the community.”

Freedom of thought thus remains, as it has for centuries, the foundation of the Anglo-American tradition of civil liberties. It is also the core of intellectual privacy.

“The New Home of Mind”

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind.” So began “A Declaration of the Independence of Cyberspace,” a 1996 manifesto responding to the Communications Decency Act and other attempts by government to regulate the online world and stamp out indecency. The Declaration’s author was John Perry Barlow, a founder of the influential Electronic Frontier Foundation and a former lyricist for the Grateful Dead. Barlow argued that “[c]yberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.” This definition of the Internet as a realm of pure thought was quickly followed by an affirmation of the importance of the freedom of thought. Barlow insisted that in Cyberspace “anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” The Declaration concluded on the same theme: “We will spread ourselves across the Planet so that no one can arrest our thoughts. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.”

In his Declaration, Barlow joined a tradition of many (including many of the most important thinkers and creators of the digital world) who have expressed the idea that networked computing can be a place of “thought itself.” As early as 1960, the great computing visionary J. C. R. Licklider imagined that “in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.” Tim Berners-Lee, the architect of the World Wide Web, envisioned his creation as one that would bring “the workings of society closer to the workings of our minds.”

Barlow’s utopian demand that governments leave the electronic realm alone was only partially successful. The Communications Decency Act was, as we have seen, struck down by the Supreme Court, but today many laws regulate the Internet, such as the U.S. Digital Millennium Copyright Act and the EU Data Retention Directive. The Internet has become more (and less) than Barlow’s utopian vision—a place of business as well as of thinking. But Barlow’s description of the Internet as a world of the mind remains resonant today.

It is undeniable that today millions of people use computers as aids to their thinking. In the digital age, computers are an essential and intertwined supplement to our thoughts and our memories. Discussing Licklider’s prophecy from half a century ago, legal scholar Tim Wu notes that virtually every computer “program we use is a type of thinking aid—whether the task is to remember things (an address book), to organize prose (a word processor), or to keep track of friends (social network software).” These technologies have become not just aids to thought but also part of the thinking process itself. In the past, we invented paper and books, and then sound and video recordings to preserve knowledge and make it easier for us as individuals and societies to remember information. Digital technologies have made remembering even easier, by providing cheap storage, inexpensive retrieval, and global reach. Consider the Kindle, a cheap electronic reader that can hold 1,100 books, or even cheaper external hard drives that can hold hundreds of hours of high-definition video in a box the size of a paperback novel.

Even the words we use to describe our digital products and experiences reflect our understanding that computers and cyberspace are devices and places of the mind. IBM has famously called its laptops “ThinkPads,” and many of us use “smartphones.” Other technologies have been named in ways that affirm their status as tools of the mind—notebooks, ultrabooks, tablets, and browsers. Apple Computer produces iPads and MacBooks and has long sold its products under the slogan, “Think Different.” Google historian John Battelle has famously called Google’s search records a “database of intentions.” Google’s own slogan for its web browser Chrome is “browse the web as fast as you think,” revealing how web browsing is not just a form of reading but a kind of thinking in itself. My point here is not just that common usage or marketing slogans connect Internet use to thinking, but a more important one: Our use of these words reflects a reality. We are increasingly using digital technologies not just as aids to our memories but also as an essential part of the ways we think.

Search engines in particular bear a special connection to the processes of thought. How many of us have asked a factual question among friends, only for smartphones to appear as our friends race to see who can look up the answer the fastest? In private, we use search engines to learn about the world. If you have a moment, pull up your own search history on your phone, tablet, or computer, and recall your past queries. It usually makes for interesting reading—a history of your thoughts and wonderings.

But the ease with which we can pull up such a transcript reveals another fundamental feature of digital technologies—they are designed to create records of their use. Think again about the profile a search engine like Google has for you. A transcript of search queries and links followed is a close approximation to a transcript of the operation of your mind. In the logs of search engine companies are vast repositories of intellectual wonderings, questions asked, and mental whims followed. Similar logs exist for Internet service providers and other new technology companies. And the data contained in such logs is eagerly sought by government and private entities interested in monitoring intellectual activity, whether for behavioral advertising, crime and terrorism prevention, or other, possibly more sinister purposes.

Searching Is Thinking

With these two points in mind—the importance of freedom of thought and the idea of the Internet as a place where thought occurs—we can now return to the Google Search Subpoena with which this chapter opened. Judge Ware’s opinion revealed an intuitive understanding that the disclosure of search records was threatening to privacy, but it was not clear about what kind of privacy was involved or why it mattered.

Intellectual privacy, in particular the freedom of thought, supplies the answer to this problem. We use search engines to learn about and make sense of the world, to answer our questions, and as aids to our thinking. Searching, then, in a very real sense is a kind of thinking. And we have a long tradition of protecting the privacy and confidentiality of our thoughts from the scrutiny of others. It is precisely because of the importance of search records to human thought that the Justice Department wanted to access the records. But if our search records were more public, we wouldn’t merely be exposed to embarrassment like Thelma Arnold of Lilburn, Georgia. We would be less likely to search for unpopular or deviant or dangerous topics. Yet in a free society, we need to be able to think freely about any ideas, no matter how dangerous or unpopular. If we care about freedom of thought—and our political institutions are built on the assumption that we do—we should care about the privacy of electronic records that reveal our thoughts. Search records illustrate the point well, but this idea is not just limited to that one important technology. My argument about freedom of thought in the digital age is this: Any technology that we use in our thinking implicates our intellectual privacy, and if we want to preserve our ability to think fearlessly, free of monitoring, interference, or repercussion, we should imbue these technologies with a meaningful measure of intellectual privacy.

Excerpted from “Intellectual Privacy: Rethinking Civil Liberties in the Digital Age” by Neil Richards. Published by Oxford University Press. Copyright 2015 by Neil Richards. Reprinted with permission of the publisher. All rights reserved.

Neil Richards is a Professor of Law at Washington University, where he teaches and writes about privacy, free speech, and the digital revolution.

You should actually blame America for everything you hate about internet culture

November 21

The tastes of American Internet-users are both well-known and much-derided: Cat videos. Personality quizzes. Lists of things that only people from your generation/alma mater/exact geographic area “understand.”

But in France, it turns out, even viral-content fiends are a bit more … sophistiqués.

“In France, articles about cats do not work,” Buzzfeed’s Scott Lamb told Le Figaro, a leading Parisian paper. Instead, he explained, Buzzfeed’s first year in the country has shown it that “the French love sharing news and politics on social networks – in short, pretty serious stuff.”

This is interesting for two reasons: first, as conclusive proof that the French are irredeemable snobs; second, as a crack in the glossy, understudied facade of what we commonly call “Internet culture.”

When the New York Times’s David Pogue tried to define the term in 2009, he ended up with a series of memes: the “Star Wars” kid, the dancing baby, rickrolling, the exploding whale. Likewise, if you look to anyone who claims to cover the Internet culture space — not only Buzzfeed, but Mashable, Gawker and, yeah, yours truly — their coverage frequently plays on what Lamb calls the “cute and positive” theme. They’re boys who work at Target and have swoopy hair, videos of babies acting like “tiny drunk adults,” hamsters eating burritos and birthday cakes.

That is the meaning we’ve assigned to “Internet culture,” itself an ambiguous term: It’s the fluff and the froth of the global Web.

But Lamb’s observations on Buzzfeed’s international growth would actually seem to suggest something different. Cat memes and other frivolities aren’t the work of an Internet culture. They’re the work of an American one.

American audiences love animals and “light content,” Lamb said, but readers in other countries have reacted differently. Germans were skeptical of the site’s feel-good frivolity, he said, and some Australians were outright “hostile.” Meanwhile, in France — land of la mode and le Michelin — critics immediately complained, right at Buzzfeed’s French launch, that the articles were too fluffy and poorly translated. Instead, Buzzfeed quickly found that readers were more likely to share articles about news, politics and regional identity, particularly in relation to the loved/hated Paris, than they were to share the site’s other fare.

A glance at Buzzfeed’s French page would appear to bear that out. Right now, its top stories “Ça fait le buzz” — that’s making the buzz, for you Americaines — are “21 photos that will make you laugh every time” and “26 images that will make you rethink your whole life.” They’re not making much buzz, though. Neither has earned more than 40,000 clicks — a pittance for the reigning king of virality, particularly in comparison to Buzzfeed’s versions on the English site.

All this goes to show that the things we term “Internet culture” are not necessarily born of the Internet, itself — the Internet is everywhere, but the insatiable thirst for cat videos is not. If you want to complain about dumb memes or clickbait or other apparent instances of socially sanctioned vapidity, blame America: We started it, not the Internet.

Appelons un chat un chat.

Caitlin Dewey runs The Intersect blog, writing about digital and Internet culture. Before joining the Post, she was an associate online editor at Kiplinger’s Personal Finance.
http://www.washingtonpost.com/news/the-intersect/wp/2014/11/21/you-should-actually-blame-america-for-everything-you-hate-about-internet-culture/

“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption — and paid a cruel price

"The Internet's Own Boy": How the government destroyed Aaron Swartz
Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary “The Internet’s Own Boy: The Story of Aaron Swartz,” which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.



That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the “Guerilla Open Access Manifesto” Swartz wrote in 2008, at age 21, led directly to his exaggerated federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been largely funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard graduate student at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney for Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony indictments that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching anti-piracy bill that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, looks in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.

”The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

http://www.salon.com/2014/06/24/the_internets_own_boy_how_the_government_destroyed_aaron_swartz/?source=newsletter

Why the Web can’t abandon its misogyny

The Internet’s destructive gender gap:

People like Ezra Klein are showered with opportunity, while women face an online world hostile to their ambitions


The Internet's destructive gender gap: Why the Web can't abandon its misogyny
Ezra Klein (Credit: MSNBC)
This piece originally appeared on TomDispatch.

The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.

An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.
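
To make the 80/20 claim concrete, here is a toy simulation of my own, not drawn from the article: it samples per-site “traffic” from a Pareto distribution, the textbook power law, and measures how much of the total the top fifth of sites captures. The number of sites and the shape parameter are arbitrary choices for illustration.

```python
# Toy illustration of power-law concentration (the "80/20 rule").
# Parameters are arbitrary; this is not real Web traffic data.

import random

random.seed(42)

# Simulate Pareto-distributed traffic for 10,000 hypothetical sites.
# A shape parameter near 1.16 is the classic value that yields roughly 80/20.
ALPHA = 1.16
traffic = [random.paretovariate(ALPHA) for _ in range(10_000)]

# Take the top 20% of sites by traffic and compute their share of the total.
traffic.sort(reverse=True)
top_fifth = traffic[: len(traffic) // 5]
share = sum(top_fifth) / sum(traffic)

print(f"Top 20% of sites capture {share:.0%} of total simulated traffic")
```

Run with these assumptions, the top 20 percent of sites end up with roughly 80 percent of the simulated traffic, which is exactly the kind of concentration the article describes.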

In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.

Powerful and exceedingly familiar hierarchies have come to define the digital realm, whether you’re considering its economics or the social world it reflects and represents.  Not surprisingly, then, well-off white men are wildly overrepresented both in the tech industry and online.

Just take a look at gender, and the Web comes quickly into focus, leaving you with a vivid sense of which direction the Internet is heading in — small hint: it’s not toward equality or democracy.

Experts, Trolls, and What Your Mom Doesn’t Know

As a start, in the perfectly real world women shoulder a disproportionate share of household and child-rearing responsibilities, leaving them substantially less leisure time to spend online. Though a handful of high-powered celebrity “mommy bloggers” have managed to attract massive audiences and ad revenue by documenting their daily travails, they are the exceptions not the rule. In professional fields like philosophy, law, and science, where blogging has become popular, women are notoriously underrepresented; by one count, for instance, only around 20% of science bloggers are women.



An otherwise optimistic white paper by the British think tank Demos touching on the rise of amateur creativity online reported that white males are far more likely to be “hobbyists with professional standards” than other social groups, while you won’t be shocked to learn that low-income women with dependent children lag far behind. Even among the highly connected college-age set, research reveals a stark divergence in rates of online participation.

Socioeconomic status, race, and gender all play significant roles in a who’s who of the online world, with men considerably more likely to participate than women. “These findings suggest that Internet access may not, in and of itself, level the playing field when it comes to potential pay-offs of being online,” warns Eszter Hargittai, a sociologist at Northwestern University. Put simply, closing the so-called digital divide still leaves a noticeable gap; the more privileged your background, the more likely that you’ll reap the additional benefits of new technologies.

Some of the obstacles to online engagement are psychological, unconscious, and invidious. In a revealing study conducted twice over a span of five years — and yielding the same results both times — Hargittai tested and interviewed 100 Internet users and found no significant gender difference in their actual online competence. In terms of sheer ability, the sexes were equal. The difference was in their self-assessments.

It came down to this: The men were certain they did well, while the women were wracked by self-doubt. “Not a single woman among all our female study subjects called herself an ‘expert’ user,” Hargittai noted, “while not a single male ranked himself as a complete novice or ‘not at all skilled.’” As you might imagine, how you think of yourself as an online contributor deeply influences how much you’re likely to contribute online.

The results of Hargittai’s study hardly surprised me. I’ve seen endless female friends be passed over by less talented, more assertive men. I’ve had countless people — older and male, always — assume that someone else must have conducted the interviews for my documentary films, as though a young woman couldn’t have managed such a thing without assistance. Research shows that people routinely underestimate women’s abilities, not least women themselves.

When it comes to specialized technical know-how, women are assumed to be less competent unless they prove otherwise. In tech circles, for example, new gadgets and programs are often introduced as being “so easy your mother or grandmother could use them.” A typical piece in the New York Times was titled “How to Explain Bitcoin to Your Mom.” (Assumedly, dad already gets it.) This kind of sexism leapt directly from the offline world onto the Web and may only have intensified there.

And it gets worse. Racist, sexist, and homophobic harassment or “trolling” has become a depressingly routine aspect of online life.

Many prominent women have spoken up about their experiences being bullied and intimidated online — scenarios that sometimes escalate into the release of private information, including home addresses, e-mail passwords, and social security numbers, or simply devolve into an Internet version of stalking. Esteemed classicist Mary Beard, for example, “received online death threats and menaces of sexual assault” after a television appearance last year, as did British activist Caroline Criado-Perez after she successfully campaigned to get more images of women onto British banknotes.

Young women musicians and writers often find themselves targeted online by men who want to silence them. “The people who were posting comments about me were speculating as to how many abortions I’ve had, and they talked about ‘hate-fucking’ me,” blogger Jill Filipovic told the Guardian after photos of her were uploaded to a vitriolic online forum. Laurie Penny, a young political columnist who has faced similar persecution and recently published an ebook called Cybersexism, touched a nerve by calling a woman’s opinion the “short skirt” of the Internet: “Having one and flaunting it is somehow asking an amorphous mass of almost-entirely male keyboard-bashers to tell you how they’d like to rape, kill, and urinate on you.”

Alas, the trouble doesn’t end there. Women who speak out against their harassers are frequently accused of wanting to stifle free speech. Or they are told to “lighten up” and that the harassment, however stressful and upsetting, isn’t real because it’s only happening online, that it’s just “harmless locker-room talk.”

As things currently stand, each woman is left alone to devise a coping mechanism as if her situation were unique. Yet these are never isolated incidents, however venomously personal the insults may be. (One harasser called Beard — and by online standards of hate speech this was mild — “a vile, spiteful excuse for a woman, who eats too much cabbage and has cheese straws for teeth.”)

Indeed, a University of Maryland study strongly suggests just how programmatic such abuse is. Those posting with female usernames, researchers were shocked to discover, received 25 times as many malicious messages as those whose designations were masculine or ambiguous. The findings were so alarming that the authors advised parents to instruct their daughters to use sex-neutral monikers online. “Kids can still exercise plenty of creativity and self-expression without divulging their gender,” a well-meaning professor said, effectively accepting that young girls must hide who they are to participate in digital life.

Over the last few months, a number of black women with substantial social media presences conducted an informal experiment of their own. Fed up with the fire hose of animosity aimed at them, Jamie Nesbitt Golden and others adopted masculine Twitter avatars. Golden replaced her photo with that of a hip, bearded, young white man, though she kept her bio and continued to communicate in her own voice. “The number of snarky, condescending tweets dropped off considerably, and discussions on race and gender were less volatile,” Golden wrote, marveling at how simply changing a photo transformed reactions to her. “Once I went back to Black, it was back to business as usual.”

Old Problems in New Media

Not all discrimination is so overt. A study summarized on the Harvard Business Review website analyzed social patterns on Twitter, where female users actually outnumbered males by 10%. The researchers reported “that an average man is almost twice [as] likely to follow another man [as] a woman” while “an average woman is 25% more likely to follow a man than a woman.” The results could not be explained by varying usage since both genders tweeted at the same rate.

Online as off, men are assumed to be more authoritative and credible, and thus deserving of recognition and support. In this way, long-standing disparities are reflected or even magnified on the Internet.

In his 2008 book The Myth of Digital Democracy, Matthew Hindman, a professor of media and public affairs at George Washington University, reports that of the top 10 blogs, only one belonged to a female writer. A wider census of every political blog with an average of over 2,000 visitors a week, a total of 87 sites, found that only five were run by women. Nor were there “identifiable African Americans among the top 30 bloggers,” though there was “one Asian blogger, and one of mixed Latino heritage.” In 2008, Hindman surveyed the blogosphere and found it less diverse than the notoriously whitewashed op-ed pages of print newspapers. Nothing suggests that, in the intervening six years, things have changed for the better.

Welcome to the age of what Julia Carrie Wong has called “old problems in new media,” as the latest well-funded online journalism start-ups continue to be helmed by brand-name bloggers like Ezra Klein and Nate Silver. It is “impossible not to notice that in the Bitcoin rush to revolutionize journalism, the protagonists are almost exclusively — and increasingly — male and white,” Emily Bell lamented in a widely circulated op-ed. It’s not that women and people of color aren’t doing innovative work in reporting and cultural criticism; it’s just that they get passed over by investors and financiers in favor of the familiar.

As Deanna Zandt and others have pointed out, such real-world lack of diversity is also regularly seen on the rosters of technology conferences, even as speakers take the stage to hail a democratic revolution on the Web, while audiences that look just like them cheer. In early 2013, in reaction to the announcement of yet another all-male lineup at a prominent Web gathering, a pledge was posted on the website of the Atlantic asking men to refrain from speaking at events where women are not represented. The list of signatories was almost immediately removed “due to a flood of spam/trolls.” The conference organizer, a successful developer, dismissed the uproar on Twitter. “I don’t feel [the] need to defend this, but am happy with our process,” he stated. Instituting quotas, he insisted, would be a “discriminatory” way of creating diversity.

This sort of rationalization means technology companies look remarkably like the old ones they aspire to replace: male, pale, and privileged. Consider Instagram, the massively popular photo-sharing and social networking service, which was founded in 2010 but only hired its first female engineer last year. While the percentage of computer and information sciences degrees women earned rose from 14% to 37% between 1970 and 1985, that share had depressingly declined to 18% by 2008.

Those women who do fight their way into the industry often end up leaving — their attrition rate is 56%, or double that of men — and sexism is a big part of what pushes them out. “I no longer touch code because I couldn’t deal with the constant dismissing and undermining of even my most basic work by the ‘brogramming’ gulag I worked for,” wrote one woman in a roundup of answers to the question: Why are there so few female engineers?

In Silicon Valley, Facebook’s Sheryl Sandberg and Yahoo’s Marissa Mayer excepted, the notion of the boy genius prevails. More than 85% of venture capitalists are men, generally looking to invest in other men, and women make 49 cents for every dollar their male counterparts rake in — enough to make a woman long for the wage inequities of the non-digital world, where on average they take home a whopping 77 cents on the male dollar. Though 40% of private businesses nationwide are women-owned, only 8% of venture-backed tech start-ups are.

Established companies are equally segregated. The National Center for Women and Information Technology reports that in the top 100 tech companies, only 6% of chief executives are women. The numbers of Asians who get to the top are comparable, despite the fact that they make up one-third of all Silicon Valley software engineers. In 2010, not even 1% of the founders of Silicon Valley companies were black.

Making Your Way in a Misogynist Culture

What about the online communities that are routinely held up as exemplars of a new, networked, open culture? One might assume from all the “revolutionary” and “disruptive” rhetoric that they, at least, are better than the tech goliaths. Sadly, the data doesn’t reflect the hype. Consider Wikipedia. A survey revealed that women make up less than 15% of the contributors to the site, despite the fact that they use the resource in equal numbers to men.

In a similar vein, collaborative filtering sites like Reddit and Slashdot, heralded by the digerati as the cultural curating mechanisms of the future, cater to users who are up to 87% male and overwhelmingly young, wealthy, and white. Reddit, in particular, has achieved notoriety for its misogynist culture, with threads where rapists have recounted their exploits and photos of underage girls got posted under headings like “Chokeabitch,” “N*****jailbait,” and “Creepshots.”

Though open-source production is held up as a paragon of political virtue, evidence suggests that as few as 1.5% of open-source programmers are women, a number far lower than in the computing profession as a whole. In response, analysts have blamed everything from chauvinism, assumptions of inferiority, and outrageous examples of impropriety (including sexual harassment at conferences where programmers gather) to a lack of women mentors and role models. Yet the advocates of open-source production continue to insist that their culture exemplifies a new and ethical social order ruled by principles of equality, inclusivity, freedom, and democracy.

Unfortunately, it turns out that openness, when taken as an absolute, actually aggravates the gender gap. The peculiar brand of libertarianism in vogue within technology circles means a minority of members — a couple of outspoken misogynists, for example — can disproportionately affect the behavior and mood of the group under the cover of free speech. As Joseph Reagle, author of Good Faith Collaboration: The Culture of Wikipedia, points out, women are not supposed to complain about their treatment, but if they leave — that is, essentially are driven from — the community, that’s a decision they alone are responsible for.

“Urban” Planning in a Digital Age

The digital is not some realm distinct from “real” life, which means that the marginalization of women and minorities online cannot be separated from the obstacles they confront offline. Comparatively low rates of digital participation and the discrimination faced by women and minorities within the tech industry matter — and not just because they give the lie to the egalitarian claims of techno-utopians. Such facts and figures underscore the relatively limited experiences and assumptions of the people who design the systems we depend on to use the Internet — a medium that has, after all, become central to nearly every facet of our lives.

In a powerful sense, programmers and the corporate officers who employ them are the new urban planners, shaping the virtual frontier into the spaces we occupy, building the boxes into which we fit our lives, and carving out the routes we travel. The choices they make can segregate us further or create new connections; the algorithms they devise can exclude voices or bring more people into the fold; the interfaces they invent can expand our sense of human possibility or limit it to the already familiar.

What vision of a vibrant, thriving city informs their view? Is it a place that fosters chance encounters or does it favor the predictable? Are the communities they create mixed or gated? Are they full of privately owned shopping malls and sponsored billboards or are there truly public squares? Is privacy respected? Is civic engagement encouraged? What kinds of people live in these places and how are they invited to express themselves? (For example, is trolling encouraged, tolerated, or actively discouraged or blocked?)

No doubt, some will find the idea of engineering online platforms to promote diversity unsettling and — a word with some irony embedded in it — paternalistic, but such criticism ignores the ways online spaces are already contrived with specific outcomes in mind.  They are, as a start, designed to serve Silicon Valley venture capitalists, who want a return on investment, as well as advertisers, who want to sell us things. The term “platform,” which implies a smooth surface, misleads us, obscuring the ways technology companies shape our online lives, prioritizing certain purposes over others, certain creators over others, and certain audiences over others.

If equity is something we value, we have to build it into the system, developing structures that encourage fairness, serendipity, deliberation, and diversity through a process of trial and error. The question of how we encourage, or even enforce, diversity in so-called open networks is not easy to answer, and there is no obvious and uncomplicated solution to the problem of online harassment. As a philosophy, openness can easily rationalize its own failure, chalking people’s inability to participate up to choice and, in keeping with the myth of meritocracy, blaming any disparities in audience on a lack of talent or will.

That’s what the techno-optimists would have us believe, dismissing potential solutions as threats to Internet freedom and as forceful interference in a “natural” distribution pattern. The word “natural” is, of course, a mystification, given that technological and social systems are not found growing in a field, nurtured by dirt and sun. They are made by human beings and so can always be changed and improved.

Astra Taylor is a writer, documentary filmmaker (including Zizek! and Examined Life), and activist. Her new book, “The People’s Platform: Taking Back Power and Culture in the Digital Age” (Metropolitan Books), has just been published. This essay is adapted from it. She also helped launch the Occupy offshoot Strike Debt and its Rolling Jubilee campaign.


http://www.salon.com/2014/04/10/the_internets_destructive_gender_gap_why_the_web_cant_abandon_its_misogyny_partner/?source=newsletter

25 things you might not know about the web on its 25th birthday

It sprang from the brain of one man, Tim Berners-Lee, and is the fastest-growing communication medium of all time. A quarter-century on, we examine how the web has transformed our lives

 

Briton Tim Berners-Lee, the inventor of the world wide web, at the opening ceremony of the London 2012 Olympic Games. Photograph: Wang Lili/xh/Xinhua Press/Corbis

 

1 The importance of “permissionless innovation”

The thing that is most extraordinary about the internet is the way it enables permissionless innovation. This stems from two epoch-making design decisions made by its creators in the early 1970s: that there would be no central ownership or control; and that the network would not be optimised for any particular application: all it would do is take in data-packets from an application at one end, and do its best to deliver those packets to their destination.

It was entirely agnostic about the contents of those packets. If you had an idea for an application that could be realised using data-packets (and were smart enough to write the necessary software) then the network would do it for you with no questions asked. This had the effect of dramatically lowering the bar for innovation, and it resulted in an explosion of creativity.
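To make that packet-level agnosticism concrete, here is a minimal Python sketch, my own illustration rather than anything from the original designs: the sender hands the network an address and an opaque payload, and the network neither inspects nor cares what application produced the bytes. The host and port are placeholder values.

import socket

def send_packet(payload: bytes, host: str = "127.0.0.1", port: int = 9999) -> None:
    # The network layer just moves opaque bytes to an address; it has no idea
    # whether they belong to the web, email, or an application not yet invented.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

send_packet(b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n")        # looks like a web request
send_packet(b"any other application's data travels just as happily")  # ...or anything else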

What the designers of the internet created, in effect, was a global machine for springing surprises. The web was the first really big surprise and it came from an individual – Tim Berners-Lee – who, with a small group of helpers, wrote the necessary software and designed the protocols needed to implement the idea. And then he launched it on the world by putting it on the Cern internet server in 1991, without having to ask anybody’s permission.

2 The web is not the internet

Although many people (including some who should know better) often confuse the two. Google is not the internet, and neither is Facebook. Think of the net as analogous to the tracks and signalling of a railway system, and applications – such as the web, Skype, file-sharing and streaming media – as kinds of traffic which run on that infrastructure. The web is important, but it’s only one of the things that runs on the net.

3 The importance of having a network that is free and open

The internet was created by government and runs on open source software. Nobody “owns” it. Yet on this “free” foundation, colossal enterprises and fortunes have been built – a fact that the neoliberal fanatics who run internet companies often seem to forget. Berners-Lee could have been as rich as Croesus if he had viewed the web as a commercial opportunity. But he didn’t – he persuaded Cern that it should be given to the world as a free resource. So the web in its turn became, like the internet, a platform for permissionless innovation. That’s why a Harvard undergraduate was able to launch Facebook on the back of the web.

4 Many of the things that are built on the web are neither free nor open

Mark Zuckerberg was able to build Facebook because the web was free and open. But he hasn’t returned the compliment: his creation is not a platform from which young innovators can freely spring the next set of surprises. The same holds for most of the others who have built fortunes from exploiting the facilities offered by the web. The only real exception is Wikipedia.

5 Tim Berners-Lee is Gutenberg’s true heir

In 1455, with his revolution in printing, Johannes Gutenberg single-handedly launched a transformation in mankind’s communications environment – a transformation that has shaped human society ever since. Berners-Lee is the first individual since then to have done anything comparable.

6 The web is not a static thing

The web we use today is quite different from the one that appeared 25 years ago. In fact it has been evolving at a furious pace. You can think of this evolution in geological “eras”. Web 1.0 was the read-only, static web that existed until the late 1990s. Web 2.0 is the web of blogging, Web services, mapping, mashups and so on – the web that American commentator David Weinberger describes as “small pieces, loosely joined”. The outlines of web 3.0 are only just beginning to appear as web applications that can “understand” the content of web pages (the so-called “semantic web”), the web of data (applications that can read, analyse and mine the torrent of data that’s now routinely published on websites), and so on. And after that there will be web 4.0 and so on ad infinitum.

7 Power laws rule OK

In many areas of life, the law of averages applies – most things are statistically distributed in a pattern that looks like a bell. This pattern is called the “normal distribution”. Take human height. Most people are of average height, and there are relatively few very tall or very short people. But very few – if any – online phenomena follow a normal distribution. Instead they follow what statisticians call a power law distribution, which is why a very small number of the billions of websites in the world attract the overwhelming bulk of the traffic while the long tail of other websites has very little.
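As a rough, made-up illustration of what that difference in shape means for traffic, the short Python sketch below simulates two populations of “sites” with arbitrary parameters and asks what share of the total the top 1% capture. Under a bell curve the top 1% hold barely more than 1%; under a heavy-tailed power law they hold a large chunk of everything.

import random

def top_share(samples, fraction=0.01):
    # Fraction of the total held by the top `fraction` of samples.
    ordered = sorted(samples, reverse=True)
    k = max(1, int(len(ordered) * fraction))
    return sum(ordered[:k]) / sum(ordered)

random.seed(42)
n = 100_000
normal_sites = [max(0.0, random.gauss(100, 15)) for _ in range(n)]  # bell-shaped popularity
powerlaw_sites = [random.paretovariate(1.2) for _ in range(n)]      # heavy-tailed popularity

print(f"Top 1% share, normal:    {top_share(normal_sites):.1%}")
print(f"Top 1% share, power law: {top_share(powerlaw_sites):.1%}")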

8 The web is now dominated by corporations

Despite the fact that anybody can launch a website, the vast majority of the top 100 websites are run by corporations. The only real exception is Wikipedia.

9 Web dominance gives companies awesome (and unregulated) powers

Take Google, the dominant search engine. If a Google search doesn’t find your site, then in effect you don’t exist. And this will get worse as more of the world’s business moves online. Every so often, Google tweaks its search algorithms in order to thwart those who are trying to “game” them in what’s called search engine optimisation. Every time Google rolls out the new tweaks, however, entrepreneurs and organisations find that their online business or service suffers or disappears altogether. And there’s no real comeback for them.

10 The web has become a memory prosthesis for the world

Have you noticed how you no longer try to remember some things because you know that if you need to retrieve them you can do so just by Googling?

11 The web shows the power of networking

The web is based on the idea of “hypertext” – documents in which some terms are dynamically linked to other documents. But Berners-Lee didn’t invent hypertext – Ted Nelson did in 1963 and there were lots of hypertext systems in existence long before Berners-Lee started thinking about the web. But the existing systems all worked by interlinking documents on the same computer. The twist that Berners-Lee added was to use the internet to link documents that could be stored anywhere. And that was what made the difference.

12 The web has unleashed a wave of human creativity

Before the web, “ordinary” people could publish their ideas and creations only if they could persuade media gatekeepers (editors, publishers, broadcasters) to give them prominence. But the web has given people a global publishing platform for their writing (Blogger, WordPress, Typepad, Tumblr), photographs (Flickr, Picasa, Facebook), audio and video (YouTube, Vimeo); and people have leapt at the opportunity.

13 The web should have been a read-write medium from the beginning

Berners-Lee’s original desire was for a web that would enable people not only to publish, but also to modify, web pages, but in the end practical considerations led to the compromise of a read-only web. Anybody could publish, but only the authors or owners of web pages could modify them. This led to the evolution of the web in a particular direction and it was probably the factor that guaranteed that corporations would in the end become dominant.

14 The web would be much more useful if web pages were machine-understandable

Web pages are, by definition, machine-readable. But machines can’t understand what they “read” because they can’t do semantics. So they can’t easily determine whether the word “Casablanca” refers to a city or to a movie. Berners-Lee’s proposal for the “semantic web” – ie a way of restructuring web pages to make it easier for computers to distinguish between, say, Casablanca the city and Casablanca the movie – is one approach, but it would require a lot of work upfront and is unlikely to happen on a large scale. What may be more useful are increasingly powerful machine-learning techniques that will make computers better at understanding context.
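One way to picture what “machine-understandable” might look like in practice is the structured markup that search engines already read. The Python sketch below simply prints two schema.org descriptions in JSON-LD, one for the city and one for the 1942 film; it illustrates the general idea of attaching explicit types to a page rather than Berners-Lee’s own RDF-based proposal.

import json

city = {
    "@context": "https://schema.org",
    "@type": "City",
    "name": "Casablanca",
    "containedInPlace": {"@type": "Country", "name": "Morocco"},
}

movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Casablanca",
    "datePublished": "1942",
    "director": {"@type": "Person", "name": "Michael Curtiz"},
}

# Embedded in a page as application/ld+json, either block tells a crawler
# unambiguously which "Casablanca" the page is about.
print(json.dumps(city, indent=2))
print(json.dumps(movie, indent=2))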

15 The importance of killer apps

A killer application is one that makes the adoption of a technology a no-brainer. The spreadsheet was the killer app for the first Apple computer. Email was the first killer app for the Arpanet – the internet’s precursor. The web was the internet’s first killer app. Before the web – and especially before the first graphical browser, Mosaic, appeared in 1993 – almost nobody knew or cared about the internet (which had been running since 1983). But after the web appeared, suddenly people “got” it, and the rest is history.

16 WWW is linguistically unique

Well, perhaps not, but Douglas Adams claimed that it was the only set of initials that took longer to say than the thing it was supposed to represent.

17 The web is a startling illustration of the power of software

Software is pure “thought stuff”. You have an idea; you write some instructions in a special language (a computer program); and then you feed it to a machine that obeys your instructions to the letter. It’s a kind of secular magic. Berners-Lee had an idea; he wrote the code; he put it on the net, and the network did the rest. And in the process he changed the world.

18 The web needs a micro-payment system

In addition to being just a read-only system, the other initial drawback of the web was that it did not have a mechanism for rewarding people who published on it. That was because no efficient online payment system existed for securely processing very small transactions at large volumes. (Credit-card systems are too expensive and clumsy for small transactions.) But the absence of a micro-payment system led to the evolution of the web in a dysfunctional way: companies offered “free” services that had a hidden and undeclared cost, namely the exploitation of the personal data of users. This led to the grossly tilted playing field that we have today, in which online companies get users to do most of the work while only the companies reap the financial rewards.

19 We thought that the HTTPS protocol would make the web secure. We were wrong

HTTP is the protocol (agreed set of conventions) that normally regulates conversations between your web browser and a web server. But it’s insecure because anybody monitoring the interaction can read it. HTTPS (stands for HTTP Secure) was developed to encrypt in-transit interactions containing sensitive data (eg your credit card details). The Snowden revelations about US National Security Agency surveillance suggest that the agency may have deliberately weakened this and other key internet protocols.
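Here is a tiny Python sketch of the difference as a user of the two protocols sees it; example.org is just a stand-in for any site reachable over both schemes. The point is that the HTTPS request is wrapped in TLS before it leaves your machine, so an eavesdropper on the wire sees only ciphertext, whereas the plain HTTP request travels as readable text.

from urllib.request import urlopen

# Same request, two schemes: the first is readable by anyone on the network
# path; the second negotiates an encrypted TLS session before sending anything.
with urlopen("http://example.org/") as plain:
    print(plain.status, plain.headers.get("Content-Type"))

with urlopen("https://example.org/") as encrypted:
    print(encrypted.status, encrypted.headers.get("Content-Type"))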

20 The web has an impact on the environment. We just don’t know how big it is

The web is largely powered by huge server farms located all over the world that need large quantities of electricity for computers and cooling. (Not to mention the carbon footprint and natural resource costs of the construction of these installations.) Nobody really knows what the overall environmental impact of the web is, but it’s definitely non-trivial. A couple of years ago, Google claimed that its carbon footprint was on a par with that of Laos or the United Nations. The company now claims that each of its users is responsible for about eight grams of carbon dioxide emissions every day. Facebook claims that, despite its users’ more intensive engagement with the service, it has a significantly lower carbon footprint than Google.

21 The web that we see is just the tip of an iceberg

The web is huge – nobody knows how big it is, but what we do know is that the part of it that is reached and indexed by search engines is just the surface. Most of the web is buried deep down – in dynamically generated web pages, pages that are not linked to by other pages and sites that require logins – which are not reached by these engines. Most experts think that this deep (hidden) web is several orders of magnitude larger than the 2.3 billion pages that we can see.

22 Tim Berners-Lee’s boss was the first of many people who didn’t get it initially

Berners-Lee’s manager at Cern scribbled “vague but interesting” on the first proposal Berners-Lee submitted to him. Most people confronted with something that is totally new probably react the same way.

23 The web has been the fastest-growing communication medium of all time

One measure is how long a medium takes to reach the first 50 million users. It took broadcast radio 38 years and television 13 years. The web got there in four.

24 Web users are ruthless readers

The average page visit lasts less than a minute. The first 10 seconds are critical for users’ decision to stay or leave. The probability of their leaving is very high during these seconds. They’re still highly likely to leave during the next 20 seconds. It’s only after they have stayed on a page for about 30 seconds that the chances improve that they will finish it.

25 Is the web making us stupid?

Writers like Nick Carr are convinced that it is. He thinks that fewer people engage in contemplative activities because the web distracts them so much. “With the exception of alphabets and number systems,” he writes, “the net may well be the single most powerful mind-altering technology that has ever come into general use.” But technology giveth and technology taketh away. For every techno-pessimist like Carr, there are thinkers like Clay Shirky, Jeff Jarvis, Yochai Benkler, Don Tapscott and many others (including me) who think that the benefits far outweigh the costs.

John Naughton’s From Gutenberg to Zuckerberg is published by Quercus

 

http://www.theguardian.com/technology/2014/mar/09/25-years-web-tim-berners-lee

Let’s build our own internet, with blackjack and hookers.

 

The Pirate Bay, delving further into the anti-censorship battle, may have just invented a new type of internet, hosted peer-to-peer, and maintained using the Bitcoin protocol.

Love them or hate them, The Pirate Bay are always ahead of the curve when it comes to digital rights, especially when it comes to copyright, DRM and censorship. Now I’m not one to say ‘they give me free shit, awesome hur dur’. Artist remuneration is important to me and in many senses TPB circumvents this. But the current copyright system is broken. Fractions of the dollar go to the artists, and the archaic content distribution models mean lots of content can’t be seen legally without 100 channels of cable or a $40 DVD.
Media pirates

People consume media differently and the market largely hasn’t caught up. Progressive media groups, like Netflix, actually use TPB stats to work out what programs to book. It’s acknowledged that freely distributing your content is a great way to get exposed. Most bands will seed a torrent in the hopes it goes viral. So clearly there’s merit to the model.

 

“Thanks Pirate Bay”

Now if all TPB did was make it easier for people to OD on Game Of Thrones I’d still be impressed. Their fractured, cloud-hosted solutions and domain hopping have been a beacon of hope to everyone who feels uncomfortable with bolder and bolder attempts to centralise and regulate an internet built by and for free thinkers.

But what matters now is what they’re doing to bypass censorship.

Thought police

You see, the internet and its contents are a bit like an ocean. It’s huge, it’s untamed, it has dangerous, disgusting depths and beautiful vistas. More and more, however, you, the user, are shunted onto the tourist beaches for your own good. You don’t even see “no access” signs for the areas that aren’t safe. Through the wizardry of IP blocking, they make it so you can’t even see they were there. So instead you paddle in the shallows, reading 9gag and sharing snapchats of your cat’s hat.

TPB’s first step was the Pirate Bay browser, very similar to the Tor browser but without IP masking (so you aren’t anonymous). This browser means users aren’t limited in their access because of their location.

It’s not just China that limits its internet access; most countries live in a media bubble, from blocking access to movies and shows because licensing doesn’t allow it, to restricting the news that is readily available. The people in office aren’t even being subtle anymore. Consider the porn filter in the UK: they are restricting content based on the views of a moral minority who happen to hold political (and one would assume economic) power. If you think this is going to be anything other than more prevalent in the near future, or that this doesn’t affect you, then you need a better understanding of the role of free speech in government accountability.

 

The buccaneers behind The Pirate Bay.

Fighting back

However, even with IP masking, governments can still go right to the source: block an IP address, confiscate servers and basically kill a website. That’s all well and good for stopping child porn and nuclear warhead plans from being distributed, but it is also more than likely to be used to silence boat-rockers, dissidents and anyone who challenges the current politico-economic paradigm that keeps the suits in limos. Consider WikiLeaks, which has been under attack merely for holding the government’s own actions up to the light for scrutiny.

The way TPB are addressing this is with a decentralised, peer-to-peer internet.

You heard me right.

This means domain blocking becomes impossible, servers can’t be seized, and the powers that be lose their usual tools for limiting free speech that challenges the political or economic status quo.

Decentralise everything

The way it works is that a site’s indexable data is stored on your computer, so you host little chunks of the sites you visit, in much the same way as people host chunks of data when maintaining a seed for a torrent file.

Users will be able to register their ‘domain’ using bitcoin, on a first-come, first-served basis, renewing every year. This means that even the registration system is decentralised; in fact, it relies on a completely different decentralised network. That is one hell of a built-in redundancy.

It will use a fake DNS system, but there is no real IP address to take down, as the database will be scattered across a global, decentralised network of users. No points of failure and no centralised control mechanisms mean it could become a very robust platform for maintaining free speech.
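Details were thin when this was announced, so the following Python sketch is purely speculative, a guess at the shape of such a system rather than TPB’s actual design: a site’s data is split into hash-addressed chunks that any peer can serve, and a first-come, first-served ledger (standing in here for the Bitcoin-backed registry) maps a human-readable name to a site’s root hash. The “example.p2p” name and the chunk size are invented for illustration.

import hashlib

CHUNK_SIZE = 16 * 1024

def chunk_site(data: bytes):
    # Split site data into pieces keyed by content hash; peers host these
    # pieces much as torrent seeders host pieces of a file.
    chunks = {}
    for i in range(0, len(data), CHUNK_SIZE):
        piece = data[i:i + CHUNK_SIZE]
        chunks[hashlib.sha256(piece).hexdigest()] = piece
    return chunks

class NameLedger:
    # Stand-in for the blockchain-backed registry: first come, first served,
    # with no central authority able to seize or reassign a name.
    def __init__(self):
        self._names = {}

    def register(self, name, root_hash):
        if name in self._names:
            return False  # already taken
        self._names[name] = root_hash
        return True

    def resolve(self, name):
        return self._names.get(name)

site = b"<html><body>an uncensorable hello world</body></html>"
chunks = chunk_site(site)
root = hashlib.sha256(site).hexdigest()

ledger = NameLedger()
ledger.register("example.p2p", root)  # hypothetical name, paid for in bitcoin
print(len(chunks), "chunk(s); example.p2p ->", ledger.resolve("example.p2p"))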

There are issues: for example, what happens if you unwittingly host illegal content, or if the bulk of the sites you use are very data-hungry? The system has just been announced, so further news may quash or exacerbate these concerns.

Do we need it?

In a world where the original ideals of a free internet are being consumed by data discrimination, PRISM, the NSA and the TPP, this pirate web may be one of the few places where truly subversive discussion can occur. It may just halt part of a concerted effort to turn the net into a homogenised tracking device, used to buy iPads and photograph food, whilst we are spied on and lied to.

While people may ask why it is needed, it must be remembered that a benign government only stays so under constant scrutiny and absolute accountability to the governed. This can only occur where there is a completely unfettered platform for free speech and sharing.

Love them or hate them, what The Pirate Bay have done, are doing and will do with the peer-to-peer protocol may be key to your political freedoms and human rights in the future.


The Golden Age of Journalism?

Tomgram: Engelhardt, The Rise of the Reader
Posted by Tom Engelhardt at 8:08am, January 21, 2014.
Follow TomDispatch on Twitter @TomDispatch.

Your Newspaper, Your Choice
By Tom Engelhardt

It was 1949.  My mother — known in the gossip columns of that era as “New York’s girl caricaturist” — was freelancing theatrical sketches to a number of New York’s newspapers and magazines, including the Brooklyn Eagle.  That paper, then more than a century old, had just a few years of life left in it.  From 1846 to 1848, its editor had been the poet Walt Whitman.  In later years, my mother used to enjoy telling a story about the Eagle editor she dealt with who, on learning that I was being sent to Walt Whitman kindergarten, responded in the classically gruff newspaper manner memorialized in movies like His Girl Friday: “Are they still naming things after that old bastard?”

In my childhood, New York City was, you might say, papered with newspapers.  The Daily News, the Daily Mirror, the Herald Tribune, the Wall Street Journal… there were perhaps nine or 10 significant ones on newsstands every day and, though that might bring to mind some golden age of journalism, it’s worth remembering that a number of them were already amalgams.  The Journal-American, for instance, had once been the Evening Journal and the American, just as the World-Telegram & Sun had been a threesome, the World, the Evening Telegram, and the Sun.  In my own household, we got the New York Times (disappointingly comic-strip-less), the New York Post (then a liberal, not a right-wing, rag that ran Pogo and Herblock’s political cartoons) and sometimes the Journal-American (Believe It or Not and The Phantom).

Then there were always the magazines: in our house, Life, the Saturday Evening Post, Look, the New Yorker — my mother worked for some of them, too — and who knows what else in a roiling mass of print.  It was a paper universe all the way to the horizon, though change and competition were in the air.  After all, the screen (the TV screen, that is) was entering the American home like gangbusters. Mine arrived in 1953 when the Post assigned my mother to draw the Army-McCarthy hearings, which — something new under the sun — were to be televised live by ABC.

Still, at least in my hometown, it seemed distinctly like a golden age of print news, if not of journalism.  Some might reserve that label for the shake-up, breakdown era of the 1960s, that moment when the New Journalism arose, an alternative press burst onto the scene, and for a brief moment in the late 1960s and early 1970s, the old journalism put its mind to uncovering massacres, revealing the worst of American war, reporting on Washington-style scandal, and taking down a president.  In the meantime, magazines like Esquire and Harper’s came to specialize in the sort of chip-on-the-shoulder, stylish voicey-ness that would, one day, become the hallmark of the online world and the age of the Internet.  (I still remember the thrill of first reading Tom Wolfe’s “The Kandy-Kolored Tangerine-Flake Streamline Baby” on the world of custom cars.  It put the vrrrooom into writing in a dazzling way.)

However, it took the arrival of the twenty-first century to turn the journalistic world of the 1950s upside down and point it toward the trash heap of history.  I’m talking about the years that shrank the screen, and put it first on your desk, then in your hand, next in your pocket, and one day soon on your eyeglasses, made it the way you connected with everyone on Earth and they — whether as friends, enemies, the curious, voyeurs, corporate sellers and buyers, or the NSA — with you.  Only then did it become apparent that, throughout the print era, all those years of paper running off presses and newsboys and newsstands, from Walt Whitman to Woodward and Bernstein, the newspaper had been misnamed.

Journalism’s amour propre had overridden a clear-eyed assessment of what exactly the paper really was.  Only then would it be fully apparent that it always should have been called the “adpaper.”  When the corporation and the “Mad Men” who worked for it spied the Internet and saw how conveniently it gathered audiences and what you could learn about their lives, preferences, and most intimate buying habits, the ways you could slice and dice demographics and sidle up to potential customers just behind the ever-present screen, the ad began to flee print for the online world.  It was then, of course, that papers (as well as magazines) — left with overworked, ever-smaller staffs, evaporating funding, and the ad-less news — began to shudder, shrink, and in some cases collapse (as they might not have done if the news had been what fled).

New York still has four dailies (Murdoch’s Post, the Daily News, the New York Times, and the Wall Street Journal).  However, in recent years, many two-paper towns like Denver and Seattle morphed into far shakier one-paper towns as papers like the Rocky Mountain News and the Seattle Post-Intelligencer passed out of existence (or into only digital existence).  Meanwhile, the Detroit News and Detroit Free Press went over to a three-day-a-week home delivery print edition, and the Times Picayune of New Orleans went down to a three-day-a-week schedule (before returning as a four-day Picayune and a three-day-a-week tabloid in 2013).  The Christian Science Monitor stopped publishing a weekday paper altogether.  And so it went.  In those years, newspaper advertising took a terrible hit, circulation declined, sometimes precipitously, and bankruptcies were the order of the day.

The least self-supporting sections, like book reviews, simply evaporated, and in the one place of significance where a book review section remained, the New York Times, it shrank.  Sunday magazines shriveled up.  Billionaires began to buy papers at bargain-basement prices as, in essence, vanity projects.  Jobs and staffs were radically cut (as were the TV versions of the same so that, for example, if you tune in to NBC’s Nightly News with Brian Williams, you often have the feeling that the estimable Richard Engel, with the job title of chief foreign correspondent, is the only “foreign correspondent” still on the job, flown eternally from hot spot to hot spot around the globe).

No question about it, if you were an established reporter of a certain age or anyone who worked in a newsroom, this was proving to be the aluminum age of journalism.  Your job might be in jeopardy, along with maybe your pension, too.  In these years, stunned by what was suddenly happening to them, the management of papers stood for a time frozen in place like the proverbial deer in the headlights as the voicey-ness of the Internet broke over them, turning their op-ed pages into the grey sisters of the reading world.  Then, in a blinding rush to save what could be saved, recapture the missing ad, or find any other path to a new model of profitability from digital advertising (disappointing) to pay walls (a mixed bag), papers rushed online.  In the process, they doubled the work of the remaining journalists and editors, who were now to service both the new newspaper and the old.

The Worst of Times, the Best of Times

In so many ways, it’s been, and continues to be, a sad, even horrific, tale of loss.  (A similar tale of woe involves the printed book.  Its only advantage: there were no ads to flee the premises, but it suffered nonetheless — already largely crowded out of the newspaper as a non-revenue producer and out of consciousness by a blitz of new ways of reading and being entertained. And I say that as someone who has spent most of his life as an editor of print books.)  The keening and mourning about the fall of print journalism has gone on for years.  It’s a development that represents — depending on who’s telling the story — the end of an age, the fall of all standards, or the loss of civic spirit and the sort of investigative coverage that might keep a few more politicians and corporate heads honest, and so forth and so on.

Let’s admit that the sins of the Internet are legion and well-known: the massive programs of government surveillance it enables; the corporate surveillance it ensures; the loss of privacy it encourages; the flamers and trolls it births; the conspiracy theorists, angry men, and strange characters to whom it gives a seemingly endless moment in the sun; and the way, among other things, it tends to sort like and like together in a self-reinforcing loop of opinion.  Yes, yes, it’s all true, all unnerving, all terrible.

As the editor of TomDispatch.com, I’ve spent the last decade-plus plunged into just that world, often with people half my age or younger.  I don’t tweet.  I don’t have a Kindle or the equivalent.  I don’t even have a smart phone or a tablet of any sort.  When something — anything — goes wrong with my computer I feel like a doomed figure in an alien universe, wish for the last machine I understood (a typewriter), and then throw myself on the mercy of my daughter.

I’ve been overwhelmed, especially at the height of the Bush years, by cookie-cutter hate email — sometimes scores or hundreds of them at a time — of a sort that would make your skin crawl.  I’ve been threatened.  I’ve repeatedly received “critical” (and abusive) emails, blasts of red hot anger that would startle anyone, because the Internet, so my experience tells me, loosens inhibitions, wipes out taboos, and encourages a sense of anonymity that in the older world of print, letters, or face-to-face meetings would have been far less likely to take center stage.  I’ve seen plenty that’s disturbed me. So you’d think, given my age, my background, and my present life, that I, too, might be in mourning for everything that’s going, going, gone, everything we’ve lost.

But I have to admit it: I have another feeling that, at a purely personal level, outweighs all of the above.  In terms of journalism, of expression, of voice, of fine reporting and superb writing, of a range of news, thoughts, views, perspectives, and opinions about places, worlds, and phenomena that I wouldn’t otherwise have known about, there has never been an experimental moment like this.  I’m in awe.  Despite everything, despite every malign purpose to which the Internet is being put, I consider it a wonder of our age.  Yes, perhaps it is the age from hell for traditional reporters (and editors) working double-time, online and off, for newspapers that are crumbling, but for readers, can there be any doubt that now, not the 1840s or the 1930s or the 1960s, is the golden age of journalism?

Think of it as the upbeat twin of NSA surveillance.  Just as the NSA can reach anyone, so in a different sense can you.  Which also means, if you’re a website, anyone can, at least theoretically, find and read you.  (And in my experience, I’m often amazed at who can and does!)  And you, the reader, have in remarkable profusion the finest writing on the planet at your fingertips.  You can read around the world almost without limit, follow your favorite writers to the ends of the Earth.

The problem of this moment isn’t too little.  It’s not a collapsing world.  It’s way too much.  These days, in a way that was never previously imaginable, it’s possible to drown in provocative and illuminating writing and reporting, framing and opining.  In fact, I challenge you in 2014, whatever the subject and whatever your expertise, simply to keep up.

The Rise of the Reader

In the “golden age of journalism,” here’s what I could once do.  In the 1960s and early 1970s, I read the New York Times (as I still do in print daily), various magazines ranging from the New Yorker and Ramparts to “underground” papers like the Great Speckled Bird when they happened to fall into my hands, and I.F. Stone’s Weekly (to which I subscribed), as well as James Ridgeway and Andrew Kopkind’s Hard Times, among other publications of the moment.  Somewhere in those years or thereafter, I also subscribed to a once-a-week paper that had the best of the Guardian, the Washington Post, and Le Monde in it.  For the time, that covered a fair amount of ground.

Still, the limits of that “golden” moment couldn’t be more obvious now.  Today, after all, if I care to, I can read online every word of the Guardian, the Washington Post, and Le Monde (though my French is way too rusty to tackle it). And that’s every single day — and that, in turn, is nothing.

It’s all out there for you.  Most of the major dailies and magazines of the globe, trade publications, propaganda outfits, Pentagon handouts, the voiciest of blogs, specialist websites, the websites of individual experts with a great deal to say, websites, in fact, for just about anyone from historians, theologians, and philosophers to techies, book lovers, and yes, those fascinated with journalism.  You can read your way through the American press and the world press.  You can read whole papers as their editors put them together or — at least in your mind — you can become the editor of your own op-ed page every day of the week, three times, six times a day if you like (and odds are that it will be more interesting to you, and perhaps others, than the op-ed offerings of any specific paper you might care to mention).

You can essentially curate your own newspaper (or magazine) once a day, twice a day, six times a day.  Or — a particular blessing in the present ocean of words — you can rely on a new set of people out there who have superb collection and curating abilities, as well as fascinating editorial eyes.  I’m talking about teams of people at what I like to call “riot sites” — for the wild profusion of headlines they sport — like Antiwar.com (where no story worth reading about conflict on our planet seems to go unnoticed) or Real Clear Politics (Real Clear World/Technology/Energy/etc., etc., etc.).  You can subscribe to an almost endless range of curated online newsletters targeted to specific subjects, like the “morning brief” that comes to me every weekday filled with recommended pieces on cyberwar, terrorism, surveillance, and the like from the Center on National Security at Fordham Law School.  And I’m not even mentioning the online versions of your favorite print magazine, or purely online magazines like Salon.com, or the many websites I visit like Truthout, Alternet, Commondreams, and Truthdig with their own pieces and picks.  And in mentioning all of this, I’m barely scratching the surface of the world of writing that interests me.

There has, in fact, never been a DIY moment like this when it comes to journalism and coverage of the world.  Period.  For the first time in history, you and I have been put in the position of the newspaper editor.  We’re no longer simply passive readers at the mercy of someone else’s idea of how to “cover” or organize this planet and its many moving parts.  To one degree or another, to the extent that any of us have the time, curiosity, or energy, all of us can have a hand in shaping, reimagining, and understanding our world in new ways.

Yes, it is a journalistic universe from hell, a genuine nightmare; and yet, for a reader, it’s also an experimental world, something thrillingly, unexpectedly new under the sun.  For that reader, a strangely democratic and egalitarian Era of the Word has emerged.  It’s chaotic; it’s too much; and make no mistake, it’s also an unstable brew likely to morph into god knows what.  Still, perhaps someday, amid its inanities and horrors, it will also be remembered, at least for a brief historical moment, as a golden age of the reader, a time when all the words you could ever have needed were freely offered up for you to curate as you wish.  Don’t dismiss it.  Don’t forget it.

Tom Engelhardt, a co-founder of the American Empire Project and author of The United States of Fear as well as a history of the Cold War, The End of Victory Culture, runs the Nation Institute’s TomDispatch.com. His latest book, co-authored with Nick Turse, is Terminator Planet: The First History of Drone Warfare, 2001-2050.

Follow TomDispatch on Twitter and join us on Facebook or Tumblr. Check out the newest Dispatch Book, Ann Jones’s They Were Soldiers: How the Wounded Return From America’s Wars — The Untold Story.

Copyright 2014 Tom Engelhardt
