FCC ruling sides with tech companies on “net neutrality”

By Mike Ingram
6 March 2015

The Federal Communications Commission's 3-2 vote on February 26 in favor of new telecommunications rules has been hailed as a landmark ruling that will ensure “net neutrality,” defined as equal access to the Internet for all content providers. The reality is more complex and far less positive.

The FCC’s latest proposal does bar broadband service providers—giant companies like Comcast and the major telecoms that control so-called last-mile access to the Internet—from discriminating between different forms of content, either by offering price discounts or faster traffic speeds.

But the other major set of corporate giants, technology companies like Google, Yahoo and Netflix, will retain their monopoly control over search and content provision. And the most dangerous enemy of a genuinely free Internet, the US government, with its vast panoply of spy agencies vacuuming up all web content, may gain additional authority over the Internet via the FCC itself.

By reclassifying broadband Internet services as a “telecommunications service” under Title II of the Communications Act, the ruling puts Internet Service Providers (ISPs) under the same regulatory framework as telecommunications. Given the monopolization and price gouging that prevails in that industry, the hosannas over the FCC ruling by some advocates of net neutrality are both premature and exaggerated.

The exact language of the rules has not yet been made public, but from the statements issued by the FCC, the two main changes from an earlier decision in 2010—subsequently thrown out in a court challenge—were the reclassification under Title II, and the decision to apply the ruling to mobile as well as fixed-line broadband services.

Net neutrality is a set of principles designed to prevent restrictions by ISPs and governments on content, sites, platforms or the kinds of equipment that may be used to access the Internet. Legitimate concern over the monopoly power of broadband giants such as Comcast has generated broad support for net neutrality among online activists. The issue has prompted several online protests in recent years, including the so-called Internet slowdown of September 10 last year, when over 40,000 web sites solicited calls to senators and over 4.7 million comments to the FCC.

Following the February 26 vote, the site battleforthenet.com, which had played the major role in instigating the “slowdown,” proclaimed an “epic victory,” stating: “Washington insiders said it couldn’t be done. But the public got loud in protest, the FCC gave in, and we won Title II net neutrality rules. Now Comcast is furious. They want to destroy our victory with their massive power in Congress. You won net neutrality. Now, are you ready to defend it?”

Such an analysis ignores the equally “massive power” of the tech industry, both in Congress and in the Obama administration. A lengthy article published by the Washington Post on March 1 describes Silicon Valley as “the new revolving door for Obama staffers.”

The article notes that the FCC decision “marked a major win for Silicon Valley, an industry that has built a close relationship with the president and his staff over the last six years.” The tech industry has “enriched Obama’s campaigns through donations” and “presented lucrative opportunities for staffers who leave for the private sector.”

The day of the FCC ruling, former White House press secretary Jay Carney joined Amazon as a senior vice president for global corporate affairs. Former Obama campaign manager David Plouffe now runs policy and strategy for car service start-up Uber. Facebook hired Marne Levine, chief of staff to former National Economic Council director Lawrence H. Summers. Airbnb has three former White House press staff on its books.

The revolving door goes both directions, with a large number of Silicon Valley executives heading to Washington for stints in the Obama administration. Facebook co-founder Chris Hughes helped create Obama’s online campaign, and Google’s former vice president of global policy, Andrew McLaughlin, worked on the campaign’s tech policy agenda, according to the Post. McLaughlin later joined the Obama administration as a deputy to the chief technology officer. Obama’s former deputy chief technology officer Nicole Wong was an executive at both Twitter and Google. Megan Smith, the current chief technology officer, was previously at Google, and Michelle K. Lee, the director of the patent and trademark office, was an intellectual property lawyer for Google.

It is this relationship with the technology giants, rather than any genuine concern for democratic rights on the Internet, that explains Obama’s intervention in the net neutrality dispute. In November last year, Obama called on the FCC to adopt the “strongest possible rules” to protect net neutrality.

Obama’s real attitude to Internet freedom was exposed by the revelations of Edward Snowden, who documented the mandate of the National Security Agency to “collect it all”—in other words, capture the entire content of all the world’s Internet activity in order to analyze and profile all potential opponents of the American government, above all, political opposition from the working class.

Tech companies such as Google and Yahoo, as well as telecommunications giants like Comcast and AT&T, are deeply implicated in the mass spying on the US and global population by the NSA. They routinely hand over data when asked, and expressed concern only after the extent of this collaboration was made known by Snowden.

 

http://www.wsws.org/en/articles/2015/03/06/netn-m06.html

Surveillance Valley: Why Google Is Eager to Align Itself With America’s Military Industrial Complex

Is it wise for us to hand over the contents of our private lives to private companies?

The following is an excerpt from Yasha Levine’s ongoing investigative project, Surveillance Valley, which you can help support on Kickstarter.

Oakland, California: On February 18, 2014, several hundred privacy, labor and civil rights activists packed Oakland’s city hall.

It was a rowdy crowd, and there was a heavy police presence. The people were there to protest the construction of a citywide surveillance center that would turn a firehouse in downtown Oakland into a high-tech intelligence hub straight out of Mission Impossible — a federally funded project that would link up real-time audio and video feeds from thousands of sensors across the city into one high-tech control hub, where analysts could pipe the data through face recognition software and enrich it with data coming in from local, state and federal government and law enforcement agencies.

Residents’ anger at the fusion surveillance center was intensified by a set of internal documents showing that city officials were more interested in using the surveillance center to monitor political protests than in fighting crime: keeping tabs on activists, monitoring non-violent political protests and tracking union organizing that might shut down the Port of Oakland. It was an incendiary find — especially in Oakland, a city with a large marginalized black population, a strong union presence and a long, ugly history of police brutality aimed at minority groups and political activists.

But buried deep in the thousands of pages of planning documents was another disturbing detail: emails showing that Google — the largest and most powerful corporation in Silicon Valley — was among the defense contractors vying for a piece of Oakland’s $11 million surveillance contract.

What was Google doing there? What could a company known for superior search and cute doodles offer a controversial surveillance center?

Turns out, a lot.

Most people still think that Google is one of the good guys on the Internet — a goofy company that aims only to provide the best and coolest tools on the web, from search to maps to endless email space to a powerful replacement for Microsoft Office.

But the free Google services and apps that we interact with on a daily basis aren’t the company’s main product. They are the harvesting machines that dig up and process the stuff that Google really sells: for-profit intelligence.

Google isn’t a traditional Internet service company. It isn’t even an advertising company. Google is a whole new type of beast that runs on a totally new type of tech business model.

Google is a global for-profit surveillance corporation — a company that tries to funnel as much user activity, in the real world and online, through its services as possible in order to track, analyze and profile us. It tracks as much of our daily lives as it can: who we are, what we do, what we like, where we go, who we talk to, what we think about, what we’re interested in. All those things are seized, packaged, commodified and sold on the market.

It’s an amazingly profitable activity that takes the bits and pieces and most intimate detritus of our private lives — something that never really had any commercial value — and turns it into billions in pure profit. It’s like turning rocks and gravel into gold. And it nets Google nearly $20 billion in annual profits.

At this point, most of the business comes from matching the right ad to the right pair of eyeballs at just the right time. But who knows how the massive database Google is compiling on all of us will be used in the future?

What kind of intel does Google compile on us? The company is very secretive about that info. But here are a few data points that could go into its user profiles, gleaned from two patents Google filed a decade ago, prior to launching its Gmail service:

  • Concepts and topics discussed in email, as well as email attachments
  • The content of websites that users have visited
  • Demographic information—including income, sex, race, marital status
  • Geographic information
  • Psychographic information—personality type, values, attitudes, interests
  • Previous searches users have made
  • Information about documents users viewed and edited
  • Browsing activity
  • Previous purchases

If Google’s creepy for-profit surveillance isn’t enough for you, then there are Google’s deep ties to the NSA and the U.S. military-surveillance complex.

Google’s ties to the military-intelligence-industrial complex go back to the 1990s, when Sergey Brin and Larry Page were still run-of-the-mill computer science PhD students at Stanford. Their research into web search and indexing, which they spun off into a private company in 1998, was part of a Stanford project partially funded by DARPA, the research and development arm of the DoD. The two nerdy inventors even gave the DoD’s research agency a shout-out in a 1998 paper that outlined Google’s search and indexing methodology.

Computer science research is frequently funded with military and defense money, of course. But Google’s ties to the military-intelligence world didn’t end after Brin and Page privatized their research and moved their startup operation off campus. If anything, the relationship deepened and became more intimate after they left Stanford.

Google’s intel and military contracting started with custom search contracts with the CIA and NSA in the early 2000s (the CIA even had a customized Google logo on its Google-powered intranet search page) and hit a much more serious phase in 2004, with Google’s acquisition of a tiny and unknown 3-D mapping startup called Keyhole.

Google purchased the company for an undisclosed sum and immediately folded its mapping technology into what later became known as Google Earth. The acquisition would have gone unnoticed had it not been for one tiny detail: Keyhole was part-owned by the CIA and NSA.

A year before Google bought the company, Keyhole had received a substantial investment from In-Q-Tel, the venture capital fund run by the CIA on behalf of the military and intelligence community. The exact amount that In-Q-Tel invested in Keyhole is classified, but its new spook backers didn’t sit idle — they became intimately involved in running the company. This was no secret. The CIA publicly discussed its hands-on approach, bragging in its promotional materials that the agency “worked closely with other Intelligence Community organizations to tailor Keyhole’s systems to meet their needs.” And the CIA guys worked fast: just a few weeks after In-Q-Tel invested in Keyhole, an NGA official bragged that its technology was already being deployed by the Pentagon to prepare U.S. forces for the invasion of Iraq.

This close collaboration between Keyhole/Google Earth and the U.S. National Security State continues today.

Over the years, Google’s reach expanded to include just about every major intel and law enforcement agency in the United States. Today, Google technology enhances the surveillance capabilities of the NSA, FBI, CIA, DEA, NGA, the U.S. Navy and Army, and just about every wing of the DoD.

If you take a look at the roster of Google’s DC office — Google Federal — you’ll see a list jammed with the names of former spooks, high-level intelligence officials and assorted revolving-door military contractors: US Army, Air Force Intelligence, Central Intelligence Agency, Director of National Intelligence, USAID, SAIC, Lockheed.

Take the CV of Michele R. Weslander Quaid, Google’s Chief Technology Officer of Public Sector and “Innovation Evangelist.”

After the 9/11 terrorist attacks, Weslander Quaid felt a patriotic duty to help fight the War on Terror. So she quit her private sector job at a CIA contractor called Scitor Corporation and joined the official world of US government intelligence. She quickly rose through the ranks, serving in executive positions at the National Geospatial-Intelligence Agency (sister agency to the NSA), National Reconnaissance Office and at the Office of the Director of National Intelligence. She toured combat zones in both Iraq and Afghanistan in order to see the tech needs of the military first-hand. All throughout her intel career, she championed a “startup” mentality and the benefits of cloud-based services. Which made her a perfect candidate to head up Google’s federal contractor-lobbying operation…

In the past few years, Google has aggressively intensified its campaign to grab a bigger slice of the insanely lucrative military-intelligence contracting market.

It’s been targeting big and juicy federal agencies — the U.S. Naval Academy signed up for Google Apps, the U.S. Army tapped Google Apps for a pilot program involving 50,000 DoD personnel, Idaho’s nuclear lab went Google, the U.S. Department of the Interior switched to Gmail, and the U.S. Coast Guard Academy went with Google, too. Google even entered into a partnership with the NGA, a sister agency of the NSA, to launch its very own spy satellite, GeoEye-1, which it would share with the U.S. military-intelligence apparatus.

In some cases, Google sells its wares to government intel agencies directly — like it did with the NSA and NGA. It’s also been taking the role of subcontractor: selling its tech by partnering with established military contractors and privatized surveillance firms like SAIC, Lockheed and smaller boutique outfits like the Blackwater-connected merc outfit called Blackbird.

In short: Google’s showing itself willing to do just about anything it can to more effectively hitch itself to America’s military-intelligence-industrial complex.

Google has also been hard-selling its intel technology to smaller local and state government agencies — which is why it was trying to bid on a police surveillance center in Oakland, California.

A company that monopolizes huge swaths of the Internet, makes billions by surveilling and profiling its users, and is very deliberately angling to become the Lockheed Martin of the Internet Age?

Should we be so trusting towards Google? And is it so wise for us to hand over the contents of our private lives — without demanding any control or oversight or care?

Excerpted from Yasha Levine’s ongoing investigative project, Surveillance Valley, which you can help support on Kickstarter.

Google has captured your mind

Searches reveal who we are and how we think. True intellectual privacy requires safeguarding these records


The Justice Department’s subpoena was straightforward enough. It directed Google to disclose to the U.S. government every search query that had been entered into its search engine for a two-month period, and to disclose every Internet address that could be accessed from the search engine. Google refused to comply. And so on Wednesday January 18, 2006, the Department of Justice filed a court motion in California, seeking an order that would force Google to comply with a similar request—a random sample of a million URLs from its search engine database, along with the text of every “search string entered onto Google’s search engine over a one-week period.” The Justice Department was interested in how many Internet users were looking for pornography, and it thought that analyzing the search queries of ordinary Internet users was the best way to figure this out. Google, which had a 45-percent market share at the time, was not the only search engine to receive the subpoena. The Justice Department also requested search records from AOL, Yahoo!, and Microsoft. Only Google declined the initial request and opposed it, which is the only reason we are aware that the secret request was ever made in the first place.

The government’s request for massive amounts of search history from ordinary users requires some explanation. It has to do with the federal government’s interest in online pornography, which has a long history, at least in Internet time. In 1995 Time Magazine ran its famous “Cyberporn” cover, depicting a shocked young boy staring into a computer monitor, his eyes wide, his mouth agape, and his skin illuminated by the eerie glow of the screen. The cover was part of a national panic about online pornography, to which Congress responded by passing the federal Communications Decency Act (CDA) the following year. This infamous law prohibited websites from publishing “patently offensive” content without first verifying the age and identity of their readers, and banned the sending of indecent communications to anyone under eighteen. It tried to transform the Internet into a public space that was always fit for children by default.


The CDA prompted massive protests (and litigation) charging the government with censorship. The Supreme Court agreed in the landmark case of Reno v. ACLU (1997), which struck down the CDA’s decency provisions. In his opinion for the Court, Justice John Paul Stevens explained that regulating the content of Internet expression is no different from regulating the content of newspapers. The case is arguably the most significant free speech decision of the past half century, since it extended the full protection of the First Amendment to Internet expression, rather than treating the Internet like television or radio, whose content may be regulated more extensively. In language that might sound dated, Justice Stevens announced a principle that has endured: “Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Internet, in other words, was now an essential forum for free speech.

In the aftermath of Reno, Congress gave up on policing Internet indecency, but continued to focus on child protection. In 1998 it passed the Children’s Online Protection Act, also known as COPA. COPA punished those who engaged in web communications made “for commercial purposes” that were accessible and “harmful to minors” with a $50,000 fine and prison terms of up to six months. After extensive litigation, the Supreme Court in Ashcroft v. ACLU (2004) upheld a preliminary injunction preventing the government from enforcing the law. The Court reasoned that the government hadn’t proved that an outright ban of “harmful to minors” material was necessary. It suggested that Congress could have instead required the use of blocking or filtering software, which would have had less of an impact on free speech than a ban, and it remanded the case for further proceedings. Back in the lower court, the government wanted to create a study showing that filtering would be ineffective, which is why it wanted the search queries from Google and the other search engine companies in 2006.

Judge James Ware ruled on the subpoena on March 17, 2006, and denied most of the government’s demands. He granted the release of only 5 percent of the requested randomly selected anonymous search results and none of the actual search queries. Much of the reason for approving only a tiny sample of the de-identified search requests had to do with privacy. Google had not made a direct privacy argument, on the grounds that de-identified search queries were not “personal information,” but it argued that disclosure of the records would expose its trade secrets and harm its goodwill from users who believed that their searches were confidential. Judge Ware accepted this oddly phrased privacy claim, and added one of his own that Google had missed. The judge explained that Google users have a privacy interest in the confidentiality of their searches because a user’s identity could be reconstructed from their queries and because disclosure of such queries could lead to embarrassment (searches for, e.g., pornography or abortions) or criminal liability (searches for, e.g., “bomb placement white house”). He also placed the list of disclosed website addresses under a protective order to safeguard Google’s trade secrets.

Two facets of Judge Ware’s short opinion in the “Search Subpoena Case” are noteworthy. First, the judge was quite correct that even search requests that have had their users’ identities removed are not anonymous, as it is surprisingly easy to re-identify this kind of data. The queries we enter into search engines like Google often unwittingly reveal our identities. Most commonly, we search our own names, out of vanity, curiosity, or to discover if there are false or embarrassing facts or images of us online. But other parts of our searches can reveal our identities as well. A few months after the Search Subpoena Case, AOL made public twenty million search queries from 650,000 of its search engine users. AOL was hoping this disclosure would help researchers and had replaced its users’ names with numerical IDs to protect their privacy. But two New York Times reporters showed just how easy it could be to re-identify them. They tracked down AOL user number 4417749 and identified her as Thelma Arnold, a sixty-two-year-old widow in Lilburn, Georgia. Thelma had made hundreds of searches including “numb fingers,” “60 single men,” and “dog that urinates on everything.” The New York Times reporters used old-fashioned investigative techniques, but modern sophisticated computer science tools make re-identification of such information even easier. One such technique allowed computer scientists to re-identify users in the Netflix movie-watching database, which that company made public to researchers in 2006.
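The mechanics behind such re-identification can be sketched in miniature: a few quasi-identifiers distilled from an “anonymous” query log, intersected against publicly known attributes, collapse the candidate pool to a single person. The data and attribute names below are entirely invented for illustration.

```python
# A minimal sketch of query-log re-identification. All records here are
# hypothetical; real attacks intersect far richer public data sets.

# Publicly known attributes of a few residents (invented).
residents = [
    {"name": "T. Arnold", "town": "Lilburn", "age_band": "60s", "owns_dog": True},
    {"name": "J. Smith",  "town": "Lilburn", "age_band": "30s", "owns_dog": True},
    {"name": "R. Lee",    "town": "Atlanta", "age_band": "60s", "owns_dog": False},
]

# Quasi-identifiers inferred from one "anonymous" user's search queries
# (e.g. searches about local businesses, age-specific topics, pet problems).
clues = {"town": "Lilburn", "age_band": "60s", "owns_dog": True}

# Keep only residents consistent with every clue.
matches = [r for r in residents
           if all(r[k] == v for k, v in clues.items())]

# With enough clues, the candidate set collapses to one person.
assert len(matches) == 1
print(matches[0]["name"])
```

Each additional clue roughly halves the candidate pool, which is why even a short query history is usually enough to single someone out.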

The second interesting facet of the Search Subpoena Case is its theory of privacy. Google won because the disclosure threatened its trade secrets (a commercial privacy, of sorts) and its business goodwill (which relied on its users believing that their searches were private). Judge Ware suggested that a more direct kind of user privacy was at stake, but was not specific beyond some generalized fear of embarrassment (echoing the old theory of tort privacy) or criminal prosecution (evoking the “reasonable expectation of privacy” theme from criminal law). Most people no doubt have an intuitive sense that their Internet searches are “private,” but neither our intuitions nor the Search Subpoena Case tell us why. This is a common problem in discussions of privacy. We often use the word “privacy” without being clear about what we mean or why it matters. We can do better.

Internet searches implicate our intellectual privacy. We use tools like Google Search to make sense of the world, and intellectual privacy is needed when we are making sense of the world. Our curiosity is essential, and it should be unfettered. As I’ll show in this chapter, search queries implicate a special kind of intellectual privacy, which is the freedom of thought.

Freedom of thought and belief is the core of our intellectual privacy. This freedom is the defining characteristic of a free society and our most cherished civil liberty. This right encompasses the range of thoughts and beliefs that a person might hold or develop, dealing with matters that are trivial and important, secular and profane. And it protects the individual’s thoughts from scrutiny or coercion by anyone, whether a government official or a private actor such as an employer, a friend, or a spouse. At the level of law, if there is any constitutional right that is absolute, it is this one, which is the precondition for other political and religious rights guaranteed by the Western tradition. Yet curiously, although freedom of thought is widely regarded as our most important civil liberty, it has not been protected in our law as much as other rights, in part because it has been very difficult for the state or others to monitor thoughts and beliefs even if they wanted to.

Freedom of Thought and Intellectual Privacy

In 1913 the eminent Anglo-Irish historian J. B. Bury published A History of Freedom of Thought, in which he surveyed the importance of freedom of thought in the Western tradition, from the ancient Greeks to the twentieth century. According to Bury, the conclusion that individuals should have an absolute right to their beliefs free of state or other forms of coercion “is the most important ever reached by men.” Bury was not the only scholar to have observed that freedom of thought (or belief, or conscience) is at the core of Western civil liberties. Recognitions of this sort are commonplace and have been made by many of our greatest minds. René Descartes’s maxim, “I think, therefore I am,” identifies the power of individual thought at the core of our existence. John Milton praised in Areopagitica “the liberty to know, to utter, and to argue freely according to conscience, above all [other] liberties.”

In the nineteenth century, John Stuart Mill developed a broad notion of freedom of thought as an essential element of his theory of human liberty, which comprised “the inward domain of consciousness; demanding liberty of conscience, in the most comprehensive sense; liberty of thought and feeling; absolute freedom of opinion and sentiment on all subjects, practical or speculative, scientific, moral, or theological.” In Mill’s view, free thought was inextricably linked to and mutually dependent upon free speech, with the two concepts being a part of a broader idea of political liberty. Moreover, Mill recognized that private parties as well as the state could chill free expression and thought.

Law in Britain and America has embraced the central importance of free thought as the civil liberty on which all others depend. But it was not always so. People who cannot think for themselves, after all, are incapable of self-government. In the Middle Ages, the crime of “constructive treason” made “imagining the death of the king” an offense punishable by death. Thomas Jefferson later reflected that this crime “had drawn the Blood of the best and honestest Men in the Kingdom.” The impulse for political uniformity was related to the impulse for religious uniformity, whose story is one of martyrdom and burnings at the stake. As Supreme Court Justice William O. Douglas put it in 1963:

While kings were fearful of treason, theologians were bent on stamping out heresy. . . . The Reformation is associated with Martin Luther. But prior to him it broke out many times only to be crushed. When in time the Protestants gained control, they tried to crush the Catholics; and when the Catholics gained the upper hand, they ferreted out the Protestants. Many devices were used. Heretical books were destroyed and heretics were burned at the stake or banished. The rack, the thumbscrew, the wheel on which men were stretched, these were part of the paraphernalia.

Thankfully, the excesses of such a dangerous government power were recognized over the centuries, and thought crimes were abolished. Thus, William Blackstone’s influential Commentaries stressed the importance of the common law protection for the freedom of thought and inquiry, even under a system that allowed subsequent punishment for seditious and other kinds of dangerous speech. Blackstone explained that:

Neither is any restraint hereby laid upon freedom of thought or inquiry: liberty of private sentiment is still left; the disseminating, or making public, of bad sentiments, destructive of the ends of society, is the crime which society corrects. A man (says a fine writer on this subject) may be allowed to keep poisons in his closet, but not publicly to vend them as cordials.

Even during a time when English law allowed civil and criminal punishment for many kinds of speech that would be protected today, including blasphemy, obscenity, seditious libel, and vocal criticism of the government, jurists recognized the importance of free thought and gave it special, separate protection in both the legal and cultural traditions.

The poisons metaphor Blackstone used, for example, was adapted from Jonathan Swift’s Gulliver’s Travels, from a line that the King of Brobdingnag delivers to Gulliver. Blackstone’s treatment of freedom of thought was itself adopted by Joseph Story in his own Commentaries, the leading American treatise on constitutional law in the early Republic. Thomas Jefferson and James Madison also embraced freedom of thought. Jefferson’s famous Virginia Statute for Religious Freedom enshrined religious liberty around the declaration that “Almighty God hath created the mind free,” and James Madison forcefully called for freedom of thought and conscience in his Memorial and Remonstrance Against Religious Assessments.

Freedom of thought thus came to be protected directly as a prohibition on state coercion of truth or belief. It was one of a handful of rights protected by the original Constitution even before the ratification of the Bill of Rights. Article VI provides that “state and federal legislators, as well as officers of the United States, shall be bound by oath or affirmation, to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States.” This provision, known as the “religious test clause,” ensured that religious orthodoxy could not be imposed as a requirement for governance, a further protection of the freedom of thought (or, in this case, its closely related cousin, the freedom of conscience). The Constitution also gives special protection against the crime of treason, by defining it to exclude thought crimes and providing special evidentiary protections:

Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.

By eliminating religious tests and by defining the crime of treason as one of guilty actions rather than merely guilty minds, the Constitution was thus steadfastly part of the tradition giving exceptional protection to the freedom of thought.

Nevertheless, even when governments could not directly coerce the uniformity of beliefs, a person’s thoughts remained relevant to both law and social control. A person’s thoughts could reveal political or religious disloyalty, or they could be relevant to a defendant’s mental state in committing a crime or other legal wrong. And while thoughts could not be revealed directly, they could be discovered by indirect means. For example, thoughts could be inferred either from a person’s testimony or confessions, or by access to their papers and diaries. But both the English common law and the American Bill of Rights came to protect against these intrusions into the freedom of the mind as well.

The most direct way to obtain knowledge about a person’s thoughts would be to haul him before a magistrate as a witness and ask him under penalty of law. The English ecclesiastical courts used the “oath ex officio” for precisely this purpose. But as historian Leonard Levy has explained, this practice came under assault in Britain as invading the freedom of thought and belief. As the eminent jurist Lord Coke later declared, “no free man should be compelled to answer for his secret thoughts and opinions.” The practice of the oath was ultimately abolished in England in the cases of John Lilburne and John Entick, men who were political dissidents rather than religious heretics.

In the new United States, the Fifth Amendment guarantee that “No person . . . shall be compelled in any criminal case to be a witness against himself” can also be seen as a resounding rejection of this sort of practice in favor of the freedom of thought. Law of course evolves, and current Fifth Amendment doctrine focuses on the consequences of a confession rather than on mental privacy, but the origins of the Fifth Amendment are part of a broad commitment to freedom of thought that runs through our law. The late criminal law scholar William Stuntz has shown that this tradition was not merely a procedural protection for all, but a substantive limitation on the power of the state to force its enemies to reveal their unpopular or heretical thoughts. As he put the point colorfully, “[i]t is no coincidence that the privilege’s origins read like a catalogue of religious and political persecution.”

Another way to obtain a person’s thoughts would be by reading their diaries or other papers. Consider the Fourth Amendment, which protects a person from unreasonable searches and seizures by the police:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Today we think about the Fourth Amendment as providing protection for the home and the person chiefly against unreasonable searches for contraband like guns or drugs. But the Fourth Amendment’s origins come not from drug cases but as a bulwark against intellectual surveillance by the state. In the eighteenth century, the English Crown had sought to quash political and religious dissent through the use of “general warrants,” legal documents that gave agents of the Crown the authority to search the homes of suspected dissidents for incriminating papers.

Perhaps the most infamous dissident of the time was John Wilkes. Wilkes was a progressive critic of Crown policy and a political rogue whose public tribulations, wit, and famed personal ugliness made him a celebrity throughout the English-speaking world. Wilkes was the editor of a progressive newspaper, the North Briton, a member of Parliament, and an outspoken critic of government policy. He was deeply critical of the 1763 Treaty of Paris ending the Seven Years War with France, a conflict known in North America as the French and Indian War. Wilkes’s damning articles angered King George, who ordered the arrest of Wilkes and his co-publishers of the North Briton, authorizing general warrants to search their papers for evidence of treason and sedition. The government ransacked numerous private homes and printers’ shops, scrutinizing personal papers for any signs of incriminating evidence. In all, forty-nine people were arrested, and Wilkes himself was charged with seditious libel, prompting a long and inconclusive legal battle of suits and countersuits.

By taking a stand against the king and intrusive searches, Wilkes became a cause célèbre among Britons at home and in the colonies. This was particularly true for many American colonists, whose own objections to British tax policy following the Treaty of Paris culminated in the American Revolution. The rebellious colonists drew from the Wilkes case the importance of political dissent as well as the need to protect dissenting citizens from unreasonable (and politically motivated) searches and seizures.

The Fourth Amendment was intended to address this problem by inscribing legal protection for “persons, houses, papers, and effects” into the Bill of Rights. A government that could not search the homes and read the papers of its citizens would be less able to engage in intellectual tyranny and enforce intellectual orthodoxy. In a pre-electronic world, the Fourth Amendment kept out the state, while trespass and other property laws kept private parties out of our homes, paper, and effects.

The Fourth and Fifth Amendments thus protect the freedom of thought at their core. As Stuntz explains, the early English cases establishing these principles were “classic First Amendment cases in a system with no First Amendment.” Even in a legal regime without protection for dissidents who expressed unpopular political or religious opinions, the English system protected those dissidents in their private beliefs, as well as the papers and other documents that might reveal those beliefs.

In American law, an even stronger protection for freedom of thought can be found in the First Amendment. Although the First Amendment text speaks of free speech, press, and assembly, the freedom of thought is unquestionably at the core of these guarantees, and courts and scholars have consistently recognized this fact. In fact, the freedom of thought and belief is the closest thing to an absolute right guaranteed by the Constitution. The Supreme Court first recognized it in the 1878 Mormon polygamy case of Reynolds v. United States, which ruled that although law could regulate religiously inspired actions such as polygamy, it was powerless to control “mere religious belief and opinions.” Freedom of thought in secular matters was identified by Justices Holmes and Brandeis as part of their dissenting tradition in free speech cases in the 1910s and 1920s. Holmes declared crisply in United States v. Schwimmer that “if there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.” And in his dissent in the Fourth Amendment wiretapping case of Olmstead v. United States, Brandeis argued that the framers of the Constitution “sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations.” Brandeis’s dissent in Olmstead adapted his theory of tort privacy into federal constitutional law around the principle of freedom of thought.

Freedom of thought became permanently enshrined in constitutional law during a series of mid-twentieth century cases that charted the contours of the modern First Amendment. In Palko v. Connecticut, Justice Cardozo characterized freedom of thought as “the matrix, the indispensable condition, of nearly every other form of freedom.” And in a series of cases involving Jehovah’s Witnesses, the Court developed a theory of the First Amendment under which the rights of free thought, speech, press, and exercise of religion were placed in a “preferred position.” Freedom of thought was central to this new theory of the First Amendment, exemplified by Justice Jackson’s opinion in West Virginia State Board of Education v. Barnette, which invalidated a state regulation requiring that public school children salute the flag each morning. Jackson declared that:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. . . .

[The flag-salute statute] transcends constitutional limitations on [legislative] power and invades the sphere of intellect and spirit which it is the purpose of the First Amendment to our Constitution to reserve from all official control.

Modern cases continue to reflect this legacy. The Court has repeatedly declared that the constitutional guarantee of freedom of thought is at the foundation of what it means to have a free society. In particular, freedom of thought has been invoked as a principal justification for preventing punishment based upon possessing or reading dangerous media. Thus, the government cannot punish a person for merely possessing unpopular or dangerous books or images based upon their content. As Alexander Meiklejohn put it succinctly, the First Amendment protects, first and foremost, “the thinking process of the community.”

Freedom of thought thus remains, as it has for centuries, the foundation of the Anglo-American tradition of civil liberties. It is also the core of intellectual privacy.

“The New Home of Mind”

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind.” So began “A Declaration of Independence of Cyberspace,” a 1996 manifesto responding to the Communications Decency Act and other attempts by government to regulate the online world and stamp out indecency. The Declaration’s author was John Perry Barlow, a founder of the influential Electronic Frontier Foundation and a former lyricist for the Grateful Dead. Barlow argued that “[c]yberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.” This definition of the Internet as a realm of pure thought was quickly followed by an affirmation of the importance of the freedom of thought. Barlow insisted that in Cyberspace “anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” The Declaration concluded on the same theme: “We will spread ourselves across the Planet so that no one can arrest our thoughts. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.”

In his Declaration, Barlow joined a tradition of many (including many of the most important thinkers and creators of the digital world) who have expressed the idea that networked computing can be a place of “thought itself.” As early as 1960, the great computing visionary J. C. R. Licklider imagined that “in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.” Tim Berners-Lee, the architect of the World Wide Web, envisioned his creation as one that would bring “the workings of society closer to the workings of our minds.”

Barlow’s utopian demand that governments leave the electronic realm alone was only partially successful. The Communications Decency Act was, as we have seen, struck down by the Supreme Court, but today many laws regulate the Internet, such as the U.S. Digital Millennium Copyright Act and the EU Data Retention Directive. The Internet has become more (and less) than Barlow’s utopian vision—a place of business as well as of thinking. But Barlow’s description of the Internet as a world of the mind remains resonant today.

It is undeniable that today millions of people use computers as aids to their thinking. In the digital age, computers are an essential and intertwined supplement to our thoughts and our memories. Discussing Licklider’s prophecy from half a century ago, legal scholar Tim Wu notes that virtually every computer “program we use is a type of thinking aid—whether the task is to remember things (an address book), to organize prose (a word processor), or to keep track of friends (social network software).” These technologies have become not just aids to thought but also part of the thinking process itself. In the past, we invented paper and books, and then sound and video recordings to preserve knowledge and make it easier for us as individuals and societies to remember information. Digital technologies have made remembering even easier, by providing cheap storage, inexpensive retrieval, and global reach. Consider the Kindle, a cheap electronic reader that can hold 1,100 books, or even cheaper external hard drives that can hold hundreds of hours of high-definition video in a box the size of a paperback novel.

Even the words we use to describe our digital products and experiences reflect our understanding that computers and cyberspace are devices and places of the mind. IBM has famously called its laptops “ThinkPads,” and many of us use “smartphones.” Other technologies have been named in ways that affirm their status as tools of the mind—notebooks, ultrabooks, tablets, and browsers. Apple Computer produces iPads and MacBooks and has long sold its products under the slogan, “Think Different.” Google historian John Battelle has famously termed Google’s search records to be a “database of intentions.” Google’s own slogan for its web browser Chrome is “browse the web as fast as you think,” revealing how web browsing itself is not just a form of reading, but a kind of thinking itself. My point here is not just that common usage or marketing slogans connect Internet use to thinking, but a more important one: Our use of these words reflects a reality. We are increasingly using digital technologies not just as aids to our memories but also as an essential part of the ways we think.

Search engines in particular bear a special connection to the processes of thought. How many of us have asked a factual question among friends, only for smartphones to appear as our friends race to see who can look up the answer the fastest? In private, we use search engines to learn about the world. If you have a moment, pull up your own search history on your phone, tablet, or computer, and recall your past queries. It usually makes for interesting reading—a history of your thoughts and wonderings.

But the ease with which we can pull up such a transcript reveals another fundamental feature of digital technologies—they are designed to create records of their use. Think again about the profile a search engine like Google has for you. A transcript of search queries and links followed is a close approximation to a transcript of the operation of your mind. In the logs of search engine companies are vast repositories of intellectual wonderings, questions asked, and mental whims followed. Similar logs exist for Internet service providers and other new technology companies. And the data contained in such logs is eagerly sought by government and private entities interested in monitoring intellectual activity, whether for behavioral advertising, crime and terrorism prevention, or other, possibly more sinister purposes.

Searching Is Thinking

With these two points in mind—the importance of freedom of thought and the idea of the Internet as a place where thought occurs—we can now return to the Google Search Subpoena with which this chapter opened. Judge Ware’s opinion revealed an intuitive understanding that the disclosure of search records was threatening to privacy, but was not clear about what kind of privacy was involved or why it matters.

Intellectual privacy, in particular the freedom of thought, supplies the answer to this problem. We use search engines to learn about and make sense of the world, to answer our questions, and as aids to our thinking. Searching, then, in a very real sense is a kind of thinking. And we have a long tradition of protecting the privacy and confidentiality of our thoughts from the scrutiny of others. It is precisely because of the importance of search records to human thought that the Justice Department wanted to access the records. But if our search records were more public, we wouldn’t merely be exposed to embarrassment like Thelma Arnold of Lilburn, Georgia. We would be less likely to search for unpopular or deviant or dangerous topics. Yet in a free society, we need to be able to think freely about any ideas, no matter how dangerous or unpopular. If we care about freedom of thought—and our political institutions are built on the assumption that we do—we should care about the privacy of electronic records that reveal our thoughts. Search records illustrate the point well, but this idea is not just limited to that one important technology. My argument about freedom of thought in the digital age is this: Any technology that we use in our thinking implicates our intellectual privacy, and if we want to preserve our ability to think fearlessly, free of monitoring, interference, or repercussion, we should embody these technologies with a meaningful measure of intellectual privacy.

Excerpted from “Intellectual Privacy: Rethinking Civil Liberties in the Digital Age” by Neil Richards. Published by Oxford University Press. Copyright 2015 by Neil Richards. Reprinted with permission of the publisher. All rights reserved.

Neil Richards is a Professor of Law at Washington University, where he teaches and writes about privacy, free speech, and the digital revolution.

Robert Reich: America is headed full speed back to the 19th century

Former labor secretary Robert Reich on the dangers of on-demand jobs and our growing intolerance for labor unions

Robert Reich

My recent column about the growth of on-demand jobs like Uber making life less predictable and secure for workers unleashed a small barrage of criticism that workers get what they’re worth in the market.

A Forbes Magazine contributor, for example, writes that jobs exist only “when both employer and employee are happy with the deal being made.” So if the new jobs are low-paying and irregular, too bad.

Much the same argument was voiced in the late nineteenth century over alleged “freedom of contract.” Any deal between employers and workers was assumed to be fine if both sides voluntarily agreed to it.

It was an era when many workers were “happy” to toil twelve-hour days in sweat shops for lack of any better alternative.

It was also a time of great wealth for a few and squalor for many. And of corruption, as the lackeys of robber barons deposited sacks of cash on the desks of pliant legislators.

Finally, after decades of labor strife and political tumult, the twentieth century brought an understanding that capitalism requires minimum standards of decency and fairness – workplace safety, a minimum wage, maximum hours (and time-and-a-half for overtime), and a ban on child labor.

We also learned that capitalism needs a fair balance of power between big corporations and workers.

We achieved that through antitrust laws that reduced the capacity of giant corporations to impose their will, and labor laws that allowed workers to organize and bargain collectively.

By the 1950s, when 35 percent of private-sector workers belonged to a labor union, they were able to negotiate higher wages and better working conditions than employers would otherwise have been “happy” to provide.

But now we seem to be heading back to the nineteenth century.

Corporations are shifting full-time work onto temps, freelancers, and contract workers who fall outside the labor protections established decades ago.

The nation’s biggest corporations and Wall Street banks are larger and more potent than ever.

And labor union membership has shrunk to less than 6 percent of the private-sector workforce.

So it’s not surprising we’re once again hearing that workers are worth no more than what they can get in the market.

But as we should have learned a century ago, markets don’t exist in nature. They’re created by human beings. The real question is how they’re organized and for whose benefit.

In the late nineteenth century they were organized for the benefit of a few at the top.

But by the middle of the twentieth century they were organized for the vast majority.

During the thirty years after the end of World War II, as the economy doubled in size, so did the wages of most Americans — along with improved hours and working conditions.

Yet since around 1980, even though the economy has doubled once again (the Great Recession notwithstanding), the wages of most Americans have stagnated. And their benefits and working conditions have deteriorated.

This isn’t because most Americans are worth less. In fact, worker productivity is higher than ever.

It’s because big corporations, Wall Street, and some enormously rich individuals have gained political power to organize the market in ways that have enhanced their wealth while leaving most Americans behind.

That includes trade agreements protecting the intellectual property of large corporations and Wall Street’s financial assets, but not American jobs and wages.

Bailouts of big Wall Street banks and their executives and shareholders when they can’t pay what they owe, but not of homeowners who can’t meet their mortgage payments.

Bankruptcy protection for big corporations, allowing them to shed their debts, including labor contracts. But no bankruptcy protection for college graduates over-burdened with student debts.

Antitrust leniency toward a vast swathe of American industry – including Big Cable (Comcast, AT&T, Time-Warner), Big Tech (Amazon, Google), Big Pharma, the largest Wall Street banks, and giant retailers (Walmart).

But less tolerance toward labor unions — as workers trying to form unions are fired with impunity, and more states adopt so-called “right-to-work” laws that undermine unions.

We seem to be heading full speed back to the late nineteenth century.

So what will be the galvanizing force for change this time?

Robert Reich, one of the nation’s leading experts on work and the economy, is Chancellor’s Professor of Public Policy at the Goldman School of Public Policy at the University of California at Berkeley. He has served in three national administrations, most recently as secretary of labor under President Bill Clinton. Time Magazine has named him one of the ten most effective cabinet secretaries of the last century. He has written 13 books, including his latest best-seller, “Aftershock: The Next Economy and America’s Future;” “The Work of Nations,” which has been translated into 22 languages; and his newest, an e-book, “Beyond Outrage.” His syndicated columns, television appearances, and public radio commentaries reach millions of people each week. He is also a founding editor of the American Prospect magazine, and chairman of the citizens’ group Common Cause. His new movie “Inequality for All” is in theaters. His widely read blog can be found at www.robertreich.org.


http://www.salon.com/2015/02/10/robert_reich_america_is_heading_full_speed_back_to_the_19th_century_partner/?source=newsletter

WikiLeaks considers legal action over Google’s compliance with US search orders


By Evan Blake
29 January 2015

On Monday, lawyers for WikiLeaks announced at a press conference that they may pursue legal action against Google and the US government following revelations that the Internet company complied with Justice Department demands that it hand over communications and documents of WikiLeaks journalists.

More than two and a half years after complying with the surveillance orders, Google sent notifications to three victims of these unconstitutional searches—WikiLeaks investigations editor Sarah Harrison, organization spokesman Kristinn Hrafnsson and senior editor Joseph Farrell. The company informed WikiLeaks that it had complied fully with “search and seizure” orders to turn over digital data, including all sent, received, draft and deleted emails, IP addresses, photographs, calendars and other personal information.

The government investigation ostensibly relates to claims of espionage, conspiracy to commit espionage, the theft or conversion of property belonging to the United States government, violation of the Computer Fraud and Abuse Act, and conspiracy—charges that together carry up to 45 years in prison. The ongoing investigation into WikiLeaks, first launched in 2010 under the Obama administration, has so far led to a 35-year sentence for Chelsea (Bradley) Manning.

At the press conference, Hrafnsson stated, “I believe this is an attack on me as a journalist. I think this is an attack on journalism. I think this is a very serious issue that should concern all of you in here, and everybody who is working on, especially, sensitive security stories, as we have been doing as a media organization.”

Baltasar Garzon, the Legal Director for Julian Assange’s legal team, told reporters at the event, “We believe the way the documents were taken is illegal.”

On Sunday, prior to the press conference, Michael Ratner, the lead lawyer of the counsel for WikiLeaks and president emeritus at the Center for Constitutional Rights, penned a letter to Eric Schmidt, the executive chairman of Google, stating, “We are astonished and disturbed that Google waited over two and a half years to notify its subscribers that a search warrant was issued for their records.”

Google claims that it withheld this information from the three journalists due to a court-imposed gag order. A Google spokesperson told the Guardian, “Our policy is to tell people about government requests for their data, except in limited cases, like when we are gagged by a court order, which sadly happens quite frequently.”

In his letter, Ratner reminds Schmidt of a conversation he had with Julian Assange on April 19, 2011, in which Schmidt allegedly agreed to recommend that Google’s general counsel contest such a gag order were it to arise.

The letter requests that Google provide the counsel for WikiLeaks with “a list of all materials Google disclosed or provided to law enforcement in response to these search warrants,” as well as all other information relevant to the case, whether or not Google challenged the orders prior to relinquishing its clients’ data, and whether Google attempted to lift the gag order at any point since it received the orders on March 22, 2012.

At the Monday press conference, Harrison noted that the government was not “going after specific things they thought could help them. What they were actually doing was blanketly going after a journalist’s personal and private email account, in the hopes that this fishing expedition would get them something to use to attack the organization and our editor-in-chief Julian Assange.”

The case, Harrison said, pointed to the “breakdown of legal processes within the US government, when it comes to dealing with WikiLeaks.”

Harrison assisted Edward Snowden for four months, shortly after his initial revelations on NSA spying in 2013, helping him leave Hong Kong. She is one of Assange’s closest collaborators, highlighting the inherent value of her personal email correspondence. Through her and her colleagues’ email accounts and other personal information, the Justice Department is seeking to manufacture a case against Assange.

Assange currently faces trumped up accusations of sexual assault in Sweden, along with the threat of extradition to the US. He has been forced to take refuge in the Ecuadorian embassy in London for over two and a half years, under round-the-clock guard by British police ready to arrest him if he steps out of the embassy.

In various media accounts, Google has postured as a crusader for democratic rights. A Google attorney, Albert Gidari, told the Washington Post that ever since a parallel 2010 order for the data of WikiLeaks’ volunteer and security researcher Jacob Appelbaum, “Google litigated up and down through the courts trying to get the orders modified so that notice could be given.”

In reality, the company serves as an integral component of, and is heavily invested in, the military-intelligence apparatus. In its 2014 “transparency report,” Google admitted to complying with 66 percent of the 32,000 data requests it received from governments worldwide during the first six months of 2014 alone, including 84 percent of those submitted by the US government, by far the largest requester.

In his book When Google Met WikiLeaks, published in September 2014, Assange detailed the company’s ties to Washington and its wide-ranging influence on geopolitics.

In a statement published by WikiLeaks, the organization noted that “The US government is claiming universal jurisdiction to apply the Espionage Act, general Conspiracy statute and the Computer Fraud and Abuse Act to journalists and publishers—a horrifying precedent for press freedoms around the world. Once an offence is alleged in relation to a journalist or their source, the whole media organisation, by the nature of its work flow, can be targeted as alleged ‘conspiracy.’”


http://www.wsws.org/en/articles/2015/01/29/wiki-j29.html

The Killing of America’s Creative Class


A review of Scott Timberg’s fascinating new book, ‘Culture Crash.’

Some of my friends became artists, writers, and musicians to rebel against their practical parents. I went into a creative field with encouragement from my folks. It’s not too rare for Millennials to have their bohemian dreams blessed by their parents, because, as progeny of the Boomers, we were mentored by aging rebels who idolized rogue poets, iconoclast cartoonists, and scrappy musicians.

The problem, warns Scott Timberg in his new book Culture Crash: The Killing of the Creative Class, is that if parents are basing their advice on how the economy used to support creativity – record deals for musicians, book contracts for writers, staff positions for journalists – then they might be surprised when their YouTube-famous daughter still needs help paying off her student loans. A mix of economic, cultural, and technological changes emanating from a neoliberal agenda, writes Timberg, “have undermined the way that culture has been produced for the past two centuries, crippling the economic prospects of not only artists but also the many people who supported and spread their work, and nothing yet has taken its place.”


Tech vs. the Creative Class

Timberg isn’t the first to notice. The supposed economic recovery that followed the recession of 2008 did nothing to repair the damage that had been done to the middle class. Only a wealthy few bounced back, and bounced higher than ever before, many of them the elites of Silicon Valley who found a way to harvest much of the wealth generated by new technologies. In Culture Crash, however, Timberg has framed the struggle of the working artist to make a living on his talents.

Besides the overall stagnation of the economy, Timberg shows how information technology has destabilized the creative class and deprofessionalized their labor, leading to an oligopoly of the mega corporations Apple, Google, and Facebook, where success is measured (and often paid) in webpage hits.

What Timberg glosses over is that if this new system is an oligopoly of tech companies, then what it replaced – or is still in the process of replacing – was a feudal system of newspapers, publishing houses, record labels, operas, and art galleries. The book is full of enough discouraging data and painful portraits of artists, though, to make this point moot. Things are definitely getting worse.

Why should these worldly worries make the Muse stutter when she is expected to sing from outside of history and without health insurance? Timberg proposes that if we are to save the “creative class” – the often young, often from middle-class backgrounds sector of society that generates cultural content – we need to shake this old myth. The Muse can inspire but not sustain. Members of the creative class, argues Timberg, depend not just on that original inspiration, but on an infrastructure that moves creations into the larger culture and somehow provides material support for those who make, distribute, and assess them. Today, that indispensable infrastructure is at risk…

Artists may never entirely disappear, but they are certainly vulnerable to the economic and cultural zeitgeist. Remember the Dark Ages? Timberg does, and drapes this shroud over every chapter. It comes off as alarmist at times. Culture is obviously no longer smothered by an authoritarian Catholic church.


Art as the Province of the Young and Independently Wealthy

But Timberg suggests that contemporary artists have signed away their rights in a new contract with the market. Cultural producers, no matter how important their output is to the rest of us, are expected to exhaust themselves without compensation because their work is, by definition, worthless until it’s profitable. Art is an act of passion – why not produce it for free, never mind that Apple, Google, and Facebook have the right to generate revenue from your production? “According to this way of thinking,” wrote Miya Tokumitsu describing the do-what-you-love mantra that rode out of Silicon Valley on the back of TED Talks, “labor is not something one does for compensation, but an act of self-love. If profit doesn’t happen to follow, it is because the worker’s passion and determination were insufficient.”

The fact is, when creativity becomes financially unsustainable, less is created, and that which does emerge is the product of trust-fund kids in their spare time. “If working in culture becomes something only for the wealthy, or those supported by corporate patronage, we lose the independent perspective that artistry is necessarily built on,” writes Timberg.

It would seem to be a position with many proponents except that artists have few loyal advocates on either side of the political spectrum. “A working artist is seen neither as the salt of the earth by the left, nor as a ‘job creator’ by the right – but as a kind of self-indulgent parasite by both sides,” writes Timberg.

That’s with respect to unsuccessful artists – in other words, the creative class’s 99 percent. But, as Timberg disparages, “everyone loves a winner.” In their own way, both conservatives and liberals have stumbled into Voltaire’s Candide, accepting that all is for the best in the best of all possible worlds. If artists cannot make money, it’s because they are either untalented or esoteric elitists. It is the giants of pop music who are taking all the spoils, both financially and morally, in this new climate.

Timberg blames this winner-take-all attitude on the postmodernists who, beginning in the 1960s with film critic Pauline Kael, dismantled the idea that creative genius must be rescued from underneath the boots of mass appeal and replaced it with the concept of genius-as-mass-appeal. “Instead of coverage of, say, the lost recordings of pioneering bebop guitarist Charlie Christian,” writes Timberg, “we read pieces ‘in defense’ of blockbuster acts like the Eagles (the bestselling rock band in history), Billy Joel, Rush – groups whose songs…it was once impossible to get away from.”

Timberg doesn’t give enough weight to the fact that the same rebellion at the university liberated an enormous swath of art, literature, and music from the shadow of an exclusive (which is not to say unworthy) canon made up mostly of white men. In fact, many postmodernists have taken it upon themselves to look neither to the pop charts nor the Western canon for genius but, with the help of the Internet, to the broad creative class that Timberg wants to defend.


Creating in the Age of Poptimism

This doesn’t mean that today’s discovered geniuses can pay their bills, though, and Timberg is right to be shocked that, for the first time in history, pop culture is untouchable, off limits to critics or laypeople either on the grounds of taste or principle. If you can’t stand pop music because of the hackneyed rhythms and indiscernible voices, you’ve failed to appreciate the wonders of crowdsourced culture – the same mystery that propels the market.

Sadly, Timberg puts himself in checkmate early on by repeatedly pitting black mega-stars like Kanye West against white indie-rockers like the Decemberists, whose ascent to the pop charts he characterizes as a rare triumph of mass taste.

But beyond his anti-hip-hop bias is an important argument: With ideological immunity, the pop charts are mimicking the stratification of our society. Under the guise of a popular carnival where a home-made YouTube video can bring a talented nobody the absurd fame of a celebrity, creative industries have nevertheless become more monotonous and inaccessible to new and disparate voices. In 1986, thirty-one chart-toppers came from twenty-nine different artists. Between 2008 and mid-2012, half of the number-one songs were property of only six stars. “Of course, it’s never been easy to land a hit record,” writes Timberg. “But recession-era rock has brought rewards to a smaller fraction of the artists than it did previously. Call it the music industry’s one percent.”

The same thing is happening with the written word. In the first decade of the new millennium, points out Timberg, citing Wired magazine, the market share of page views for the Internet’s top ten websites rose from 31 percent to 75 percent.

Timberg doesn’t mention that none of the six artists dominating the pop charts for those four years was a white man, but maybe that’s beside the point. In Borges’s “Babylon Lottery,” every citizen has the chance to be a sovereign. That doesn’t mean they’re living in a democracy. Superstars are coming up from poverty, without the help of white male privilege, like never before, at the same time that poverty – for artists and for everyone else – is getting worse.

Essayists are often guilted into proposing solutions to the problems they perceive, but in many cases they should have left it alone. Timberg wisely avoids laying out a ten-point plan to clean up the mess, but even his initial thrust toward justice – identifying the roots of the crisis – is a pastiche of sometimes contradictory liberal biases that looks to the past for temporary fixes.

Timberg puts the kibosh on corporate patronage of the arts, but pines for the days of newspapers run by wealthy families. When information technology is his target because it forces artists to distribute their work for free, removes the record store and bookstore clerks from the scene, and feeds consumer dollars to only a few Silicon Valley tsars, Timberg’s answer is to retrace our steps twenty years to the days of big record companies and Borders book stores – since that model was slightly more compensatory to the creative class.

When his target is postmodern intellectuals who slander “middle-brow” culture as elitist, only to expend their breath in defense of super-rich pop stars, Timberg retreats fifty years to when intellectuals like Marshall McLuhan and Norman Mailer debated on network television and the word “philharmonic” excited the uncultured with awe rather than tickled them with anti-elitist mockery. Maybe television back then was more tolerable, but Timberg hardly even tries to sound uplifting. “At some point, someone will come up with a conception better than middlebrow,” he writes. “But until then, it beats the alternatives.”


The Fallacy of the Good Old Days

Timberg’s biggest mistake is that he tries to find a point in history when things were better for artists and then reroute us back there for fear of continued decline. What this translates to is a program of bipartisan moderation – a little bit more public funding here, a little more philanthropy there. Something everyone can agree on, but no one would ever get excited about.

Why not boldly state that a society is dysfunctional if there is enough food, shelter, and clothing to go around and yet an individual is forced to sacrifice these things in order to produce, out of humanistic virtue, the very thing which society has never demanded more of – culture? And if skeptics ask for a solution, why not suggest something big, a reorganization of society, from top to bottom, not just a vintage flotation device for the middle class? Rather than blame technological innovation for the poverty of artists, why not point the finger at those who own the technology and call for a system whereby efficiency doesn’t put people out of work, but allows them to work fewer hours for the same salary; whereby information is free not because an unpaid intern wrote content in a race for employment, but because we collectively pick up the tab?

This might not satisfy the TED Talk connoisseur’s taste for a clever and apolitical fix, but it definitely trumps championing a middle-ground littered with the casualties of cronyism, colonialism, racism, patriarchy, and all their siblings. And change must come soon because, if Timberg is right, “the price we ultimately pay” for allowing our creative class to remain on its crash course “is in the decline of art itself, diminishing understanding of ourselves, one another, and the eternal human spirit.”


http://www.alternet.org/news-amp-politics/killing-americas-creative-class

How the CIA made Google


Inside the secret network behind mass surveillance, endless war, and Skynet — part 1

By Nafeez Ahmed

INSURGE INTELLIGENCE, a new crowd-funded investigative journalism project, breaks the exclusive story of how the United States intelligence community funded, nurtured and incubated Google as part of a drive to dominate the world through control of information. Seed-funded by the NSA and CIA, Google was merely the first among a plethora of private sector start-ups co-opted by US intelligence to retain ‘information superiority.’

The origins of this ingenious strategy trace back to a secret Pentagon-sponsored group that for the last two decades has functioned as a bridge between the US government and elites across the business, industry, finance, corporate, and media sectors. The group has allowed some of the most powerful special interests in corporate America to systematically circumvent democratic accountability and the rule of law to influence government policies, as well as public opinion in the US and around the world. The results have been catastrophic: NSA mass surveillance, a permanent state of global war, and a new initiative to transform the US military into Skynet.

THIS IS PART ONE.


This exclusive is being released for free in the public interest, and was enabled by crowdfunding. I’d like to thank my amazing community of patrons for their support, which gave me the opportunity to work on this in-depth investigation. Please support independent, investigative journalism for the global commons.


In the wake of the Charlie Hebdo attacks in Paris, western governments are moving fast to legitimize expanded powers of mass surveillance and controls on the internet, all in the name of fighting terrorism.

US and European politicians have called to protect NSA-style snooping, and to advance the capacity to intrude on internet privacy by outlawing encryption. One idea is to establish a telecoms partnership that would unilaterally delete content deemed to “fuel hatred and violence” in situations considered “appropriate.” Heated discussions are going on at government and parliamentary level to explore cracking down on lawyer-client confidentiality.

What any of this would have done to prevent the Charlie Hebdo attacks remains a mystery, especially given that we already know the terrorists were on the radar of French intelligence for up to a decade.

There is little new in this story. The 9/11 atrocity was the first of many terrorist attacks, each succeeded by the dramatic extension of draconian state powers at the expense of civil liberties, backed up with the projection of military force in regions identified as hotspots harbouring terrorists. Yet there is little indication that this tried and tested formula has done anything to reduce the danger. If anything, we appear to be locked into a deepening cycle of violence with no clear end in sight.

As our governments push to increase their powers, INSURGE INTELLIGENCE can now reveal the vast extent to which the US intelligence community is implicated in nurturing the web platforms we know today, for the precise purpose of utilizing the technology as a mechanism to fight global ‘information war’ — a war to legitimize the power of the few over the rest of us. The lynchpin of this story is the corporation that in many ways defines the 21st century with its unobtrusive omnipresence: Google.

Google styles itself as a friendly, funky, user-friendly tech firm that rose to prominence through a combination of skill, luck, and genuine innovation. This is true. But it is a mere fragment of the story. In reality, Google is a smokescreen behind which lurks the US military-industrial complex.

The inside story of Google’s rise, revealed here for the first time, opens a can of worms that goes far beyond Google, unexpectedly shining a light on the existence of a parasitical network driving the evolution of the US national security apparatus, and profiting obscenely from its operation.

The shadow network

For the last two decades, US foreign and intelligence strategies have resulted in a global ‘war on terror’ consisting of prolonged military invasions in the Muslim world and comprehensive surveillance of civilian populations. These strategies have been incubated, if not dictated, by a secret network inside and beyond the Pentagon.

Established under the Clinton administration, consolidated under Bush, and firmly entrenched under Obama, this bipartisan network of mostly neoconservative ideologues sealed its dominion inside the US Department of Defense (DoD) by the dawn of 2015, through the operation of an obscure corporate entity outside the Pentagon, but run by the Pentagon.

In 1999, the CIA created its own venture capital investment firm, In-Q-Tel, to fund promising start-ups that might create technologies useful for intelligence agencies. But the inspiration for In-Q-Tel came earlier, when the Pentagon set up its own private sector outfit.

Known as the ‘Highlands Forum,’ this private network has operated as a bridge between the Pentagon and powerful American elites outside the military since the mid-1990s. Despite changes in civilian administrations, the network around the Highlands Forum has become increasingly successful in dominating US defense policy.

Giant defense contractors like Booz Allen Hamilton and Science Applications International Corporation are sometimes referred to as the ‘shadow intelligence community’ due to the revolving doors between them and government, and their capacity to simultaneously influence and profit from defense policy. But while these contractors compete for power and money, they also collaborate where it counts. The Highlands Forum has for 20 years provided an off-the-record space for some of the most prominent members of the shadow intelligence community to convene with senior US government officials, alongside other leaders in relevant industries.

I first stumbled upon the existence of this network in November 2014, when I reported for VICE’s Motherboard that US defense secretary Chuck Hagel’s newly announced ‘Defense Innovation Initiative’ was really about building Skynet — or something like it, essentially to dominate an emerging era of automated robotic warfare.

That story was based on a little-known Pentagon-funded ‘white paper’ published two months earlier by the National Defense University (NDU) in Washington DC, a leading US military-run institution that, among other things, generates research to develop US defense policy at the highest levels. The white paper clarified the thinking behind the new initiative, and the revolutionary scientific and technological developments it hoped to capitalize on.

The Highlands Forum

The co-author of that NDU white paper is Linton Wells, a 51-year veteran US defense official who served in the Bush administration as the Pentagon’s chief information officer, overseeing the National Security Agency (NSA) and other spy agencies. He still holds active top-secret security clearances, and according to a 2006 report by Government Executive magazine, he chaired the ‘Highlands Forum’, founded by the Pentagon in 1994.

Linton Wells II (right) former Pentagon chief information officer and assistant secretary of defense for networks, at a recent Pentagon Highlands Forum session. Rosemary Wenchel, a senior official in the US Department of Homeland Security, is sitting next to him

New Scientist magazine (paywall) has compared the Highlands Forum to elite meetings like “Davos, Ditchley and Aspen,” describing it as “far less well known, yet… arguably just as influential a talking shop.” Regular Forum meetings bring together “innovative people to consider interactions between policy and technology. Its biggest successes have been in the development of high-tech network-based warfare.”

Given Wells’ role in such a Forum, perhaps it was not surprising that his defense transformation white paper was able to have such a profound impact on actual Pentagon policy. But if that was the case, why had no one noticed?

Despite the Forum’s Pentagon sponsorship, I could find no official page about it on the DoD website. Active and former US military and intelligence sources had never heard of it, and neither had national security journalists. I was baffled.

The Pentagon’s intellectual capital venture firm

In the prologue to his 2007 book, A Crowd of One: The Future of Individual Identity, John Clippinger, an MIT scientist in the Media Lab’s Human Dynamics Group, described how he participated in a “Highlands Forum” gathering, an “invitation-only meeting funded by the Department of Defense and chaired by the assistant for networks and information integration.” This was a senior DoD post overseeing operations and policies for the Pentagon’s most powerful spy agencies, including the NSA and the Defense Intelligence Agency (DIA), among others. From 2003, the position was transitioned into what is now the undersecretary of defense for intelligence. The Highlands Forum, Clippinger wrote, was founded by a retired US Navy captain named Dick O’Neill. Delegates include senior US military officials across numerous agencies and divisions — “captains, rear admirals, generals, colonels, majors and commanders” as well as “members of the DoD leadership.”

What at first appeared to be the Forum’s main website describes Highlands as “an informal cross-disciplinary network sponsored by Federal Government,” focusing on “information, science and technology.” Explanation is sparse, beyond a single ‘Department of Defense’ logo.

But Highlands also has another website describing itself as an “intellectual capital venture firm” with “extensive experience assisting corporations, organizations, and government leaders.” The firm provides a “wide range of services, including: strategic planning, scenario creation and gaming for expanding global markets,” as well as “working with clients to build strategies for execution.” ‘The Highlands Group Inc.,’ the website says, organizes a whole range of Forums on these issues.

For instance, in addition to the Highlands Forum, since 9/11 the Group has run the ‘Island Forum,’ an international event held in association with Singapore’s Ministry of Defense, which O’Neill oversees as “lead consultant.” The Singapore Ministry of Defense website describes the Island Forum as “patterned after the Highlands Forum organized for the US Department of Defense.” Documents leaked by NSA whistleblower Edward Snowden confirmed that Singapore played a key role in permitting the US and Australia to tap undersea cables to spy on Asian powers like Indonesia and Malaysia.

The Highlands Group website also reveals that Highlands is partnered with one of the most powerful defense contractors in the United States. Highlands is “supported by a network of companies and independent researchers,” including “our Highlands Forum partners for the past ten years at SAIC; and the vast Highlands network of participants in the Highlands Forum.”

SAIC stands for the US defense firm, Science Applications International Corporation, which changed its name to Leidos in 2013, operating SAIC as a subsidiary. SAIC/Leidos is among the top 10 largest defense contractors in the US, and works closely with the US intelligence community, especially the NSA. According to investigative journalist Tim Shorrock, the first to disclose the vast extent of the privatization of US intelligence with his seminal book Spies for Hire, SAIC has a “symbiotic relationship with the NSA: the agency is the company’s largest single customer and SAIC is the NSA’s largest contractor.”

CONTINUED:  https://medium.com/@NafeezAhmed/how-the-cia-made-google-e836451a959e