How Leonard Nimoy made Spock an American Jewish icon

Nimoy transformed the classic intellectual, self-questioning archetype into a dashing “Star Trek” action hero

Leonard Nimoy as Spock on “Star Trek” (Credit: CBS)

I suspect I can speak for most American Jews when I say: Before I’d watched even a single episode of “Star Trek,” I knew about Leonard Nimoy.

Although there are plenty of Jews who have achieved fame and esteem in American culture, only a handful have their Jewishness explicitly intertwined with their larger cultural image. Much of the difference has to do with how frequently the celebrity in question alludes to his or her heritage within their body of work. This explains why, for instance, a comedian like Adam Sandler is widely identified as Jewish while Andrew Dice Clay is not, or how pitcher Sandy Koufax became famous as a “Jewish athlete” after skipping Game 1 of the 1965 World Series to observe Yom Kippur, while wide receiver Julian Edelman’s Hebraic heritage has remained more obscure.

With this context in mind, it becomes much easier to understand how Nimoy became an iconic figure in the American Jewish community. Take Nimoy’s explanation of the origin of the famous Vulcan salute, courtesy of a 2000 interview with the Baltimore Sun: “In the [Jewish] blessing, the Kohanim (a high priest of a Hebrew tribe) makes the gesture with both hands, and it struck me as a very magical and mystical moment. I taught myself how to do it without even knowing what it meant, and later I inserted it into ‘Star Trek.’”

Nimoy’s public celebration of his own Jewishness extended far beyond this literal gesture. He openly discussed experiencing anti-Semitism in early-20th century Boston, speaking Yiddish to his Ukrainian grandparents, and pursuing an acting career in large part due to his Jewish heritage. “I became an actor, I’m convinced, because I found a home in a play about a Jewish family just like mine,” Nimoy told Abigail Pogrebin in “Stars of David: Prominent Jews Talk About Being Jewish.” “Clifford Odets’s ‘Awake and Sing.’ I was seventeen years old, cast in this local production, with some pretty good amateur and semiprofessional actors, playing this teenage kid in this Jewish family that was so much like mine it was amazing.”



Significantly, Nimoy did not disregard his Jewishness after becoming a star. Even after his depiction of Mr. Spock became famous throughout the world, Nimoy continued to actively participate in Jewish causes, from fighting to preserve the Yiddish language and narrating a documentary about Hasidic Jews to publishing a Kabbalah-inspired book of photography, The Shekhina Project, which explored “the feminine essence of God.” He even called for peace in Israel by drawing on the mythology of “Star Trek,” recalling an episode in which “two men, half black, half white, are the last survivors of their peoples who have been at war with each other for thousands of years, yet the Enterprise crew could find no differences separating these two raging men.” The message, he wisely intuited, was that “assigning blame over all other priorities is self-defeating. Myth can be a snare. The two sides need our help to evade the snare and search for a way to compromise.”

As we pay our respects to Nimoy’s life and legacy, his status as an American Jewish icon is important in two ways. The first, and by far most pressing, is socio-political: As anti-Semitism continues to rise in American colleges and throughout the world at large, it is important to acknowledge beloved cultural figures who not only came from a Jewish background, but who allowed their heritage to influence their work and continued to participate in Jewish causes throughout their lives. When you consider the frequency with which American Jews will either downplay their Jewishness (e.g., Andy Samberg) or primarily use it as grounds for cracking jokes at the expense of Jews (e.g., Matt Stone of “South Park”), Nimoy’s legacy as an outspokenly pro-Jewish Jew is particularly meaningful right now.

In addition to this, however, there is the simple fact that Nimoy presented American Jews with an archetype that was at once fresh and traditional. The trope of the intellectual, self-questioning Jew has been around for as long as there have been Chosen People, and yet Nimoy managed to transmogrify that character into something exotic and adventurous. Nimoy’s Mr. Spock was a creature driven by logic and a thirst for knowledge, yes, but he was also an action hero and idealist when circumstances demanded it. For the countless Jews who, like me, grew up as nerds and social outcasts, it was always inspiring to see a famous Jewish actor play a character who was at once so much like us and yet flung far enough across the universe to allow us temporary escape from our realities. This may not be the most topically relevant of Nimoy’s legacies, but my guess is that it will be his most lasting as long as there are Jewish children who yearn to learn more, whether by turning inward into their own heritage or casting their gaze upon the distant stars.

Matthew Rozsa is a Ph.D. student in history at Lehigh University as well as a political columnist. His editorials have been published in “The Morning Call,” “The Express-Times,” “The Newark Star-Ledger,” “The Baltimore Sun,” and various college newspapers and blogs. He actively encourages people to reach out to him at matt.rozsa@gmail.com

 

Google has captured your mind

Searches reveal who we are and how we think. True intellectual privacy requires safeguarding these records


The Justice Department’s subpoena was straightforward enough. It directed Google to disclose to the U.S. government every search query that had been entered into its search engine for a two-month period, and to disclose every Internet address that could be accessed from the search engine. Google refused to comply. And so on Wednesday, January 18, 2006, the Department of Justice filed a court motion in California, seeking an order that would force Google to comply with a narrowed version of the request—a random sample of a million URLs from its search engine database, along with the text of every “search string entered onto Google’s search engine over a one-week period.” The Justice Department was interested in how many Internet users were looking for pornography, and it thought that analyzing the search queries of ordinary Internet users was the best way to figure this out. Google, which had a 45-percent market share at the time, was not the only search engine to receive the subpoena. The Justice Department also requested search records from AOL, Yahoo!, and Microsoft. Only Google declined the initial request and opposed it, which is the only reason we know the secret request was ever made in the first place.

The government’s request for massive amounts of search history from ordinary users requires some explanation. It has to do with the federal government’s interest in online pornography, which has a long history, at least in Internet time. In 1995 Time magazine ran its famous “Cyberporn” cover, depicting a shocked young boy staring into a computer monitor, his eyes wide, his mouth agape, and his skin illuminated by the eerie glow of the screen. The cover was part of a national panic about online pornography, to which Congress responded by passing the federal Communications Decency Act (CDA) the following year. This infamous law prevented websites from publishing “patently offensive” content without first verifying the age and identity of their readers, and banned the sending of indecent communications to anyone under eighteen. It tried to transform the Internet into a public space that was always fit for children by default.


The CDA prompted massive protests (and litigation) charging the government with censorship. The Supreme Court agreed in the landmark case of Reno v. ACLU (1997), which struck down the CDA’s decency provisions. In his opinion for the Court, Justice John Paul Stevens explained that regulating the content of Internet expression is no different from regulating the content of newspapers. The case is arguably the most significant free speech decision of the past half century, since it extended the full protection of the First Amendment to Internet expression, rather than treating the Internet like television or radio, whose content may be regulated more extensively. In language that might sound dated, Justice Stevens announced a principle that has endured: “Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Internet, in other words, was now an essential forum for free speech.

In the aftermath of Reno, Congress gave up on policing Internet indecency, but continued to focus on child protection. In 1998 it passed the Child Online Protection Act, also known as COPA. COPA punished those who engaged in web communications made “for commercial purposes” that were accessible and “harmful to minors” with a $50,000 fine and prison terms of up to six months. After extensive litigation, the Supreme Court in Ashcroft v. ACLU (2004) upheld a preliminary injunction preventing the government from enforcing the law. The Court reasoned that the government hadn’t proved that an outright ban of “harmful to minors” material was necessary. It suggested that Congress could have instead required the use of blocking or filtering software, which would have had less of an impact on free speech than a ban, and it remanded the case for further proceedings. Back in the lower court, the government wanted to conduct a study showing that filtering would be ineffective, which is why it wanted the search queries from Google and the other search engine companies in 2006.

Judge James Ware ruled on the subpoena on March 17, 2006, and denied most of the government’s demands. He granted the release of only 5 percent of the requested randomly selected, anonymized website addresses and none of the actual search queries. Much of the reason for approving only a tiny sample of the de-identified requests had to do with privacy. Google had not made a direct privacy argument, on the grounds that de-identified search queries were not “personal information,” but it argued that disclosure of the records would expose its trade secrets and harm its goodwill among users who believed that their searches were confidential. Judge Ware accepted this oddly phrased privacy claim, and added one of his own that Google had missed. The judge explained that Google users have a privacy interest in the confidentiality of their searches because a user’s identity could be reconstructed from their queries and because disclosure of such queries could lead to embarrassment (searches for, e.g., pornography or abortions) or criminal liability (searches for, e.g., “bomb placement white house”). He also placed the list of disclosed website addresses under a protective order to safeguard Google’s trade secrets.

Two facets of Judge Ware’s short opinion in the “Search Subpoena Case” are noteworthy. First, the judge was quite correct that even search requests that have had their users’ identities removed are not anonymous, as it is surprisingly easy to re-identify this kind of data. The queries we enter into search engines like Google often unwittingly reveal our identities. Most commonly, we search our own names, out of vanity, curiosity, or to discover if there are false or embarrassing facts or images of us online. But other parts of our searches can reveal our identities as well. A few months after the Search Subpoena Case, AOL made public twenty million search queries from 650,000 of its search engine users. AOL was hoping this disclosure would help researchers and had replaced its users’ names with numerical IDs to protect their privacy. But two New York Times reporters showed just how easy it could be to re-identify them. They tracked down AOL user number 4417749 and identified her as Thelma Arnold, a sixty-two-year-old widow in Lilburn, Georgia. Thelma had made hundreds of searches including “numb fingers,” “60 single men,” and “dog that urinates on everything.” The New York Times reporters used old-fashioned investigative techniques, but modern sophisticated computer science tools make re-identification of such information even easier. One such technique allowed computer scientists to re-identify users in the Netflix movie-watching database, which that company made public to researchers in 2006.
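The intuition behind this kind of re-identification can be sketched in a few lines of code: treat each “anonymous” user as a bundle of quasi-identifiers inferred from their queries, then intersect those clues against outside knowledge until the candidate pool collapses. The data below is invented for illustration (only the Thelma Arnold example comes from the text), but the mechanism is the same one the reporters used by hand.

```python
# Hypothetical sketch of query-based re-identification.
# Each record stands in for an "anonymized" search-log user whose
# quasi-identifiers were inferred from their queries.
population = [
    {"name": "Thelma Arnold", "age_band": "60s", "town": "Lilburn", "has_dog": True},
    {"name": "Resident B",    "age_band": "60s", "town": "Lilburn", "has_dog": False},
    {"name": "Resident C",    "age_band": "30s", "town": "Lilburn", "has_dog": True},
    {"name": "Resident D",    "age_band": "60s", "town": "Atlanta", "has_dog": True},
]

# Clues gleaned from queries like "60 single men" and
# "dog that urinates on everything" -- no name required.
clues = {"age_band": "60s", "town": "Lilburn", "has_dog": True}

# Intersect the clues: keep only people consistent with every one.
candidates = [p for p in population
              if all(p[k] == v for k, v in clues.items())]

print([p["name"] for p in candidates])  # the pool collapses to a single person
```

Even with a population of millions rather than four, each additional clue divides the candidate pool, which is why a handful of mundane queries can suffice to single someone out.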

The second interesting facet of the Search Subpoena Case is its theory of privacy. Google won because the disclosure threatened its trade secrets (a commercial privacy, of sorts) and its business goodwill (which relied on its users believing that their searches were private). Judge Ware suggested that a more direct kind of user privacy was at stake, but was not specific beyond some generalized fear of embarrassment (echoing the old theory of tort privacy) or criminal prosecution (evoking the “reasonable expectation of privacy” theme from criminal law). Most people no doubt have an intuitive sense that their Internet searches are “private,” but neither our intuitions nor the Search Subpoena Case tell us why. This is a common problem in discussions of privacy. We often use the word “privacy” without being clear about what we mean or why it matters. We can do better.

Internet searches implicate our intellectual privacy. We use tools like Google Search to make sense of the world, and making sense of the world requires intellectual privacy. Our curiosity is essential, and it should be unfettered. As I’ll show in this chapter, search queries implicate a special kind of intellectual privacy: the freedom of thought.

Freedom of thought and belief is the core of our intellectual privacy. This freedom is the defining characteristic of a free society and our most cherished civil liberty. This right encompasses the range of thoughts and beliefs that a person might hold or develop, dealing with matters that are trivial and important, secular and profane. And it protects the individual’s thoughts from scrutiny or coercion by anyone, whether a government official or a private actor such as an employer, a friend, or a spouse. At the level of law, if there is any constitutional right that is absolute, it is this one, which is the precondition for other political and religious rights guaranteed by the Western tradition. Yet curiously, although freedom of thought is widely regarded as our most important civil liberty, it has not been protected in our law as much as other rights, in part because it has been very difficult for the state or others to monitor thoughts and beliefs even if they wanted to.

Freedom of Thought and Intellectual Privacy

In 1913 the eminent Anglo-Irish historian J. B. Bury published A History of Freedom of Thought, in which he surveyed the importance of freedom of thought in the Western tradition, from the ancient Greeks to the twentieth century. According to Bury, the conclusion that individuals should have an absolute right to their beliefs free of state or other forms of coercion “is the most important ever reached by men.” Bury was not the only scholar to have observed that freedom of thought (or belief, or conscience) is at the core of Western civil liberties. Recognitions of this sort are commonplace and have been made by many of our greatest minds. René Descartes’s maxim, “I think, therefore I am,” identifies the power of individual thought at the core of our existence. John Milton praised in Areopagitica “the liberty to know, to utter, and to argue freely according to conscience, above all [other] liberties.”

In the nineteenth century, John Stuart Mill developed a broad notion of freedom of thought as an essential element of his theory of human liberty, which comprised “the inward domain of consciousness; demanding liberty of conscience, in the most comprehensive sense; liberty of thought and feeling; absolute freedom of opinion and sentiment on all subjects, practical or speculative, scientific, moral, or theological.” In Mill’s view, free thought was inextricably linked to and mutually dependent upon free speech, with the two concepts being a part of a broader idea of political liberty. Moreover, Mill recognized that private parties as well as the state could chill free expression and thought.

Law in Britain and America has embraced the central importance of free thought as the civil liberty on which all others depend. People who cannot think for themselves, after all, are incapable of self-government. But it was not always so. In the Middle Ages, the crime of “constructive treason” outlawed “imagining the death of the king,” an offense punishable by death. Thomas Jefferson later reflected that this crime “had drawn the Blood of the best and honestest Men in the Kingdom.” The impulse for political uniformity was related to the impulse for religious uniformity, whose story is one of martyrdom and burnings at the stake. As Supreme Court Justice William O. Douglas put it in 1963:

While kings were fearful of treason, theologians were bent on stamping out heresy. . . . The Reformation is associated with Martin Luther. But prior to him it broke out many times only to be crushed. When in time the Protestants gained control, they tried to crush the Catholics; and when the Catholics gained the upper hand, they ferreted out the Protestants. Many devices were used. Heretical books were destroyed and heretics were burned at the stake or banished. The rack, the thumbscrew, the wheel on which men were stretched, these were part of the paraphernalia.

Thankfully, the excesses of such a dangerous government power were recognized over the centuries, and thought crimes were abolished. Thus, William Blackstone’s influential Commentaries stressed the importance of the common law protection for the freedom of thought and inquiry, even under a system that allowed subsequent punishment for seditious and other kinds of dangerous speech. Blackstone explained that:

Neither is any restraint hereby laid upon freedom of thought or inquiry: liberty of private sentiment is still left; the disseminating, or making public, of bad sentiments, destructive of the ends of society, is the crime which society corrects. A man (says a fine writer on this subject) may be allowed to keep poisons in his closet, but not publicly to vend them as cordials.

Even during a time when English law allowed civil and criminal punishment for many kinds of speech that would be protected today, including blasphemy, obscenity, seditious libel, and vocal criticism of the government, jurists recognized the importance of free thought and gave it special, separate protection in both the legal and cultural traditions.

The poisons metaphor Blackstone used, for example, was adapted from Jonathan Swift’s Gulliver’s Travels, from a line that the King of Brobdingnag delivers to Gulliver. Blackstone’s treatment of freedom of thought was itself adopted by Joseph Story in his own Commentaries, the leading American treatise on constitutional law in the early Republic. Thomas Jefferson and James Madison also embraced freedom of thought. Jefferson’s famous Virginia Statute for Religious Freedom enshrined religious liberty around the declaration that “Almighty God hath created the mind free,” and James Madison forcefully called for freedom of thought and conscience in his Memorial and Remonstrance Against Religious Assessments.

Freedom of thought thus came to be protected directly as a prohibition on state coercion of truth or belief. It was one of a handful of rights protected by the original Constitution even before the ratification of the Bill of Rights. Article VI provides that “state and federal legislators, as well as officers of the United States, shall be bound by oath or affirmation, to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States.” This provision, known as the “religious test clause,” ensured that religious orthodoxy could not be imposed as a requirement for governance, a further protection of the freedom of thought (or, in this case, its closely related cousin, the freedom of conscience). The Constitution also gives special protection against the crime of treason, by defining it to exclude thought crimes and providing special evidentiary protections:

Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.

By eliminating religious tests and by defining the crime of treason as one of guilty actions rather than merely guilty minds, the Constitution was thus steadfastly part of the tradition giving exceptional protection to the freedom of thought.

Nevertheless, even when governments could not directly coerce the uniformity of beliefs, a person’s thoughts remained relevant to both law and social control. A person’s thoughts could reveal political or religious disloyalty, or they could be relevant to a defendant’s mental state in committing a crime or other legal wrong. And while thoughts could not be revealed directly, they could be discovered by indirect means. For example, thoughts could be inferred either from a person’s testimony or confessions, or by access to their papers and diaries. But both the English common law and the American Bill of Rights came to protect against these intrusions into the freedom of the mind as well.

The most direct way to obtain knowledge about a person’s thoughts would be to haul him before a magistrate as a witness and ask him under penalty of law. The English ecclesiastical courts used the “oath ex officio” for precisely this purpose. But as historian Leonard Levy has explained, this practice came under assault in Britain as invading the freedom of thought and belief. As the eminent jurist Lord Coke later declared, “no free man should be compelled to answer for his secret thoughts and opinions.” The practice of the oath was ultimately abolished in England in the cases of John Lilburne and John Entick, men who were political dissidents rather than religious heretics.

In the new United States, the Fifth Amendment guarantee that “No person . . . shall be compelled in any criminal case to be a witness against himself” can also be seen as a resounding rejection of this sort of practice in favor of the freedom of thought. Law of course evolves, and current Fifth Amendment doctrine focuses on the consequences of a confession rather than on mental privacy, but the origins of the Fifth Amendment are part of a broad commitment to freedom of thought that runs through our law. The late criminal law scholar William Stuntz has shown that this tradition was not merely a procedural protection for all, but a substantive limitation on the power of the state to force its enemies to reveal their unpopular or heretical thoughts. As he put the point colorfully, “[i]t is no coincidence that the privilege’s origins read like a catalogue of religious and political persecution.”

Another way to obtain a person’s thoughts would be by reading their diaries or other papers. Consider the Fourth Amendment, which protects a person from unreasonable searches and seizures by the police:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Today we think about the Fourth Amendment as providing protection for the home and the person chiefly against unreasonable searches for contraband like guns or drugs. But the Fourth Amendment’s origins come not from drug cases but as a bulwark against intellectual surveillance by the state. In the eighteenth century, the English Crown had sought to quash political and religious dissent through the use of “general warrants,” legal documents that gave agents of the Crown the authority to search the homes of suspected dissidents for incriminating papers.

Perhaps the most infamous dissident of the time was John Wilkes. Wilkes was a progressive critic of Crown policy and a political rogue whose public tribulations, wit, and famed personal ugliness made him a celebrity throughout the English-speaking world. Wilkes was the editor of a progressive newspaper, the North Briton, a member of Parliament, and an outspoken critic of government policy. He was deeply critical of the 1763 Treaty of Paris ending the Seven Years War with France, a conflict known in North America as the French and Indian War. Wilkes’s damning articles angered King George, who ordered the arrest of Wilkes and his co-publishers of the North Briton, authorizing general warrants to search their papers for evidence of treason and sedition. The government ransacked numerous private homes and printers’ shops, scrutinizing personal papers for any signs of incriminating evidence. In all, forty-nine people were arrested, and Wilkes himself was charged with seditious libel, prompting a long and inconclusive legal battle of suits and countersuits.

By taking a stand against the king and intrusive searches, Wilkes became a cause célèbre among Britons at home and in the colonies. This was particularly true for many American colonists, whose own objections to British tax policy following the Treaty of Paris culminated in the American Revolution. The rebellious colonists drew from the Wilkes case the importance of political dissent as well as the need to protect dissenting citizens from unreasonable (and politically motivated) searches and seizures.

The Fourth Amendment was intended to address this problem by inscribing legal protection for “persons, houses, papers, and effects” into the Bill of Rights. A government that could not search the homes and read the papers of its citizens would be less able to engage in intellectual tyranny and enforce intellectual orthodoxy. In a pre-electronic world, the Fourth Amendment kept out the state, while trespass and other property laws kept private parties out of our homes, paper, and effects.

The Fourth and Fifth Amendments thus protect the freedom of thought at their core. As Stuntz explains, the early English cases establishing these principles were “classic First Amendment cases in a system with no First Amendment.” Even in a legal regime without protection for dissidents who expressed unpopular political or religious opinions, the English system protected those dissidents in their private beliefs, as well as the papers and other documents that might reveal those beliefs.

In American law, an even stronger protection for freedom of thought can be found in the First Amendment. Although the First Amendment text speaks of free speech, press, and assembly, the freedom of thought is unquestionably at the core of these guarantees, and courts and scholars have consistently recognized this fact. In fact, the freedom of thought and belief is the closest thing to an absolute right guaranteed by the Constitution. The Supreme Court first recognized it in the 1878 Mormon polygamy case of Reynolds v. United States, which ruled that although law could regulate religiously inspired actions such as polygamy, it was powerless to control “mere religious belief and opinions.” Freedom of thought in secular matters was identified by Justices Holmes and Brandeis as part of their dissenting tradition in free speech cases in the 1910s and 1920s. Holmes declared crisply in United States v. Schwimmer that “if there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.” And in his dissent in the Fourth Amendment wiretapping case of Olmstead v. United States, Brandeis argued that the framers of the Constitution “sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations.” Brandeis’s dissent in Olmstead adapted his theory of tort privacy into federal constitutional law around the principle of freedom of thought.

Freedom of thought became permanently enshrined in constitutional law during a series of mid-twentieth century cases that charted the contours of the modern First Amendment. In Palko v. Connecticut, Justice Cardozo characterized freedom of thought as “the matrix, the indispensable condition, of nearly every other form of freedom.” And in a series of cases involving Jehovah’s Witnesses, the Court developed a theory of the First Amendment under which the rights of free thought, speech, press, and exercise of religion were placed in a “preferred position.” Freedom of thought was central to this new theory of the First Amendment, exemplified by Justice Jackson’s opinion in West Virginia State Board of Education v. Barnette, which invalidated a state regulation requiring that public school children salute the flag each morning. Jackson declared that:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. . . .

[The flag-salute statute] transcends constitutional limitations on [legislative] power and invades the sphere of intellect and spirit which it is the purpose of the First Amendment to our Constitution to reserve from all official control.

Modern cases continue to reflect this legacy. The Court has repeatedly declared that the constitutional guarantee of freedom of thought is at the foundation of what it means to have a free society. In particular, freedom of thought has been invoked as a principal justification for preventing punishment based upon possessing or reading dangerous media. Thus, the government cannot punish a person for merely possessing unpopular or dangerous books or images based upon their content. As Alexander Meiklejohn put it succinctly, the First Amendment protects, first and foremost, “the thinking process of the community.”

Freedom of thought thus remains, as it has for centuries, the foundation of the Anglo-American tradition of civil liberties. It is also the core of intellectual privacy.

“The New Home of Mind”

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind.” So began “A Declaration of the Independence of Cyberspace,” a 1996 manifesto responding to the Communications Decency Act and other attempts by government to regulate the online world and stamp out indecency. The Declaration’s author was John Perry Barlow, a founder of the influential Electronic Frontier Foundation and a former lyricist for the Grateful Dead. Barlow argued that “[c]yberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.” This definition of the Internet as a realm of pure thought was quickly followed by an affirmation of the importance of the freedom of thought. Barlow insisted that in Cyberspace “anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” The Declaration concluded on the same theme: “We will spread ourselves across the Planet so that no one can arrest our thoughts. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.”

In his Declaration, Barlow joined a tradition of many (including many of the most important thinkers and creators of the digital world) who have expressed the idea that networked computing can be a place of “thought itself.” As early as 1960, the great computing visionary J. C. R. Licklider imagined that “in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.” Tim Berners-Lee, the architect of the World Wide Web, envisioned his creation as one that would bring “the workings of society closer to the workings of our minds.”

Barlow’s utopian demand that governments leave the electronic realm alone was only partially successful. The Communications Decency Act was, as we have seen, struck down by the Supreme Court, but today many laws regulate the Internet, such as the U.S. Digital Millennium Copyright Act and the EU Data Retention Directive. The Internet has become more (and less) than Barlow’s utopian vision—a place of business as well as of thinking. But Barlow’s description of the Internet as a world of the mind remains resonant today.

It is undeniable that today millions of people use computers as aids to their thinking. In the digital age, computers are an essential and intertwined supplement to our thoughts and our memories. Discussing Licklider’s prophecy from half a century ago, legal scholar Tim Wu notes that virtually every computer “program we use is a type of thinking aid—whether the task is to remember things (an address book), to organize prose (a word processor), or to keep track of friends (social network software).” These technologies have become not just aids to thought but also part of the thinking process itself. In the past, we invented paper and books, and then sound and video recordings to preserve knowledge and make it easier for us as individuals and societies to remember information. Digital technologies have made remembering even easier, by providing cheap storage, inexpensive retrieval, and global reach. Consider the Kindle, a cheap electronic reader that can hold 1,100 books, or even cheaper external hard drives that can hold hundreds of hours of high-definition video in a box the size of a paperback novel.

Even the words we use to describe our digital products and experiences reflect our understanding that computers and cyberspace are devices and places of the mind. IBM has famously called its laptops “ThinkPads,” and many of us use “smartphones.” Other technologies have been named in ways that affirm their status as tools of the mind—notebooks, ultrabooks, tablets, and browsers. Apple Computer produces iPads and MacBooks and has long sold its products under the slogan, “Think Different.” Google historian John Battelle has famously termed Google’s search records a “database of intentions.” Google’s own slogan for its web browser Chrome is “browse the web as fast as you think,” revealing how web browsing is not just a form of reading but a kind of thinking in itself. My point here is not just that common usage or marketing slogans connect Internet use to thinking, but a more important one: Our use of these words reflects a reality. We are increasingly using digital technologies not just as aids to our memories but also as an essential part of the ways we think.

Search engines in particular bear a special connection to the processes of thought. How many of us have asked a factual question among friends, only for smartphones to appear as our friends race to see who can look up the answer the fastest? In private, we use search engines to learn about the world. If you have a moment, pull up your own search history on your phone, tablet, or computer, and recall your past queries. It usually makes for interesting reading—a history of your thoughts and wonderings.

But the ease with which we can pull up such a transcript reveals another fundamental feature of digital technologies—they are designed to create records of their use. Think again about the profile a search engine like Google has for you. A transcript of search queries and links followed is a close approximation to a transcript of the operation of your mind. In the logs of search engine companies are vast repositories of intellectual wonderings, questions asked, and mental whims followed. Similar logs exist for Internet service providers and other new technology companies. And the data contained in such logs is eagerly sought by government and private entities interested in monitoring intellectual activity, whether for behavioral advertising, for crime and terrorism prevention, or for other, possibly more sinister purposes.

Searching Is Thinking

With these two points in mind—the importance of freedom of thought and the idea of the Internet as a place where thought occurs—we can now return to the Google Search Subpoena with which this chapter opened. Judge Ware’s opinion revealed an intuitive understanding that the disclosure of search records was threatening to privacy, but was not clear about what kind of privacy was involved or why it matters.

Intellectual privacy, in particular the freedom of thought, supplies the answer to this problem. We use search engines to learn about and make sense of the world, to answer our questions, and as aids to our thinking. Searching, then, in a very real sense is a kind of thinking. And we have a long tradition of protecting the privacy and confidentiality of our thoughts from the scrutiny of others. It is precisely because of the importance of search records to human thought that the Justice Department wanted to access the records. But if our search records were more public, we wouldn’t merely be exposed to embarrassment like Thelma Arnold of Lilburn, Georgia. We would be less likely to search for unpopular or deviant or dangerous topics. Yet in a free society, we need to be able to think freely about any ideas, no matter how dangerous or unpopular. If we care about freedom of thought—and our political institutions are built on the assumption that we do—we should care about the privacy of electronic records that reveal our thoughts. Search records illustrate the point well, but this idea is not just limited to that one important technology. My argument about freedom of thought in the digital age is this: Any technology that we use in our thinking implicates our intellectual privacy, and if we want to preserve our ability to think fearlessly, free of monitoring, interference, or repercussion, we should embody these technologies with a meaningful measure of intellectual privacy.

Excerpted from “Intellectual Privacy: Rethinking Civil Liberties in the Digital Age” by Neil Richards. Published by Oxford University Press. Copyright 2015 by Neil Richards. Reprinted with permission of the publisher. All rights reserved.

Neil Richards is a Professor of Law at Washington University, where he teaches and writes about privacy, free speech, and the digital revolution.

Ending austerity in Greece: time for plan B?

By Jerome Roos On February 26, 2015

Syriza’s “head-long retreat” in the standoff with its creditors hails the failure of Tsipras’ pro-euro strategy. It’s time to start preparing for Grexit.

Photo by Angelos Tzortzinis.

When the Eurogroup accepted Greece’s reform proposals on Tuesday, investors and EU leaders let out a collective sigh of relief: it appears that the bombshell of a disorderly Greek exit from the Eurozone has been defused, at least until the start of the summer. In return for a significant roll-back of its campaign pledges, Greece’s freshly inaugurated government secured a four-month extension of its current bailout program and thereby managed to avert a potentially catastrophic bank run that would likely have resulted in Grexit.

But while Greece’s creditors seemed content, the agreement immediately unleashed a bitter debate within the governing leftist party Syriza. Prime Minister Tsipras may have declared a tentative victory for his anti-austerity coalition, but some influential party members strongly criticized what they perceived to be an unacceptable climbdown. Costas Lapavitsas, the SOAS economist and Syriza MP, wrote a scathing letter expressing his serious concerns about the government’s ability to stick to its promises, while Stathis Kouvelakis of Syriza’s central committee dubbed the agreement a “head-long retreat.”

Manolis Glezos, the 94-year-old war hero and Syriza MEP, even went so far as to apologize to the Greek people for having participated in “this illusion,” while the legendary composer Mikis Theodorakis urged the government to resist the “fatal embrace” of its creditors. Paul Mason reports that “there is a sea change going on within Syriza. In the past 48 hours I’ve heard people who were staunch believers in the ‘good euro’ — a euro that can accommodate by negotiation a radical left government — say, effectively, they were wrong.”

How are we to respond to all this? The first thing to observe is Spinoza’s dictum: non ridere, non lugere, neque detestari, sed intelligere — not to ridicule, lament or condemn, but to understand. If we really want to understand Syriza’s rapid retreat over the past week and engage in constructive criticism to end austerity, we’ll have to start, first of all, with the strategy chosen by its party leadership, particularly in relation to the euro; and secondly with the way in which the single currency serves as an amplifier of structural power relations between creditors and debtors — core and periphery — in the European political economy.

On the first point, it is clear that the so-called “good euro” strategy proposed by the party leadership and Finance Minister Yanis Varoufakis, whose “modest proposal” for resolving the crisis fundamentally revolves around a wholesale restructuring of the Eurozone along Keynesian lines, has run headlong into the opposition of virtually everyone else involved. In the negotiations, Greece found itself isolated not only by the 18 other Eurozone finance ministers (including the center-left French and Italians and the other heavily indebted countries), but also by the ECB and the European Commission.

Moreover, going into the negotiations, Greece suffered from two structural weaknesses: the near-total depletion of its public finances and the extremely parlous state of its domestic banking system. With its reserves running on empty, the government would have run out of financing by February 24 and would have been forced to default on its IMF obligations by March. At the same time, increasing uncertainty about Greece’s place in the Eurozone produced sustained deposit flight, bringing the Greek banks to the brink of collapse.

Strategically speaking, the government could have wielded these weaknesses as a bargaining chip. Had it been willing to put its euro membership on the line, Greece might have been able to extract greater concessions from its risk-averse “partners” by threatening unilateral action if the creditors refused to give in. But default and Grexit were ruled out a priori by Syriza’s moderate leadership, which repeatedly declared its unwavering commitment to the single currency. Knowing this, Germany and its allies pushed for total surrender: with Greece weak and dependent on external loans, the Eurozone could enforce strict conditions in return for continued membership.

This first observation is connected to the second point: the highly asymmetric power relations at the heart of the monetary union. In previous columns, I have repeatedly argued that Germany — as the dominant force inside the Eurozone — would never accept a restructuring of the Greek debt, that the Eurozone would never accommodate a social democratic alternative in its midst, and that as a result Greece’s leftist government would find it impossible to pursue a socially progressive alternative (let alone a radical program) inside the fundamentally regressive, anti-social and anti-democratic straitjacket of the Eurozone.

These predictions — which are very similar to those made by Costas Lapavitsas and others inside Syriza’s Left Platform — have now been proven correct. Continued Eurozone membership keeps Greece stuck within a web of structural constraints from which it cannot escape without its creditors’ approval. And since these creditors are loath to set a precedent of successful debtor defiance, they will do anything to prevent Greece from upending the neoliberal austerity doctrine. There can be only one conclusion from this: to truly end austerity, Greece will have to leave the euro.

To be sure, Grexit is not a panacea. Readjustment will be extremely painful in the short term, and even in the long run it is clear that restoring fiscal and monetary policy autonomy will never be enough to overcome the structural dependence of the Greek economy on foreign investment or to insulate the Greek state from the systemic pressures of global finance. The point of Grexit, however, is not to fetishize national sovereignty but simply to reclaim the essential monetary and fiscal policy tools that the government now lacks — and without which it is materially impossible to determine socioeconomic priorities and pursue a progressive economic program.

The most important challenge, in this respect, will not necessarily be economic in nature but rather social, political and psychological. Before Greece can ever be liberated from its state of debt servitude and its plight of permanent austerity, its government will first need to be in a position to default on its European creditors and “print” its own currency. This will in turn require three things:

First, mass mobilization from below will be essential, both to put pressure on Syriza’s leadership and to empower the pro-Grexit faction inside the party, which is now steadily growing in the wake of last week’s dramatic retreat. Second, voters will have to abandon their aversion towards Grexit and public opinion will have to swing behind a much more confrontational approach. To get there, the left and the movements will have to embark on a concerted campaign of “popular education” to inform the Greek public of the only real options available to their country: progressive exit or endless austerity.

Finally, and most importantly, the government would have to be meticulously prepared to manage the extremely difficult transition period, in which the price of imported goods will skyrocket following a sharp devaluation; key commodities like food, petroleum and medicine will have to be rationed to deal with sudden scarcity; capital controls and border controls will have to be reintroduced to prevent catastrophic capital flight; deposits and loan contracts will need to be re-denominated into drachma; and the banks will have to be nationalized to prevent a complete collapse of the financial system.

All of this will require a degree of radicalization and preparation that currently seems both utterly irresponsible and completely unrealistic. Yet this is precisely where the brutally anti-democratic methods of the Eurozone are pushing Greece today. For five years, Greeks have been living in total despair. Desperate times call for desperate measures — and the time for unilateral default and Grexit may be approaching faster than most people are willing or able to recognize. If the left truly cares about ending austerity, it should start preparing for Plan B.

Jerome Roos is a PhD researcher in International Political Economy at the European University Institute and founding editor of ROAR Magazine. Follow him on Twitter @JeromeRoos. This article was written for TeleSUR English.

 

http://roarmag.org/2015/02/greece-austerity-euro-exit/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+roarmag+%28ROAR+Magazine%29

DIGITAL MUSIC NEWS

Global Decision: New Music Will Be

Released On Fridays, Starting This Summer

 

After months of discussions and negotiation, it appears every country will now adopt a standardized music launch day in an attempt to create “a sense of occasion around the release of new music.” That’s the word from IFPI, the worldwide body representing the recording industry, which this week said that sometime this summer all new music will be released globally on Fridays.

“As well as helping music fans, the move will benefit artists who want to harness social media to promote their new music,” the IFPI said in a statement. “It also creates the opportunity to reignite excitement and a sense of occasion around the release of new music.” Currently, new music is released in the U.K. on Monday, with U.S. releases coming out on Tuesday. This new arrangement will see new albums and singles released at 00:01 am (local time) on Fridays. IFPI says the decision to standardize the release day came after thorough consultation with all parties who have an interest in recorded music.

“We’ve had a long consultation involving retailers, artists, and record labels, and we have looked at a large amount of insight and research,” IFPI CEO Frances Moore told Music Week. “The good news has been the widespread support we’ve seen around the world for global release day – no one has seriously questioned the concept. The only debate has been about the day. The artist organizations and many retailers and record companies internationally support Friday, and this is backed by consumer research in many countries.”

Still, many independent labels and artists appear to be dissatisfied with the idea of designating Friday – or any day – as “new music day.” And since there’s no law that forces companies to comply with this new agreement, look for some rogue players to defy the standard and release their singles and albums on any day they choose. 

Apple Reportedly Buys Camel Audio;

Plans For Tech Firm Remain Unclear

 

Apple Inc. reportedly has acquired U.K. music technology company Camel Audio – a company that, among other things, built the Alchemy software suite that allowed musicians to produce their own tracks digitally. While Apple has not officially acknowledged the acquisition, digital music blog MusicRadar says the deal closed in early January, around the time Apple attorney Heather Joy Morrison was named as the company’s sole director. Camel reportedly has shut down its operations, leaving behind a website containing only a user login page for contacting customer support, and miscellaneous legal information.

A notice on the website reads, in part: “We would like to thank you for the support we’ve received over the years in our efforts to create instruments, effects plug-ins, and sound libraries. Camel Audio’s plug-ins, Alchemy Mobile IAPs, and sound libraries are no longer available for purchase. We will continue to provide downloads of your previous purchases and email support until July 7, 2015. We recommend you download all of your purchases and back them up so that you can continue to use them.”

Thus far it’s unclear how the Camel Audio acquisition fits within Apple’s apparent plans to lead the digital music space. Apple already offers products for digital music production, including Garageband and Logic Pro X, and some sources believe Camel’s products will be folded into those existing products or perhaps into iTunes. The Silicon Valley giant has issued a vague statement noting that, “Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans.” The statement is typically offered when an acquisition rumor is legitimate, suggesting Apple did in fact purchase Camel Audio last month. [Read more: Apple Insider]

Google Play Music Increases Its Music

Storage Capacity To 50,000 Songs

 

In an attempt to thwart any attempt by Apple to grow its dominance in the digital music space, Google Play Music this week announced it has upgraded the storage space for registered users from 20,000 songs to 50,000 songs. The extra space is a free upgrade for users, and the expanded capacity is applied automatically for those who already host their music collection in Google’s cloud. Google Play Music is a music streaming and storage service that lets users listen on the web, smartphones, or tablets.

While many consumers are shifting to streaming services and away from downloaded digital files, many users have invested in building – and listening to – massive music libraries. Google’s offer to host even bigger collections is an attempt to lure those customers who are unwilling to give up their previous musical life in favor of streaming platforms.

Because of this single change, many analysts say Google Play Music has significantly strengthened its competitive position against Spotify; a lack of storage for music and other media is considered one of the core issues still plaguing smartphones and tablets. Example: Apple sells the iPhone 6 with 16GB of storage, not nearly enough room for all the functions a modern smartphone is expected to provide. Even the base Moto X, which some people consider the best Android smartphone available, has only 16GB of storage. Google’s expansion to 50,000 songs – approximately 200 GB of cloud space – goes well beyond this limit and gives users the convenience of streaming their own libraries. [Read more: Forbes, TechCrunch, Engadget]

Starbucks Will Stop Selling CDs

In Stores At The End Of March

 

As CD sales continue to slip both in the U.S. and globally, Starbucks has decided to stop offering them at its 21,000 retail shops by the end of next month. Starbucks representative Maggie Jantzen told Billboard the company “continually seeks to redefine the experience in our retail stores to meet the evolving needs of our customers. Music will remain a key component of our coffeehouse and retail experience, [and] we will continue to evolve the format of our music offerings to ensure we’re offering relevant options for our customers. As a leader in music curation, we will continue to strive to select unique and compelling artists from a broad range of genres we think will resonate with our customers.”

Starbucks supposedly will continue to provide digital music to its customers, although Jantzen did not reveal what offerings will be available in the future. “Music has always been a key component at Starbucks,” she said. “We are looking for new ways to offer customers music options.”

Starbucks began investing in music in the late 1990s with its purchase of music retailer Hear Music, which created collections that would inspire people to discover new music. That effort resulted in significant in-store sales, and the company expanded its music push with a partnership with William Morris. A subsequent deal with Concord Music Group led to original music releases from such major artists as Paul McCartney, Joni Mitchell, and Alanis Morissette.

 

Grace Digital’s WiFi Devices Log

More Than 1 Billion Listener Hours

 

Grace Digital, a manufacturer of Wi-Fi-based wireless music systems, announced this week its North American customer base has exceeded 1 billion total internet radio listening hours. According to the Edison Research report titled “The Infinite Dial,” internet radio has seen steady listening increases in the U.S. over the last six years, as 21% of Americans listened to it in 2008, while 47% do so today. Listening hours also have increased: the average listening time in one week in 2008 was 6 hours and 13 minutes, a figure that today has more than doubled to 13 hours and 19 minutes.

“The growth we’ve seen year over year… mixed with the projections within the industry, show us clearly that wireless streaming of digital content will continue to grow and has become the standard,” Grace Digital Audio’s CEO Greg Fadul said in a statement. “We are committed to our customers and will continue to provide products that will aid in this digital revolution.”

While numerous devices can be used to listen to online radio from a fixed or mobile location, Grace Digital’s Wi-Fi music players serve more as a traditional stereo unit designed for in-home use. 

A publication of Bunzel Media Resources © 2015

Child poverty at devastating levels in US cities and states


By Patrick Martin 

26 February 2015

Reports issued over the past week suggest that child poverty in America is more widespread than at any time in the last 50 years. For all the claims of economic “recovery” in the United States, the reality for the new generation of the working class is one of ever-deeper social deprivation.

The Annie E. Casey Foundation publishes the annual Kids Count report on child poverty, which was the source of state-by-state reports issued last week. These reports use the new Supplemental Poverty Measure, developed by the Census Bureau, which includes the impact of government benefit programs like food stamps and unemployment compensation, as well as state social programs, and accounts for variations in the cost of living as well.

The result is a picture of the United States with a markedly different regional distribution of child poverty than usually presented. The state with the highest child poverty rate is California, the most populous, at a staggering 27 percent, followed by neighboring Arizona and Nevada, each at 22 percent.

The child poverty rate of California is much higher than figures previously reported, because the cost of living in the state is higher. Moreover, many of the poorest immigrant families are not enrolled in federal social programs because they are undocumented or face language barriers. The same conditions apply in Arizona and Nevada.

The other major centers of child poverty in the United States are the long-impoverished states of the rural Deep South, and the more recently devastated states of the industrial Midwest, where conditions of life for the working class have deteriorated the most rapidly over the past ten years.

It is a remarkable fact, documented in a separate report issued February 23 by the Catholic charity Bread for the World, that African-American child poverty rates are actually worse in the Midwest states of Iowa, Ohio, Michigan, Wisconsin and Indiana than in the traditionally poorest parts of the Deep South, including Mississippi, Louisiana and Alabama.

Several of the Midwest states have replaced Mississippi at the bottom of one or another social index. Iowa has the worst poverty rate for African-American children. Indiana has the highest rate of teens attempting or seriously considering suicide.

The most remarkable transformation is in Michigan, once the center of American industry with the highest working-class standard of living of any state. Michigan is the only major US state whose overall poverty rate is actually worse now than in 1960.

This half-century of decline is a devastating indictment of the failure of the American trade unions, which have collaborated in the systematic impoverishment of the working class in what was once their undisputed stronghold.

The United Auto Workers, in particular, did nothing as dozens of plants were shut down and cities like Detroit, Pontiac, Flint and Saginaw were laid waste by the auto bosses. Meanwhile, the UAW became a billion-dollar business, its executives controlling tens of billions in pension and benefit funds, while the rank-and-file workers lost their jobs, their homes and their livelihoods.

In Detroit, once the industrial capital of the world’s richest country, the child poverty rate was 59 percent in 2012, up from 44.3 percent in 2006.

The social catastrophe facing the population in Detroit also exposes the role of the Democratic Party and the organizations around it that have for decades promoted identity politics—according to which race, and not class, is the fundamental social category in America. The city, like many throughout the region, has been run by a layer of black politicians who have overseen the shocking decay in the social position of African-American workers and youth. (See, “Half a million children in poverty in Michigan”.)

Cleveland, also devastated by steel and auto plant closings, was the only other major US city with a child poverty rate of over 50 percent.

The Detroit figure undoubtedly understates the social catastrophe in the Motor City, since it comes from a study concluded before the state-imposed emergency manager put the city into bankruptcy in the summer of 2013, leading to drastic cuts in wages, benefits and pensions for city workers and retirees.

Wayne County, which includes Detroit, had the highest child poverty rate of any of Michigan’s 82 counties. Southeast Michigan, which includes the entire Detroit metropolitan area, endured an overall rise in child poverty rates from 18.9 percent in 2006 to 27 percent in 2012.

The state-by-state reports issued by Kids Count were accompanied by a press release by the Casey Foundation noting that the child poverty rate in the United States would nearly double, from 18 percent to 33 percent, without social programs like food stamps, school meals, Medicaid and the Earned Income Tax Credit.

This was issued as a warning of the effect of widely expected budget cuts in these critical programs. It coincided with the first hearing before the House Agriculture Committee on plans to attack the federal food stamp program by imposing work requirements and other restrictions to limit eligibility.

The food stamp program has already suffered through two rounds of budget cuts agreed on in bipartisan deals between the Obama White House and congressional Republicans, which cut $1 billion and $5 billion respectively from the program. Now that Republicans control both houses of Congress, they will press for even more sweeping cuts in a program that helps feed 47 million low-income people, many of them children.

 

http://www.wsws.org/en/articles/2015/02/26/cpov-f26.html

Banksy filmed himself sneaking into Gaza to paint new artwork


World-renowned street artist Banksy released a 2-minute video on his website this week that delivers an eye-opening view of life behind the guarded walls of the Gaza Strip.

The video, “Make this the year YOU discover a new destination,” invites viewers to witness the devastation of war-torn Gaza and the tribulation of the Palestinian population cordoned within its borders.

The satirical mini-documentary, which is set to the legendary East Flatbush Project’s “Tried by 12,” presents itself as an advertisement for world travelers. It begins by showing an individual, presumably Banksy himself, entering Gaza by climbing through what’s parenthetically described as an illegal network of tunnels. “Well away from the tourist track,” the caption reads.

Exiting one of the dark tunnels, Banksy ascends into the bombed-out region and is greeted by children playing amid piles of rubble. “The locals like it so much they never leave,” the video says. Cutting to a scrum of Israeli Defense Force (IDF) soldiers, it continues: “Because they’re never allowed to.”

 

The video captures four of the mysterious artist’s new pieces. The first, titled “Bomb Damage,” is painted on a door standing defiantly at the facade of a destroyed building. Possibly inspired by Rodin’s “The Thinker,” it shows a man hunched over in apparent agony. Another depicts one of the Israeli guard towers along the separation wall transformed into an amusement park swing carousel. The third and largest piece is a white cat with a pink bow, measuring roughly 3 meters high, its paw hovering above a twisted ball of scrap metal as if it were a ball of yarn.

“A local man came up and said ‘Please – what does this mean?’ I explained I wanted to highlight the destruction in Gaza by posting photos on my website – but on the internet people only look at pictures of kittens.”

“This cat tells the world that she is missing joy in her life,” a Palestinian man resting nearby speaks in Arabic to the camera. “The cat found something to play with. What about our children?”

The fourth and final piece is simple red paint on a wall. It reads: “If we wash our hands of the conflict between the powerful and the powerless we side with the powerful—we don’t remain neutral.”

According to the video, much of the recorded destruction is the result of “Operation Protective Edge,” a July 2014 Israeli military campaign, the stated goal of which was to prevent Hamas rocket fire from entering Israeli territory.

During the seven weeks of airstrikes, Gaza suffered up to 2,300 deaths, including 513 children; 66 Israeli soldiers and 5 civilians were also killed. Up to 7,000 Palestinian homes were completely destroyed, and another 10,000 severely damaged, according to the United Nations. Over half a million people were displaced by the conflict. By some estimates, rebuilding Gaza City could cost in excess of $6 billion and take more than 20 years.

Banksy isn’t new to the Israeli-Palestinian conflict. In 2005, he painted nine pieces along the 425-mile West Bank Wall, the barrier which separates Palestine and Israel.

Screenshot via Banksy/YouTube

http://www.dailydot.com/politics/banksy-new-gaza-artwork/?fb=dd

The poor fetish: commodifying working class culture

By Joseph Todd On February 25, 2015

Bullshit jobs and a pointless existence are increasingly driving London’s spiritually dead middle class towards a fetishization of working class culture.

Photo: Fruit stall in Shoreditch, London (Source: Flickr/Garry Knight).

Literally, he paints her portrait, then he can fuck off  —  he can leave. When Leonardo DiCaprio is freezing in water, she notices that he’s dead, and starts to shout, ‘I will never let you go,’ but while she is shouting this, she is pushing him away. It’s not even a love story. Again, Captains Courageous: upper classes lose their life, passion, vitality and act like a vampire to suck vitality from a lower-class guy. Once they replenish their energy, he can fuck off.

– Slavoj Zizek on Titanic

London’s middle class are in crisis — they feel empty and clamor for vitality. Their work is alienating and meaningless, many of them in “bullshit jobs” that are either socially useless, overly bureaucratic or divorced from any traditional notion of labor.

Financial services exist to grow the fortunes of capitalists, advertising to exploit our insecurities and public relations to manage the reputations of companies that do wrong. Society would not collapse without these industries. We could cope without the nexus of lobbyists, corporate lawyers and big firm accountants whose sole purpose is to protect the interests of capital. How empty it must feel to work a job that could be abolished tomorrow. One that at best makes no tangible difference to society and at worst encourages poverty, hunger and ecological collapse.

At the same time our doctors, teachers, university professors, architects, lawyers, solicitors and probation officers are rendered impotent. Desperate to just do their jobs yet besieged by bureaucracy and box-ticking. Their energies are focused not on helping the sick, teaching the young or building hospitals but on creating and maintaining the trail of paperwork that is a prerequisite of any meaningful action in late capitalist society. Talk to anybody in these professions, from the public or private sector, and the frustration that comes up again and again is that they spend the majority of their time writing reports, filling in forms and navigating bureaucratic labyrinths that serve only to justify themselves.

This inaction hurts the middle-class man. He feels impotent in the blue glare of his computer screen. Unable to do anything useful, alienated from physical labor and plagued by the knowledge that his father could use his hands, and the lower classes still do. Escape, however, is impossible. Ever since the advent of the smartphone, the traditional working day has been abolished. Office workers are at the constant mercy of email, a culture of overwork and the digitalization of work. Your job can be done anytime, anywhere, and this is exactly what capital demands. Refuge can only be found in sleep, another domain which capital is determined to control.

And when the middle classes are awake and working, they cannot even show contempt for their jobs. Affective (or emotional) labor has always been a part of nursing and prostitution, be it fluffing pillows or faking orgasms, but now it has infected both the shop floor of corporate consumer chains and the offices of middle-management above. Staff working at Pret a Manger are encouraged to touch each other, “have presence” and “be happy to be themselves.” In the same way the open plan, hyper-extroverted modern office environment enforces positivity. Offering a systemic critique of the very nature of your work does not make you a ‘team player.’ In such an environment, bringing up the pointlessness of your job is akin to taking a shit on the boss’s desk.

This culture is symptomatic of neoliberal contradiction, one which tells us to be true to ourselves and follow our passions in a system that makes it nearly impossible to do so. A system where we work longer hours, for less money and are taught to consume instead of create. Where fulfilling vocations such as teaching, caring or the arts are either vilified, badly paid or not paid at all. Where the only work that will enable you to have a comfortable life is meaningless, bureaucratic or evil. In such a system you are left with only one option: to embrace the myth that your job is your passion while on a deeper level recognizing that it is actually bullshit.

This is London’s middle class crisis.

But thankfully capital has an antidote. Just as in Titanic, when Kate Winslet saps the life from the visceral, working class Leonardo DiCaprio, middle-class Londoners flock to bars and clubs that sell a pre-packaged, commodified experience of working class and immigrant culture. Pitched as a way to re-connect with reality, experience life on the edge and escape the bureaucratic, meaningless, alienated dissonance that pervades their working lives.

The problem, however, is that the symbols, aesthetics and identities that populate these experiences have been ripped from their original contexts and re-positioned in a way that is acceptable to the middle class. In the process, they are stripped of their culture and assigned an economic value. In this way, they are emptied of all possible meaning.

Visit any bar in the hip districts of Brixton, Dalston or Peckham and you will invariably end up in a warehouse, on the top floor of a car park or under a railway arch. Signage will be minimal and white bobbing faces will be crammed close, a Stockholm syndrome recreation of the twice-daily commute, enjoying their two hours of planned hedonism before the work/sleep cycle grinds back into gear.

Expect gritty, urban aesthetics. Railway sleepers grouped around fire pits, scuffed tables and chairs reclaimed from the last generation’s secondary schools and hastily erected toilets with clattering wooden doors and graffitied mixed sex washrooms. Notice the lack of anything meaningful. Anything with politics or soul. Notice the ubiquity of Red Stripe, once an emblem of Jamaican culture, now sold to white ‘creatives’ at £4 a can.

The warehouse, once a site of industry, has trudged down this path of appropriation. At first it was squatters and free parties, the disadvantaged of a different kind, transforming a space of labor into one of hedonistic illegality and sound system counter-culture. Now the warehouse resides in the middle-class consciousness as the go-to space for every art exhibition or party. Any meaning it may once have had is dead. Its industrial identity has been destroyed and the transgressive thrill the warehouse once represented has been neutered by money, legality and middle-class civility.

Nonetheless many still function as clubs across Southeast London, pumping out reggae and soul music appropriated from the long-established Afro-Caribbean communities to white middle-class twenty-somethings who can afford the £15 entrance. Eventually the warehouse aesthetic will make its way to the top of the pay scale and, as the areas in which they reside reach an acceptable level of gentrification, they will become blocks of luxury flats. Because what else does London need but more kitsch, high ceiling hideaways to shield capital from tax?

The ‘street food revolution’ was not a revolution but a middle-class realization that they could abandon their faux bourgeois restaurants and reach down the socioeconomic ladder instead of up. Markets that once sold fruit and vegetables for a pound a bowl to working class and immigrant communities became venues that commodified and sold the culture of their former clientèle. Vendors with new cute names but the same gritty aesthetics serve over-priced ethnic food and craft beer to a bustling metropolitan crowd, paying not for the cuisine or the cold but for the opportunity to bathe in the edgy cool aesthetic of a former working class space.

This is the romantic illusion that these bars, clubs and street food markets construct; that their customers are the ones on the edge of life, running the gauntlet of Zola’s Les Halles, eating local on makeshift benches whilst drinking beer from the can. Yet this zest is vicarious. Only experienced secondhand through objects and spaces appropriated from below. Spaces which are duly sanitized of any edge and rendered un-intimidating enough for the middle classes to inhabit. Appealing enough for them to trek to parts of London in which they’d never dare live in search of something meaningful. In the hope that some semblance of reality will slip back into view.

The illusion is delicate and fleeting. In part it explains the roving zeitgeist of the metropolitan hipster whose anatomy Douglas Haddow so brilliantly managed to pin down. Because as soon as a place becomes inhabited by too many white, middle-class faces it becomes difficult to keep playing penniless. The braying accents crowd in and the illusion shatters. Those who aren’t committed to the working class aesthetic, yuppies dressed in loafers and shirts rather than scruffy plimsolls and vintage wool coats, begin to dominate and it all becomes just a bit too West London. And in no time at all the zeitgeist rolls on to the next market, pool hall or dive bar ripe for discovery, colonization and commodification.

Not all businesses understand this delicacy. Champagne and Fromage waded into the hipster darling food market of Brixton Village, upsetting locals and regulars alike. This explicitly bourgeois restaurant, attracted by the hip kudos and ready spending of the area, inadvertently pointed out that the emperor had no clothes. That the commodified working class experience the other restaurants had been peddling was nothing more than an illusion.

The same anxiety that fuels this cultural appropriation also drives first wave gentrifiers to ‘discover’ new areas that have been populated by working class or immigrant communities for decades. Cheap rents beckon but so does the chance of emancipation from the bourgeois culture of their previous North London existence. The chance to live in an area that is gritty, genuine and real. But this reality is always kept at arm’s length. Gentrifiers have the income to inoculate themselves from how locals live. They plump for spacious Georgian semi-detached houses on a quiet street away from the tower blocks. They socialize in gastro-pubs and artisan cafés. They can do without Sure Start centers, food banks and the local comprehensive.

Their experience will always be confined to dancing in a warehouse, drinking cocktails from jam jars or climbing the stairs of a multi-story car park in search of a new pop-up restaurant. Never will they face the grinding monotony of mindless work, the inability to pay bills or feed their children, nor the feeling of guilt and hopelessness that comes from being at the bottom of a system that blames the individual but offers no legitimate means by which they can escape.

This partial experience is deliberate. Because with intimate knowledge of how the other half live comes an ugly truth: that middle-class privilege is in many ways premised on working class exploitation. That the rising house prices and cheap mortgages from which they have benefited create a rental market shot through with misery. That the money inherited from their parents goes largely untaxed while benefits for both the unemployed and working poor are slashed. That the unpaid internships they can afford to take sustain a culture that excludes the majority from comfortable, white collar jobs. That their accent, speech patterns and knowledge of institutions, by their very deployment in the job market, perpetuate norms that exclude those who were born outside of the cultural elite.

Effie Trinket of the Hunger Games is the ideal manifestation of this contradiction. She is Katniss and Peeta’s flamboyant chaperone, who goes from being a necessary annoyance in the first film to nominal acceptance in the second. The relationship climaxes when, just as Katniss and Peeta are about to re-enter the arena, Effie presents Haymitch and Peeta with a gold band and necklace, a consumerist expression of their heightened intimacy. And in that very moment, her practiced façade of enthusiastic positivity finally breaks. Through her sobs she cries “I’m sorry, I’m so sorry” and backs away, absent for the rest of the film.

For Effie, the contradiction surfaced and was too much to bear. She realized that the misery and oppression of those in the districts was in some way caused by her privilege. But her tears were shed for a more fundamental truth — that although she recognizes the horror of the world, she enjoys the material comfort exploitation brings. That if given the choice between the status quo and revolution, she wouldn’t change a thing.

Joseph Todd is a writer and activist who has been published in The Baffler, Salon and CounterFire, among others. For more writings, visit his website.
