The terrifying “smart” city of the future

Cities across the country are implementing smart technologies — with grave implications for our personal freedoms

This article originally appeared on AlterNet.

Imagine a world without waste. A place where the train always comes on time, where streets are plowed before snow even stops falling, and watchful surveillance cameras have sent rates of petty crime plunging. Never again will you worry about remembering your keys because your front door has an iris recognition system that won’t allow strangers to enter. To some people, this kind of uber-efficient urban living sounds like a utopian dream. But to a growing number of critics, the promise of the “smart city” is starting to seem like the stuff of nightmare.

Smart cities are loosely defined as urban centers that rely on digital technology to enhance efficiency and reduce resource consumption. This happens by means of ubiquitous wireless broadband, citywide networks of computerized sensors that measure human activities (from traffic to electricity use), and mass data collection that analyzes these patterns. Many American cities, including New York, Boston and Chicago, already make use of smart technologies. But far more radical advances are happening overseas. Masdar, in Abu Dhabi, and Songdo, in South Korea, will be the first fully functioning smart cities, in which everything from security to electricity to parking is monitored by sensors and controlled by a central city “brain.”

The surveillance implications of these sorts of mass data-generating civic projects are unnerving, to say the least. Urban designer and author Adam Greenfield wrote on his blog Speedbird that this centralized governing model is “disturbingly consonant with the exercise of authoritarianism.” To further complicate matters, the vast majority of smart-city technology is designed by IT-systems giants like IBM and Siemens. In places like Songdo, which was the brainchild of Cisco Systems, corporate entities become responsible for designing and maintaining the basic functions of urban life.



Smart cities are predicated on the neoliberal idea that the market can fix anything—that companies can manage cities better than governments can. Their advocates claim that they will enhance democratic participation by relying on crowdsourcing and “civic hacking projects” that allow locals to use newly available data to solve municipal problems. But they ignore the fact that private corporations are the ones measuring and controlling these mountains of data, and that they don’t have the same accountability to the public that government does.

In the Nation last year, urban theorist and author Catherine Tumber expressed some of the principal concerns about smart tech, reviewing Anthony Townsend’s Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. (Full disclosure: I fact-checked the review.) Tumber asserts that “the economics of ‘smart’” are in keeping with “the ramped-up market rationalization carried out by finance monopoly since the Civil War, culminating in a minimally civic world fit only for…the unencumbered self.”

I caught up with Tumber via telephone at her office at Northeastern University’s School of Public Policy and Urban Affairs, where she is a visiting scholar, to talk about what the rise of smart cities means for our understanding of urban life.

Editor’s note: This interview has been condensed and lightly edited for clarity.

Allegra Kirkland: How did you first become acquainted with the concept of smart cities?

Catherine Tumber: I had been aware of them kind of through the ether because I pay attention to cities, and I’m very much aware of what’s going on in the digital world in a broad sense. I think it’s quite dangerous actually, in all kinds of ways….

I thought Townsend did a good job laying out what the fault lines are: the big digital systems corporations like Siemens and Cisco and IBM versus what these hacker “democratic heroes” are trying to do. I found that to be useful but I wasn’t persuaded that they aren’t all part of the same sort of dangerous direction of things.

AK: These digital innovations are supposed to be all about access to information and transparency, but it seems like many people don’t even know these initiatives are going on. Like Chicago, Barcelona, and all of these other urban centers are now considered “smart cities” but I feel like most people don’t think of them that way.

CT: I think people are only vaguely aware.

AK: It seems like these major urban initiatives are being conducted largely out of the public eye, without public oversight or involvement. Maybe there are some smaller initiatives being carried out by civic hackers, but the major ones have to be implemented by corporations or the government because regular people don’t have the ability to build that kind of infrastructure.

CT: Right, these are major infrastructure projects.

AK: And there’s no means of opting out. Once a city integrates smart technology, your information gets caught up with all the rest, whether you want it to be or not.

CT: Exactly. And also what’s often not taken into account, and I guess you have to live long enough to really see it—though it’s happening very quickly in our time—is that when you introduce a whole new paradigm of infrastructure, the old infrastructure dies. So it ends up being coercive. At some point, you really have to participate in it or you are not able to execute that function, whether that function be communications or entertainment or transportation or energy.

For example, if you did not really want to be available on a cell phone at any given moment, or to own one at all, and wanted to simply rely on a landline, that was fine as long as you were home. But they stripped out all of the phone booths. That process was essentially complete about five years ago. So it really forces your hand quite a bit.

AK: You seem skeptical of the idea that smart cities are inherently democratizing—that they are sites of greater sociability and inclusion. Does that seem plausible to you? 

CT: I think that digital technology, aside from providing all kinds of information that is trackable, holds up the false promise of greater democratic participation. It holds out a sort of false sense of moral agency, for one thing. The argument as I understand it is that crowdsourcing provides people with a different, less curated sense of democratic participation. It involves reaching out to individuals, so it’s a version of democratic practice. I think the jury is still very much out on whether that is persuasive.

Part of what I think is important and rich about democratic culture as a living tradition is that it brings people of very different backgrounds and types together in shared spaces. And crowdsourcing tends to be consistent with marketing in that it excludes whole groups of people, just because of the way it works. It’s not even intentional.

AK: Because of the kind of people who get surveyed, who are aware that these kinds of civic campaigns are going on and would get involved?

CT: Yeah. I find that to be somewhat dubious…for the long-term health of the civic project.

AK: It seems like there’s a fundamental split between people who think there is something organic and inexplicable about the ways human beings come together in cities, and those who believe that all human behavior is quantifiable—that we can rely on data to understand how humans interact. Which side of the line do you fall on?

CT: Digital technology and its use compresses experience. It tends to lead to niche cultures; it tends to lead to a sense of being untethered, as if that’s the golden pathway to real freedom. There are several traditions of political philosophy that hold that it’s important to be tethered so that you have a sense of the limits of yourself and of what it is that humans can do in the time that they have on this earth. This sense of endless freedom can lead to a very false sense of utopian promise that is simply unrealistic and unwanted. It’s yet another way that we’ve decided to take a pause from history and what history has long told us.

There are some things that you really don’t play with. People have acquired great wisdom over the ages—across the globe, this isn’t just a Eurocentric thing—about what it means to travel and to leave home and to come back. These are all the great stories and myths and fables. Technology kind of flattens all of that.

AK: This is sort of a related question, but what do you think are the primary things smart cities take away from the people who live there? What do we lose in these sorts of manufactured urban environments?

It makes me think of the complaints about the gentrification of places like New York City. Michael Bloomberg created new green spaces in Times Square and along the waterfront, made city services more efficient, rezoned districts, and now we have this sanitized, business-friendly, soulless city. The neighborhoods look the same; there’s no mixing of social classes, no weird dive bars. So you’d think smart cities, with their emphasis on homogeneity and efficiency, would be equally off-putting to people.

CT: I think it’s a matter of the convenience of it and the novelty of it. But smart technology is relatively new and there are so many unexamined consequences, as I think there are with any major technological change like this.

I think that we’re only beginning as a culture to wince a little and take a second look at this. … There really hasn’t been any sort of consensus about what the right manners are in using these technologies. Across the world for time immemorial, every culture had some understanding of manners, and I don’t mean that in the prim Victorian sense. But just some ways in which you convey unspoken, coded assumptions about respect and caring and common courtesy and stuff like that. We haven’t had that conversation here. …The main point is that there are real unintended consequences of this.

AK: The corporations behind smart cities throw around all these statistics about how smart technology reduces crime, reduces waste. So it makes you feel like a Luddite to say that you’re uncomfortable with these technologies because there is all of this evidence that they’re successful. But I feel like there’s a difference between using technology to fix a specific urban problem, like Rio de Janeiro using weather tracking to forecast flash floods, which are a major problem there, and places like Songdo, where you’re really rebuilding the concept of the city from scratch and dictating how people should live. 

CT: Yes, they’re riddled with totalitarian overtones, and that’s built into it, it’s part of the built structure.

AK: So do you think smart city initiatives are not necessarily problematic, and it’s just when they’re applied on the scale of an entire city that it gets out of control?

CT: I’m mainly concerned with this assumption that this is new, this is shiny, this is innovative, to use everyone’s favorite buzzword, and that we should just do it. A lot of people don’t really understand what’s involved. There’s a tendency to have it sort of inflicted on people, and part of that is the way the business model for digital technology, at least at this point in time, works, which is to make everything cheap. It doesn’t cost the public very much to say, oh, okay, because there’s not much of a price tag on it yet. Part of the reason why it’s so cheap is that so much of the work is based on volunteer labor.

So many of these civic hackers, all these projects and apps they develop, so much of that is based on free labor. People try to frame that as a sort of revival of Tocqueville—voluntary associations and all that stuff. But instead it’s just downright free labor, like unpaid internships or something. That’s why I’m very skeptical of all of this; this is really just another variation on the sort of neoliberal business model that we’ve been using now for the past 35 years and has grown out of control. This is just another iteration of that with nice shiny technology attached to it. Americans are always suckers for technological determinism.

AK: Sure. I feel like privatization initiatives in cities have multiplied in recent years, with cities selling stakes in public housing to private developers—

CT: And all the stuff Rahm Emanuel is doing in Chicago.

AK: Exactly. It seems like smart cities are sort of the ultimate example of the corporate-designed urban environment. Should that inherently be a cause for concern? It goes without saying that corporations don’t always have the best interests of people in mind. And places like Songdo were designed to have minimal regulatory barriers. They prioritize technological innovation and wealth generation, so it seems like they could really deepen existing economic inequality. If you’re not part of those spheres, you don’t really have a place in these cities.

CT: To really take on wealth inequality and the kind of ravaging done by the spoiling land use policies that we’ve had in place since after World War II, we need to have a body of ideas and practices that have a clearly defined sense of what their political vision is: what the good life is and how to get there. What are our fundamental values, our limitations? All of this smart city design is apolitical. That’s the problem. The longer it seeps into our political culture, the more it will drain the public imagination of the next generation, of what a real political movement looks like and why politics are important.

AK: It also seems like the obligation of government to provide essential public services like housing is reduced. It becomes the responsibility of corporations and developers, so there’s less accountability, less control over pricing and over the data the companies acquire.

CT: Then there’s all this debate about regulations—which industries require more or less. These are all very difficult questions of practicality and philosophy. And I fear that our political discourse and understanding of the world is being degraded and coarsened by the uncritical dissemination of a digital substitute for a real politics.

AK: Another thing I wanted to bring up is the surveillance concern. I read a quote from the mayor of Rio, which is a smart city, saying “The operations center allows us to have people looking into every corner of the city, 24 hours a day, seven days a week.” He meant it as a positive, but that’s a sort of terrifying statement. What are your thoughts about the surveillance implications of smart technology?

CT: All these sensors are being, and will be, used to invade our privacy. There are good and bad things about that. You know, here in Boston we had the marathon bombers, and they were very quickly apprehended, partly because that area is so rigged up with security cameras. We have to decide whether it’s worth it.

Another thing I’ve been concerned about is thinking about the difference between Aldous Huxley and George Orwell. You know, George Orwell talked about Big Brother and the authoritarian state, the invasion of privacy. Huxley talked more about the internalization of oppression, and I’m in some ways even more concerned about that. It’s a cultural critique of the way we internalize and accept the terms of our lack of freedom. We accept the deprivation that totalitarian movements end up exacting on us. So we end up being our own worst enemies. It’s almost like we don’t even need Big Brother.

AK: Sure. We voluntarily give up so much information about ourselves.

CT: When I see people walking around in public as though they’re wearing a blindfold because they’re so absorbed in another world on their devices, that has the look to me of self-degradation and degradation of the public realm that is more effective than security cameras. Because people won’t resist. They’re not even aware of their surroundings, like animals moving through the world. So why would they be able to muster whatever it takes to resist the invasion of privacy by the state, or by corporations for that matter? It just all represents such a contraction of democratic culture to me. It worries the heck out of me.

http://www.salon.com/2015/02/28/the_terrifying_smart_city_of_the_future_partner/?source=newsletter


Bookchin: living legacy of an American revolutionary

By Federico Venturini On February 28, 2015

An interview with Debbie Bookchin on her father’s contributions to revolutionary theory and the adoption of his ideas by the Kurdish liberation movement.

Editor’s note: Below you will find an interview with Debbie Bookchin, daughter of the late Murray Bookchin, who passed away in 2006. Bookchin spent his life in revolutionary leftist circles, joining a communist youth organization at the age of nine and becoming a Trotskyist in the late 1930s, before turning to anarchist thought and finally identifying himself as a ‘communalist’ after developing the ideas of ‘libertarian municipalism’.

Bookchin was (and remains) as influential as he was controversial. His radical critiques of deep ecology and ‘lifestyle anarchism’ stirred up a number of heated debates that continue to this day. Now that his revolutionary ideas have been picked up by the Kurdish liberation movement, who are using Bookchin’s works to build a democratic, gender-equal and ecologically sustainable society in the heart of the Middle East, we are seeing a renewed interest in the life and thoughts of this great political thinker.

For this reason ROAR is very excited to publish this interview with Debbie Bookchin, which not only provides valuable insights into her father’s political legacy, but also offers a glimpse into the life of the man behind the ideas.

:::::::::::::::::::::::

Federico Venturini: Verso Books has just published The Next Revolution: Popular Assemblies and the Promise of Direct Democracy, a collection of essays by your father Murray Bookchin. Could you tell us something about this book? Why did you decide to embark on this venture?

Debbie Bookchin: The creation of this book was inspired among other things by the ongoing political discussion about which direction the Left should take with respect to the question of organization. Our publisher, Verso, publishes the writings of both Slavoj Žižek and Simon Critchley. Briefly, Žižek advocates revolution with the power given to a centralized state – a rehashing of Marxist theory. Critchley, on the other hand, advocates social change that takes place in the interstices of society.

Murray felt that both of these solutions were inadequate responses to the question of how to develop radical forms of governance that are democratic and can fundamentally change society. We thought this collection of essays on decentralized democracy could offer an important third pole in this political debate. And we wanted to present them, along with some previously unpublished material, to a new generation of activists.

How did Bookchin arrive at the concept of decentralized democracy?

Murray had spent a lifetime studying revolutionary movements and in fact wrote an entire history of those movements in his four-volume work, The Third Revolution. This study reaffirmed his belief that revolutionary change could not be achieved through activities that remained within the margins of a society – for example, building alternative organizations like food co-ops and free schools, as Critchley proposes – or by creating a massive socialist state, an idea which has been completely discredited and could never gain any kind of widespread appeal.

Instead, he felt that we had to employ modes of organization that built on the best traditions of revolutionary movements – such as the Paris Commune of 1871 and the collectives formed in revolutionary Spain in 1936 – an overlooked tradition that enshrines decision-making at the municipal level in neighborhood assemblies that increasingly challenge the hegemony of the nation-state. And because he was an American, he was also looking for a way to build upon traditions that would appeal to an American public, such as the committees of the American Revolution or the New England town-meeting style of democracy that is still active in places like Vermont today. These are the ideas he discusses in the essays in this book.

Bookchin is known for his writings on ecology, hierarchy and capitalism — collected under the umbrella of what he called ‘social ecology’. How do the ideas in this book emerge from the concept of social ecology?

One of Murray’s central contributions to Left thought was his insistence, back in the early 1960s, that all ecological problems are social problems. Social ecology starts from this premise: that we will never properly address climate change, the poisoning of the earth with pesticides and the myriad of other ecological problems that are increasingly undermining the ecological stability of the planet, until we address underlying issues of domination and hierarchy. This includes domination based on gender, ethnicity, race, and sexual orientation, as well as class distinctions.

Eradicating those forms of oppression immediately raises the question of how to organize society in a fashion that maximizes freedom. So the ideas about popular assemblies presented in this book grow naturally out of the philosophy of social ecology. They address the question of how to advance revolutionary change that will achieve true freedom for individuals while still allowing for the social organization necessary to live harmoniously with each other and the natural world.

Popular assemblies are part of the renewed importance that Bookchin gives to municipal organization. When and why did Bookchin begin to focus on these issues?

Murray had begun thinking about these issues early on, in the 1960s. He addresses them even in 1968, in his essay, “The Forms of Freedom.” But this question, of political and social organization, especially consumed Murray in the last two decades of his life, when the essays we’ve collected here were written. When Murray saw the predicament of the alter-globalization movement and similar movements, he asserted that simply engaging in “festivals of the oppressed” failed to offer a structural framework within which to address deep-seated social and economic inequities.

He had spent more than three decades working within the anarchist tradition but had come to feel that anarchism didn’t deal adequately with the question of power and political organization. Instead, he advocated a localized, grassroots democratic social philosophy, which he called Communalism. He called the political expression of that idea Libertarian Municipalism. He believed that by developing and institutionalizing general assemblies on the local level we could re-empower ourselves as active citizens, charting the course of our communities and economies and confederating with other local assemblies.

He envisioned this self-government as becoming increasingly strong as it solidified into a “dual power,” that would challenge, and ultimately dismantle, the power of the nation-state. Murray occasionally used the term Communalism interchangeably with Libertarian Municipalism but generally he thought of Communalism as the umbrella political philosophy and Libertarian Municipalism as its political practice, which entails the running of candidates on the municipal level, municipalizing the economy and the like.

It seems that recent movements like Occupy Wall Street and the indignados movement resemble some of these ideas. What would Bookchin have thought of them and of developments like the Podemos phenomenon in Spain?

Murray would have been excited to see the indignados movement, in part because of his admiration for revolutionary Spain in 1936, which informs his book The Spanish Anarchists. And he would have appreciated the impulses behind Occupy and the citizen revolts across the Mideast. But I think he would have anticipated many of the troubles that beset Occupy. This includes the problems inherent in the use of consensus, and the mistaken belief by many within the Occupy movement that the act of creating protest encampments can be equated with the actual reclaiming of popular power, which Murray believed had to be institutionalized in local assemblies within communities in order to create a true political force.

I think it’s hard not to be excited by political events in Greece and Spain, where new, more democratic parties are coming to power. But Murray would have warned that these kinds of national parties are almost always forced to compromise their ideals to the point where they no longer represent significant change. He warned about that when the German Greens came to power in the early 1980s and he was proven correct. They started out calling themselves a “non-party party” but they ended up in a coalition with the conservative CDU (the Christian Democratic Union) in order to maintain power.

That is why he differentiates between “statecraft,” his name for traditional representative government, which never really invests power with the citizenry, and “politics,” a term that he wants to reclaim to signify directly democratic self-management by popular assemblies that are networked together to make decisions that affect larger regions. So that’s one reason why we’re happy about the publication of this book at this time; it directly speaks to the impulses of millions of people around the world who are demanding direct democracy instead of representative democracy, and helps point a way to achieving that goal.

As direct democracy has become a rallying cry, your father’s work has enjoyed a resurgence. But even before that, he was considered one of the most important anarchist and libertarian thinkers of the last century. What is it like to be his daughter?

I guess there’s more than one answer to that question. One is political—most of my adult life has been spent as an investigative journalist, but since my father died in 2006, I’ve felt increasingly that it’s my job to help project his ideas forward, that we are living in a time when the need for political change has never been greater, and that his work has a major contribution to make to the Left.

The other answer is more personal – I had an unusual childhood because of both of my parents’ activism and deep involvement with so many ideas. Murray was self-educated – he never went to college – so he taught himself everything from physics to philosophy and had an especially remarkable command of history. He had an innate desire to contextualize everything, and that made him very engaging to be around. And my mother, Bea, was a mathematician, and a dialectical thinker in her own right. Her intellect and sensibilities made her an important sounding board for him, which helped him elaborate ideas.

They were extremely close; even though they were only married for 12 years, they continued to live together for decades, right up until the early 1990s. So there were endless discussions and strong intellectual and emotional bonds that made it a wonderfully vibrant home to be raised in. And because I grew up in the 1960s and 1970s, it was also a very active time politically, so our house was full of interesting people all the time, which was great fun for a kid.

Ultimately, the thing I appreciate about both my parents is their tremendous love of ideas – their lifelong commitment to great ideas that at their root form the possibility for political transformation – and their desire to act on them.

Could you say something about what Murray was like as a person?

While it’s hard to believe when reading some of his polemics, Murray was immensely warm and caring to the people around him. He took a supportive interest in his students at the Institute for Social Ecology and he was a very social creature; he loved good company.

In many of his writings, especially in his earlier work, like the essays in Post-Scarcity Anarchism, and of course The Ecology of Freedom, but also in later pieces like Social Anarchism or Lifestyle Anarchism, you can feel the intensity of his utopian vision, his belief that human beings deserve to live in societies that maximize creativity and freedom. As a person he was deeply moved by human suffering and very empathetic, even sentimental at times. At the same time, he was profoundly committed to rational thought and felt strongly that human beings had an obligation to create a rational society.

As with all thinkers that produce work that spans over decades, your father’s thinking modified with the passing of time. How do you explain this?

Murray was constantly studying, evaluating, and reassessing. He allowed his theories to evolve organically and dialectically and didn’t hold on to set theoretical doctrines, be they Marxist or anarchist. On the other hand, Murray wasn’t immune from making mistakes. So, for example, while I agreed with his critique of “lifestyle” anarchism (in his book Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm, published in 1995), I think there were stylistic errors that made his tone more polarizing than it needed to be and that may have made it harder for some undecided anarchists to adopt his point of view.

But I think that now, twenty years later, his critique has stood the test of time not only with respect to “lifestyle” anarchism but anarchism per se and that Communalism can be seen, in a sense, as a logical progression that addresses organizational lacunae in anarchism. I hope that anarchists who read this new collection of essays will see Communalism as a natural outgrowth of anarchism and view Murray’s critique of the failures of anarchism in the context of his search for a potent instrument for revolutionary change.

Why do you think Murray adopted what some people viewed as a harsh tone in his book ‘Social Anarchism or Lifestyle Anarchism’?

Murray had spent a lifetime explaining why the irrationalities of capitalism could only be countered by an organized social movement and here was a vocal group of anarchists dismissing that goal in favor of an individualist, anti-technology, primitivist politics, which Murray found as irrational as capitalism itself.

So, if his tone was unforgiving, it’s because he was desperately trying to rescue the social dimension of anarchism. Murray was also unsparing in his critique of deep ecology—for example in his adamant assertion, long before others dared to say so, that deep ecology was a fundamentally misanthropic, anti-rational political philosophy. There were many in the anarchist and the deep ecology movements who were unable to answer his criticisms of those ideologies. So some of these adversaries resorted to personal attacks.

In his book Recovering Bookchin: Social Ecology and the Crises of Our Times, Andy Price of Sheffield Hallam University in England does an excellent job of analyzing Murray’s critiques with respect to anarchism and deep ecology and unmasks the efforts to caricaturize him by some members of those movements. Price’s book is a very fine treatment of those issues, and also happens to serve as a great introduction to Murray’s ideas.

What do you view as Murray’s most important teaching?

The necessity of dialectical thinking – that to really know a thing you have to see it in its full development, not statically, not as it “is” but rather as it has the potential to “become.” That hierarchy and capitalism weren’t inevitable developments and that a legacy of freedom has always existed alongside the legacy of domination. That it’s our job as human beings capable of rational thought to try to develop an ethics and social structure that maximizes freedom.

What about his most relevant achievement?

On a very basic level, his introduction of ecology as a political category was extraordinary. He was fifty years ahead of his time in saying unequivocally that capitalism was incompatible with living in harmony with the natural world, a concept that key activists today such as Naomi Klein have taken up and popularized. He also was ahead of his time in critiquing the Left from a Leftist perspective, insisting that traditional Marxism, with its focus on the proletariat as a hegemonic class and its economic reductionism, had to be abandoned in favor of a more sweeping framework for social change.

But even more important, I think, was his desire to develop a unified social theory grounded in philosophy. In other words, he was searching for an objective foundation for an ethical society. That led him to immerse himself in history, anthropology, and even in biology and the sciences, all in the service of advancing the idea that mutual aid, complementarity, and other concepts that predominate in natural evolution point to the notion that human beings are capable of using their rationality to live in harmony with each other and the natural world—that we are capable of creating what he called “free nature.” And in this sense I would agree with you that he was one of the most original thinkers of the twentieth century.

Recently Bookchin’s name has come up in connection with the Kurdish autonomy movement. Can you tell us a bit about his role in influencing Kurdish resistance and their social forms of organization?

Right now the Kurds in parts of Turkey and northern Syria are engaged in one of the most daring and innovative efforts in the world to employ directly democratic decision-making in their politics. Two years before Murray died in 2006, he was contacted by Abdullah Öcalan, the imprisoned leader of the Kurdish resistance. While they never had a chance to engage in a direct dialogue, Öcalan did undertake a serious study of Murray’s work, reading seminal books like The Ecology of Freedom and From Urbanization to Cities.

As a result, Öcalan abandoned his Marxist-Leninist approach to social revolution in favor of Murray’s non-statist, libertarian municipalist approach, adapting Murray’s ideas and developing his own into what he called Democratic Confederalism. We see these ideas at work now in many Kurdish communities in Turkey and in the Rojava region in northern Syria, including in Kobani, where Kurdish forces battled and ultimately drove out the Islamic State from the city after 134 days of fighting.

These towns are remarkable for instituting the kind of directly democratic councils that empower every member of the community regardless of ethnicity, gender or religion. They have embraced the principles of democratic decision-making, ecological stewardship, and equality and representation for ethnic minorities and for women, who now constitute 40 percent of every decision-making body. They’ve instituted freedom of speech and in many cases municipalized their economies. Importantly, they view Kurdish autonomy as inseparable from creating a liberatory, non-capitalist society for all, and they have created their own autonomous zones, which stand as a true challenge to the nation-state.

This kind of self-government is a model not just for the region but for the world. I wish Murray, who not only believed so strongly in the libertarian municipalist model, but also in the Kurdish struggle for autonomy, had lived long enough to see it.

In your introduction to the book, you point out that Murray’s influence has also been felt within the practices and politics of new social movements. What do you think is his legacy for social movements and what is your aim with respect to this new publication?

I think that features of Murray’s thought are evident in a wide range of current political and social theorizing, for example in the insightful work of theorists like David Harvey and Marina Sitrin. My co-editor Blair Taylor, a PhD candidate in the Politics Department at the New School for Social Research, specializes in the history of new social movements and has observed that these movements have already embraced many of Murray’s ideas, even if sometimes unknowingly. You see this in the use of affinity groups, spokes-councils, and other forms of directly democratic organizing; in the sensitivity to matters of domination and hierarchy; and in the understanding of pre-figurative politics—that is, that we must live the values in our movement that we want to achieve in a new society.

These are all concepts that Murray introduced in the 1970s. You see these ideas at work also in the transition towns movement and on the streets when protesters are asked by reporters: “What do you want?” and they respond, “Direct democracy.” I think that it’s exciting that his work is being discussed by people like David Harvey and David Graeber and rediscovered by a new generation. What I hope is that the social movements taking shape across the globe will consider using the ideas in this book as a way of reclaiming popular power on the municipal level, so that we can institutionalize the political change necessary to move us from the realm of protest to that of social transformation—to a self-managed society and a liberated future.

Federico Venturini is an activist-researcher working on social ecology and urban social movements. He is currently a PhD candidate at the School of Geography, University of Leeds, and a member of the Transnational Institute of Social Ecology.

 

http://roarmag.org/2015/02/bookchin-interview-social-ecology/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+roarmag+%28ROAR+Magazine%29

Nimoy transformed the classic intellectual, self-questioning archetype into a dashing “Star Trek” action hero

How Leonard Nimoy made Spock an American Jewish icon
Leonard Nimoy as Spock on “Star Trek” (Credit: CBS)

I suspect I can speak for most American Jews when I say: Before I’d watched even a single episode of “Star Trek,” I knew about Leonard Nimoy.

Although there are plenty of Jews who have achieved fame and esteem in American culture, only a handful have their Jewishness explicitly intertwined with their larger cultural image. Much of the difference has to do with how frequently the celebrity in question alludes to their heritage within their body of work. This explains why, for instance, a comedian like Adam Sandler is widely identified as Jewish while Andrew Dice Clay is not, or how pitcher Sandy Koufax became famous as a “Jewish athlete” after skipping Game 1 of the 1965 World Series to observe Yom Kippur, while wide receiver Julian Edelman’s Hebraic heritage has remained more obscure.

With this context in mind, it becomes much easier to understand how Nimoy became an iconic figure in the American Jewish community. Take Nimoy’s explanation of the origin of the famous Vulcan salute, courtesy of a 2000 interview with the Baltimore Sun: “In the [Jewish] blessing, the Kohanim (a high priest of a Hebrew tribe) makes the gesture with both hands, and it struck me as a very magical and mystical moment. I taught myself how to do it without even knowing what it meant, and later I inserted it into ‘Star Trek.’”

Nimoy’s public celebration of his own Jewishness extends far beyond this literal gesture. He has openly discussed experiencing anti-Semitism in early-20th century Boston, speaking Yiddish to his Ukrainian grandparents, and pursuing an acting career in large part due to his Jewish heritage. “I became an actor, I’m convinced, because I found a home in a play about a Jewish family just like mine,” Nimoy told Abigail Pogrebin in “Stars of David: Prominent Jews Talk About Being Jewish.” “Clifford Odets’s ‘Awake and Sing.’ I was seventeen years old, cast in this local production, with some pretty good amateur and semiprofessional actors, playing this teenage kid in this Jewish family that was so much like mine it was amazing.”



Significantly, Nimoy did not disregard his Jewishness after becoming a star. Even after his depiction of Mr. Spock became famous throughout the world, Nimoy continued to actively participate in Jewish causes, from fighting to preserve the Yiddish language and narrating a documentary about Hasidic Jews to publishing a Kabbalah-inspired book of photography, The Shekhina Project, which explored “the feminine essence of God.” He even called for peace in Israel by drawing on the mythology from “Star Trek,” recalling an episode in which “two men, half black, half white, are the last survivors of their peoples who have been at war with each other for thousands of years, yet the Enterprise crew could find no differences separating these two raging men.” The message, he wisely intuited, was that “assigning blame over all other priorities is self-defeating. Myth can be a snare. The two sides need our help to evade the snare and search for a way to compromise.”

As we pay our respects to Nimoy’s life and legacy, his status as an American Jewish icon is important in two ways. The first, and by far most pressing, is socio-political: As anti-Semitism continues to rise in American colleges and throughout the world at large, it is important to acknowledge beloved cultural figures who not only came from a Jewish background, but who allowed their heritage to influence their work and continued to participate in Jewish causes throughout their lives. When you consider the frequency with which American Jews will either downplay their Jewishness (e.g., Andy Samberg) or primarily use it as grounds for cracking jokes at the expense of Jews (e.g., Matt Stone of “South Park”), Nimoy’s legacy as an outspokenly pro-Jewish Jew is particularly meaningful right now.

In addition to this, however, there is the simple fact that Nimoy presented American Jews with an archetype that was at once fresh and traditional. The trope of the intellectual, self-questioning Jew has been around for as long as there have been Chosen People, and yet Nimoy managed to transmogrify that character into something exotic and adventurous. Nimoy’s Mr. Spock was a creature driven by logic and a thirst for knowledge, yes, but he was also an action hero and idealist when circumstances demanded it. For the countless Jews who, like me, grew up as nerds and social outcasts, it was always inspiring to see a famous Jewish actor play a character who was at once so much like us and yet flung far enough across the universe to allow us temporary escape from our realities. This may not be the most topically relevant of Nimoy’s legacies, but my guess is that it will be his most lasting as long as there are Jewish children who yearn to learn more, whether by turning inward into their own heritage or casting their gaze upon the distant stars.

Matthew Rozsa is a Ph.D. student in history at Lehigh University as well as a political columnist. His editorials have been published in “The Morning Call,” “The Express-Times,” “The Newark Star-Ledger,” “The Baltimore Sun,” and various college newspapers and blogs. He actively encourages people to reach out to him at matt.rozsa@gmail.com.

 

US economy in deflation and slump


By Andre Damon
28 February 2015

The US Commerce Department said Friday that Gross Domestic Product, the broadest measure of economic output, grew by only 2.2 percent in the fourth quarter of last year, down from an earlier estimate of 2.6 percent and sharply below the pace of earlier quarters.

This followed the announcement by the Labor Department on Thursday that consumer prices fell by 0.7 percent, the largest fall since December 2008. Over the past 12 months, prices have fallen by 0.1 percent, the first annual deflation figure posted since October 2009.

These figures belie official claims that the US is an economically healthy counterbalance to the overall slump and deflation that now encompasses most of the world. In fact, US economic growth, hampered by an enormous impoverishment of the working class in the years following the financial crisis, remains far below previous historical averages.

On Tuesday, Standard and Poor’s said that its Case-Shiller Index showed that home prices grew by 4.6 percent over the past year, the slowest housing price increase since 2011. “The housing recovery is faltering,” David Blitzer, chairman of the index committee at S&P Dow Jones, told the Los Angeles Times. “Before the recession, anytime housing starts were at their current level… the economy was in a recession.”

Meanwhile the number of people in the US newly filing for jobless benefits jumped by 31,000 to 313,000 last week, in the largest increase since December 2013, reflecting a series of mass layoffs and business closures announced this month.

On February 4, office supply retailer Staples announced plans to buy its rival Office Depot, which would result in the closure of up to a thousand stores and tens of thousands of layoffs. The next day, electronics retailer RadioShack filed for bankruptcy, saying it plans to close up to 3,500 stores.

Mass layoffs have also been announced at online marketplace eBay, credit card company American Express, the oilfield services companies Schlumberger and Baker Hughes, as well as the retailers J.C. Penney and Macy’s.

These disastrous economic developments come even as the Dow Jones Industrial Average hit an all-time record of 18,140 on Wednesday, though it retreated slightly later in the week. Worldwide, the FTSE All-World Index is near its highest level in history.

The rise in global stock indices reflects the satisfaction of global financial markets with the pledge by the Syriza-led Greek government to impose austerity measures dictated by the EU, as well as indications by Federal Reserve Chairwoman Janet Yellen in congressional testimony this week that the US central bank is likely to delay raising the federal funds rate in response to recent negative economic figures.

The US federal funds rate has been at essentially zero since the beginning of 2009. Together with the central bank’s multi-trillion-dollar “quantitative easing” program, this has helped to inflate a massive stock market bubble that has seen the NASDAQ triple in value since 2009.

This enormous growth in asset values has taken place despite the relatively depressed state of the US economy, which grew at an annual rate of 2.4 percent in 2014. During the entire economic “recovery” since 2010, the US economy has grown at an average rate of 2.2 percent. By comparison, the US economy grew at an average rate of 3.2 percent in the 1990s and 4.2 percent in the 1950s.

The ongoing stock market bubble has led to a vast enrichment of the financial elite: the number of billionaires in the US has nearly doubled since 2009. The financial oligarchy, however, has not used its ever-growing wealth for productive investment, as shown by the decline in business spending in the fourth quarter of last year. Instead, it has either hoarded it or used it to buy real estate, art and luxury goods.

On Thursday, Bloomberg reported that global sales of “ultra-premium” vehicles, costing $100,000 or more, surged by 154 percent, compared with a 36 percent increase in global vehicle sales overall. The report noted, “Rolls-Royce registrations have risen almost five-fold. Almost 10,000 new Bentleys cruised onto the streets last year, a 122 percent increase over 2009, while Lamborghini rode a 50 percent increase to pass the 2,000 vehicle mark.”

Meanwhile, the number of people in poverty in the US remains at record levels. In January, the Southern Education Foundation reported that, for the first time in at least half a century, low-income children make up the majority of students enrolled in American public schools.

To the extent that jobs are being created in the US, they are largely part-time, contingent and low-wage, replacing higher-wage jobs eliminated during the 2008 crash. A report published last year by the National Employment Law Project found that while American companies have added 1.85 million low-wage jobs since 2009, they have eliminated 1.83 million medium-wage and high-wage jobs.

Earlier this month, Jim Clifton, head of the Gallup polling agency, denounced claims that the US unemployment rate has returned to “normal” levels. “There’s no other way to say this,” he wrote. “The official unemployment rate, which cruelly overlooks the suffering of the long-term and often permanently unemployed as well as the depressingly underemployed, amounts to a Big Lie.”

“Gallup defines a good job as 30+ hours per week for an organization that provides a regular paycheck. Right now, the US is delivering at a staggeringly low rate of 44%, which is the number of full-time jobs as a percent of the adult population, 18 years and older.”

Clifton added, “I hear all the time that ‘unemployment is greatly reduced, but the people aren’t feeling it.’ When the media, talking heads, the White House and Wall Street start reporting the truth—the percent of Americans in good jobs; jobs that are full time and real—then we will quit wondering why Americans aren’t ‘feeling’ something that doesn’t remotely reflect the reality in their lives.”

 

http://www.wsws.org/en/articles/2015/02/28/econ-f28.html

Google has captured your mind

Searches reveal who we are and how we think. True intellectual privacy requires safeguarding these records


The Justice Department’s subpoena was straightforward enough. It directed Google to disclose to the U.S. government every search query that had been entered into its search engine for a two-month period, and to disclose every Internet address that could be accessed from the search engine. Google refused to comply. And so on Wednesday, January 18, 2006, the Department of Justice filed a court motion in California, seeking an order that would force Google to comply with a similar request—a random sample of a million URLs from its search engine database, along with the text of every “search string entered onto Google’s search engine over a one-week period.” The Justice Department was interested in how many Internet users were looking for pornography, and it thought that analyzing the search queries of ordinary Internet users was the best way to figure this out. Google, which had a 45-percent market share at the time, was not the only search engine to receive the subpoena. The Justice Department also requested search records from AOL, Yahoo!, and Microsoft. Only Google declined the initial request and opposed it, which is the only reason we are aware that the secret request was ever made in the first place.

The government’s request for massive amounts of search history from ordinary users requires some explanation. It has to do with the federal government’s interest in online pornography, which has a long history, at least in Internet time. In 1995 Time Magazine ran its famous “Cyberporn” cover, depicting a shocked young boy staring into a computer monitor, his eyes wide, his mouth agape, and his skin illuminated by the eerie glow of the screen. The cover was part of a national panic about online pornography, to which Congress responded by passing the federal Communications Decency Act (CDA) the following year. This infamous law prevented all websites from publishing “patently offensive” content without first verifying the age and identity of their readers, and prohibited the sending of indecent communications to anyone under eighteen. It tried to transform the Internet into a public space that was always fit for children by default.


The CDA prompted massive protests (and litigation) charging the government with censorship. The Supreme Court agreed in the landmark case of Reno v. ACLU (1997), which struck down the CDA’s decency provisions. In his opinion for the Court, Justice John Paul Stevens explained that regulating the content of Internet expression is no different from regulating the content of newspapers. The case is arguably the most significant free speech decision of the past half century, since it expanded the full protection of the First Amendment to Internet expression, rather than treating the Internet like television or radio, whose content may be regulated more extensively. In language that might sound dated, Justice Stevens announced a principle that has endured: “Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Internet, in other words, was now an essential forum for free speech.

In the aftermath of Reno, Congress gave up on policing Internet indecency, but continued to focus on child protection. In 1998 it passed the Child Online Protection Act, also known as COPA. COPA punished those who engaged in web communications made “for commercial purposes” that were accessible and “harmful to minors” with a $50,000 fine and prison terms of up to six months. After extensive litigation, the Supreme Court in Ashcroft v. ACLU (2004) upheld a preliminary injunction preventing the government from enforcing the law. The Court reasoned that the government hadn’t proved that an outright ban of “harmful to minors” material was necessary. It suggested that Congress could have instead required the use of blocking or filtering software, which would have had less of an impact on free speech than a ban, and it remanded the case for further proceedings. Back in the lower court, the government wanted to conduct a study showing that filtering would be ineffective, which is why it wanted the search queries from Google and the other search engine companies in 2006.

Judge James Ware ruled on the subpoena on March 17, 2006, and denied most of the government’s demands. He granted the release of only 5 percent of the requested randomly selected anonymous search results and none of the actual search queries. Much of the reason for approving only a tiny sample of the de-identified search requests had to do with privacy. Google had not made a direct privacy argument, on the grounds that de-identified search queries were not “personal information,” but it argued that disclosure of the records would expose its trade secrets and harm its goodwill from users who believed that their searches were confidential. Judge Ware accepted this oddly phrased privacy claim, and added one of his own that Google had missed. The judge explained that Google users have a privacy interest in the confidentiality of their searches because a user’s identity could be reconstructed from their queries and because disclosure of such queries could lead to embarrassment (searches for, e.g., pornography or abortions) or criminal liability (searches for, e.g., “bomb placement white house”). He also placed the list of disclosed website addresses under a protective order to safeguard Google’s trade secrets.

Two facets of Judge Ware’s short opinion in the “Search Subpoena Case” are noteworthy. First, the judge was quite correct that even search requests that have had their users’ identities removed are not anonymous, as it is surprisingly easy to re-identify this kind of data. The queries we enter into search engines like Google often unwittingly reveal our identities. Most commonly, we search our own names, out of vanity, curiosity, or to discover if there are false or embarrassing facts or images of us online. But other parts of our searches can reveal our identities as well. A few months after the Search Subpoena Case, AOL made public twenty million search queries from 650,000 of its search engine users. AOL was hoping this disclosure would help researchers and had replaced its users’ names with numerical IDs to protect their privacy. But two New York Times reporters showed just how easy it could be to re-identify them. They tracked down AOL user number 4417749 and identified her as Thelma Arnold, a sixty-two-year-old widow in Lilburn, Georgia. Thelma had made hundreds of searches including “numb fingers,” “60 single men,” and “dog that urinates on everything.” The New York Times reporters used old-fashioned investigative techniques, but modern sophisticated computer science tools make re-identification of such information even easier. One such technique allowed computer scientists to re-identify users in the Netflix movie-watching database, which that company made public to researchers in 2006.

The second interesting facet of the Search Subpoena Case is its theory of privacy. Google won because the disclosure threatened its trade secrets (a commercial privacy, of sorts) and its business goodwill (which relied on its users believing that their searches were private). Judge Ware suggested that a more direct kind of user privacy was at stake, but was not specific beyond some generalized fear of embarrassment (echoing the old theory of tort privacy) or criminal prosecution (evoking the “reasonable expectation of privacy” theme from criminal law). Most people no doubt have an intuitive sense that their Internet searches are “private,” but neither our intuitions nor the Search Subpoena Case tell us why. This is a common problem in discussions of privacy. We often use the word “privacy” without being clear about what we mean or why it matters. We can do better.

Internet searches implicate our intellectual privacy. We use tools like Google Search to make sense of the world, and intellectual privacy is needed when we are making sense of the world. Our curiosity is essential, and it should be unfettered. As I’ll show in this chapter, search queries implicate a special kind of intellectual privacy, which is the freedom of thought.

Freedom of thought and belief is the core of our intellectual privacy. This freedom is the defining characteristic of a free society and our most cherished civil liberty. This right encompasses the range of thoughts and beliefs that a person might hold or develop, dealing with matters that are trivial and important, secular and profane. And it protects the individual’s thoughts from scrutiny or coercion by anyone, whether a government official or a private actor such as an employer, a friend, or a spouse. At the level of law, if there is any constitutional right that is absolute, it is this one, which is the precondition for other political and religious rights guaranteed by the Western tradition. Yet curiously, although freedom of thought is widely regarded as our most important civil liberty, it has not been protected in our law as much as other rights, in part because it has been very difficult for the state or others to monitor thoughts and beliefs even if they wanted to.

Freedom of Thought and Intellectual Privacy

In 1913 the eminent Anglo-Irish historian J. B. Bury published A History of Freedom of Thought, in which he surveyed the importance of freedom of thought in the Western tradition, from the ancient Greeks to the twentieth century. According to Bury, the conclusion that individuals should have an absolute right to their beliefs free of state or other forms of coercion “is the most important ever reached by men.” Bury was not the only scholar to have observed that freedom of thought (or belief, or conscience) is at the core of Western civil liberties. Recognitions of this sort are commonplace and have been made by many of our greatest minds. René Descartes’s maxim, “I think, therefore I am,” identifies the power of individual thought at the core of our existence. John Milton praised in Areopagitica “the liberty to know, to utter, and to argue freely according to conscience, above all [other] liberties.”

In the nineteenth century, John Stuart Mill developed a broad notion of freedom of thought as an essential element of his theory of human liberty, which comprised “the inward domain of consciousness; demanding liberty of conscience, in the most comprehensive sense; liberty of thought and feeling; absolute freedom of opinion and sentiment on all subjects, practical or speculative, scientific, moral, or theological.” In Mill’s view, free thought was inextricably linked to and mutually dependent upon free speech, with the two concepts being a part of a broader idea of political liberty. Moreover, Mill recognized that private parties as well as the state could chill free expression and thought.

Law in Britain and America has embraced the central importance of free thought as the civil liberty on which all others depend; people who cannot think for themselves, after all, are incapable of self-government. But it was not always so. In the Middle Ages, the crime of “constructive treason” made “imagining the death of the king” an offense punishable by death. Thomas Jefferson later reflected that this crime “had drawn the Blood of the best and honestest Men in the Kingdom.” The impulse for political uniformity was related to the impulse for religious uniformity, whose story is one of martyrdom and burnings at the stake. As Supreme Court Justice William O. Douglas put it in 1963:

While kings were fearful of treason, theologians were bent on stamping out heresy. . . . The Reformation is associated with Martin Luther. But prior to him it broke out many times only to be crushed. When in time the Protestants gained control, they tried to crush the Catholics; and when the Catholics gained the upper hand, they ferreted out the Protestants. Many devices were used. Heretical books were destroyed and heretics were burned at the stake or banished. The rack, the thumbscrew, the wheel on which men were stretched, these were part of the paraphernalia.

Thankfully, the excesses of such a dangerous government power were recognized over the centuries, and thought crimes were abolished. Thus, William Blackstone’s influential Commentaries stressed the importance of the common law protection for the freedom of thought and inquiry, even under a system that allowed subsequent punishment for seditious and other kinds of dangerous speech. Blackstone explained that:

Neither is any restraint hereby laid upon freedom of thought or inquiry: liberty of private sentiment is still left; the disseminating, or making public, of bad sentiments, destructive of the ends of society, is the crime which society corrects. A man (says a fine writer on this subject) may be allowed to keep poisons in his closet, but not publicly to vend them as cordials.

Even during a time when English law allowed civil and criminal punishment for many kinds of speech that would be protected today, including blasphemy, obscenity, seditious libel, and vocal criticism of the government, jurists recognized the importance of free thought and gave it special, separate protection in both the legal and cultural traditions.

The poisons metaphor Blackstone used, for example, was adapted from Jonathan Swift’s Gulliver’s Travels, from a line that the King of Brobdingnag delivers to Gulliver. Blackstone’s treatment of freedom of thought was itself adopted by Joseph Story in his own Commentaries, the leading American treatise on constitutional law in the early Republic. Thomas Jefferson and James Madison also embraced freedom of thought. Jefferson’s famous Virginia Statute for Religious Freedom enshrined religious liberty around the declaration that “Almighty God hath created the mind free,” and James Madison forcefully called for freedom of thought and conscience in his Memorial and Remonstrance Against Religious Assessments.

Freedom of thought thus came to be protected directly as a prohibition on state coercion of truth or belief. It was one of a handful of rights protected by the original Constitution even before the ratification of the Bill of Rights. Article VI provides that “state and federal legislators, as well as officers of the United States, shall be bound by oath or affirmation, to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States.” This provision, known as the “religious test clause,” ensured that religious orthodoxy could not be imposed as a requirement for governance, a further protection of the freedom of thought (or, in this case, its closely related cousin, the freedom of conscience). The Constitution also gives special protection against the crime of treason, by defining it to exclude thought crimes and providing special evidentiary protections:

Treason against the United States, shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.

By eliminating religious tests and by defining the crime of treason as one of guilty actions rather than merely guilty minds, the Constitution was thus steadfastly part of the tradition giving exceptional protection to the freedom of thought.

Nevertheless, even when governments could not directly coerce the uniformity of beliefs, a person’s thoughts remained relevant to both law and social control. A person’s thoughts could reveal political or religious disloyalty, or they could be relevant to a defendant’s mental state in committing a crime or other legal wrong. And while thoughts could not be revealed directly, they could be discovered by indirect means. For example, thoughts could be inferred either from a person’s testimony or confessions, or by access to their papers and diaries. But both the English common law and the American Bill of Rights came to protect against these intrusions into the freedom of the mind as well.

The most direct way to obtain knowledge about a person’s thoughts would be to haul him before a magistrate as a witness and ask him under penalty of law. The English ecclesiastical courts used the “oath ex officio” for precisely this purpose. But as historian Leonard Levy has explained, this practice came under assault in Britain as invading the freedom of thought and belief. As the eminent jurist Lord Coke later declared, “no free man should be compelled to answer for his secret thoughts and opinions.” The practice of the oath was ultimately abolished in England in the cases of John Lilburne and John Entick, men who were political dissidents rather than religious heretics.

In the new United States, the Fifth Amendment guarantee that “No person . . . shall be compelled in any criminal case to be a witness against himself” can also be seen as a resounding rejection of this sort of practice in favor of the freedom of thought. Law of course evolves, and current Fifth Amendment doctrine focuses on the consequences of a confession rather than on mental privacy, but the origins of the Fifth Amendment are part of a broad commitment to freedom of thought that runs through our law. The late criminal law scholar William Stuntz has shown that this tradition was not merely a procedural protection for all, but a substantive limitation on the power of the state to force its enemies to reveal their unpopular or heretical thoughts. As he put the point colorfully, “[i]t is no coincidence that the privilege’s origins read like a catalogue of religious and political persecution.”

Another way to obtain a person’s thoughts would be by reading their diaries or other papers. Consider the Fourth Amendment, which protects a person from unreasonable searches and seizures by the police:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Today we think about the Fourth Amendment as providing protection for the home and the person chiefly against unreasonable searches for contraband like guns or drugs. But the Fourth Amendment’s origins come not from drug cases but as a bulwark against intellectual surveillance by the state. In the eighteenth century, the English Crown had sought to quash political and religious dissent through the use of “general warrants,” legal documents that gave agents of the Crown the authority to search the homes of suspected dissidents for incriminating papers.

Perhaps the most infamous dissident of the time was John Wilkes. Wilkes was a progressive critic of Crown policy and a political rogue whose public tribulations, wit, and famed personal ugliness made him a celebrity throughout the English-speaking world. Wilkes was the editor of a progressive newspaper, the North Briton, a member of Parliament, and an outspoken critic of government policy. He was deeply critical of the 1763 Treaty of Paris ending the Seven Years War with France, a conflict known in North America as the French and Indian War. Wilkes’s damning articles angered King George, who ordered the arrest of Wilkes and his co-publishers of the North Briton, authorizing general warrants to search their papers for evidence of treason and sedition. The government ransacked numerous private homes and printers’ shops, scrutinizing personal papers for any signs of incriminating evidence. In all, forty-nine people were arrested, and Wilkes himself was charged with seditious libel, prompting a long and inconclusive legal battle of suits and countersuits.

By taking a stand against the king and intrusive searches, Wilkes became a cause célèbre among Britons at home and in the colonies. This was particularly true for many American colonists, whose own objections to British tax policy following the Treaty of Paris culminated in the American Revolution. The rebellious colonists drew from the Wilkes case the importance of political dissent as well as the need to protect dissenting citizens from unreasonable (and politically motivated) searches and seizures.

The Fourth Amendment was intended to address this problem by inscribing legal protection for “persons, houses, papers, and effects” into the Bill of Rights. A government that could not search the homes and read the papers of its citizens would be less able to engage in intellectual tyranny and enforce intellectual orthodoxy. In a pre-electronic world, the Fourth Amendment kept out the state, while trespass and other property laws kept private parties out of our homes, paper, and effects.

The Fourth and Fifth Amendments thus protect the freedom of thought at their core. As Stuntz explains, the early English cases establishing these principles were “classic First Amendment cases in a system with no First Amendment.” Even in a legal regime without protection for dissidents who expressed unpopular political or religious opinions, the English system protected those dissidents in their private beliefs, as well as the papers and other documents that might reveal those beliefs.

In American law, an even stronger protection for freedom of thought can be found in the First Amendment. Although the First Amendment text speaks of free speech, press, and assembly, the freedom of thought is unquestionably at the core of these guarantees, and courts and scholars have consistently recognized this fact. In fact, the freedom of thought and belief is the closest thing to an absolute right guaranteed by the Constitution. The Supreme Court first recognized it in the 1878 Mormon polygamy case of Reynolds v. United States, which ruled that although law could regulate religiously inspired actions such as polygamy, it was powerless to control “mere religious belief and opinions.” Freedom of thought in secular matters was identified by Justices Holmes and Brandeis as part of their dissenting tradition in free speech cases in the 1910s and 1920s. Holmes declared crisply in United States v. Schwimmer that “if there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.” And in his dissent in the Fourth Amendment wiretapping case of Olmstead v. United States, Brandeis argued that the framers of the Constitution “sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations.” Brandeis’s dissent in Olmstead adapted his theory of tort privacy into federal constitutional law around the principle of freedom of thought.

Freedom of thought became permanently enshrined in constitutional law during a series of mid-twentieth century cases that charted the contours of the modern First Amendment. In Palko v. Connecticut, Justice Cardozo characterized freedom of thought as “the matrix, the indispensable condition, of nearly every other form of freedom.” And in a series of cases involving Jehovah’s Witnesses, the Court developed a theory of the First Amendment under which the rights of free thought, speech, press, and exercise of religion were placed in a “preferred position.” Freedom of thought was central to this new theory of the First Amendment, exemplified by Justice Jackson’s opinion in West Virginia State Board of Education v. Barnette, which invalidated a state regulation requiring that public school children salute the flag each morning. Jackson declared that:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein. . . .

[The flag-salute statute] transcends constitutional limitations on [legislative] power and invades the sphere of intellect and spirit which it is the purpose of the First Amendment to our Constitution to reserve from all official control.

Modern cases continue to reflect this legacy. The Court has repeatedly declared that the constitutional guarantee of freedom of thought is at the foundation of what it means to have a free society. In particular, freedom of thought has been invoked as a principal justification for preventing punishment based upon possessing or reading dangerous media. Thus, the government cannot punish a person for merely possessing unpopular or dangerous books or images based upon their content. As Alexander Meiklejohn put it succinctly, the First Amendment protects, first and foremost, “the thinking process of the community.”

Freedom of thought thus remains, as it has for centuries, the foundation of the Anglo-American tradition of civil liberties. It is also the core of intellectual privacy.

“The New Home of Mind”

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind.” So began “A Declaration of Independence of Cyberspace,” a 1996 manifesto responding to the Communications Decency Act and other attempts by government to regulate the online world and stamp out indecency. The Declaration’s author was John Perry Barlow, a founder of the influential Electronic Frontier Foundation and a former lyricist for the Grateful Dead. Barlow argued that “[c]yberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.” This definition of the Internet as a realm of pure thought was quickly followed by an affirmation of the importance of the freedom of thought. Barlow insisted that in Cyberspace “anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” The Declaration concluded on the same theme: “We will spread ourselves across the Planet so that no one can arrest our thoughts. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.”

In his Declaration, Barlow joined a tradition of many (including many of the most important thinkers and creators of the digital world) who have expressed the idea that networked computing can be a place of “thought itself.” As early as 1960, the great computing visionary J. C. R. Licklider imagined that “in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.” Tim Berners-Lee, the architect of the World Wide Web, envisioned his creation as one that would bring “the workings of society closer to the workings of our minds.”

Barlow’s utopian demand that governments leave the electronic realm alone was only partially successful. The Communications Decency Act was, as we have seen, struck down by the Supreme Court, but today many laws regulate the Internet, such as the U.S. Digital Millennium Copyright Act and the EU Data Retention Directive. The Internet has become more (and less) than Barlow’s utopian vision—a place of business as well as of thinking. But Barlow’s description of the Internet as a world of the mind remains resonant today.

It is undeniable that today millions of people use computers as aids to their thinking. In the digital age, computers are an essential and intertwined supplement to our thoughts and our memories. Discussing Licklider’s prophecy from half a century ago, legal scholar Tim Wu notes that virtually every computer “program we use is a type of thinking aid—whether the task is to remember things (an address book), to organize prose (a word processor), or to keep track of friends (social network software).” These technologies have become not just aids to thought but also part of the thinking process itself. In the past, we invented paper and books, and then sound and video recordings to preserve knowledge and make it easier for us as individuals and societies to remember information. Digital technologies have made remembering even easier, by providing cheap storage, inexpensive retrieval, and global reach. Consider the Kindle, a cheap electronic reader that can hold 1,100 books, or even cheaper external hard drives that can hold hundreds of hours of high-definition video in a box the size of a paperback novel.

Even the words we use to describe our digital products and experiences reflect our understanding that computers and cyberspace are devices and places of the mind. IBM has famously called its laptops “ThinkPads,” and many of us use “smartphones.” Other technologies have been named in ways that affirm their status as tools of the mind—notebooks, ultrabooks, tablets, and browsers. Apple Computer produces iPads and MacBooks and has long sold its products under the slogan, “Think Different.” Google historian John Battelle has famously termed Google’s search records to be a “database of intentions.” Google’s own slogan for its web browser Chrome is “browse the web as fast as you think,” revealing how web browsing itself is not just a form of reading, but a kind of thinking itself. My point here is not just that common usage or marketing slogans connect Internet use to thinking, but a more important one: Our use of these words reflects a reality. We are increasingly using digital technologies not just as aids to our memories but also as an essential part of the ways we think.

Search engines in particular bear a special connection to the processes of thought. How many of us have asked a factual question among friends, only for smartphones to appear as our friends race to see who can look up the answer the fastest? In private, we use search engines to learn about the world. If you have a moment, pull up your own search history on your phone, tablet, or computer, and recall your past queries. It usually makes for interesting reading—a history of your thoughts and wonderings.

But the ease with which we can pull up such a transcript reveals another fundamental feature of digital technologies—they are designed to create records of their use. Think again about the profile a search engine like Google has for you. A transcript of search queries and links followed is a close approximation to a transcript of the operation of your mind. In the logs of search engine companies are vast repositories of intellectual wonderings, questions asked, and mental whims followed. Similar logs exist for Internet service providers and other new technology companies. And the data contained in such logs is eagerly sought by government and private entities interested in monitoring intellectual activity, whether for behavioral advertising, crime and terrorism prevention, or possibly other, more sinister purposes.

Searching Is Thinking

With these two points in mind—the importance of freedom of thought and the idea of the Internet as a place where thought occurs—we can now return to the Google Search Subpoena with which this chapter opened. Judge Ware’s opinion revealed an intuitive understanding that the disclosure of search records was threatening to privacy, but was not clear about what kind of privacy was involved or why it matters.

Intellectual privacy, in particular the freedom of thought, supplies the answer to this problem. We use search engines to learn about and make sense of the world, to answer our questions, and as aids to our thinking. Searching, then, in a very real sense is a kind of thinking. And we have a long tradition of protecting the privacy and confidentiality of our thoughts from the scrutiny of others. It is precisely because of the importance of search records to human thought that the Justice Department wanted to access the records. But if our search records were more public, we wouldn’t merely be exposed to embarrassment like Thelma Arnold of Lilburn, Georgia. We would be less likely to search for unpopular or deviant or dangerous topics. Yet in a free society, we need to be able to think freely about any ideas, no matter how dangerous or unpopular. If we care about freedom of thought—and our political institutions are built on the assumption that we do—we should care about the privacy of electronic records that reveal our thoughts. Search records illustrate the point well, but this idea is not just limited to that one important technology. My argument about freedom of thought in the digital age is this: Any technology that we use in our thinking implicates our intellectual privacy, and if we want to preserve our ability to think fearlessly, free of monitoring, interference, or repercussion, we should imbue these technologies with a meaningful measure of intellectual privacy.

Excerpted from “Intellectual Privacy: Rethinking Civil Liberties in the Digital Age” by Neil Richards. Published by Oxford University Press. Copyright 2015 by Neil Richards. Reprinted with permission of the publisher. All rights reserved.

Neil Richards is a Professor of Law at Washington University, where he teaches and writes about privacy, free speech, and the digital revolution.

Record global stock prices reflect growth of financial parasitism


By Nick Beams
27 February 2015

This week has seen global stock prices approach record highs under conditions where the German government took the unprecedented step of issuing bonds at a negative yield. The two interrelated developments point to an explosive growth of financial parasitism.

World equity markets are close to their highest levels in history, as measured by the FTSE All-World Index. The FTSE 100, Britain’s index of leading shares, surpassed its previous high, achieved at the end of 1999 on the eve of the bursting of the dot-com share market bubble, to join Wall Street’s Dow and the German DAX in record territory.

This is an extraordinary phenomenon given that large areas of the global economy, most notably Europe and Japan, are either stagnant or in recession; China and the so-called “emerging markets,” which have been the main centre of global growth, are slowing down; and the much-vaunted US growth is still below historical trends.

All of the major reports on the state of the world economy in the recent period—from the World Bank, the International Monetary Fund, and the Organisation for Economic Cooperation and Development—have downgraded previous growth projections and warned that the economy is increasingly characterised by a vicious cycle.

Investment has fallen to historic lows because of the lack of demand and profit opportunities. The decline in investment is leading, in turn, to a further decline in demand and profit expectations.

Notwithstanding these powerful trends, the stock markets continue to power on, providing a graphic demonstration of the degree to which the accumulation of wealth by global financial elites has become divorced from the actual process of production.

One of the main factors boosting Wall Street in recent days was the estimation, following the testimony by US Federal Reserve Chairwoman Janet Yellen to the US Congress, that the central bank was in no hurry to start lifting official interest rates, ensuring that the flow of cheap money into financial markets would continue.

European markets also took heart from Yellen’s remarks and were boosted as well by the approach of the European Central Bank’s money-printing “quantitative easing” (QE) program, slated to begin next week.

In addition, they were warmed by the news that the European Union and the financial oligarchy it represents had obtained the Syriza-led Greek government’s abject capitulation, including the renunciation of the pseudo-left party’s election promises to fight the EU’s austerity program. The Greek developments, ensuring the further impoverishment of the Greek working class, were a source of satisfaction not only because of their implications for Greece, but also for the message they sent across Europe that any demand for an end to austerity would meet the same fate.

The emergence of negative bond yields, underscored by the German government’s issuance of five-year notes at a negative rate, signifies that the bond market is being transformed into a gigantic Ponzi scheme, in which the ability to make money depends on the continuous flow of new cash—largely emanating from central banks—into the financial system. It is increasingly operating according to the “bigger fool” principle. While it may be considered foolish to invest in a high-priced bond that offers a negative yield, speculators bet that there is an even bigger fool who will buy the bond when its price rises even further.
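The arithmetic behind a negative yield is simple: whenever the price paid for a bond exceeds everything the bond will ever pay back, the annualized return is guaranteed to be negative. The following is a minimal sketch using illustrative figures, not the actual terms of the German five-year issue:

```python
# Illustrative only: hypothetical prices and maturities, not the
# actual terms of the German five-year notes mentioned above.

def annualized_yield(price, face_value, years):
    """Yield to maturity of a zero-coupon bond held to maturity.

    Solves price = face_value / (1 + y) ** years for y.
    """
    return (face_value / price) ** (1 / years) - 1

# Pay 100.50 today for a note that repays exactly 100.00 in five years:
y = annualized_yield(price=100.50, face_value=100.00, years=5)
print(f"{y:.4%}")  # a small but guaranteed annual loss

# The "bigger fool" bet: the loss at maturity is locked in, but if the
# price climbs further (say to 101.00), an early seller still profits.
profit_if_sold = 101.00 - 100.50
print(profit_if_sold)
```

The sign of the yield depends only on whether the purchase price exceeds the total payout; the speculative logic described in the paragraph above works regardless, so long as a later buyer can be found who will accept an even lower yield.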

When negative yields first made their appearance, it was thought they were a transitory phenomenon, the result of the search for a “safe haven” for cash. But now they are becoming a permanent feature of the financial landscape.

Besides Germany, five-year bonds issued by Denmark, Finland, the Netherlands and Austria, as well as corporate bonds issued by Nestlé and Shell, have come with negative yields.

The immediate impetus for the growth in negative yields is the decision by the European Central Bank to begin bond purchases from March 1 at the rate of €60 billion per month for at least the next 16 months.

Speaking to the Financial Times, Divyang Shah, a global strategist at IFR Markets, said: “It should not be ruled out that, once the ECB QE program begins, we will see German 10-year yields trade through zero and into negative territory.” Swiss 14-year bonds were already trading at negative yields, so such an outcome could not be ruled out, he said, adding that “instead of safe haven-related demand we have QE-related demand.”

The yield on the German 10-year bond yesterday touched a record low of 0.28 percent, with 10-year yields in France, Portugal and Spain also falling to record levels.

The truly explosive growth of financial parasitism, expressed in the negative yield phenomenon, is highlighted by data compiled by JPMorgan Chase. It estimates that in the past year alone the value of negative-yielding bonds in Europe has escalated exponentially—from $20 billion to $2 trillion, a hundred-fold increase. It is calculated that at least one-third of all European bonds now show negative yields. Nothing remotely resembling this has been seen in economic history.

One of its immediate effects is to destroy the financial modus operandi of pension funds and insurance companies. Throughout their history, they have invested in government debt in order to secure a steady and safe rate of return over the long term, often under legal requirements to do so. However, this strategy is increasingly unviable, and in order to meet their commitments, they are being forced to make riskier investments or join the bond market speculation.

The rise of financial parasitism has decisive economic and political implications. As the whole of economic history demonstrates, and the events of the past decade have again revealed, the maintenance of this house of cards cannot continue indefinitely.

A major bankruptcy produced by a sudden shift in the value of one or another of the major currencies (such as took place earlier this year with the dramatic leap in the value of the Swiss franc), a corporate default, a sudden shift in sentiment due to an interest rate rise, or any number of seemingly accidental events can trigger a chain reaction that brings the entire rotten financial edifice crashing down.

Furthermore, because trillions of dollars have been injected into the financial system by central banks over the past six years, the consequences have the potential to be even more serious than those that followed the collapse of Lehman Brothers in September 2008.

The consequent closures, sackings and mass unemployment and the intensification of the assault on social services will fuel the eruption of social and political struggles that will be met with an immediate and ruthless response from the financial oligarchy. That is the lesson of Greece.

Acutely aware that they have no economic solution to the crisis of the profit system, the ruling elites in every country have spent the past six years boosting police and security forces to deal with the inevitable outbreak of mass struggles.

http://www.wsws.org/en/articles/2015/02/27/econ-f27.html

Child poverty at devastating levels in US cities and states


By Patrick Martin 

26 February 2015

Reports issued over the past week suggest that child poverty in America is more widespread than at any time in the last 50 years. For all the claims of economic “recovery” in the United States, the reality for the new generation of the working class is one of ever-deeper social deprivation.

The Annie E. Casey Foundation publishes the annual Kids Count report on child poverty, which was the source of state-by-state reports issued last week. These reports use the new Supplemental Poverty Measure, developed by the Census Bureau, which includes the impact of government benefit programs like food stamps and unemployment compensation, as well as state social programs, and accounts for variations in the cost of living as well.

The result is a picture of the United States with a markedly different regional distribution of child poverty than usually presented. The state with the highest child poverty rate is California, the most populous, at a staggering 27 percent, followed by neighboring Arizona and Nevada, each at 22 percent.

The child poverty rate of California is much higher than figures previously reported, because the cost of living in the state is higher. Moreover, many of the poorest immigrant families are not enrolled in federal social programs because they are undocumented or face language barriers. The same conditions apply in Arizona and Nevada.

The other major centers of child poverty in the United States are the long-impoverished states of the rural Deep South, and the more recently devastated states of the industrial Midwest, where conditions of life for the working class have deteriorated the most rapidly over the past ten years.

It is a remarkable fact, documented in a separate report issued February 23 by the Catholic charity Bread for the World, that African-American child poverty rates are actually worse in the Midwest states of Iowa, Ohio, Michigan, Wisconsin and Indiana than in the traditionally poorest parts of the Deep South, including Mississippi, Louisiana and Alabama.

Several of the Midwest states have replaced Mississippi at the bottom of one or another social index. Iowa has the worst poverty rate for African-American children. Indiana has the highest rate of teens attempting or seriously considering suicide.

The most remarkable transformation is in Michigan, once the center of American industry with the highest working-class standard of living of any state. Michigan is the only major US state whose overall poverty rate is actually worse now than in 1960.

This half-century of decline is a devastating indictment of the failure of the American trade unions, which have collaborated in the systematic impoverishment of the working class in what was once their undisputed stronghold.

The United Auto Workers, in particular, did nothing as dozens of plants were shut down and cities like Detroit, Pontiac, Flint and Saginaw were laid waste by the auto bosses. Meanwhile, the UAW became a billion-dollar business, its executives controlling tens of billions in pension and benefit funds, while the rank-and-file workers lost their jobs, their homes and their livelihoods.

In Detroit, once the industrial capital of the world’s richest country, the child poverty rate was 59 percent in 2012, up from 44.3 percent in 2006.

The social catastrophe facing the population in Detroit also exposes the role of the Democratic Party and the organizations around it that have for decades promoted identity politics—according to which race, and not class, is the fundamental social category in America. The city, like many throughout the region, has been run by a layer of black politicians who have overseen the shocking decay in the social position of African-American workers and youth. (See “Half a million children in poverty in Michigan”.)

Cleveland, also devastated by steel and auto plant closings, was the only other major US city with a child poverty rate of over 50 percent.

The Detroit figure undoubtedly understates the social catastrophe in the Motor City, since it comes from a study concluded before the state-imposed emergency manager put the city into bankruptcy in the summer of 2013, leading to drastic cuts in wages, benefits and pensions for city workers and retirees.

Wayne County, which includes Detroit, had the highest child poverty rate of any of Michigan’s 83 counties. Southeast Michigan, which includes the entire Detroit metropolitan area, endured an overall rise in child poverty rates from 18.9 percent in 2006 to 27 percent in 2012.

The state-by-state reports issued by Kids Count were accompanied by a press release by the Casey Foundation noting that the child poverty rate in the United States would nearly double, from 18 percent to 33 percent, without social programs like food stamps, school meals, Medicaid and the Earned Income Tax Credit.

This was issued as a warning of the effect of widely expected budget cuts in these critical programs. It coincided with the first hearing before the House Agriculture Committee on plans to attack the federal food stamp program by imposing work requirements and other restrictions to limit eligibility.

The food stamp program has already suffered through two rounds of budget cuts agreed on in bipartisan deals between the Obama White House and congressional Republicans, which cut $1 billion and $5 billion respectively from the program. Now that Republicans control both houses of Congress, they will press for even more sweeping cuts in a program that helps feed 47 million low-income people, many of them children.

 

http://www.wsws.org/en/articles/2015/02/26/cpov-f26.html