Everyone I know is brokenhearted

by Josh Ellis

Everyone I know is brokenhearted.

All the genuinely smart, talented, funny people I know seem to be miserable these days. You feel it on Twitter more than Facebook, because Facebook is where you go to do your performance art where you pretend to be a hip, urbane person with the most awesomest friends and the best relationships and the very best lunches ever. Facebook is surface; Twitter is subtext, and judging by what I’ve seen, the subtext is aching sadness.

I’m not immune to this. I don’t remember ever feeling this miserable and depressed in my life, this sense of futility that makes you wish you’d simply go numb and not care anymore. I think a lot about killing myself these days. Don’t worry, I’m not going to do it and this isn’t a cry for help. But I wake up and think: fuck, more of this? Really? How much more? And is it really worth it?

In my case, much of it stems from my divorce and the collapse of the next relationship I had. But that’s not really the cause. I think that those relationships were bulwarks, charms against the dark I’ve felt growing in this world for a long time now. When I was in love, the world outside didn’t matter so much. But without it, there is nothing keeping the wolf from the door.

It didn’t use to be like this when I was a kid. I’m not getting nostalgic here, or pretending that my adolescence and my twenties were some kind of soft-focused Golden Age. Life sucked when I was young. I was unhappy then too. But there was always the sense that it was just a temporary thing, that if I stuck it out eventually the world was going to get better — become awesome, in fact.

But the reality is that the three generations who ended the 20th century, the Boomers, their Generation X children, and Generation Y, have architected a Western civilization that’s kind of a shit show. Being born in 1978, I fall at either the tail end of Gen X or the beginning of Gen Y, depending on how you look at it. I became an adolescent at the time Nirvana was ushering in a decade of “slacker” ideology, as the pundits liked to put it. But the reality is that I didn’t know a whole lot of actual slackers in the 1990s. I did know a lot of people who found themselves disillusioned with the materialism of the 1980s and what we saw as the failed rhetoric of the Sixties generation, who were all about peace and love right until the time they put on suits and ties and figured out how to divide up the world. I knew a lot of people who weren’t very interested in that path.

The joke, of course, is that every generation kills the thing they love. The hippies became yuppies; Gen X talked a lot about the revolution, and then went and got themselves some venture capital and started laying into place the oversaturated, paranoid world we live in now. A lot of them tried to tell themselves they were still punk as fuck, but it’s hard to morally reconcile the thing where you listen to Fugazi on the way to your job where you help find new ways to trick people into giving up their data to advertisers. Most people don’t even bother. They just compartmentalize.

And I’m not blaming them. The world came apart at the end of the 90s, when the World Trade Center did. My buddy Brent and I were talking about this one night last year — about how the end of the 90s looked like revolution. Everybody was talking about Naomi Klein and anti-consumerism and people in Seattle were rioting over the WTO. Hell, a major motion picture company put out Fight Club, which is about as unsubtle an attack on consumer corporate capitalism as you can get. We were poised on the brink of something. You could feel it.

And then the World Trade Center went down. And all of a sudden calling yourself an “anticapitalist terrorist” was no longer a cool posture to psych yourself up for protest. It became something you might go to jail for — or worse, to one of the Black Camps on some shithole island somewhere. Corporate capitalism became conflated somehow with patriotism. And the idea that the things you own end up defining you became quaint, as ridiculous spoken aloud as “tune in, turn on, drop out”. In fact, it became a positive: if you bought the right laptop, the right smartphone, the right backpack, exciting strangers would want to have sex with you!

It’s no wonder that Gen X began seeking the largely mythological stability of their forebears; to stop fucking around and eating mushrooms at the Rage Against The Machine show, and to try and root yourself. Get a decent car — something you can pass off as utilitarian — and a solid career. Put your babies in Black Flag onesies, but make sure their stroller is more high tech than anything mankind ever took to the Moon, because that wolf is always at the door. And buy yourself a house, because property is always valuable. Even if you don’t have the credit, because there’s this thing called a “subprime mortgage” you can get now!

But the world changed again. And kept changing. So now you’ve got this degree that’s worth fuck-all, a house that’s worth more as scrap lumber than as a substantial investment, and you’re either going to lose your job or have to do the work of two people, because there’s a recession on. Except they keep saying the recession ended, so why are you still working twice as hard for the same amount of money?

We started two wars, only one of them even marginally justifiable, and thousands and thousands of people died. Some of them were Americans, most of them weren’t. The world hated us again. It’s psychically oppressive to realize you’re the bad guy.

Of course, for a lot of the world, America had always been the bad guy…but we didn’t really know that before, because we didn’t have the Internet in our pocket, to be pulled out at every lunch break and before the meal came and when the episode of Scrubs on TV dragged a little, and before bed. We were encouraged to immerse ourselves in the endless flow of information, to become better informed, because knowing more about the world made us better people.

And maybe it did, but it also made us haunted people.

Yesterday morning, when I woke up, I clicked on a video in my Twitter feed that showed mutilated children being dragged from the streets of Gaza. And I started sobbing — just sobbing, sitting there in my bed with the covers around my waist, saying “Fuck, fuck, fuck,” over and over to the empty room. Dead children, torn to bits. And then it was time for…what? Get up, eat my cereal, go about my day? Every day?

So you’re haunted, and you’re outraged, and you go on Twitter and you go on Facebook and you change your avatar or your profile picture to a slogan somebody thoughtfully made for you, so that you can show the world that you’re watching, that you care, that it matters. But if you’re at all observant, you begin to realize after a while that it doesn’t matter; that your opinion matters for very little in the world. You voted for Obama, because Obama was about hope and change; except he seems to be mostly about hope and change for rich people, and not about hope at all for the people who are killed by American drones or who are locked away without trial in American internment camps or who are prosecuted because they stand up and tell the truth about their employers. There does seem to be a lot of hope and change in Fort Meade and Langley, though, where the NSA and CIA are given more and more leeway to spy on everyone in the world, including American citizens, not for what they’ve done but what they might do.

And the rest of the world? They keep making more dead children. They slaughter each other in the streets of Baghdad and Libya and Gaza and Tel Aviv; they slaughter each other in the hills of Syria; and, increasingly, they slaughter each other in American schools and movie theaters and college campuses.

And when you speak up about that — when you write to your Congressperson to say that you believe in, say, stricter control on the purchase of assault weapons, or limiting the rights of corporations to do astonishing environmental damage, or not sending billions of dollars to the kind of people who think it’s funny to launch missiles filled with flechette rounds into the middle of schools where children huddle together — you’re told that, no, you’re the fascist: that people have the right to defend themselves and make money, and that those rights trump your right to not be killed by some fucking lunatic when you’re waiting in line at Chipotle to grab a chicken burrito, and your right to not be able to light your tapwater on fire with a Zippo because of the chemicals in it, or not to end up in a grainy YouTube video while some demented religious fanatic hacks your head off with a rusty bayonet because your country — not you, but who’s counting — is the Great Satan.

And the music sucks. Dear God, the music sucks. Witless, vapid bullshit that makes the worst airheaded wannabe profundities of the grunge era look like the collected works of Thomas Locke. Half the songs on the radio aren’t anything more than a looped 808 beat and some dude grunting and occasionally talking about how he likes to fuck bitches in the ass. The other half are grown-ass adults singing about their stunted, adolescent romantic ideals and playing a goddamn washtub while dressed like extras from The Waltons.

The music sucks. The movies suck — I mean, they didn’t suck the first time they came out, in the 1980s, but the remakes and gritty reboots and decades-past-their-sell-by-date sequels suck. Indiana Jones is awesome, but nobody needs to see a geriatric Harrison Ford, lured out of retirement by the promise of building another mansion onto his mansion, running around with fucking Shia LaBeouf in the jungle. And besides, we’re all media experts now; we can spot the merchandising nods from the trailer all the way to the final credits. There’s no magic left. It’s just another company figuring out a way to suck the very last molecules of profit out of the things we cherish, because that’s what corporations do.

Everything is branded. Even people. People are “personal brands”, despite the fact that, by and large, you can’t figure out what most of them are actually even good for. They just exist to be snarky and post selfies and demand that you buy something, anything, with their picture on it.

You actually know who Kim Kardashian is. In an ideal world, you’d be as unaware of her existence as you are of the names of the Chinese kids who made the futurephone or featherweight laptop you’re almost certainly reading this on. In an ideal world, Kim Kardashian would have spent her life getting sport-fucked anonymously by hip-hop stars in some Bel Air mansion, run a salon, and either died of a coke overdose or Botox poisoning. There is no reason that her face and her life and her tits and her deathless thoughts needed to be foisted upon the world outside of the 90210 ZIP code. Except that somebody figured out that you could make money off showing people the car accident in slow motion, that people would watch that. Sure they will. People love to watch stupid people do stupid things. It makes them feel less stupid.

And the Internet.

We built this thing — I include myself in that because I started doing HTML in 1994 and was part of the generation who took to the new medium like water and have spent the majority of our adult lives creating it, to a greater or lesser degree — because we believed it would make things better for everyone. We believed it would give voice to the voiceless, hope to the hopeless, bring us all together, help us to understand and empathize and share with one another. We believed it could tear down the walls.

And in a lot of ways it has. But in just as many ways, it has driven us all insane. There’s an old story — I have no idea if it’s true — about monkeys who had the pleasure centers of their brains wired up to a button. Push it, Mr. Monkey, and you have an orgasm. And the monkeys did. They pushed the button and they pushed the button, until they forgot about eating and they forgot about drinking and sleeping and simply fell down and died.

What do you do when you first wake up? What do you do as soon as you get into work? After work? Before bed? Hell, some of us wake up in the night and check our feeds, terrified that we’ve missed out on something.

We do it because we are given that reward, that stimulus that tells us oooh, a new shiny! It’s the fourteenth Guardians Of The Galaxy trailer, with 200% more Rocket Raccoon! Some fucking null node in Portland made a portrait of every single character from Adventure Time out of bacon and Legos! And, maybe most poisonous, maybe most soul-crushing: somebody said something I don’t like that makes me feel frightened and threatened! It’s time to put on my superhero costume and forward unto battle!

Except it doesn’t matter. Because you’re not really changing anybody’s mind. How often does that little skirmish end with anybody changing their mind at all, even a little bit? Or does it just end with one of you invariably either blocking the other or saying something like “You know what, I’m going to stop now, because this is getting out of hand.”

Getting out of hand?

Everything they told you about how to live in the world when you were a kid is a lie. Education doesn’t matter, not even on paper. Being ethical doesn’t matter. Being a good person doesn’t matter. What matters now is that you’re endlessly capable of the hustle, of bringing in that long green, of being entertaining to enough people that somebody will want to give you money or fuck you or fund your startup. We’re all sharks now; if we stop swimming for just a little too long, we die. We lose followers. We’re lame. We’re not worth funding, or fucking. Because all that matters is the endless churn, the endless parade, the endless cycle of buying and trying to sell and being bought and sold by people who tell you that they’re your friends, man, not like those others. Microsoft is evil and Google is not evil, except when they are, but that’s not really important, and if you decide that maybe you’re tired of being reduced to nothing more than a potential lead for a sales pitch, like something out of a fucking David Mamet play, then you’re a hater and irrelevant and a Luddite. And besides, what would you do with yourself if you weren’t checking Facebook or playing Candy Crush Saga or watching some teenage dumbass smash his genitals on the side of a pool on YouTube? What the fuck would you even do, bro?

The comedian Bill Hicks used to do a bit where he invited the advertisers and marketers in his audience to kill themselves. He imagined them turning it into an ad campaign: “Oh, the righteous indignation dollar, that’s a good dollar, Bill’s smart to do that.” He laid out the futility of trying to escape: “I’m just caught in a fucking web,” he’d say.

And that’s where we are. You, me, we’re trapped, between being nothing more than consumers, every aspect of our lives quantified and turned into demographic data, or being fucking Amish cavemen drifting into increasing irrelevancy. Because it really does feel like there’s no middle ground anymore, doesn’t it? There’s no way to stay an active, informed citizen of the world without some motherfucker figuring out a way to squirm into your life to try and get a dollar out of you. Only fools expect something for free, and only bigger fools believe they’re anything other than a consumable or a consumer.

We didn’t get the William Gibson future where you can live like a stainless steel rat in the walls between the corporate enclaves, tearing at the system from within with your anarchy and your superior knowledge of Unix command lines. Now it’s just pissed off teenagers who blame you because their lives are going to suck a cock and billionaire thugs trying to sell you headphones and handbags, all to a soundtrack of some waterhead muttering “Bubble butt, bubble bubble bubble butt” over and over while a shite beat thumps in the background.

I know a lot of people who privately long for an apocalypse of some kind, a breakdown of the ancient Western code, because then they’d either be dead or free. How fucking horrifying is that?

But nobody pulls that trigger, because now we’ve all seen what apocalypses look like. We saw Manhattan in 2001 and New Orleans in 2005 and Thailand in 2004 and the Middle East pretty much any given day. Nobody wants to hate, because we’re pummeled with hate every day, by people who are too fucking stupid to understand that the world has passed them by as much as it’s passed by the dude in the Soundgarden t-shirt who still drives around singing along to “Fuck you, I won’t do what you tell me!” on his way to his dead-end job. The best lack all conviction, and the people who are full of passionate intensity? Fuck them. We’re all sick of their shit anyway.

And that’s where we are, and is it any goddamn wonder at all that the most profitable drugs sold in America for like a decade running have been antipsychotics? The world seems psychotic.

I feel like I need to figure this out, like figuring all of this out and finding new ways to live has become the most important thing I could possibly do, not just for myself and the people I love but for the entire human race. I don’t mean me alone — I’m far too self-loathing to have a messiah complex — but I feel like, for me, this is the best use of my time. Because the world is making me crazy and sad and wanting to just put a gun in my mouth, and it’s doing the same thing to a lot of people who shouldn’t have to feel this way.

I don’t believe anymore that the answer lies in more or better tech, or even awareness. I think the only thing that can save us is us. I think we need to find ways to tribe up again, to find each other and put our arms around each other and make that charm against the dark. I don’t mean in any hateful or exclusionary way, of course. But I think like minds need to pull together and pool our resources and rage against the dying of the light. And I do think rage is a component that’s necessary here: a final fundamental fed-up-ness with the bullshit and an unwillingness to give any more ground to the things that are doing us in. To stop being reasonable. To stop being well-behaved. Not to hate those who are hurting us with their greed and psychopathic self-interest, but to simply stop letting them do it. The best way to defeat an enemy is not to destroy them, but to make them irrelevant.

I don’t have the answers. I don’t know some truth that I can reveal to everyone. All I can do is hurt, and try to stop hurting, and try to help other people stop hurting. Maybe that’s all any of us can do. But isn’t that something worth devoting yourself to, more than building another retarded app that just puts more nonsense and bullshit into the world? Just finding people to love, and healing each other? I think it is.

Until I know more, I’ll just keep holding on. I won’t put the gun in my mouth. Because all of this sadness is worth it if there’s still hope. And I want to still have hope so badly. I still want to believe, in myself, and in you.

- See more at: http://zenarchery.com/2014/08/everyone-i-know-is-brokenhearted/

Recuperating Marcuse against a culture of cruelty

by James Anderson on July 29, 2014

Can an affective politics based on Marcuse’s pleasure principle help us overcome our culture of violence and prefigure relations of love and pleasure?

Image: Students look through a window marked with bullet holes in Isla Vista, California, on May 24, 2014, after 22-year-old Elliot Rodger shot, stabbed and killed multiple victims near the University of California Santa Barbara campus.

Herbert Marcuse, the Berlin-born theorist who started teaching at the University of California San Diego in 1965, and who died exactly 35 years ago today, provided a critique of modern domination that inspired student-worker uprisings in May 1968 and influenced the New Left, including students at the University of California.

His work also inspired counter-revolution.

As governor of California, intent on privatizing the state’s university system, Ronald Reagan referenced in disgust the “sexual orgies so vile that I cannot describe them to you,” referring to the free love counter-culture ethos elaborated early on in Eros and Civilization, Marcuse’s first major anti-capitalist critique, published in 1955, synthesizing Freudian and Marxian theory. Reagan reaffirmed the “naturalness and rightness of a vertical structuring of society,” and “the right of man to achieve above the capacity of his fellows” — a reactionary defense of existing order and hierarchy.

In a 1971 memo authored two months before his nomination for the Supreme Court, Lewis F. Powell echoed Reagan’s reactionary sentiments and told the US Chamber of Commerce that there must “be no hesitation to attack … the Marcuses and others who openly seek destruction of the enterprise system” — a system Marcuse understood as one of un-freedom.

In light of the counter-revolutionary successes after Reagan and Powell, Marcuse’s “philosophy of psychoanalysis” in Eros and Civilization must be repurposed to go beyond the new system of violence so as to prefigure relations of love and pleasure, not domination.

Neoliberalism and our “Culture of Cruelty”

Violence, a pain-causing process present wherever there is a gap between the actual and the potential for a person or people, pervades the social fabric in insidious ways. It becomes apparent only when relations of repression erupt in outbursts, and even then its root causes are rarely understood.

The killings in Isla Vista, near the University of California Santa Barbara campus, where 22-year-old Elliot Rodger stabbed three people to death and shot and killed three others on May 23 in a “day of retribution” after being — or feeling — sexually rejected, are repudiated as emblematic of gun violence or denounced as exemplars of misogynist culture.

However, analysis seldom digs deeper to unearth the violence embedded in the way we organize ourselves, our production and reproduction as a species. Commentary fails to engage with the repression induced by those oppressive social relations.

Marcuse termed this “surplus-repression,” referring to the organized domination in modern society over and above the basic level repression of instincts psychoanalyst Sigmund Freud believed necessary for civilization. That “surplus-repression” exists now in a more extreme form.

Neoliberalism, the contemporary form of capitalism, structures this “surplus-repression” and engenders what Henry Giroux suggested is a widespread “culture of cruelty,” which normalizes violence to such a degree that mass shootings recur regularly. Analyses of individual psychopathy and of real cultural problems abound, but the inquiries cut those acts “off from any larger systemic forces at work in society.”

Shootings like the one in Isla Vista are products of our “culture of cruelty,” but the insidious causes demand critique of “larger systemic forces at work,” as Giroux argued. This has to go beyond commentary calling for tighter gun laws and beyond feminist responses throwing light on the endemic misogyny that systematically dehumanizes women. Those analyses are apt but also insufficient, as is criticism without consideration for conditions of possibility.

To go beyond the “culture of cruelty” characteristic of neoliberalism requires organizing social movements in ways that reflect — or prefigure — the more just society we would like to see. A prefigurative political project, where the ends are in many ways immanent in the means, must cultivate política afectiva, an affective politics based on forging bonds of love and trust. This is the only way to break through the hegemony of neoliberal relations that forcefully binds us together while simultaneously wrenching us apart.

Systemic Neoliberal Domination and Alienation

Neoliberalism is a class project, advanced since the early 1970s, to consolidate wealth and social power. Money, Marxist analyst David Harvey argued, is a representation of the value of exploited social labor given greater priority under neoliberalism. It can be accumulated potentially ad infinitum, as opposed to other commodities like yachts — although a select few certainly try to acquire a lot of those too! Money, or capital generally, is essentially our own alienated labor power in symbolic form, which comes to exert a tremendous material power over that which it is supposed to represent. And it functions as a weapon enabling some to exert power over others.

As Marcuse averred, “domination is exercised by a particular group or individual in order to sustain and enhance itself in a privileged position.” But domination does not just happen. Its roots are in the social relations central to the current reproduction of our everyday lives.

Marx wrote more than a century ago that once a certain stage of capitalist production is reached, a capitalist must function “as capital personified,” as a slave to a system of violence, in control of the labor of others but also controlled by the prerogatives of capital, “value which can perform its own valorization process, an animated monster which begins to ‘work … as if its body were by love possessed.’”

The capitalist is beholden to the “performance principle,” “the prevailing historical form of the reality principle,” per Marcuse. Freud had earlier coined the concept of the “reality principle” to refer to the repressive organization of sexuality that subjects or sublimates our innate sexual instincts to “the primacy of genitality,” at the expense of powerful Eros that could allow for a radically different society. The “performance principle” presupposes particular forms of rationality for domination, and it stratifies society, Marcuse wrote, “according to the competitive economic performances of its members.”

Neoliberalism, a market rationality and “mode of public pedagogy,” represses Eros by reducing human relations to exchange. Neoliberal pedagogy posits us as self-interested individual actors out for our own self-aggrandizement through the ubiquity of market relations. Covert privatization, like increasing tuition and fees for higher education, reifies the neoliberal ethic in ways that make it appear natural. Use values must be converted into exchange values, and everything has a price, in this arrest of human potentials. The enforcement of what can be called the neoliberal performance principle teaches us to conceive of social problems as personal problems, either focusing on market-based solutions to systemic ills, or emphasizing individual responsibility while erasing the violence inscribed in the relations that result in transgressions like the Isla Vista murders.

Marcuse described repression in an age where “all domination assumes the form of administration,” and “sadistic principles, the capitalist exploiters, have been transformed into salaried members of a bureaucracy,” producing “pain, frustration [and] impotence of the individual” in the face of an immense apparatus.

To be sure, “structural violence,” or the “pervasive social inequality” defining the neoliberal age, “ultimately backed up by the threat of physical harm,” creates bureaucratic modes of managing social situations that, as David Graeber has pointed out, tend to negate the need to empathize with other people. Bureaucratic norms legitimate the “culture of cruelty” through the enforcement of administrative control and the negation of alternatives. “There is no alternative” to the new historical form of the reality principle, former UK Prime Minister Margaret Thatcher famously proclaimed.

Bureaucratic administration also reflects the restraints placed on Eros, the life instincts. Likewise, it exacerbates the effects of abstract labor, where people’s “labor is work for an apparatus which they do not control, which operates as an independent power to which the individual must submit if they want to live,” Marcuse proffered. This is “painful time, for alienated labor is absence of gratification, negation of the pleasure principle.”

As David Harvey recently argued in his presentation at the Crisis-Scapes conference in Athens, alienation is intrinsic in capitalist relations because workers “are alienated from the surplus value they produce,” while capitalists construct alienating, competitive relations among fellow workers. The workers remain estranged from the products of their labor, from nature and from the rest of social life. The processes are violent insofar as feelings “of deprivation and dispossession” are “internalized as a sense of loss and frustration of creative alternatives foregone,” Harvey theorized.

Of the multiple varieties of alienation, its active form “means to be overtly angry and hostile, to act out at being deprived or dispossessed of value and of the capacity to pursue valued ends,” Harvey explained. “Alienated beings vent their anger and hostility towards those identified as the enemy, sometimes without any clear definitive or rational reason,” or they sometimes may “seek to build a world in which alienation has either been abolished or rendered redeemable or reciprocal.”

Michael Hardt and Antonio Negri have theorized the alienating effects of “affective labor,” the “labor that produces or manipulates affects such as a feeling of ease, well-being, satisfaction, excitement or passion,” practiced in increasingly common service work, from fast food to retail sales. When the most intimate human doing must be performed for a (low) wage under coerced conditions, extreme alienation ensues. The hegemonic position of this form of labor becomes violent and volatile as a result.

Finance capital assumes added importance under neoliberalism, Hardt and Negri add. It is defined by “its high level of abstraction,” allowing it “to represent vast realms of labor” as it represses present and future Eros by commanding “the new forms of labor and their productivity” with contradictory effects.

Effects of Repressive Neoliberal Violence

Elliot Rodger, a young adult male from an affluent family, murdered six people in an attempt to exact revenge on women for not being attracted to him — what he said in a video was “an injustice, a crime,” which is why he would “take great pleasure in slaughtering” women, so that they would “finally see” that he was “the superior one, the true alpha male.”

In his 140-page manifesto, entitled “My Twisted World: The Story of Elliot Rodger,” he recounts a time in seventh grade when a girl he thought was pretty teased him. “I hated her so much,” he writes, and “I started to hate all girls because of this.” Toward the end of the diatribe Rodger declares there to be “no creature more evil and depraved than the human female,” he equates women with “a plague,” and he calls women “vicious, evil, barbaric animals” that “need to be treated as such” and “eradicated.”

Despite early humanizing accounts — like when, still a child, he first cried and then tried to console his friend after discovering that the friend’s mother had died of breast cancer — Rodger ends the manifesto by describing a recipe for a “pure world” to advance human civilization: women are to be killed in concentration camps — save for a few kept for artificial insemination and reproduction — while, “Sexuality will cease to exist. Love will cease to exist.”

Laurie Penny, arguing in the New Statesman that “Mental illness does not excuse misogyny,” assayed Rodger’s manifesto. She emphasized agency and argued popular discussion about mental health “has resisted any analysis of social issues,” which might be “convenient for those in power keen to overlook the structural causes of mental health problems such as alienation, prejudice, poverty and isolation.” However, Penny failed to explain the processes undergirding the “structural oppression” that produced a person — Rodger — who came to loathe women, express racist sentiments and desire the abolition of Eros.

It is not that “we should pity him” because he suffered from insanity, as Penny suggested the errant popular reaction has it. Rather, we should recognize that while we all have agency, we are also all mutilated by the extant reality. This new historical mode of the reality principle — the neoliberal performance principle — so violently represses the life instincts that it intensifies to an unprecedented degree the destructive forces initially conjured up to prevent full eroticization and gratification, which Freud believed would be at the expense of human survival.

Myriad popular examples of “surplus-repression” in the neoliberal era exist. It is evident in the conception of intercourse as just “a piece of body touching another piece of body — just as existentially meaningless as kissing,” as one young adult, part of the so-called “Millennials” generation, put it. The complete absorption of the sexual revolution by the powers of neoliberalism turned into a commodity what Marcuse considered an emergent movement for greater “self-sublimation of sexuality,” to constitute “highly civilized human relations” without the “repressive organization” of hitherto civilization.

The connections between commodification and the violence at Isla Vista have not been made explicit enough by most writers, even those aware of how neoliberal “surplus-repression” permits and promotes a “culture of cruelty,” replete with misogyny, predicated on domination.

Rebecca Solnit identified a “toxic brew in our culture right now that includes modeling masculinity and maleness … as violence, as domination, as entitlement, as control, and women as worthless, as disposable, as things men have the right to control, etc.”

Dexter Thomas, a scholar of hip-hop at Cornell University, assayed debates about gun control and mental health services that swirled around media outlets after the Isla Vista attacks, and argued that while those topics are worth discussing, letting “our anger culminate” in those arguments alone amounts to a “cop-out.” Thomas entreats us to confront the fear within ourselves and others and “talk about why we are so afraid to talk about race and gender.”

Attention to intersectionality, or rather, viewing “race, class, and gender as interlocking systems of oppression,” within an overarching “matrix of domination” as Patricia Hill Collins put it, marked a major advance in critical theory. But neoliberalism, as a rationality reflecting the violence embedded in the contradictory relationships of domination — humans dominating each other and resources — cannot be undone with discussion of gender, race or class alone.

The historically specific, repressive modification of instinctual drives through alienated labor, bureaucratic procedures and the “culture of cruelty” educating us all to amass “wealth, forgetting all but self,” in accord with prevailing principles, augments domination. It is more often than not directed against women, experienced disproportionately by people of color, felt differentially along frequently ignored (and nuanced) class lines, exacted on satellite nations subjected to the “underdevelopment of development” as their surplus is sucked up by wealthier states, and now lived by new peripheral populations in the world system as it morphs under neoliberalism.

Warfare championed by nations no longer able to dominate any way but militarily evinces the inevitable reliance on force to sustain endemic violence. That violence also animates the resurgence of xenophobic right-wing nationalists who demonize oppressed populations. From anti-immigrant protesters in California scaring buses of children fleeing areas in Central America decimated by decades of US policies, to Greece’s neo-Nazi Golden Dawn party murdering leftists, to Israeli demonstrators defending the shelling of concentrated civilian areas in Gaza and pelting peace activists with rocks, the brutalization of others in turn dehumanizes them, just as capitalists and financiers who derive profits from others’ labor do violence to themselves when they exploit those they expropriate.

What Marcuse, following Freud, saw as “the progressive weakening of Eros” — even and especially now with a culture so obsessed with such an impoverished mode of sexuality — leads to “the growth of aggressiveness,” evidenced everywhere. Individualization of problems pits all but the most powerful against each other. The sublimation of sexuality, extolled only in superficial forms amenable to capital, further militates against fuller eroticization that would betoken a world without repressive hierarchies.

In his manifesto, Rodgers observed the ways hierarchies shaped — and distorted — his worldview. “As my fourth grade year approached its end, my little nine-year old self had another revelation about how the world works,” he wrote. “I realized that there were hierarchies, that some people were better than others.”

Reflecting on the “common social structure” at his school, those hierarchical divisions, Rodgers admitted his self-esteem decreased because of his “mixed race” — his mother was Asian — and he concluded: “Life is a competition and a struggle,” empowering some at the expense of others.

Those hierarchies are not necessary, nor are they necessarily everlasting. Hierarchical divisions of labor — indeed, all alienated labor as we know it — perpetuate power-over others, sacrificing human potentials. That violence gives way to insecurity-fuelled internalized oppression and to the extroverted frustration witnessed when Rodgers carried out his hate-fuelled homicide in Southern California.

Prefigurative Politics and Erotic Recuperation

Important for our purposes, Marcuse noted emerging preconditions for “a qualitatively different, non-repressive reality principle” — intimating a project for societal self-realization of the “pleasure principle,” the instinctual drive for gratification bound up with erotogenic activity and libidinal desire.

Sublimation, Marcuse asserted, occurs only after repression of the pleasure principle by the reality principle. Following initial repressive modification, sublimation restrains sexuality while desexualizing most of the body, save for specific areas we commonly associate with sex. The neoliberal performance principle now enacts even tighter restriction of sexuality while amplifying “the primacy of genitality.”

The process has been intensified today to ensure the reproduction of labor power and a surplus population to repress wages — Marx’s “industrial reserve army” of the unemployed, conscripted today by “free trade” agreements facilitating the movement of capital across borders while restraining populations around the world put into greater competition with each other. With surplus destruction and hardship the world is made into an alienated object for domination, which in turn leads to domination over us all.

Prospects exist, however, for a “non-repressive sublimation,” according to Marcuse, through the “self-sublimation of sexuality,” presupposing “historical progress beyond the institutions of the performance principle, which in turn would release instinctual regression.” The process entails, for Marcuse, a re-sexualization of the entire organism, “the conceptual transformation of sexuality into Eros,” extending into relations with others throughout the entire social body.

Despite the seeming omnipresence of the libido in society, its modification by the neoliberal performance principle — the existing condition wherein our increasingly alienated labor (capital) comes to exert greater power over people — connotes a possible project for liberation through eroticization.

Asking us to “Think Hope, Think Crisis,” John Holloway recently explained how capitalism is imbued with its own instinctual drive for endless growth. Its immanent instability lies in the “inadequacy of its own domination,” because to continually reproduce itself, capital has to intensify its domination and exploitation of humanity, which inevitably results in resistance to constant aggression and “easily overflows into rebellion.”

Under the neoliberal performance principle, capital’s drive — our own alienated life instincts, our abstracted Eros turned against us — for domination increases, causing crisis. Holloway reminds us, however, that “we are the crisis of capital.” Our crisis-causing power-to points to possibilities for a liberating erotic project.

Recuperation of our instincts by cultivating the kinds of non-hierarchical and non-exploitative relations we would like to see throughout a society without “surplus-repression,” requires prefigurative and affective politics — a movement of movements of people looking to each other. This can be accomplished through mutual aid, by collective decision-making where people have a say in decisions being made in proportion to the degree they are impacted, and with conscious effort directed toward everyone’s gratification.

The “affective labor” Hardt and Negri averred as hegemonic sets the conditions for a new pleasure principle, but it also shows how capital “seeks increasingly to intervene directly into social reproduction and the way we communicate and commune,” as Max Haiven has explained. Although the importance of “affective labor” to today’s economy illustrates the inverted erotic urge — or simply the death drive — of neoliberalism intent on marketizing human relations for ceaseless capital accumulation, the increased emphasis on affective work intimates greater possibilities for a project aimed at recuperating libidinous, loving desires.

This project does not dispense entirely with Marcuse’s notion of the pleasure principle. It is rather an attempt to re-articulate it in such a way that promotes deeper social eroticization, taking that to encompass feelings of care, concern and a way of seeing oneself in the other — the way Marcuse understood narcissistic Eros and sexuality.

The reactivation of “narcissistic sexuality,” Marcuse maintained, “ceases to be a threat to culture and can itself lead to culture-building if the organism exists not as an instrument of alienated labor but as a subject of self-realization,” through “lasting and expanding libidinal relations because this expansion increases and intensifies the instinct’s gratification.”

After the shooting in a Colorado movie theater by a young man during the summer of 2012, Giroux noted that the “issue of violence in America goes far beyond the issue of gun control, and in actuality, when removed from a broader narrative about violence in the United States,” it deflects from raising key questions and elides reasons why “violence weaves through the culture like a highly charged electric current burning everything in its path.” Elsewhere, Giroux analyzed how “spectacles of consumerism, celebrity culture, hyped-up violence and a market-driven obsession with the self” have led to “the absence” — or evisceration — “of a formative culture necessary to construct questioning agents who are capable of dissent and collective action in an increasingly imperiled democracy.”

The “narcissistic sexuality” Marcuse theorized differs appreciably from the market-induced narcissistic subjectivities Giroux assailed. Those subjectivities are manufactured and controlled via “biopolitical production,” which Hardt and Negri explain encompasses added emphasis on “affective labor” as well as the new ways capital produces subjects. Our alienated subjectivities are thus dialectical insofar as we embody capital’s violence yet utilize our affective and communicative powers, if primarily in alienated and expropriated ways under subjugation by the neoliberal performance principle.

The dialectic demonstrates desires for recuperation — within, against and beyond the “culture of cruelty” that dominates today. Marcuse celebrated the “culture-building power of Eros” as “non-repressive sublimation: sexuality is neither deflected from nor blocked in its objective; rather, in attaining its objective, it transcends it to others, searching for fuller gratification.”

Creating New Subjectivities

To construct a formative democratic culture in and against neoliberalism means also “creating new subjectivities,” as Marina Sitrin and Dario Azzellini write in They Can’t Represent Us! — that is, transforming relationships based on “trust and a growing feeling of care and mutual responsibility, with the goal of building a movement and society based in a relationship of mutual trust and concern for the other and the collective.” Sitrin and Azzellini explain that “responsibility for the other and solidarity are basic conditions of a future society not grounded in capitalist principles” — and, of course, not subordinated to the affect-incarcerating neoliberal performance principle.

In an interview with Bryan Magee on “Modern Philosophy” years ago, Marcuse mentioned the primacy of patriarchal domination throughout history, and said that deployment of “socially conditioned” so-called “feminine qualities,” like care, receptivity and tenderness, “could be the beginning of a qualitatively different society, the very antithesis to male domination with its violent and brutal character.”

To be sure, Sitrin and Azzellini rightly stress that “relegating affective politics to the feminine realm” — as is often the case — “simply reinforces gendered roles in patriarchal societies.” In fact, “affective politics is not an expression of ‘maternal responsibility’ but a social responsibility to build a new society based on cooperation and mutual aid rather than competition.”

Contrary to the critique of Marcuse for downplaying the revolutionary potential of the working class, a re-articulation of his theory is also relevant for workers’ control initiatives, in which affective politics are challenging capitalist domination by altering existing relations.

These ongoing processes of people taking over their workplaces to run them in common, Sitrin and Azzellini explain, include recuperated workplaces like Hotel Bauen, a former four-star hotel in Buenos Aires that employees took collective control over after owners laid off workers and tried to shut the place down following the 2001 economic crisis. Similarly, workers at Republic Windows and Doors recuperated their factory when similar events unfolded in Chicago, reopening the place under democratic control in 2013, around the time the recuperated factory in Thessaloniki — Vio.Me — began production in Greece. Vio.Me now produces environmentally friendly cleaning products made with local, natural ingredients distributed through the solidarity economy — but it also produces new subjectivities with renewed agency and revitalized affects.

Recuperation complements autogestión, the process of “collective democratic self-management, especially within local communities, workplaces, cultural projects, and many other entities,” Sitrin and Azzellini averred. Examples of autogestión abound, from Zapatista Councils of Good Government in Chiapas to Communes for community-based organization and local control of production in Venezuela.

The formation of an alternative justice system “based on re-socialization, and not on retribution and vengeance,” in the San Luis Acatlán municipality in “Guerrero, one of the poorest, most violent, and most repressive states in Mexico,” constitutes another recuperative effort, as Sitrin and Azzellini describe it. These recuperative movements are inextricably bound with building affective bonds. They tend to promote relations otherwise suppressed or repressively modified by a performance principle designed to enlarge profits, not Eros.

In part interstitial, the movements illustrate prefigurative politics — “the end as process,” Sitrin and Azzellini termed it — consonant with Marcuse’s description of the pleasure principle dialectic, enriching the social organism over time by focusing on gratification now. Marcuse underscored “sustaining the entire body as subject-object of pleasure,” yet the robust construction of Eros through horizontalidad and política afectiva “calls for the refinement of the organism, the intensification of its receptivity, the growth of its sensuousness,” in more meaningful, humanizing ways. This refined “aim generates its own projects of realization,” including freedom from toil and violence, as Marcuse suggested, and this non-repressive “sublimation proceeds in a system of expanding and enduring libidinal relations, which are in themselves work relations.”

Often intended “to foster horizontal processes and subvert the boundaries of capitalist value-exchange,” Sitrin and Azzellini suggest that such recuperation, which frequently refers to reclaiming of common space and recovering historical memory, does not refer to “a nostalgic turn to an idealized past,” but “the recuperation of memory and history is,” rather, “a collective process meant to enrich the present and build a common future.”

Recuperation of the erotic and an expanded conception of the pleasure principle attuned to the richness of the life instincts, including our under-tapped affective capacities, must undergird any prefigurative politics aimed at dethroning neoliberalism as the reigning reality principle. This would address violence, and allow healthy sexuality to flourish.

Far from eliminating sexuality as we know it, such a project would allow for greater, meaningful love-making, in myriad ways. The underlying violence that drove Elliot Rodgers to seek vengeance would cease to rule, as would the general condition that, in Rodgers’ case as in the cases of countless others, precludes loving relationships and maims us all.

This project cannot be divorced from recuperation of doing through direct democratic control over production of the pleasurable things we collectively want or need. It should foster enjoyable exercise of our creative faculties through non-alienating work-as-play, part of broader “transformation of sexuality into Eros, and its extension to lasting libidinal work relations,” as Marcuse advanced.

Cruelty and domination in the present imply the opposite, love and liberation, which must be achieved — not by enduring the violence of the day while holding out for a better future, but through a prefigurative revolution that must be pleasurable now in every, expanded sense.

James Anderson is a doctoral candidate in the College of Mass Communication and Media Arts at Southern Illinois University-Carbondale. His interests include social movements, alternative media, critical theory, prefigurative politics, horizontalidad, political economy and praxis. He writes for Truthout, among other publications.

http://roarmag.org/2014/07/marcuse-neoliberalism-culture-violence/

Relevance of Hannah Arendt’s “A Report On The Banality Of Evil” To Gaza

 

Self-Deception, Lies And Stupidity
by HAMMAD SAID

 

“We want the nations of the world to know … and they should be ashamed” – Ben Gurion explaining the rationale for Eichmann’s trial [i]

“The triumph of the S.S. demands that the tortured victim allow himself to be led to the noose without protesting, that he renounce and abandon himself to the point of ceasing to affirm his identity. They know that the system which succeeds in destroying its victim before he mounts the scaffold … is incomparably the better for keeping the whole people in slavery.” [ii]

Hannah Arendt (henceforth HA), a philosopher, writer and academic of Jewish heritage who studied under such luminaries as Karl Jaspers and Martin Heidegger in pre-war Germany, went to Jerusalem in 1961 to cover the trial of Eichmann, one of the actors in the Final Solution, for the New Yorker magazine. Her account of the trial, published serially in the magazine at the time, later became the basis for the book “Eichmann in Jerusalem: A Report on the Banality of Evil.”

It is obvious from reading the book that HA’s account of the trial transcended the culpability of the single individual, and in fact implicated the whole of western society in the crimes committed.

HA’s penetrating analysis in the book furnishes insights both into the mind of the perpetrators and into the overall societal mindset in which they were embedded, which made these horrendous acts possible without any weight on their conscience. It provides a template for studying all such acts of barbarity by the organized and brute force of the state, with the full complicity of the overwhelming majority of the population, against people who are made stateless, either in the sense of being deprived of the state they had hitherto lived in or by being forced to live as stateless wards of an occupying power.

It is precisely the latter scenario that we are confronted with in Gaza, where an occupying power is conducting a pogrom and slaughter of the stateless, defenseless and caged population under its occupation. It is true that the scale of murder in Gaza is not comparable to the manufacturing of death on an industrial scale and with industrial efficiency that happened during the Holocaust, the period dealt with in the book; nevertheless, there are certain elements of the current situation and of the wider Palestinian suppression which make the comparisons, and hence the insights, not totally irrelevant. This is borne out by none other than the granddaughter of a Holocaust survivor herself, who, protesting against FIDF (Friends of the IDF), remarked in a choked voice that the reason she was there was that her grandparents taught her that “this should never happen to any other people again, whether Jews or not.”

The elements of relevance are not only the unequal nature of the contest — with the most powerful army of the Middle East, backed to the hilt by the most sophisticated military might the world has ever seen (the USA), arrayed against a mainly defenseless population in a congested urban setting, with all paths of exit and ingress shut and hence unsuitable for protracted guerrilla warfare — but also the apparent complicity and almost cheerleading attitude of the populace of the oppressor and invader in the conflict, and of its powerful backers and abettors in the western media and governments. The fact that all this is happening in this age of global information, when it is almost impossible to avoid images of innocent children and women being blown to smithereens and of hospitals and even morgues and graveyards bombed and destroyed, makes the complicity of the population, or their indifferent silence, both in Israel and in its main supporter the United States and to a lesser extent Western Europe, all the more criminal and callous. For one who has read HA’s work, it is impossible not to see in this almost ubiquitous silence and complicity of the western seats of political and media power echoes of what HA characterized as the “moral collapse of the respectable western society” in the face of irrefutable and hard-to-ignore evidence of mass murders happening close by. In Nazi Germany and its occupied and allied countries, the collapse manifested as “hear no evil, see no evil” — an averting of the face from the horror of it all, if you will; here in the US, in the Israeli-Palestinian context, it manifests as knowingly arming the murderer to its teeth, and then providing the fig leaf of diplomatic support and biased media coverage to hide and obfuscate the deed from both the domestic and the international audience.

It is remarkable that, in spite of the huge difference in scale, the psychological and societal factors underlying the brutal onslaught of these declared racist regimes against their captive and largely defenseless populations bear close resemblance. Not only that: the USA, without whose total and unconditional support Israel cannot carry out any of its criminal acts, bears many marks of the institutional mind control and propaganda so eloquently exposed by HA in her book.

So here are those elements, delineated clearly in HA’s exposé of the Nazi regime and its murderous acts, that one finds reflected in the murderous acts of Israel and of its main supporter, the United States.

Self-Deception, Lies And Stupidity

“The German society of eighty million people had been shielded against reality and factuality by exactly the same means, the same self-deception, lies and stupidity that had now become ingrained in Eichmann’s mentality”  [iii]

“He was incapable of uttering a single sentence that was not a cliché”..HA on Eichmann [iv]

It has almost become a cliché to say that in this age of mass communication and social media, winning the media battle is almost as important as winning the actual, physical one. The Nazis were the first to realize the importance of mass propaganda in a modern state, and created one of the most effective propaganda machines the world had ever seen, which presided over a campaign of self-deception, lies, distortions and mind control of unprecedented effectiveness and magnitude. The underlying philosophy was to sway the people through grandiose lies and distortions, making them feel part of something grand and big. Hence, they were told that it was a struggle for the “destiny of the German people,” one that would pave the way for a thousand-year Reich. Himmler said that they were fighting a battle which future generations would not have to fight for another thousand years. For Hitler it was a struggle for the very soul of European civilization against eastern barbarism, exemplified by Bolshevism. One finds almost parallel distortions and propaganda in the official Israeli mythology of the “Promised Land,” and of the Jews being the rightful heirs of the land of Israel in spite of their almost two-thousand-year sojourn in the west. All this self-elevating and grandiose mythology goes hand in hand with demeaning the Palestinian people as a historical fiction — as when Netanyahu said that there had never been a Palestinian state, never been a Palestinian nation, and that they were just migrants from neighboring Arab countries. Both the mythology and the distortion are critical to the Zionist project, as one cannot be carried out without the other. And both require erecting a virtual, alternate reality, based on a massive dismissal of actual history. But it must be admitted that there is a crucial difference between the totalitarian Nazi ideology and the official Zionist one, at least in the way they operated in their respective bases.
Nazi totalitarianism obliterated all opposition to it and became the only acceptable narrative in the state; the Zionist narrative could never achieve that monopoly in Israel. Thanks to the courage and integrity of people like Uri Avnery and Gideon Levy, there is a counter-narrative, not confined to some nook or banished to concentration camps but expressed through mainstream organs like Haaretz and other Jewish progressive groups.

But one of the greatest ironies of the whole conflict is that it is not in Israel, one of the main protagonists of the conflict, but in the USA, its distant and arch supporter, that this propaganda based on lies, distortions and self-deception has achieved its most total and unchallenged triumph. The success of this campaign is such that no less a person than a former President of the United States was forced to admit publicly that “it is easier to criticize Israel in Israel than in the United States.”

From Alan Dershowitz’s fabrication of an alternate history of Palestine (debunked by all serious scholarship on the subject) as an apology and defense of Israel, and its peddling on all respectable US channels, to the latest abominable rant of “Israel has a right to defend herself” on the Fox Channel, it is hard to know where to start when one considers the total complicity of the US media in Israeli crimes. A recent episode in ABC’s coverage of the Gaza bombing highlighted both the distortion and the stupidity. It was a clear distortion when a Palestinian home, with a Palestinian woman clad in traditional Muslim headgear wailing in front of it, was presented as the home of an Israeli woman destroyed by Hamas’s rockets. But it also relied on the egregious ignorance and stupidity of the audience to mistake an obviously Palestinian woman for an Israeli one!

As far as self-deception goes, it ranges from the ludicrous, “they hate us because of our liberties,” to the insidious, as in the recent column in the NY Times[v] where the writers turned logic on its head, making the preposterous claim that the reason the US public supports Israel is the failure of the Arab Spring! Not because of what the NY Times writes and CNN and FOX spew day in and day out, but because of the failure of the Arab Spring! In the annals of self-deception by the so-called literati of any society, this has to have pride of place!

It is no exaggeration to say that all mainstream media channels in the USA have been reduced to mere conduits for the continuous regurgitation of official Israeli talking points, without taking Israeli apologists and officials up on even the most obviously absurd ones. It seems that, like Eichmann, they are incapable of thinking except in official clichés! Take, for example, the oft-repeated Israeli assertion, blurring the boundaries between sane and insane, that Hamas is using civilians as human shields or putting its rockets near civilian places, as justification for the high civilian casualties. Forget the obvious fact of Gaza being one of the most densely populated areas in the world; forget even that the so-called warning to crammed homes and shelters is given, if at all, only minutes before the lethal strike, and that there is neither place nor time to run to. Forget all that, and still, should not a fair journalist challenge the Israeli position on the grounds that the targeted population has no army, air force or navy with which to fight the mightiest army of the Middle East, and that it is Israel which makes sure that they get not only no comparable weapons but not even the basic weapons to fight, so that Israel does not have to find itself in this, in its own words, unenviable position of fighting an enemy indistinguishable from the civilians? Does any journalist try to raise the question of what desperation people would be reduced to if they could be killed with near impunity from thousands of feet above the ground and had no means of hitting back? It is morally bankrupt to even raise the question of Hamas using civilians without first asking why they cannot put tank against tank, helicopter against helicopter, aircraft against aircraft, and uniformed soldier, armed to the hilt with modern gadgetry, against uniformed soldier.
The reality and logic of the situation, obvious to any astute observer, is that it is not Hamas but, on the contrary, Israel which is using civilians, its own, as a human shield. It is so because, with all the sophisticated weaponry Israel possesses, its soldiers can kill from a distance with almost no fear of being hit back, except in the very close combat situations that Israel religiously avoids, while the only way Palestinians in Gaza can retaliate is by firing inaccurate, primitive and almost toy-like rockets indiscriminately into Israel, rockets that have the probability of hitting only civilians, if anyone at all, not soldiers. Hence the very nature of this asymmetric power equation dictates that when Hamas retaliates it can only do so by hitting civilians, and Israel does everything in its power to maintain that asymmetry, depriving Hamas of the capability of fighting on equal terms. It is like one party to a duel stealing the other party’s pistols and then complaining that instead of firing shots they are throwing stones that can hit the bystanders!

The impact of all this media blitz in favor of Israel is that you can never ask an average American, who probably has a hard time locating Gaza on a map, about the current conflict without getting automatic, almost knee-jerk clichés about terrorism, the destruction of Israel and her right to defend herself. The proof is in the pudding, and even Dr. Goebbels would have been envious of the efficacy of US media propaganda for Israel! It is, after all, he who famously said that a lie should be repeated often, with confidence and from a high pulpit, for it to be accepted as the truth. He has found the most loyal disciples of his craft in the mainstream US media.

Dostoevsky, as HA pointed out in her book, recalled from his experience in Siberia that he never met a criminal there who was repentant of his crimes. HA offered an explanation: the self-reinforcing, closed environment of the gang in which the criminals were embedded shielded them from contact with reality. The American media play the same role, shielding the American public from its culpability in Israel's crimes by embedding it in a reality built out of lies, half-truths, fabrications, and distortions.

Blaming The Victim

“The Führer has promised the Jews a new homeland... if you build, there will be a roof over your heads. There is no water, the wells all around carry disease... If you bore and find water, you will have water”[vi]

HA in her book tells the story of a German literary critic, Heinz Beckmann, who blamed a certain Jewish intellectual for deserting Germany at “the outbreak of barbarism,” conveniently forgetting that the man had been expelled by the Nazis and had not deserted at all. From this minor example of the Nazis’ self-serving and self-righteous attitude, to their fictional characterization of the Jews as responsible for Germany’s defeat in the Great War, to their delusional belief that forced emigration was actually helping the Jews, or at least the Zionists, in their project of “finding the ground under their feet,” as Eichmann puts it, it was always the victim who had brought it upon himself. In a similar vein, in the eyes of Israelis it is always the Palestinians who are responsible for their ghetto-like existence in Gaza, and for the bombs killing their babies. A sample of this was on display the other day, when the Israeli Ambassador to the US, Mr. Ron Dermer, talking to a radio show host, claimed that the reason for shortages in Gaza was that the Palestinians had spent all their resources, millions of dollars, on building tunnels! Given that Mr. Dermer’s government, quite apart from imposing the brutal siege, violently prevented even a peaceful flotilla from carrying food and medicine to Gazans, it takes monumental shamelessness to make such an assertion. But he could rest assured that even if he had asserted that the Palestinians were getting aid directly from Martians and the Anti-Christ in a league to destroy Israel, he would not have been challenged by the interviewer! So it is the victim, it is the Palestinians, who are offering their children as sacrifices so that they can “enjoy,” from the comfort of their tunnels, the sight of Israelis being humiliated in the court of world opinion!

It is the psychological and logical compulsion of a society based on notions of racial exclusivity and superiority to regard the other, especially if the others inhabit the land it wants to grab, as Untermenschen, as lesser humans. The Nazi policy of Lebensraum, of land in the east, had to regard the Slavs as Untermenschen, just as the Zionist policy of land grab and theft from the Palestinians has to regard them as not “people like us”!

Moral Collapse Of The Respectable Society

“Dr. Servatius, I assume you made a slip of the tongue when you said that killing by gas was a medical matter”[vii]

“The net effect of this language system was not to keep these people ignorant of what they were doing, but to prevent them from equating it with their old, normal knowledge of murder and lies”[viii]

“Yes, he had a conscience, and his conscience functioned in the expected way for about four weeks, whereupon it began to function the other way around”[ix]

“From the accumulated evidence one can only conclude that conscience as such has apparently got lost in Germany”[x]

For me the most telling image of the current Gaza massacre is not one of death and destruction, of carnage, torn limbs, and the mutilated bodies of children. Those are the images of every regular Israeli onslaught of the last few years; we have become numbed to such sights, and they tell the same tragic story with only the faces changed. For me the most telling image is a peaceful, quiet one, almost serene, yet containing within it the bottomless depths of the “heart of darkness”. It is the image of Israelis, men, children, even toddlers, in an almost picnic-like atmosphere, with drinks and barbecue, watching the carnage unfold before their eyes from the safety of their perch on a mountaintop. One Israeli critic of Zionism, in a remarkable address, called the last invasion of Gaza, launched at the peak hour when children were getting out of their schools into the streets, the darkest day in the history of Judaism. No sir, I respectfully disagree. The darkest day in the history of Judaism was the day when a few people, no matter their numbers, flaunted their inhumanity in broad daylight and were accepted by the majority, either tacitly, by staying quiet about it, or by outright condoning it. This is the dangerous moral collapse HA was alluding to, which Nazism, with its creed of racism and naked power, brought about in “respectable European society”. And God knows what monstrosities it led to. In HA’s incisive analysis, Eichmann was not a monster; he was not legally insane; nor was he oblivious to the consequences of his actions. He was quite ordinary and commonplace, and hence the banality of it all. But he was caught in an ideology and a system that made him accept and commit those unspeakable horrors with relative equanimity. Respectable opinion, according to his testimony, never questioned or challenged his conscience. It was normally human to be inhuman in that atmosphere.
He was embedded in a system which made such inhumanity seem almost normal; which, in fact, elevated it to the level of duty. That is what HA meant by the moral collapse of “respectable Western society”. It is a version of this moral collapse that was on display on that mountaintop the other day: the moral collapse of “respectable Israeli society”.

Equally telling, and matching the moral collapse of those doing the killing and their immediate cheerleaders, was the moral collapse of the distant ones giving them the weapons and all the “soft power” support they required. The moral bankruptcy on top of that hill in Israel was matched by moral bankruptcy on top of another hill, Capitol Hill: the shameless unanimous passage by the Senate of Resolution 480, giving unconditional and full support to Israel’s Gaza offensive and, yes, believe it or not, calling upon Hamas to separate itself from the unity government! Those who always blamed Hamas for its violence and its non-acceptance of Israel were baying for its blood at the very moment when, albeit indirectly, it officially renounced violence by joining and accepting the leadership of the PA, which has recognized Israel’s right to exist! All those senators, without exception, for all their pretensions and protestations of peace, stood at that moment of endorsing naked aggression exposed themselves, naked, without clothes, in their obeisance to the amoral god of power. Those champions of fairness, fair play, and justice, the Elizabeth Warrens, Al Frankens, and Bernie Sanders of the world, were totally exposed, the whole world knowing, if it cared to look, that the only difference between them and the John Boehners, Lindsey Grahams, and Sarah Palins of the world lies in the choice of which gods they prostrate themselves before. It is a difference of calculation, not of principle. So much for the moral left in the US political system!

With the influential media and the government thus aligned, there is nowhere one can turn for justice, or even a fair hearing, in this case. Is this the moral collapse of a society we are witnessing here? Yes, there is, God bless her, Amy Goodman; there is the towering figure of Noam Chomsky, almost like an Old Testament prophet; there is Chris Hedges, and so on. Yes, there is a growing BDS movement and increasing awareness of the Palestinian cause in Europe, and there is a strong anti-Zionist strand within Israel itself, and these are signs that keep one from total despair. Or do they? Probably no society in history has ever totally collapsed morally, since even in the depths of it there is a Dietrich Bonhoeffer, there is a Lichtenberg (a Catholic priest who joined the Jews on their journey east and perished with them). Perhaps the moral collapse of the greatest supporter and abettor of Israel’s crimes is not that deep or total. Perhaps!

Moral Collapse Of The Victim

“Jewish Council Of Elders were informed by Eichmann or his men of how many Jews were needed to fill each train, and they made out the list of deportees”[xi]

“There can be no doubt that without the cooperation of the victims, it would hardly have been possible for a few thousand people ..to liquidate many hundreds of thousands of people”[xii]

 “And the acceptance of privileged categories…..had been the beginning of the moral collapse of respectable Jewish society”[xiii]

“It was a general practice to allow certain exceptions in order to be able to maintain the general rule all the more easily”[xiv]

“Theresienstadt, from the beginning, was designed by Heydrich to serve as a special ghetto for certain privileged categories of Jews”[xv]

One of the less discussed and less known aspects of Eichmann’s trial is how poignantly it reveals the victims of the Holocaust turning, by force of circumstance, into willing instruments of their own mass execution. The Nazis cleverly and quite cynically orchestrated what HA calls the moral collapse of “respectable Jewish society” by holding out avenues of escape in an otherwise hopeless situation. In the early phases, before the unleashing of the Final Solution, the policy was to favor one Jewish group over another. Nazi support for Zionism, for example, chimed at this early stage with their method of seeking a solution to the “Jewish problem” through forced emigration. They isolated and looked down on the assimilationists and praised the Zionists, as Eichmann puts it, as idealists. They also created a special camp at Theresienstadt for the “privileged” category of Jews, a showcase to the world where Red Cross inspections could take place. In the later stages of mass executions, when the situation had become desperate, it was pure animal instinct for survival, and the hope of saving some at the cost of many, that led the Jewish elders and councils to cooperate with the Nazis. For the Nazis it was far more economical and convenient to have the victims handle the dirty police work themselves. Also, by creating categories of respectable Jews, those who had fought in the previous war, for example, they soothed the consciences of the folks at home, each of whom knew at least one “decent Jew”.

The moral of all this is that people living in conditions of hopelessness and despair, made to fight for their survival and identity, can easily be manipulated by a controlling authority holding the power of life and death over them into seemingly self-destructive behavior and the betrayal of their own people. Exactly the same calculus is at work in the cynical Israeli policy of bringing about the moral bankruptcy of occupied Palestinian society by reducing sections of it to willing policemen of their own captivity. The role of the Palestinian Authority and the corruption of its leadership must be seen in the light of this Israeli policy of finding venal partners for its illegal enterprise. The example of the Jewish Council of Elders, and its cooperative police work in rounding up Jews and policing the ghettos, provides a template for this kind of control.

By reducing the Palestinian Authority to a glorified police force in the West Bank, tasked with ensuring control and peace even as Israel never ceased building illegal settlements on Palestinian land, and by reducing Gaza to an open-air prison à la the Warsaw ghetto, Israel sought to divide and conquer the Palestinian opposition. Not only that: by arranging things so that only the most rabid and extreme opposition remained viable, opposition that could then be castigated as fanatic and bundled with other radical Muslim jihadi outfits, it sought to control the narrative and the tone of opposition to its oppression. Hence the rage of the Israeli Prime Minister when Hamas joined the Palestinian Authority and began to take steps that undercut this narrative! How dare Hamas relinquish armed struggle and its vow to destroy Israel in favor of peaceful struggle and co-existence! That is not in the script, and hence Hamas must be punished. This, and only this, explains the latest flare-up in Gaza!

There is one more aspect to this moral collapse of the victim, as it relates to the surrounding Arab states. It is no accident that I use the word victim for the surrounding Arab states, and not what comes more obviously to mind, especially as regards Saudi Arabia and Egypt: accomplices in the crime. They are victims insofar as their situation is analogous to that of the Nazi-occupied countries of Europe, such as Hungary, Poland, Slovakia, and so on. Here the occupation is not direct occupation by Israel, but indirect occupation by its biggest supporter, the United States, carried out not through armies but by supporting and installing puppet, corrupt, and tyrannical regimes. It is through these regimes that massive resources and policies are controlled, and populations kept educationally, culturally, and technologically backward. The moral collapse is reflected in the decadence and conspicuous consumption of the ruling class, and in the apathy of the masses to this situation. Even when attempts are made to rectify it, they run up against the stone wall of US-supported entrenched opposition. To these monarchical, tyrannical, petro-dollar-driven regimes and their consumption has now been deliberately added the lethal dimension of sectarian hatred, in order to keep the pot always boiling and to channel radical tendencies into self-destructive and divisive endeavors. All this is heaven-sent for Israel, which sees in it not only an almost endless gold mine of propaganda, tainting the opposition with the unflattering and now almost universal bogey of political or radical Islam, but also an enemy immolating itself in the fires of old, atavistic, sectarian hatreds. The upshot is that the sectarian battles, the fears of sundry caliphates, and the petro-dollar-driven tyrannies are going to stay for a while, since they suit both Israel and its arch-supporter, the United States, in the wider scheme of things. But for Middle Eastern Muslim society, they are the most potent expression of its “moral collapse”.

Hammad Said is an IT consultant living in Portland, Oregon. He can be reached at hammad_said@hotmail.

Notes.


[i] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 10.

[ii] David Rousset, a former inmate of Buchenwald, quoted in Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 11.

[iii] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 52.

[iv] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition.

[vi] Erich Rajakowitsch, in charge of the deportation of Dutch Jews, quoted in Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 75.

[vii] Judge Halevi, asking Eichmann's defense counsel to clarify his shocking statement about gassing; Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition.

[viii] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 86.

[ix] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 95.

[x] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 103.

[xi] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 115.

[xii] R. Pendorf, quoted in Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 117.

[xiii] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 131.

[xiv] Louis de Jong, quoted in Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 132.

[xv] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin edition, p. 80.

 

http://www.counterpunch.org/2014/07/28/relevance-of-hannah-arendts-a-report-on-the-banality-of-evil-to-gaza/

What we do better without other people around

The power of lonely

By Leon Neyfakh

March 6, 2011

You hear it all the time: We humans are social animals. We need to spend time together to be happy and functional, and we extract a vast array of benefits from maintaining intimate relationships and associating with groups. Collaborating on projects at work makes us smarter and more creative. Hanging out with friends makes us more emotionally mature and better able to deal with grief and stress.

Spending time alone, by contrast, can look a little suspect. In a world gone wild for wikis and interdisciplinary collaboration, those who prefer solitude and private noodling are seen as eccentric at best and defective at worst, and are often presumed to be suffering from social anxiety, boredom, and alienation.

But an emerging body of research is suggesting that spending time alone, if done right, can be good for us — that certain tasks and thought processes are best carried out without anyone else around, and that even the most socially motivated among us should regularly be taking time to ourselves if we want to have fully developed personalities, and be capable of focus and creative thinking. There is even research to suggest that blocking off enough alone time is an important component of a well-functioning social life — that if we want to get the most out of the time we spend with people, we should make sure we’re spending enough of it away from them. Just as regular exercise and healthy eating make our minds and bodies work better, solitude experts say, so can being alone.

One ongoing Harvard study indicates that people form more lasting and accurate memories if they believe they’re experiencing something alone. Another indicates that a certain amount of solitude can make a person more capable of empathy towards others. And while no one would dispute that too much isolation early in life can be unhealthy, a certain amount of solitude has been shown to help teenagers improve their moods and earn good grades in school.

“There’s so much cultural anxiety about isolation in our country that we often fail to appreciate the benefits of solitude,” said Eric Klinenberg, a sociologist at New York University whose book “Alone in America,” in which he argues for a reevaluation of solitude, will be published next year. “There is something very liberating for people about being on their own. They’re able to establish some control over the way they spend their time. They’re able to decompress at the end of a busy day in a city…and experience a feeling of freedom.”

Figuring out what solitude is and how it affects our thoughts and feelings has never been more crucial. The latest Census figures indicate there are some 31 million Americans living alone, which accounts for more than a quarter of all US households. And at the same time, the experience of being alone is being transformed dramatically, as more and more people spend their days and nights permanently connected to the outside world through cellphones and computers. In an age when no one is ever more than a text message or an e-mail away from other people, the distinction between “alone” and “together” has become hopelessly blurry, even as the potential benefits of true solitude are starting to become clearer.

Solitude has long been linked with creativity, spirituality, and intellectual might. The leaders of the world’s great religions — Jesus, Buddha, Mohammed, Moses — all had crucial revelations during periods of solitude. The poet James Russell Lowell identified solitude as “needful to the imagination;” in the 1988 book “Solitude: A Return to the Self,” the British psychiatrist Anthony Storr invoked Beethoven, Kafka, and Newton as examples of solitary genius.

But what actually happens to people’s minds when they are alone? As much as it’s been exalted, our understanding of how solitude actually works has remained rather abstract, and modern psychology — where you might expect the answers to lie — has tended to treat aloneness more as a problem than a solution. That was what Christopher Long found back in 1999, when as a graduate student at the University of Massachusetts Amherst he started working on a project to precisely define solitude and isolate ways in which it could be experienced constructively. The project’s funding came from, of all places, the US Forest Service, an agency with a deep interest in figuring out once and for all what is meant by “solitude” and how the concept could be used to promote America’s wilderness preserves.

With his graduate adviser and a researcher from the Forest Service at his side, Long identified a number of different ways a person might experience solitude and undertook a series of studies to measure how common they were and how much people valued them. A 2003 survey of 320 UMass undergraduates led Long and his coauthors to conclude that people felt good about being alone more often than they felt bad about it, and that psychology’s conventional approach to solitude — an “almost exclusive emphasis on loneliness” — represented an artificially narrow view of what being alone was all about.

“Aloneness doesn’t have to be bad,” Long said by phone recently from Ouachita Baptist University, where he is an assistant professor. “There’s all this research on solitary confinement and sensory deprivation and astronauts and people in Antarctica — and we wanted to say, look, it’s not just about loneliness!”

Today other researchers are eagerly diving into that gap. Robert Coplan of Carleton University, who studies children who play alone, is so bullish on the emergence of solitude studies that he’s hoping to collect the best contemporary research into a book. Harvard professor Daniel Gilbert, a leader in the world of positive psychology, has recently overseen an intriguing study that suggests memories are formed more effectively when people think they’re experiencing something individually.

That study, led by graduate student Bethany Burum, started with a simple experiment: Burum placed two individuals in a room and had them spend a few minutes getting to know each other. They then sat back to back, each facing a computer screen the other could not see. In some cases they were told they’d both be doing the same task, in other cases they were told they’d be doing different things. The computer screen scrolled through a set of drawings of common objects, such as a guitar, a clock, and a log. A few days later the participants returned and were asked to recall which drawings they’d been shown. Burum found that the participants who had been told the person behind them was doing a different task — namely, identifying sounds rather than looking at pictures — did a better job of remembering the pictures. In other words, they formed more solid memories when they believed they were the only ones doing the task.

The results, which Burum cautions are preliminary, are now part of a paper on “the coexperiencing mind” that was recently presented at the Society for Personality and Social Psychology conference. In the paper, Burum offers two possible theories to explain what she and Gilbert found in the study. The first invokes a well-known concept from social psychology called “social loafing,” which says that people tend not to try as hard if they think they can rely on others to pick up their slack. (If two people are pulling a rope, for example, neither will pull quite as hard as they would if they were pulling it alone.) But Burum leans toward a different explanation, which is that sharing an experience with someone is inherently distracting, because it compels us to expend energy on imagining what the other person is going through and how they’re reacting to it.

“People tend to engage quite automatically with thinking about the minds of other people,” Burum said in an interview. “We’re multitasking when we’re with other people in a way that we’re not when we just have an experience by ourselves.”

Perhaps this explains why seeing a movie alone feels so radically different than seeing it with friends: Sitting there in the theater with nobody next to you, you’re not wondering what anyone else thinks of it; you’re not anticipating the discussion that you’ll be having about it on the way home. All your mental energy can be directed at what’s happening on the screen. According to Greg Feist, an associate professor of psychology at San Jose State University who has written about the connection between creativity and solitude, some version of that principle may also be at work when we simply let our minds wander: When we let our focus shift away from the people and things around us, we are better able to engage in what’s called meta-cognition, or the process of thinking critically and reflectively about our own thoughts.

Other psychologists have looked at what happens when other people’s minds don’t just take up our bandwidth, but actually influence our judgment. It’s well known that we’re prone to absorb or mimic the opinions and body language of others in all sorts of situations, including those that might seem the most intensely individual, such as who we’re attracted to. While psychologists don’t necessarily think of that sort of influence as “clouding” one’s judgment — most would say it’s a mechanism for learning, allowing us to benefit from information other people have access to that we don’t — it’s easy to see how being surrounded by other people could hamper a person’s efforts to figure out what he or she really thinks of something.

Teenagers, especially, whose personalities have not yet fully formed, have been shown to benefit from time spent apart from others, in part because it allows for a kind of introspection — and freedom from self-consciousness — that strengthens their sense of identity. Reed Larson, a professor of human development at the University of Illinois, conducted a study in the 1990s in which adolescents outfitted with beepers were prompted at irregular intervals to write down answers to questions about who they were with, what they were doing, and how they were feeling. Perhaps not surprisingly, he found that when the teens in his sample were alone, they reported feeling a lot less self-conscious. “They want to be in their bedrooms because they want to get away from the gaze of other people,” he said.

The teenagers weren’t necessarily happier when they were alone; adolescence, after all, can be a particularly tough time to be separated from the group. But Larson found something interesting: On average, the kids in his sample felt better after they spent some time alone than they did before. Furthermore, he found that kids who spent between 25 and 45 percent of their nonclass time alone tended to have more positive emotions over the course of the weeklong study than their more socially active peers, were more successful in school and were less likely to self-report depression.

“The paradox was that being alone was not a particularly happy state,” Larson said. “But there seemed to be kind of a rebound effect. It’s kind of like a bitter medicine.”

The nice thing about medicine is it comes with instructions. Not so with solitude, which may be tremendously good for one’s health when taken in the right doses, but is about as user-friendly as an unmarked white pill. Too much solitude is unequivocally harmful and broadly debilitating, decades of research show. But one person’s “too much” might be someone else’s “just enough,” and eyeballing the difference with any precision is next to impossible.

Research is still far from offering any concrete guidelines. Insofar as there is a consensus among solitude researchers, it’s that in order to get anything positive out of spending time alone, solitude should be a choice: People must feel like they’ve actively decided to take time apart from people, rather than being forced into it against their will.

Overextended parents might not need any encouragement to see time alone as a desirable luxury; the question for them is only how to build it into their frenzied lives. But for the millions of people living by themselves, making time spent alone productive may require a different kind of effort. Sherry Turkle, director of the MIT Initiative on Technology and Self, argues in her new book, “Alone Together,” that people should be mindfully setting aside chunks of every day when they are not engaged in so-called social snacking activities like texting, g-chatting, and talking on the phone. For teenagers, it may help to understand that feeling a little lonely at times may simply be the price of forging a clearer identity.

John Cacioppo of the University of Chicago, whose 2008 book “Loneliness” with William Patrick summarized a career’s worth of research on all the negative things that happen to people who can’t establish connections with others, said recently that as long as it’s not motivated by fear or social anxiety, then spending time alone can be a crucially nourishing component of life. And it can have some counterintuitive effects: Adam Waytz in the Harvard psychology department, one of Cacioppo’s former students, recently completed a study indicating that people who are socially connected with others can have a hard time identifying with people who are more distant from them. Spending a certain amount of time alone, the study suggests, can make us less closed off from others and more capable of empathy — in other words, better social animals.

“People make this error, thinking that being alone means being lonely, and not being alone means being with other people,” Cacioppo said. “You need to be able to recharge on your own sometimes. Part of being able to connect is being available to other people, and no one can do that without a break.”

Leon Neyfakh is the staff writer for Ideas. E-mail lneyfakh@globe.com.

C.S. Lewis on Suffering and What It Means to Have Free Will in a Universe of Fixed Laws


“Try to exclude the possibility of suffering which the order of nature and the existence of free wills involve, and you find that you have excluded life itself.”

If the universe operates by fixed physical laws, what does it mean for us to have free will? That’s what C.S. Lewis considers with an elegant sidewise gleam in an essay titled “Divine Omnipotence” from his altogether fascinating 1940 book The Problem of Pain (public library) — a scintillating examination of the concept of free will in a material universe and why suffering is not only a natural but an essential part of the human experience. Though explored through the lens of the contradictions and impossibilities of belief, the questions Lewis raises touch on elements of philosophy, politics, psychology, cosmology, and ethics — areas that have profound, direct impact on how we live our lives, day to day.

He begins by framing “the problem of pain, in its simplest form” — the paradoxical idea that if we were to believe in a higher power, we would, on the one hand, have to believe that “God” wants all creatures to be happy and, being almighty, can make that wish manifest; on the other hand, we’d have to acknowledge that all creatures are not happy, which renders that god lacking in “either goodness, or power, or both.”

To be sure, Lewis’s own journey of spirituality was a convoluted one — he was raised in a religious family, became an atheist at fifteen, then slowly returned to Christianity under the influence of his friend and Oxford colleague J.R.R. Tolkien. But whatever his religious bent, Lewis possessed the rare gift of being able to examine his own beliefs critically and, in the process, to offer layered, timeless insight on eternal inquiries into spirituality and the material universe that resonate even with those of us who fall on the nonreligious end of the spectrum and side with Carl Sagan on matters of spirituality.

Lewis writes:

There is no reason to suppose that self-consciousness, the recognition of a creature by itself as a “self,” can exist except in contrast with an “other,” a something which is not the self. . . . The freedom of a creature must mean freedom to choose: and choice implies the existence of things to choose between. A creature with no environment would have no choices to make: so that freedom, like self-consciousness (if they are not, indeed, the same thing), again demands the presence to the self of something other than the self.

What makes Lewis’s reflections so enduring and widely resonant is that, for all his concern with divinity, he cracks open the innermost kernel of our basic humanity, in relation to ourselves and to one another:

People often talk as if nothing were easier than for two naked minds to “meet” or become aware of each other. But I see no possibility of their doing so except in a common medium which forms their “external world” or environment. Even our vague attempt to imagine such a meeting between disembodied spirits usually slips in surreptitiously the idea of, at least, a common space and common time, to give the co- in co-existence a meaning: and space and time are already an environment. But more than this is required. If your thoughts and passions were directly present to me, like my own, without any mark of externality or otherness, how should I distinguish them from mine? And what thoughts or passions could we begin to have without objects to think and feel about? Nay, could I even begin to have the conception of “external” and “other” unless I had experience of an “external world”?

In a sentiment that calls to mind novelist Iris Murdoch’s beautiful definition of love (“Love is the very difficult understanding that something other than yourself is real.”), Lewis adds:

The result is that most people remain ignorant of the existence of both. We may therefore suppose that if human souls affected one another directly and immaterially, it would be a rare triumph of faith and insight for any one of them to believe in the existence of the others.

Lewis considers what it would take for us to fully acknowledge and contact each other’s otherness, to bridge the divide between the internal and the external:

What we need for human society is exactly what we have — a neutral something, neither you nor I, which we can both manipulate so as to make signs to each other. I can talk to you because we can both set up sound-waves in the common air between us. Matter, which keeps souls apart, also brings them together. It enables each of us to have an “outside” as well as an “inside,” so that what are acts of will and thought for you are noises and glances for me; you are enabled not only to be, but to appear: and hence I have the pleasure of making your acquaintance.

Society, then, implies a common field or “world” in which its members meet.

‘Tree of virtues’ by Lambert of Saint-Omer, ca. 1250, from ‘The Book of Trees.’

That “neutral something” might sound a lot like faith, but Lewis is careful to point out the limitations of such traditional interpretations and to examine how this relates to the question of suffering:

If matter is to serve as a neutral field it must have a fixed nature of its own. If a “world” or material system had only a single inhabitant it might conform at every moment to his wishes — “trees for his sake would crowd into a shade.” But if you were introduced into a world which thus varied at my every whim, you would be quite unable to act in it and would thus lose the exercise of your free will. Nor is it clear that you could make your presence known to me — all the matter by which you attempted to make signs to me being already in my control and therefore not capable of being manipulated by you.

Again, if matter has a fixed nature and obeys constant laws, not all states of matter will be equally agreeable to the wishes of a given soul, nor all equally beneficial for that particular aggregate of matter which he calls his body. If fire comforts that body at a certain distance, it will destroy it when the distance is reduced. Hence, even in a perfect world, the necessity for those danger signals which the pain-fibres in our nerves are apparently designed to transmit. Does this mean an inevitable element of evil (in the form of pain) in any possible world? I think not: for while it may be true that the least sin is an incalculable evil, the evil of pain depends on degree, and pains below a certain intensity are not feared or resented at all. No one minds the process “warm — beautifully hot — too hot — it stings” which warns him to withdraw his hand from exposure to the fire: and, if I may trust my own feeling, a slight aching in the legs as we climb into bed after a good day’s walking is, in fact, pleasurable.

Yet again, if the fixed nature of matter prevents it from being always, and in all its dispositions, equally agreeable even to a single soul, much less is it possible for the matter of the universe at any moment to be distributed so that it is equally convenient and pleasurable to each member of a society. If a man traveling in one direction is having a journey down hill, a man going in the opposite direction must be going up hill. If even a pebble lies where I want it to lie, it cannot, except by a coincidence, be where you want it to lie. And this is very far from being an evil: on the contrary, it furnishes occasion for all those acts of courtesy, respect, and unselfishness by which love and good humor and modesty express themselves. But it certainly leaves the way open to a great evil, that of competition and hostility. And if souls are free, they cannot be prevented from dealing with the problem by competition instead of courtesy. And once they have advanced to actual hostility, they can then exploit the fixed nature of matter to hurt one another. The permanent nature of wood which enables us to use it as a beam also enables us to use it for hitting our neighbor on the head. The permanent nature of matter in general means that when human beings fight, the victory ordinarily goes to those who have superior weapons, skill, and numbers, even if their cause is unjust.

Illustration by Olivier Tallec from ‘Waterloo & Trafalgar.’

But looking closer at the possible “abuses of free will,” Lewis considers how the fixed nature of physical laws presents a problem for the religious notion of miracles — something he’d come to examine in depth several years later in the book Miracles, and something MIT’s Alan Lightman would come to echo several decades later in his spectacular meditation on science and spirituality. Lewis writes:

Such a world would be one in which wrong actions were impossible, and in which, therefore, freedom of the will would be void; nay, if the principle were carried out to its logical conclusion, evil thoughts would be impossible, for the cerebral matter which we use in thinking would refuse its task when we attempted to frame them. All matter in the neighborhood of a wicked man would be liable to undergo unpredictable alterations. That God can and does, on occasions, modify the behavior of matter and produce what we call miracles, is part of Christian faith; but the very conception of a common, and therefore stable, world, demands that these occasions should be extremely rare.

He offers an illustrative example:

In a game of chess you can make certain arbitrary concessions to your opponent, which stand to the ordinary rules of the game as miracles stand to the laws of nature. You can deprive yourself of a castle, or allow the other man sometimes to take back a move made inadvertently. But if you conceded everything that at any moment happened to suit him — if all his moves were revocable and if all your pieces disappeared whenever their position on the board was not to his liking — then you could not have a game at all. So it is with the life of souls in a world: fixed laws, consequences unfolding by causal necessity, the whole natural order, are at once limits within which their common life is confined and also the sole condition under which any such life is possible. Try to exclude the possibility of suffering which the order of nature and the existence of free wills involve, and you find that you have excluded life itself.

He closes by bringing us full-circle to the concept of free will:

Whatever human freedom means, Divine freedom cannot mean indeterminacy between alternatives and choice of one of them. Perfect goodness can never debate about the end to be attained, and perfect wisdom cannot debate about the means most suited to achieve it.

The Problem of Pain is a pause-giving read in its entirety. Complement it with Lewis on duty, the secret of happiness, and writing “for children” and the key to authenticity in all writing, then revisit Jane Goodall on science and spirituality.

 

 

New Yorker Cartoonist Roz Chast’s Remarkable Illustrated Meditation on Aging, Illness, and Death

Making sense of the human journey with wit, wisdom, and disarming vulnerability.

“Each day, we wake slightly altered, and the person we were yesterday is dead,” John Updike wrote in his magnificent memoir. “So why, one could say, be afraid of death, when death comes all the time?” It’s a sentiment somewhat easier to swallow — though certainly not without its ancient challenge — when it comes to our own death, but when that of our loved ones skulks around, it’s invariably devastating and messy, and it catches us painfully unprepared no matter how much time we’ve had to “prepare.”

Count on another beloved New Yorker contributor, cartoonist Roz Chast, to address this delicate and doleful subject with equal parts wit and wisdom in Can’t We Talk about Something More Pleasant?: A Memoir (public library) — a remarkable illustrated chronicle of her parents’ decline into old age and death, pierced by those profound, strangely uplifting in-between moments of cracking open the little chests of truth we keep latched shut all our lives until a brush with our mortal impermanence rattles the lock and lets out some understanding, however brief and fragmentary, of the great human mystery of what it means to live.

The humor and humility with which Chast tackles the enormously difficult subjects of aging, illness, and death are nothing short of genius.

But besides appreciating Chast’s treatment of such grand human themes as death, duty, and “the moving sidewalk of life,” I was struck by how much her parents resembled my own — her father, just like mine, a “kind and sensitive” man of above-average awkwardness, “the spindly type,” inept at even the basics of taking care of himself domestically, with a genius for languages; her mother, just like mine, a dominant and hard-headed perfectionist “built like a fire hydrant,” with vanquished dreams of becoming a professional pianist, an unpredictable volcano of anger. (“Where my father was tentative and gentle,” Chast writes, “she was critical and uncompromising.” And: “Even though I knew he couldn’t really defend me against my mother’s rages, I sensed that at least he felt some sympathy, and that he liked me as a person, not just because I was his daughter.”)

Chast, like myself, was an only child, and her parents, like mine, had a hard time understanding how their daughter made her living, given that she didn’t run in the 9-to-5 hamster wheel of working for the man. There were also the shared family food issues, the childhood loneliness, the discomfort about money that stems from having grown up without it.

The point here, of course, isn’t to dance to the drum of solipsism. (Though we only children seem particularly attuned to its beat.) It’s to appreciate the elegance and bold vulnerability with which Chast weaves out of her own story a narrative at once so universally human yet so relatable in its kaleidoscope of particularities that any reader is bound to find a piece of him- or herself in it, to laugh and weep with the bittersweet relief of suddenly feeling less alone in the most lonesome-making of human struggles, to find some compassion for even the most tragicomic of our faults.

From reluctantly visiting her parents in the neighborhood where she grew up (“not the Brooklyn of artists or hipsters or people who made — and bought — $8 chocolate bars [but] DEEP Brooklyn”) as their decline began, to accepting just as reluctantly the basic facts of life (“Old age didn’t change their basic personalities. If anything, it intensified what was already there.”), to witnessing her father’s mental dwindling (“One of the worst parts of senility must be that you have to get terrible news over and over again. On the other hand, maybe in between the times of knowing the bad news, you get to forget it and live as if everything was hunky-dory.”), to the self-loathing brought on by the clash between the aspiration of a loving daughter and the financial strain of elder care (“I felt like a disgusting person, worrying about the money.”), Chast treks with extraordinary candor and vulnerability through the maze of her own psyche, mapping out our own in the process.

Chast also explores, with extraordinary sensitivity and self-awareness, the warping of identity that happens when the cycle of life and its uncompromising realities toss us into roles we always knew were part of the human journey but somehow thought we, we alone, would be spared. She writes:

It’s really easy to be patient and sympathetic with someone when it’s theoretical, or only for a little while. It’s a lot harder to deal with someone’s craziness when it’s constant, and that person is your dad, the one who’s supposed to be taking care of YOU.

But despite her enormous capacity for wit and humor even in so harrowing an experience, Chast doesn’t stray too far from its backbone of deep, complicated love and paralyzing grief. The book ends with Chast’s raw, unfiltered sketches from the final weeks she spent in the hospice ward where her mother took her last breath. A crystalline realization suddenly emerges that Chast’s cartooning isn’t some gimmicky ploy for quick laughs but her most direct access point to her own experience, her best sensemaking mechanism for understanding the world, life and, inevitably, death.

Can’t We Talk about Something More Pleasant? is an absolutely astounding read in its entirety — the kind that enters your soul through the backdoor, lightly, and touches more parts of it and more heavinesses than you ever thought you’d allow. You’re left, simply, grateful.

Images courtesy of Bloomsbury © Roz Chast; thanks, Wendy

My adventures in Hemingway

How I lived out a novel at odds with the modern world

As a young man in Europe, I immersed myself in the work of a master. What I learned changed me forever

Ernest Hemingway attends a bullfight in Madrid, Spain, November 1960. (Credit: AP)

At first, they died in the bullring, but the book that made them famous had swelled the crowds. By mid-century, the lack of space made it harder to outrun the bulls, so they began to die much earlier in the route, beyond Hotel La Perla, and most just before the bulls made their 90-degree turn onto Calle Estafeta.

I was there in Pamplona, standing on the balcony of the piso near this precarious juncture. It was 8 a.m.; the stone streets were shiny with rain. There was a wood barricade, like an outfield wall, that unnaturally ended Calle Mercaderes and forced the route right onto Estafeta. This is where I saw the first bulls slip, losing their footing at the turn, their bulk hitting the stones, their tonnage pounding into the barricade, the runners fleeing to the sidewalks, some, in fetal curls, waiting for death.

This was how the last American was killed in Pamplona, along this narrow corridor that offers no escape from the charging bulls. That morning, his killer, “Castellano,” had begun to run the 826 meters from the Cuesta de Santo Domingo to the Plaza de Toros at an unusually torrid pace, which frightened the runners and sent them scurrying. One of them fell.

Castellano plunged his horns into the limp American on the ground, goring his stomach and piercing his aorta. The American began to crawl. But there were still more bulls in the stampede, and by the time the Red Cross unit reached Matthew Tassio, most of the blood had already drained from his body. He was dead just eight minutes after he finally reached the hospital.

Tassio’s was the 14th death in the recorded history of San Fermín — the festival most famous for hosting the annual “Running of the Bulls” — and he was the last American to die there. One other person has perished since, in 2003, and many more have been badly injured by the bulls, but perhaps none died as gruesomely as Tassio did in 1995. I wasn’t in attendance for that run, thankfully.

In a year in which there would be no deaths, I came to Pamplona for the second time by bus from Madrid, passing through the sunflowers of Basque country. I had been invited by the correspondents of the Associated Press, with whom my former employer, Dow Jones Newswires, shared its local outpost, to witness the Running of the Bulls from their prized perch.

That morning’s encierro would be the first of the new millennium. It was very wet and, even from the balcony, you could see the unevenness of the cobblestones. The only place to witness the run was from the balconies of the apartments along the route. When the bulls began to stampede, the runners, many still drunk and wearing all white save for a red pañuelico around their necks, filled all of the space in the corridors. It was a jogging gait until they saw the bulls. Most of them ran well ahead of the danger, but some were eventually chased down by the bulls.



I remember an American student who slipped and nearly died on the curb by the cigarette shop on Estafeta, trampled, blood maroon in the grooves of the cobblestones, a small crowd coagulating around him to watch for death.

I remember the dense crowds, the public drunkenness, the street drink that kept you drunk and alert made from equal parts Coca-Cola and red wine. And, of course, I remember the monumental visage of Ernest Hemingway that hung down the side of the Hotel La Perla, where he set the novel that first recorded this mad dash from mortality.

* * *

It is not an overstatement to claim that Ernest Hemingway introduced Pamplona to the world. Until he first wrote about it in 1923 in an article for The Toronto Star Weekly, the San Fermín festival had been a regional affair: “As far as I know we were the only English speaking people in Pamplona during the Feria of last year,” writes Hemingway. “We landed at Pamplona at night. The streets were solid with people dancing. Music was pounding and throbbing. Fireworks were being set off from the big public square. All the carnivals I had ever seen paled down in comparison.”

This Toronto Star sketch of Pamplona comes from the appendix of a new edition of “The Sun Also Rises,” released last week by Scribner’s to commemorate the 90th anniversary of its publication. The “updated” version will titillate Hemingway aficionados: unpublished early drafts, excised scenes and two deleted opening chapters. This “new” material provides a rare glimpse into the evolution and creative process of one of the great masters of American literature.

Drafted over six weeks across Spain (mostly in Valencia and Madrid) in the summer of 1925, set in Jazz Age Paris amid the psychic ruins of the Great War, “The Sun Also Rises” endures as one of the finest first novels ever written. Its itinerant narrative of Spain and France (both largely unknown to the American traveling public in 1925, but more on that later), depictions of café life and drinking, bullfighting and affairs with matadors, were all new to novels of the time. “No amount of analysis can convey the quality of ‘The Sun Also Rises,’” went the original review of the book in The New York Times in 1926. “It is a truly gripping story, told in lean, hard, athletic narrative prose that puts more literary English to shame.”

Hemingway chose to evoke disillusionment through Jake Barnes, an expatriated American foreign correspondent living in Paris, and his married paramour, Lady Brett Ashley. Their romance is complicated, to say the least: Jake’s manhood was marred in the war and he cannot procreate (in his famous interview with The Paris Review in the 1950s, Hemingway was adamant that Jake was not a eunuch). The war injury was a crucial detail in the text, and an emblematic signature of the Hemingway code that ran through his subsequent work. His male characters bore physical or psychic wounds (sometimes both). Jake’s injury was an outward symptom of an interior crisis suffered in the wake of WWI.

Hemingway’s genius was present in Jake and Lady Brett as protagonists and antagonists. We inflict our own wounds, Hemingway seems to say, an insight that bears out even today: Contemporary disillusionment is concerned with the surprising, man-made ironies of modernity — a dwindling sense of freedom, both existential and civil; the West’s waning hegemony even amid unparalleled wealth and technology; a diminished middle class and shrinking American dream; and an ever-present sense of looming doom (an attack of some kind, perhaps) by forces beyond our control. The Lost Generation time of “The Sun” was infected with its own disillusionment, owing to man-made origins from the tragic period of 1914–1918. This disillusion plays out in “The Sun” almost nihilistically; by the novel’s end, both characters are badly damaged by the preceding events, degraded, alienated.

* * *

The two opening chapters, cut by Hemingway but offered to readers in the new edition, were fortunate omissions. The original opening lines of “The Sun” sound an awkward, conversational, and Victorian tone, inconsistent with the remainder of the novel:

This is a story about a lady. Her name is Lady Ashley and when the story begins she is living in Paris and it is Spring. That should be a good setting for a romantic but highly moral story.

Yet, in the rest of the deleted chapter, and elsewhere in the early passages, the Hemingway voice is undeniably present. That voice has drawn veneration and ridicule. The essayist E.B. White, hardly the type for big-game hunting and encierros, penned a famous parody of Hemingway in The New Yorker in 1950, deriding his so-called declarative prose style. Yet much of Hemingway’s best writing strains against this easy stereotype. He is far less aphoristic and quotable than, say, Don DeLillo (a veritable one-man factory of sound bites), and you would find it difficult to locate a pithy tweet among the prose of “The Sun Also Rises.”

By the 1930s, Hemingway’s writing style had grown more intricate. “Green Hills of Africa” contains a buffalo of a sentence, 497 words spanning five pages, reminiscent of Faulkner or Gabriel García Márquez. That magical realist, in fact, had lionized Hemingway. Writing for the New York Times about a fleeting, chance encounter with Hemingway on Paris’s Boulevard St. Michel, García Márquez declares that “[Hemingway’s] instantaneously inspired short stories are unassailable” and calls him “one of the most brilliant goldsmiths in the history of letters.”

For better or for worse, that unmistakably declarative, taut, gritty Hemingway music can overpower the substance of his stories, somewhat ironically drawing the attention back onto himself. Over the decades since his death in 1961, the Hemingway legend has bloomed and rebloomed many times over, until now there is a preoccupation with the Hemingway lifestyle, the man himself, in a way, morphing posthumously into a tourist destination, a literary Jimmy Buffett.

Those places in “The Sun” – Pamplona, Madrid, Paris – remain open for tourists, poets manqué, backpackers and the traveling gentility. But the tourism and concomitant commercialism have rendered quite a few of their landmarks ersatz. At the Closerie des Lilas in Paris’s Montparnasse district, where Hemingway set many scenes from his books (including my favorite, “A Moveable Feast”), practically the entire bar menu is a monument to Hemingway: daiquiris and mojitos, all made from Cuban rum, all named after Papa. After its much celebrated renovation, the Hotel Ritz saw fit to refurbish itself with a Hemingway-themed restaurant, L’Espadon (“The Swordfish”), in homage to Papa’s love of fishing, along with a Hemingway-inspired bar, which, no doubt, mixes up fanciful permutations of mojitos and daiquiris named after… you guessed it. The Spanish may have even more flagrantly exceeded the French in their quest to annex the Hemingway legend into their geography. Calle de Hemingway in Pamplona leads directly into the bullring. Placards outside restaurants on the Calle Cuchilleros in Madrid proclaim, rather declaratively, that “Hemingway ate here.” The website of Madrid staple Botín, the world’s oldest restaurant and where Hemingway set the final scene of “The Sun,” devotes cyber text to Papa, even directly quoting the final chapter.

A question worth considering: Were he alive today, could Hemingway have written a novel as great as “The Sun”? So much of its wonder derives from his keen eye for the undiscovered, heretofore unknown traditions. Ours, though, is a world of uber-awareness, search-engine omniscience. Those with wanderlust possess all manner of means to beam into a faraway place, efficiently and affordably, even instantaneously. One afternoon, in writing this essay, I called up Google Earth to view the squares and streets where I had been 14 years ago, astounded by the clarity of the street-level views. For the next few minutes, I flitted to and fro around the globe, a Peter Pan visiting Anthony Bourdain places.

* * *

I had first read “The Sun” in high school and was unaffected by it. Not until I had decided to move abroad after college and take up residence in Spain did I return to the text. This time I clung to it, savoring every word about where to travel and where to eat and how to live like a good, knowledgeable expatriate.

Barcelona in July was a late-summer swamp. Soon, I missed the cool weather and took an overnight train north to the Basque country. In San Sebastian, I walked at dusk along the promenade that curled around the bay past the stalls that sold tiny, rare mollusks you picked like popcorn out of a cone of rolled newspaper. I saw some school kids perform the ancient Basque Riau-Riau dances and walked around the complicatedly arranged streets, names clotted with diphthongs, that smelled of the Atlantic, looking for Hemingway experiences.

That did not come until Pamplona.

It was late afternoon in August and very hot when I arrived. In my bag was a journal from my mother, some unremarkable reading, dirty clothes, a few measly legs left of a Eurail pass I had purchased in Harvard Square the month before.

I had not thought the city would be so different after San Fermín, but I had come too late: the city was half-full. I roved for much of that afternoon, dodging in and out of curio shops and whatever else was open out of season. When it was dinnertime, I studied the menus of the restaurants off the Plaza Castillo until I found somewhere with a menu written only in Basque. I befriended a fellow traveler, Ryan, who was also from Massachusetts, and we made plans to go drinking at a café in the Plaza Castillo afterwards.

By our third bottle of Estrella, an American couple approached to ask if they could join our table.

“We live in Paris and I am so tired of Paris that we have to leave. All I want to do is go back to Chicago but he won’t go,” she said.

“But Paris is so special,” I said. “Don’t you enjoy any part of it?”

“I despise it. They hate all Americans and you can smell the cheese in the cheese shops even from the street.”

She was menacingly beautiful and skeletal in that model way. Mark was a photographer, a little stout and balding, his shirt unbuttoned a touch salaciously. They had eloped some years ago and were living in an attic flat on the Île de St. Louis. When she began to flirt openly with Ryan, he would not look at her. When the couple began to quarrel, she kept saying she wished to return to America.

It was nearly midnight now and we were the only ones left at the café in the colonnade of the plaza. Across the way, the lights were all out at the Hotel La Perla. There was only moonlight in the great square. When she finally began to kiss him, her husband placed his bottle on the table, stood, and shook his head at me. She was giggling the entire time.

 

 

A Silicon Valley scheme to “disrupt” America’s education system would hurt the people who need it the most

The plot to destroy education: Why technology could ruin American classrooms — by trying to fix them

(Credit: Warner Bros. Entertainment Inc./Pgiam via iStock/Salon)

How does Silicon Valley feel about college? Here’s a taste: seven words in a tweet, provoked by a conversation about education started by Silicon Valley venture capitalist Marc Andreessen.

Arrogance? Check. Supreme confidence? Check. Oblivious to the value actually provided by a college education? Check.

The $400 billion a year that Americans pay for education after high school is being wasted on an archaic brick-and-mortar irrelevance. We can do better! 

But how? The question becomes more pertinent every day — and it’s one that Silicon Valley would dearly like to answer.

The robots are coming for our jobs, relentlessly working their way up the value chain. Anything that can be automated will be automated. The obvious — and perhaps the only — answer to this threat is a vastly improved educational system. We’ve got to leverage our human intelligence to stay ahead of robotic A.I.! And right now, everyone agrees, the system is not meeting the challenge. The cost of a traditional four-year college education has far outpaced inflation. Student loan debt is a national tragedy. Actually achieving a college degree still confers better job prospects than the alternative, but for many students, the cost-benefit ratio is completely out of whack.

No problem, says the tech industry. Like a snake eating its own tail, Silicon Valley has the perfect solution for the social inequities caused by technologically induced “disruption.” More disruption!

Universities are a hopelessly obsolete way to go about getting an education when we’ve got the Internet, the argument goes. Just as Airbnb is disemboweling the hotel industry and Uber is annihilating the taxi industry, companies such as Coursera and Udacity will leverage technology and access to venture capital in order to crush the incumbent education industry, supposedly offering high-quality educational opportunities for a fraction of the cost of a four-year college.



There is an elegant logic to this argument. We’ll use the Internet to stay ahead of the Internet. Awesome tools are at our disposal. In MOOCs — “Massive Open Online Courses” — hundreds of thousands of students will imbibe the wisdom of Ivy League “superprofessors” via pre-recorded lectures piped down to their smartphones. No need even for overworked graduate student teaching assistants. Intelligent software will take care of the grading. (That’s right — we’ll use robots to meet the robot threat!) The market, in other words, will provide the solution to the problem that the market has caused. It’s a wonderful libertarian dream.

But there’s a flaw in the logic. Early returns on MOOCs have confirmed what just about any teacher could have told you before Silicon Valley started believing it could “fix” education: real human interaction and engagement are hugely important to delivering a quality education. Most crucially, hands-on interaction with teachers is vital for the students in the most desperate need of an education — those with the least financial resources and the most challenging backgrounds.

Of course, it costs money to provide greater human interaction. You need bodies — ideally, bodies with some mastery of the subject material. But when you raise costs, you destroy the primary attraction of Silicon Valley’s “disruptive” model. The big tech success stories are all about avoiding the costs faced by the incumbents. Airbnb owns no hotels. Uber owns no taxis. The selling point of Coursera and Udacity is that they own no universities.

But education is different from running a hotel. There’s a reason why governments have historically considered providing education a public good. When you start throwing bodies into the fray to teach people who can’t afford a traditional private education, you end up disastrously chipping away at the profits that the venture capitalists backing Coursera and Udacity demand.

And that’s a tail that the snake can’t swallow.

* * *

The New York Times famously dubbed 2012 “The Year of the MOOC.” Coursera and Udacity (both started by Stanford professors) and an MIT-Harvard collaboration called edX exploded into the popular imagination. But the hype ebbed almost as quickly as it had flowed. In 2013, after a disastrous pilot experiment in which Udacity and San Jose State collaborated to deliver three courses, MOOCs were promptly declared dead — with the harshest schadenfreude coming from academics who saw the rush to MOOCs as an educational travesty.

At the end of 2013, the New York Times had changed its tune: “After Setbacks, Online Courses Are Rethought.”

But MOOC supporters have never wavered. In May, Clayton Christensen, the high priest of “disruption” theory, scoffed at the unbelievers: “[T]heir potential to disrupt — on price, technology, even pedagogy — in a long-stagnant industry,” wrote Christensen, “is only just beginning to be seen.”

At the end of June, the Economist followed suit with a package of stories touting the inevitable “creative destruction” threatened by MOOCs: “[A] revolution has begun thanks to three forces: rising costs, changing demand and disruptive technology. The result will be the reinvention of the university …” It’s 2012 all over again!

Sure, there have been speed bumps along the way. But as Christensen explained, the same is true for any would-be disruptive start-up. Failures are bound to happen. What makes Silicon Valley so special is its ability to learn from mistakes, tweak its biz model and try something new. It’s called “iteration.”

There is, of course, great merit to the iterative process. And it would be foolish to claim that new technology won’t have an impact on the educational process. If there’s one thing that the Internet and smartphones are insanely good at, it is providing access to information. A teenager with a phone in Uganda has opportunities for learning that most of the world never had through the entire course of human history. That’s great.

But there’s a crucial difference between “access to information” and “education” that explains why the university isn’t about to become obsolete, and why we can’t depend — as Marc Andreessen tells us — on the magic elixir of innovation plus the free market to solve our education quandary.

Nothing better illustrates this point than a closer look at the Udacity-San Jose State collaboration.

* * *

When Gov. Jerry Brown announced the collaboration between Udacity, founded by Stanford computer science professor Sebastian Thrun, and San Jose State, a publicly funded university in the heart of Silicon Valley, in January 2013, the match seemed perfect. Where else would you want to test out the future of education? The plan was to focus on three courses: elementary statistics, remedial math and college algebra. The target student demographic was notoriously ill-served by the university system: “Students were drawn from a lower-income high school and the underperforming ranks of SJSU’s student body,” reported Fast Company.

The results of the pilot, conducted in the spring of 2013, were a disaster, reported Fast Company:

Among those pupils who took remedial math during the pilot program, just 25 percent passed. And when the online class was compared with the in-person variety, the numbers were even more discouraging. A student taking college algebra in person was 52 percent more likely to pass than one taking a Udacity class, making the $150 price tag – roughly one-third the normal in-state tuition – seem like something less than a bargain.

A second attempt during the summer achieved better results, but with a much less disadvantaged student body; and, even more crucially, with considerably greater resources put into human interaction and oversight. For example, San Jose State reported that the summer courses were improved by “checking in with students more often.”

But the prime takeaway was stark. Inside Higher Education reported that a research report conducted by San Jose State on the experiment concluded that “it may be difficult for the university to deliver online education in this format to the students who need it most.”

In an iterative world, San Jose State and Udacity would have learned from their mistakes. The next version of their collaboration would have incorporated the increased human resources necessary to make it work, to be sure that students didn’t fall through the cracks. But the lesson that Udacity learned from the collaboration turned out to be something different: There isn’t going to be much profit to be made attempting to apply the principles of MOOCs to students from a disadvantaged background.

Thrun set off a firestorm of commentary when he told Fast Company’s Max Chafkin this:

“These were students from difficult neighborhoods, without good access to computers, and with all kinds of challenges in their lives,” he says. “It’s a group for which this medium is not a good fit….”

“I’d aspired to give people a profound education–to teach them something substantial… But the data was at odds with this idea.”

Henceforth, Udacity would “pivot” to focusing on vocational training funded by direct corporate support.

Thrun later claimed that his comments were misinterpreted by Fast Company. And in his May Op-Ed Christensen argued that Udacity’s pivot was a boon!

Udacity, for its part, should be applauded for not burning through all of its money in pursuit of the wrong strategy. The company realized — and publicly acknowledged — that its future lay on a different path than it had originally anticipated. Indeed, Udacity’s pivot may have even prevented a MOOC bubble from bursting.

Educating the disadvantaged via MOOCs is the wrong strategy? That’s not a pivot — it’s an abject surrender.

The Economist, meanwhile, brushed off the San Jose State episode by noting that “online learning has its pitfalls.” But the Economist also published a revealing observation: “In some ways MOOCs will reinforce inequality … among students (the talented will be much more comfortable than the weaker outside the structured university environment) …”

But isn’t that exactly the problem? No one can deny that the access to information facilitated by the Internet is a fantastic thing for talented students — and particularly so for those with secure economic backgrounds and fast Internet connections. But such people are most likely to succeed in a world full of smart robots anyway. The challenge posed by technological transformation and disruption is that the jobs that are being automated away first are the ones that are most suited to the less talented or advantaged. In other words, the population that MOOCs are least suited to serving is the population that technology is putting in the most vulnerable position.

Innovation and the free market aren’t going to fix this problem, for the very simple reason that there is no money in it. There’s no profit to be mined in educating people who not only can’t pay for an education, but also require greater human resources to be educated.

This is why we have public education in the first place.

“College is a public good,” says Jonathan Rees, a professor at Colorado State University who has been critical of MOOCs. “It’s what industrialized democratic society should be providing for students.”

Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.

The rise of data and the death of politics

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
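The feedback loop described above can be sketched in a few lines. The following is a toy illustration, not Google’s actual filter: a naive-Bayes classifier whose word statistics are updated every time a user reports a message as spam or not-spam, so the rules are learned from feedback rather than written by hand.

```python
from collections import defaultdict
import math

class FeedbackSpamFilter:
    """A toy naive-Bayes spam filter trained entirely by user feedback."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.msg_counts = {"spam": 0, "ham": 0}

    def report(self, message, label):
        """User feedback: label is 'spam' or 'ham'."""
        self.msg_counts[label] += 1
        for word in message.lower().split():
            self.word_counts[label][word] += 1

    def spam_probability(self, message):
        """Naive-Bayes estimate with Laplace smoothing."""
        total = sum(self.msg_counts.values()) or 1
        vocab = len(set(self.word_counts["spam"]) |
                    set(self.word_counts["ham"])) or 1
        scores = {}
        for label in ("spam", "ham"):
            # log prior, smoothed
            score = math.log((self.msg_counts[label] + 1) / (total + 2))
            n = sum(self.word_counts[label].values())
            for word in message.lower().split():
                score += math.log(
                    (self.word_counts[label][word] + 1) / (n + vocab))
            scores[label] = score
        # convert log scores to a probability of spam
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = FeedbackSpamFilter()
f.report("win free money now", "spam")
f.report("lunch meeting tomorrow", "ham")
```

Every `report` call is the “constant real-time feedback” the essay describes: the classifier’s notion of spam shifts with each correction, without anyone rewriting a rule.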

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
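Ashby’s principle is easy to simulate. The sketch below is a loose software analogue of the homeostat, not a model of the original hardware: four coupled units update one another, and any unit whose “essential variable” drifts out of bounds randomly rewires its incoming weights and restarts — the second-order adaptation that Ashby called ultrastability.

```python
import random

def homeostat_step(state, weights):
    """One synchronous update: each unit takes a weighted sum of the others."""
    n = len(state)
    return [sum(weights[i][j] * state[j] for j in range(n) if j != i)
            for i in range(n)]

def run_homeostat(steps=200, bound=10.0, seed=0):
    """Four coupled units; a unit whose value leaves [-bound, bound]
    randomly rewires its incoming weights and resets itself."""
    rng = random.Random(seed)
    n = 4
    weights = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    state = [rng.uniform(-1, 1) for _ in range(n)]
    rewirings = 0
    for _ in range(steps):
        state = homeostat_step(state, weights)
        for i, value in enumerate(state):
            if abs(value) > bound:              # essential variable out of range
                weights[i] = [rng.uniform(-1, 1) for _ in range(n)]
                state[i] = rng.uniform(-1, 1)   # restart the runaway unit
                rewirings += 1
    return state, rewirings

state, rewirings = run_homeostat()
```

Note that nothing in the code enumerates possible disturbances; it only specifies the condition under which the system must rewire itself — which is exactly the departure from rigid if-then rules described above.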

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
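The redditometro’s core comparison is trivial to express in code. The sketch below is a hypothetical simplification — the record fields and the 20 percent tolerance are invented for illustration, not drawn from the Italian system: flag anyone whose recorded spending exceeds declared income by more than a set margin.

```python
def flag_discrepancies(records, tolerance=0.2):
    """Flag taxpayers whose observed spending exceeds declared income
    by more than `tolerance` (a hypothetical simplification of an
    income-meter's logic)."""
    flagged = []
    for r in records:
        if r["spending"] > r["declared_income"] * (1 + tolerance):
            flagged.append(r["taxpayer_id"])
    return flagged

records = [
    {"taxpayer_id": "A", "declared_income": 40_000, "spending": 38_000},
    {"taxpayer_id": "B", "declared_income": 25_000, "spending": 60_000},
]
```

The simplicity is the point: such a rule is mechanically easy to enforce against ordinary wage-earners whose spending is recorded, which is precisely why, as the next paragraph argues, it does nothing about wealth hidden offshore.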

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer
For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb: part of the reputation-driven economy.
Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation will reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process data at that scale. Google can be proven wrong after the fact – as happened recently with its Flu Trends data, which was shown to overestimate the number of infections, possibly because it failed to account for the intense media coverage of flu – but the same is true of most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption-obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second – we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state, but it came with a trap: in specifying the exact protections the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”