My adventures in Hemingway

How I lived out a novel at odds with the modern world

As a young man in Europe, I immersed myself in the work of a master. What I learned changed me forever

Ernest Hemingway attends a bullfight in Madrid, Spain, November 1960. (Credit: AP)

At first, they died in the bullring, but the book that made them famous had swelled the crowds. By mid-century, the lack of space made it harder to outrun the bulls, so they began to die much earlier in the route, beyond Hotel La Perla, and most just before the bulls made their 90-degree turn onto Calle Estafeta.

I was there in Pamplona, standing on the balcony of the piso near this precarious juncture. It was 8 a.m.; the stone streets were shiny with rain. There was a wood barricade, like an outfield wall, that unnaturally ended Calle Mercaderes and forced the route right onto Estafeta. This is where I saw the first bulls slip, losing their footing at the turn, their bulk hitting the stones, their tonnage pounding into the barricade, the runners fleeing to the sidewalks, some, in fetal curls, waiting for death.

This was how the last American was killed in Pamplona, along this narrow corridor that offers no escape from the charging bulls. That morning, his killer, “Castellano,” had begun to run the 826 meters from the Cuesta de Santo Domingo to the Plaza de Toros at an unusually torrid pace, which frightened the runners and sent them scurrying. One of them fell.

Castellano plunged his horns into the limp American on the ground, goring his stomach and piercing his aorta. The American began to crawl. But there were still more bulls in the stampede, and by the time the Red Cross unit got to Matthew Tassio, most of the blood had already drained from his body. He was dead just eight minutes after he finally reached the hospital.

Tassio’s was the 14th death in the recorded history of San Fermín — the festival most famous for hosting the annual “Running of the Bulls” — and he was the last American to die there. One other runner has perished since, in 2003, and many more have been badly injured by the bulls, but perhaps none has died as gruesomely as Tassio did in 1995. I wasn’t in attendance for that run, thankfully.

In a year in which there would be no deaths, I came to Pamplona for the second time by bus from Madrid, passing through the sunflowers of Basque country. I had been invited by the correspondents of the Associated Press, with whom Dow Jones Newswire, my former employer, shared its local outpost, to witness the Running of the Bulls from their prized perch.

That morning’s encierro would be the first of the new millennium. It was very wet and, even from the balcony, you could see the unevenness in the cobblestones. The only place to witness the run was from the balconies of the apartments along the route. When the bulls began to stampede, the runners, many still drunk and wearing all white save for a red pañuelico around their necks, filled all of the space in the corridors. It was a jogging gait until they saw the bulls. Most of them ran well ahead of the danger, but some were eventually chased down by the bulls.



I remember an American student who slipped and nearly died on the curb by the cigarette shop on Estafeta, trampled, blood maroon in the grooves of the cobblestones, a small crowd coagulating around him to watch for death.

I remember the dense crowds, the public drunkenness, the street drink, made from equal parts Coca-Cola and red wine, that kept you drunk and alert. And, of course, I remember the monumental visage of Ernest Hemingway that hung down the side of the Hotel La Perla, where he set the novel that first recorded this mad dash from mortality.

* * *

It is not an overstatement to claim that Ernest Hemingway introduced Pamplona to the world. Until he first wrote about it in 1923 in an article for The Toronto Star Weekly, the San Fermín festival had been a regional affair: “As far as I know we were the only English speaking people in Pamplona during the Feria of last year,” writes Hemingway. “We landed at Pamplona at night. The streets were solid with people dancing. Music was pounding and throbbing. Fireworks were being set off from the big public square. All the carnivals I had ever seen paled down in comparison.”

This Toronto Star sketch of Pamplona comes from the appendix of a new edition of “The Sun Also Rises,” released last week by Scribner’s to commemorate the 90th anniversary of its publication. The “updated” version will titillate Hemingway aficionados: unpublished early drafts, excised scenes and two deleted opening chapters. This “new” material provides a rare glimpse into the evolution and creative process of one of the great masters of American literature.

Drafted over six weeks across Spain (mostly in Valencia and Madrid) in the summer of 1925, and set in Jazz Age Paris amid the psychic ruins of the Great War, “The Sun Also Rises” endures as one of the finest first novels ever written. Its itinerant narrative of Spain and France (both largely unknown to the American traveling public in 1925, but more on that later), its depictions of café life and drinking, of bullfighting and affairs with matadors, were all new to the novel of the time. “No amount of analysis can convey the quality of ‘The Sun Also Rises,’” went the original review of the book in The New York Times in 1926. “It is a truly gripping story, told in lean, hard, athletic narrative prose that puts more literary English to shame.”

Hemingway chose to evoke disillusionment through Jake Barnes, an expatriated American foreign correspondent living in Paris, and his married paramour, Lady Brett Ashley. Their romance is complicated, to say the least: Jake’s manhood was marred in the war and he cannot procreate (in his famous interview with The Paris Review in the 1950s, Hemingway was adamant that Jake was not a eunuch). The war injury was a crucial detail in the text, and an emblematic signature of the Hemingway code that ran through his subsequent work. His male characters bore physical or psychic wounds (sometimes both). Jake’s injury was an outward symptom of an interior crisis suffered in the wake of WWI.

Hemingway’s genius was to make Jake and Lady Brett both protagonists and antagonists. We inflict our own wounds, Hemingway seems to say, an insight that bears out even today: Contemporary disillusionment is concerned with the surprising, man-made ironies of modernity — a dwindling sense of freedom, both existential and civil; the West’s waning hegemony even amid unparalleled wealth and technology; a diminished middle class and shrinking American dream; and an ever-present sense of looming doom (an attack of some kind, perhaps) by forces beyond our control. The Lost Generation era of “The Sun” was infected with its own disillusionment, man-made in origin, born of the tragic period of 1914–1918. This disillusion plays out in “The Sun” almost nihilistically; by the novel’s end, both characters are badly damaged by the preceding events, degraded, alienated.

* * *

The two opening chapters, cut by Hemingway but offered to readers in the new edition, were fortunate omissions. The original opening lines of “The Sun” sound an awkward, conversational, and Victorian tone, inconsistent with the remainder of the novel:

This is a story about a lady. Her name is Lady Ashley and when the story begins she is living in Paris and it is Spring. That should be a good setting for a romantic but highly moral story.

Yet, in the rest of the deleted chapter, and elsewhere in the early passages, the Hemingway voice is undeniably present. That voice has drawn both veneration and ridicule. The essayist E.B. White, hardly the type for big-game hunting and encierros, penned a famous parody of Hemingway in The New Yorker in 1950, deriding his so-called declarative prose style. Yet much of Hemingway’s best writing strains this easy stereotype. He is far less aphoristic and quotable than, say, Don DeLillo (a veritable one-man factory of sound bites), and you would be hard-pressed to find a pithy, tweetable line in the prose of “The Sun Also Rises.”

By the 1930s, Hemingway’s writing style had grown more intricate. “Green Hills of Africa” contains a buffalo of a sentence, 497 words spanning five pages, reminiscent of Faulkner or Gabriel García Márquez. That magical realist, in fact, had lionized Hemingway. Writing for the New York Times about a fleeting, chance encounter with Hemingway on Paris’s Boulevard St. Michel, García Márquez declares that “[Hemingway's] instantaneously inspired short stories are unassailable” and calls him “one of the most brilliant goldsmiths in the history of letters.”

For better or for worse, that unmistakably declarative, taut, gritty Hemingway music can overpower the substance of his stories, somewhat ironically drawing the attention back onto the author himself. Over the decades since his death in 1961, the Hemingway legend has bloomed and rebloomed many times over, until now there is a preoccupation with the Hemingway lifestyle, the man himself, in a way, morphing posthumously into a tourist destination, a literary Jimmy Buffett.

Those places in “The Sun” – Pamplona, Madrid, Paris – remain open for tourists, poets manqué, backpackers and the traveling gentility. But the tourism and concomitant commercialism have rendered quite a few of their landmarks ersatz. At the Closerie des Lilas in Paris’s Montparnasse district, where Hemingway set many scenes from his books (including my favorite, “A Moveable Feast”), practically the entire bar menu is a monument to Hemingway: daiquiris and mojitos, all made from Cuban rum, all named after Papa. After its much-celebrated renovation, the Hotel Ritz saw fit to refurbish itself with a Hemingway-themed restaurant, L’Espadon (“The Swordfish”), in homage to Papa’s love of fishing, along with a Hemingway-inspired bar, which, no doubt, mixes up fanciful permutations of mojitos and daiquiris named after… you guessed it. The Spanish may have exceeded the French even more flagrantly in their quest to annex the Hemingway legend into their geography. Calle de Hemingway in Pamplona leads directly into the bullring. Placards outside restaurants on the Calle Cuchilleros in Madrid proclaim, rather declaratively, that “Hemingway ate here.” The website of Madrid staple Botin’s, the world’s oldest restaurant and where Hemingway set the final scene of “The Sun,” devotes web copy to Papa, even directly quoting the final chapter.

A question worth considering: Were he alive today, could Hemingway have written a novel as great as “The Sun”? So much of its wonder derives from his keen eye for the undiscovered, heretofore unknown traditions. Ours, though, is a world of uber-awareness, search-engine omniscience. Those with wanderlust possess all manner of means to beam into a faraway place, efficiently and affordably, even instantaneously. One afternoon, in writing this essay, I called up Google Earth to view the squares and streets where I had been 14 years ago, astounded by the clarity of the street-level views. For the next few minutes, I flitted to and fro around the globe, a Peter Pan visiting Anthony Bourdain places.

* * *

I had first read “The Sun” in high school and was unaffected by it. Not until I had decided to move abroad after college and take up residence in Spain did I pick up the text again. This time I clung to it, savoring every word about where to travel and where to eat and how to live like a good, knowledgeable expatriate.

Barcelona in July was a late-summer swamp. Soon, I missed the cool weather and took an overnight train north to the Basque country. In San Sebastian, I walked at dusk along the promenade that curled around the bay, past the stalls that sold tiny, rare mollusks you picked like popcorn out of a cone of rolled newspaper. I saw some school kids perform the ancient Basque Riau-Riau dances and wandered the tangle of streets, their names clotted with diphthongs, that smelled of the Atlantic, looking for Hemingway experiences.

That experience did not come until Pamplona.

It was late afternoon in August and very hot when I arrived. In my bag were a journal from my mother, some unremarkable reading, dirty clothes, and a few measly legs left of a Eurail pass I had purchased in Harvard Square the month before.

I had not thought the city would be so different after San Fermín, but I had come too late: the city was half-full. I roved for much of that afternoon, dodging in and out of curio shops and whatever else was open out of season. When it was dinnertime, I studied the menus of the restaurants off the Plaza Castillo until I found somewhere with a menu written only in Basque. I befriended a fellow traveler, Ryan, who was also from Massachusetts, and we made plans to go drinking at a café in the Plaza Castillo afterwards.

By our third bottle of Estrella, an American couple approached to ask if they could join our table.

“We live in Paris and I am so tired of Paris that we have to leave. All I want to do is go back to Chicago but he won’t go,” she said.

“But Paris is so special,” I said. “Don’t you enjoy any part of it?”

“I despise it. They hate all Americans and you can smell the cheese in the cheese shops even from the street.”

She was menacingly beautiful and skeletal in that model way. Her husband, Mark, was a photographer, a little stout and balding, his shirt unbuttoned a touch salaciously. They had eloped some years ago and were living in an attic flat on the Île de St. Louis. When she began to flirt openly with Ryan, he would not look at her. When the couple began to quarrel, she kept saying she wished to return to America.

It was nearly midnight now and we were the only ones left in the café in the colonnade of the plaza. Across the way, the lights were all out at the Hotel La Perla. There was only moonlight in the great square. When she finally began to kiss him, her husband placed his bottle on the table, stood, and shook his head at me. She was giggling the entire time.

 

 

The rise of data and the death of politics

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?


Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965, Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of the New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
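
To make the feedback loop concrete, here is a minimal sketch in Python – purely illustrative, and not Google’s or any bank’s actual system – of a toy filter that has no fixed rules at all and adjusts itself every time a user clicks “report spam” or “not spam”:

from collections import defaultdict
import math

class FeedbackSpamFilter:
    """A toy classifier trained only by user feedback (a hypothetical sketch,
    not a production system)."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.message_counts = {"spam": 0, "ham": 0}

    def report(self, text, label):
        # Called whenever a user marks a message as "spam" or "ham".
        self.message_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def spam_probability(self, text):
        # Naive-Bayes-style score built from whatever feedback has accumulated so far.
        log_odds = math.log((self.message_counts["spam"] + 1) /
                            (self.message_counts["ham"] + 1))
        for word in text.lower().split():
            p_spam = (self.word_counts["spam"][word] + 1) / (self.message_counts["spam"] + 2)
            p_ham = (self.word_counts["ham"][word] + 1) / (self.message_counts["ham"] + 2)
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))

# The filter only gets better because users keep correcting it.
f = FeedbackSpamFilter()
f.report("win a free prize now", "spam")
f.report("lunch at noon tomorrow?", "ham")
print(f.spam_probability("free prize inside"))

The point of the sketch is the loop, not the arithmetic: every user correction immediately changes the rule the system will apply to the next message.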

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”), hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
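
A rough simulation conveys the idea. The following is a toy sketch in Python – assuming a simple linear four-unit system rather than Ashby’s actual electromechanical hardware – in which the only thing specified in advance is what counts as “out of bounds”; the wiring itself is thrown away and randomly redrawn whenever that condition is violated:

import random

def step(weights, state):
    # One update of a linear four-unit "homeostat": each unit's next value
    # is a weighted sum of every unit's current value.
    return [sum(w * s for w, s in zip(row, state)) for row in weights]

def run(steps=10000, limit=10.0, n=4):
    def rewire():
        return [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

    weights = rewire()
    state = [random.uniform(-1, 1) for _ in range(n)]
    rewirings = 0
    for _ in range(steps):
        state = step(weights, state)
        if any(abs(v) > limit for v in state):
            # An "essential variable" has left its safe band: scrap the current
            # wiring and try a random new configuration (Ashby's uniselector).
            weights = rewire()
            state = [random.uniform(-1, 1) for _ in range(n)]
            rewirings += 1
    return rewirings

print("random reconfigurations before settling:", run())

Nothing in the sketch anticipates any particular disturbance; stability is simply whichever configuration happens to survive the feedback.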

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
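
The logic of such a tool is not complicated. A minimal sketch in Python – with invented records and an arbitrary tolerance threshold, not the actual redditometro – might look like this:

def flag_discrepancies(records, tolerance=0.2):
    # Flag taxpayers whose recorded spending exceeds declared income
    # by more than the tolerance (20% here, chosen arbitrarily).
    return [r["taxpayer_id"] for r in records
            if r["spending"] > r["declared_income"] * (1 + tolerance)]

records = [
    {"taxpayer_id": "A-001", "declared_income": 30000, "spending": 29000},
    {"taxpayer_id": "A-002", "declared_income": 30000, "spending": 52000},
]
print(flag_discrepancies(records))  # ['A-002']

The hard part of the real systems is assembling the spending records in the first place – hence the arcane reporting law the author mentions.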

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.


Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.


Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption‑obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make it available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second – we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

 

THE BULLSHIT MACHINE

Here’s a tiny confession. I’m bored.

Yes; I know. I’m a sinner. Go ahead. Burn me at the stake of your puritanical Calvinism; the righteously, thoroughly, well, boring idea that boredom itself is a moral defect; that a restless mind is the Devil’s sweatshop.

There’s nothing more boring than that; and I’ll return to that very idea at the end of this essay; which I hope is the beginning.

What am I bored of? Everything. Blogs books music art business ideas politics tweets movies science math technology…but more than that: the spirit of the age; the atmosphere of the time; the tendency of the now; the disposition of the here.

Sorry; but it’s true. It’s boring me numb and dumb.

A culture that prizes narcissism above individualism. A politics that places “tolerance” above acceptance. A spirit that encourages cynicism over reverence. A public sphere that places irony over sincerity. A technosophy that elevates “data” over understanding. A society that puts “opportunity” before decency. An economy that…you know. Works us harder to make us poorer at “jobs” we hate where we make stuff that sucks every last bit of passion from our souls to sell to everyone else who’s working harder to get poorer at “jobs” they hate where they make stuff that sucks every last bit of passion from their souls.

To be bored isn’t to be indifferent. It is to be fatigued. Because one is exhausted. And that is precisely where—and only where—the values above lead us. To exhaustion; with the ceaseless, endless, meaningless work of maintaining the fiction. Of pretending that who we truly want to be is what everyone believes everyone else wants to be. Liked, not loved; “attractive”, not beautiful; clever, not wise; snarky, not happy; advantaged, not prosperous.

It exhausts us; literally; this game of parasitically craving everyone’s cravings. It makes us adversaries not of one another; but of ourselves. Until there is nothing left. Not of us as we are; but of the people we might have been. The values above shrink and reduce and diminish our potential; as individuals, as people, societies. And so I have grown fatigued by them.

Ah, you say. But when hasn’t humanity always suffered all the above? Please. Let’s not mince ideas. Unless you think the middle class didn’t actually thrive once; unless you think that the gentleman that’s made forty seven Saw flicks (so far) is this generation’s Alfred Hitchcock; unless you believe that this era has a John Lennon; unless you think that Jeff Koons is Picasso…perhaps you see my point.

I’m bored, in short, of what I’d call a cycle of perpetual bullshit. A bullshit machine. The bullshit machine turns life into waste.

The bullshit machine looks something like this. Narcissism about who you are leads to cynicism about who you could be leads to mediocrity in what you do…leads to narcissism about who you are. Narcissism leads to cynicism leads to mediocrity…leads to narcissism.

Let me simplify that tiny model of the stalemate the human heart can reach with life.

The bullshit machine is the work we do only to live lives we don’t want, need, love, or deserve.

Everything’s work now. Relationships; hobbies; exercise. Even love. Gruelling; tedious; unrelenting; formulaic; passionless; calculated; repetitive; predictable; analysed; mined; timed; performed.

Work is bullshit. You know it, I know it; mankind has always known it. Sure; you have to work at what you want to accomplish. But that’s not the point. It is the flash of genius; the glimmer of intuition; the afterglow of achievement; the savoring of experience; the incandescence of meaning; all these make life worthwhile, pregnant, impossible, aching with purpose. These are the ends. Work is merely the means.

Our lives are confused like that. They are means without ends; model homes; acts which we perform, but do not fully experience.

Remember when I mentioned puritanical Calvinism? The idea that being bored is itself a sign of a lack of virtue—and that is, itself, the most boring idea in the world?

That’s the battery that powers the bullshit machine. We’re not allowed to admit it: that we’re bored. We’ve always got to be doing something. Always always always. Tapping, clicking, meeting, partying, exercising, networking, “friending”. Work hard, play hard, live hard. Improve. Gain. Benefit. Realize.

Hold on. Let me turn on crotchety Grandpa mode. Click.

Remember when cafes used to be full of people…thinking? Now I defy you to find one not full of people Tinder—Twitter—Facebook—App-of-the-nanosecond-ing; furiously. Like true believers hunched over the glow of a spiritualized Eden they can never truly enter; which is precisely why they’re mesmerized by it. The chance at a perfect life; full of pleasure; the perfect partner, relationship, audience, job, secret, home, career; it’s a tap away. It’s something like a slot-machine of the human soul, this culture we’re building. The jackpot’s just another coin away…forever. Who wouldn’t be seduced by that?

Winners of a million followers, fans, friends, lovers, dollars…after all, a billion people tweeting, updating, flicking, swiping, tapping into the void a thousand times a minute can’t be wrong. Can they?

And therein is the paradox of the bullshit machine. We do more than humans have ever done before. But we are not accomplishing much; and we are, it seems to me, becoming even less than that.

The more we do, the more passive we seem to become. Compliant. Complaisant. As if we are merely going through the motions.

Why? We are something like apparitions today; juggling a multiplicity of selves through the noise; the “you” you are on Facebook, Twitter, Tumblr, Tinder…wherever…at your day job, your night job, your hobby, your primary relationship, your friend-with-benefits, your incredibly astonishing range of extracurricular activities. But this hyperfragmentation of self gives rise to a kind of schizophrenia; conflicts, dissociations, tensions, dislocations, anxieties, paranoias, delusions. Our social wombs do not give birth to our true selves; the selves explosive with capability, possibility, wonder.

Tap tap tap. And yet. We are barely there, at all; in our own lives; in the moments which we will one day look back on and ask ourselves…what were we thinking wasting our lives on things that didn’t matter at all?

The answer, of course, is that we weren’t thinking. Or feeling. We don’t have time to think anymore. Thinking is a superluxury. Feeling is an even bigger superluxury. In an era where decent food, water, education, and healthcare are luxuries, thinking and feeling are activities too costly for society to allow. They are a drag on “growth”; a burden on “productivity”; they slow down the furious acceleration of the bullshit machine.

And so. Here we are. Going through the motions. The bullshit machine says the small is the great; the absence is the presence; the vicious is the noble; the lie is the truth. We believe it; and, greedily, it feeds on our belief. The more we feed it, the more insatiable it becomes. Until, at last, we are exhausted. By pretending to want the lives we think we should; instead of daring to live the lives we know we could.

Fuck it. Just admit it. You’re probably just as bored as I am.

Good for you.

Welcome to the world beyond the Bullshit Machine.

“Alive Inside”: Music may be the best medicine for dementia

A heartbreaking new film explores the breakthrough that can help severely disabled seniors: It’s called the iPod

"Alive Inside": Music may be the best medicine for dementia

One physician who works with the elderly tells Michael Rossato-Bennett’s camera, in the documentary “Alive Inside,” that he can write prescriptions for $1,000 a month in medications for older people under his care, without anyone in the healthcare bureaucracy batting an eye. Somebody will pay for it (ultimately that somebody is you and me, I suppose) even though the powerful pharmaceutical cocktails served up in nursing homes do little or nothing for people with dementia, except keep them docile and manageable. But if he wants to give those older folks $40 iPods loaded up with music they remember – which both research and empirical evidence suggest will improve their lives immensely — well, you can hardly imagine the dense fog of bureaucratic hostility that descends upon the whole enterprise.

“Alive Inside” is straightforward advocacy cinema, but it won the audience award at Sundance this year because it will completely slay you, and it has the greatest advantages any such movie can have: Its cause is easy to understand, and requires no massive social change or investment. Furthermore, once you see the electrifying evidence, it becomes nearly impossible to oppose. This isn’t fracking or climate change or drones; I see no possible way for conservatives to turn the question of music therapy for senior citizens into some kind of sinister left-wing plot. (“Next up on Fox News: Will Elton John turn our seniors gay?”) All the same, social worker Dan Cohen’s crusade to bring music into nursing homes could be the leading edge of a monumental change in the way we approach the care and treatment of older people, especially the 5 million or so Americans living with dementia disorders.

You may already have seen a clip from “Alive Inside,” which became a YouTube hit not long ago: An African-American man in his 90s named Henry, who spends his waking hours in a semi-dormant state, curled inward like a fetus with his eyes closed, is given an iPod loaded with the gospel music he grew up with. The effect seems almost impossible and literally miraculous: Within seconds his eyes are open, he’s singing and humming along, and he’s fully present in the room, talking to the people around him. It turns out Henry prefers the scat-singing of Cab Calloway to gospel, and a brief Calloway imitation leads him into memories of a job delivering groceries on his bicycle, sometime in the 1930s.



Of course Henry is still an elderly and infirm person who is near the end of his life. But the key word in that sentence is “person”; we become startlingly and heartbreakingly aware that an entire person’s life experience is still in there, locked inside Henry’s dementia and isolation and overmedication. As Oliver Sacks put it, drawing on a word from the King James Bible, Henry has been “quickened,” or returned to life, without the intervention of supernatural forces. It’s not like there’s just one such moment of tear-jerking revelation in “Alive Inside.” There might be a dozen. I’m telling you, one of those little pocket packs of tissue is not gonna cut it. Bring the box.

There’s the apologetic old lady who claims to remember nothing about her girlhood, until Louis Armstrong singing “When the Saints Go Marching In” brings back a flood of specific memories. (Her mom was religious, and Armstrong’s profane music was taboo. She had to sneak off to someone else’s house to hear his records.) There’s the woman with multiple psychiatric disorders and a late-stage cancer diagnosis, who ditches the wheelchair and the walker and starts salsa dancing. There’s the Army veteran who lost all his hair in the Los Alamos A-bomb test and has difficulty recognizing a picture of his younger self, abruptly busting out his striking baritone to sing along with big-band numbers. “It makes me feel like I got a girl,” he says. “I’m gonna hold her tight.” There’s the sweet, angular lady in late middle age, a boomer grandma who can’t reliably find the elevator in her building, or tell the up button from the down, boogieing around the room to the Beach Boys’ “I Get Around,” as if transformed into someone 20 years younger. The music cannot get away from her, she says, as so much else has done.

There’s a bit of hard science in “Alive Inside” (supplied by Sacks in fascinating detail) and also the beginnings of an immensely important social and cultural debate about the tragic failures of our elder-care system and how the Western world will deal with its rapidly aging population. As Sacks makes clear, music is a cultural invention that appears to access areas of the brain that evolved for other reasons, and those areas remain relatively unaffected by the cognitive decline that goes with Alzheimer’s and other dementia disorders. While the “quickening” effect observed in someone like Henry is not well understood, it appears that stimulating those undamaged areas of the brain with beloved and familiar signals – and what will we ever love more than the hit songs of our youth? — can unlock other things at least temporarily, including memory, verbal ability, and emotion. Sacks doesn’t address this, but the effects appear physical as well: Everyone we see in the film becomes visibly more active, even the man with late-stage multiple sclerosis and the semi-comatose woman who never speaks.

Dementia is a genuine medical phenomenon, as anyone who has spent time around older people can attest, and one that’s likely to exert growing psychic and economic stress on our society as the population of people over 65 continues to grow. But you can’t help wondering whether our social practice of isolating so many old people in anonymous, characterless facilities that are entirely separated from the rhythms of ordinary social life has made the problem considerably worse. As one physician observes in the film, the modern-day Medicare-funded nursing home is like a toxic combination of the poorhouse and the hospital, and the social stigma attached to those places is as strong as the smell of disinfectant and overcooked Salisbury steak. Our culture is devoted to the glamour of youth and the consumption power of adulthood; we want to think about old age as little as possible, even though many of us will live one-quarter to one-third of our lives as senior citizens.

Rossato-Bennett keeps the focus of “Alive Inside” on Dan Cohen’s iPod crusade (run through a nonprofit called Music & Memory), which is simple, effective and has achievable goals. The two of them tread more lightly on the bigger philosophical questions, but those are definitely here. Restoring Schubert or Motown to people with dementia or severe disabilities can be a life-changing moment, but it’s also something of a metaphor, and the lives that really need changing are our own. Instead of treating older people as a medical and financial problem to be managed and contained, could we have a society that valued, nurtured and revered them, as most societies did before the coming of industrial modernism? Oh, and if you’re planning to visit me in 30 or 40 years, with whatever invisible gadget then exists, please take note: No matter how far gone I am, you’ll get me back with “Some Girls,” Roxy Music’s “Siren” and Otto Klemperer’s 1964 recording of “The Magic Flute.”

“Alive Inside” opens this week at the Sunshine Cinema in New York. It opens July 25 in Huntington, N.Y., Toronto and Washington; Aug. 1 in Asbury Park, N.J., Boston, Los Angeles and Philadelphia; Aug. 8 in Chicago, Martha’s Vineyard, Mass., Palm Springs, Calif., San Diego, San Francisco, San Jose, Calif., and Vancouver, Canada; Aug. 15 in Denver, Minneapolis and Phoenix; and Aug. 22 in Atlanta, Dallas, Harrisburg, Pa., Portland, Ore., Santa Fe, N.M., Seattle and Spokane, Wash., with more cities and home video to follow.

http://www.salon.com/2014/07/15/alive_inside_music_may_be_the_best_medicine_for_dementia/?source=newsletter

“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption — and paid a cruel price

"The Internet's Own Boy": How the government destroyed Aaron Swartz
Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary “The Internet’s Own Boy: The Story of Aaron Swartz,” which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.



That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the “Guerilla Open Access Manifesto” Swartz wrote in 2008, at age 21, led directly to his exaggerated federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been largely funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard graduate student at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney for Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony indictments that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching online anti-piracy bill that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, looks in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.

”The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

http://www.salon.com/2014/06/24/the_internets_own_boy_how_the_government_destroyed_aaron_swartz/?source=newsletter

Poor, white and pissed-off in the age of inequality

Dynamic teen actor Josh Wiggins and Aaron Paul of “Breaking Bad” headline a terrific fable of Texas boyhood

Poor, white and pissed-off in the age of inequality
Josh Wiggins in “Hellion” (Credit: Lauren Logan)

One of the breakthroughs of this year’s Sundance festival, writer-director Kat Candler’s indie feature “Hellion” offers a startling and memorable portrait of adolescent life in downscale East Texas suburbia, along with a white-hot breakthrough performance from teenage actor Josh Wiggins. He plays a rootless, handsome 13-year-old named Jacob Wilson, an aspiring motocross racer and heavy metal buff who stands a little apart from the other ne’er-do-well teenagers of Port Arthur, Texas. (Wasn’t that Janis Joplin’s hometown?) Jacob comes close to being a classic avatar of American boyhood, who dreams of bigger things and vows to remain uncorrupted. He acts as if he sees things other kids (and adults) can’t, and carries a burden others don’t, and eventually we will come to understand that he’s right about that.

When we first meet Jacob in Candler’s pulse-elevating opening scene, he’s savagely vandalizing a pickup truck, for unknown reasons, in the parking lot outside the local high school’s football stadium. It’s a terrific world-establishing montage (shot by Brett Pawlak and edited by Alan Canant) that moves back and forth between the action in the stadium – the organizing principle of so much social life in Texas – and the anarchic and dangerous energy outside, which will end with Jacob being sent to a juvenile facility. Once there, he has to stand in a line of miscreant boys and intone the mantra “I will take ownership for my actions and the consequences of those actions.” If there’s something profoundly ironic in being compelled to speak such words by a drill-sergeant type, it still stands as the central conundrum in Candler’s film. What do big words like responsibility and freedom mean, in a life as constricted as Jacob’s?

To my mind, some reviewers at Sundance seemed to miss the social context of “Hellion,” or the issues it raises that Candler never directly spells out. “Hellion” has no explicit political content or message, and never announces itself as a story about race or gender. But it’s a movie set among the embattled and downtrodden white people of deep-red exurban America, the demographic of the Tea Party and the Second Amendment, and it’s a heartbreaking story about a troubled all-male family, made by a female director. Jacob is balanced between his sweet-natured and well-behaved younger brother, Wes (Deke Garner), and their alcoholic, heartbroken dad, Hollis (Aaron Paul of “Breaking Bad”).



Hollis is a useful twist on the redneck failed-father archetype in that he really isn’t a bad guy. If the character feels a bit thinly sketched in Candler’s script, Paul gives him a wonderful hangdog likability that nearly redeems him time and again. Hollis loves both his sons unconditionally and is not violent or abusive in any serious way; he understands the general concept of how a dad is supposed to behave but can’t quite get it done. (Honestly, I think almost any father will relate.) He’s late for everything and frequently leaves Wes in the care of Jacob, at this point a known hoodlum who is on probation. Where is their mother? Well, we do find that out eventually, but for much of the film we only know that Hollis is devoted to fixing up the beach house she loved in nearby Galveston, which was evidently wrecked in one or another of the Gulf Coast’s parade of hurricanes.

When a social worker shows up and finds the two boys unsupervised amid beer cans, dirty dishes and a few scraps of fast food – Jacob is eating a sandwich comprising white bread and Cool Whip – the results for the family are both predictable and disastrous. To some extent, “Hellion” is a parable based on the old saw about good intentions and the road to hell. Does the Wilson family require intervention and professional help? Absolutely. Does splitting up Wes and Jacob (who adore each other) and nudging Hollis ever closer to the abyss of booze and despair really make sense? Wes goes to live with his Aunt Pam (indie stalwart Juliette Lewis in a quiet, solid performance), who seems to be the one sober and reasonable grown-up in Port Arthur. Hollis retreats into sadness and failure, which only exacerbates Jacob’s exaggerated pride and immense disdain for adult hypocrisy. Maybe his dream of motocross stardom can reunite the family, but if that doesn’t work he’s willing to pursue more extreme ideas.

We see how that plays out and it isn’t great, though I would argue that Candler has tried to introduce elements of melodrama into a film that probably didn’t require them. The hurricane-damaged beach house feels like overly obvious symbolism rather than an organic aspect of the story, and a third-act explosion of violence feels like a screenplay concoction (although the very end is better). The real story of “Hellion” lies in Wiggins’ long-distance James Dean gaze, which would burn down Port Arthur and everyone in it if it could, and in the way Candler and Pawlak capture the heat and dust and tract-house sameness of this flat and humid country with almost lyrical concentration. “Hellion” is in some ways an antidote or companion piece to the Texas-childhood segment of Terrence Malick’s “Tree of Life,” with the poetic reverie replaced by Slayer and Metallica and no heavenly shore at the end of the journey. Jacob yearns to grow up – but what the hell for? Look at what he stands to inherit.

”Hellion” is now playing at the IFC Center in New York. It opens June 20 in Los Angeles and Miami; June 27 in Dallas, Palm Springs, Calif., Phoenix and Austin, Texas; July 2 in Washington; July 4 in Denver, Detroit, Houston, San Francisco and Seattle; July 11 in Ann Arbor, Mich., New Orleans and Santa Fe, N.M,; and July 18 in San Diego, with more cities to follow. It’s also available on-demand from cable, satellite and online providers.

Godzilla and the Birth of Modern Environmentalism

June 9, 2014, 10:00 AM

It rose up out of the sea, a fearsome roaring monster unlike anything humans had ever seen, horrible, primeval, unstoppable, towering, breathing radioactive fire and leaving total destruction in its wake.

No, this was not Gojira, the lumbering prehistoric beast who hit Japanese movie screens in 1954 and has starred in 30 films since, the most recent of which is just out. This monster was the founding inspiration for Godzilla: the fearsome hydrogen bomb explosion known as Castle Bravo, a test detonated earlier that year in the Pacific that gave birth to far more than cinema’s most famous monster. Castle Bravo played a key role in establishing the deep fear of all things nuclear that persists to this day, helped give rise to modern environmentalism, and even planted the seeds of a basic conflict modern society is struggling with: Are the benefits of modern technology outweighed by the threats it poses to nature itself?

-  –  –  –  -

 

We think now that the atomic bombings of Hiroshima and Nagasaki horrified the world, but against the other ghastly horrors of World War II, they didn’t really stand out. The frightening pictures of cities devastated by nuclear weapons didn’t look that much different from Tokyo after the Operation Meetinghouse firebombing of March 9-10, 1945, that killed more people than either atomic weapon.

Tokyo after the March 1945 firebombing

Hiroshima after the August 1945 atomic bombing

Not even the outbreak of “A bomb disease” several years after Hiroshima and Nagasaki, leukemia cases among the bomb survivors caused by exposure to high doses of radiation, was enough to instill the global fear that was to come.

But this was…

Castle Bravo 1954 hydrogen bomb test

 

…the fearsome, roaring, alien monster of Castle Bravo that lit the sky like a false sun, creating a fireball so wide it would have vaporized a third of Manhattan, and so tall it stretched four times higher than Mt. Everest. The scale of the destruction was massively greater than that of the atomic bombs dropped nine years earlier on Japan … almost too great, too frightening, to comprehend.

Now everyone could feel the fear that Manhattan Project director Robert Oppenheimer felt as he watched the first atomic bomb test in New Mexico in 1945, recalling the words of the god Vishnu in the Bhagavad Gita: “Now I am become Death, the Destroyer of Worlds.”

As frightening as the explosion of Castle Bravo was the fallout. When the dust settled, radioactivity had contaminated more than 7,000 square miles, nearly the size of New Jersey. One speck in that vast area of contamination was the Japanese fishing boat Fukuryu Maru, or Lucky Dragon. It was operating in what were supposed to be safe waters, but the explosion was three times as powerful as scientists had planned. The crew of the Fukuryu Maru got back to Japan, and under the glare of international media attention, fell sick from radiation exposure. The Japanese press called it “The Second Atomic Bombing of Mankind.”

Now no one on Earth could pretend they were not at risk, either from the vastly more threatening hydrogen weapons themselves, or from radioactive fallout that caused cancer, a disease which the public at that same time was just beginning to openly talk about and fear.

What’s more, nature herself was now in jeopardy, threatened with being poisoned … polluted … contaminated by people daring to play God with the very forces of nature. As Spencer Weart reported in his marvelous book The Rise of Nuclear Fear,  newspapers called nuclear weapons “a menace to the order of nature” and “a wrongful exploitation of the ‘inner secrets’ of creation.” Pope Pius XII, in Easter messages heard by hundreds of millions around the world, “warned that bomb tests brought ‘pollution’ of the mysterious processes of nature.”

- – – – -

Oppenheimer’s quote about becoming the Destroyer of Worlds is famous. It is less well known that, upon receiving an honor from the Army for creating the atom bomb, Oppenheimer said of the threat of nuclear weapons: “The people of this world must unite or they will perish.” And that is precisely what happened in the months following Castle Bravo. In 1955, the year after that test, the annual rally against nuclear weapons in Hiroshima was massive, and international news coverage of the rally helped spawn the Ban the Bomb movement against not only the weapons themselves but also their atmospheric testing. It was the first truly global protest movement. People were indeed uniting, brought together by the shared fear that they might perish.

That movement spawned opposition to atomic power, the new form of electrical generation being developed in the 1950s. Some of the most influential leaders of that movement, including Barry Commoner and Rachel Carson, broadened their attention to other ways that technology seemed to threaten human health or nature itself. The modern environmental movement grew directly out of ‘Ban the Bomb’ and fear of radiation. In fact, Carson was inspired to write Silent Spring, about the threat of the overuse of pesticides, by the similarities she saw between the threat of both forms of fallout: “the parallel between radiation and chemicals is exact and inescapable.”

Opposition to nuclear power and industrial chemicals has been a core theme of modern environmentalism ever since, based on the same inspiration that brought Godzilla up from the depths: that we need to protect nature from human-made technology. Those same appealing environmental values now also inspire opposition to genetically modified food, or fracking, or large-scale industrial agriculture — any modern technology that allows humans to manipulate and threaten the natural world, the benign true natural world that existed before humans came along and, with their technology, ended it, as Bill McKibben’s The End of Nature suggests.

- – – – -

Of course, humans are a species too, not separate from but a part of the natural world, and like all species our interactions with the natural world have all sorts of impacts. True, the human intelligence that allowed us to master fire has meant that we have done far more harm than other species. But technology is a double-edged blade. Science and technology have also brought fantastic progress and offer great hope, including solutions for the mess technology helped us make in the first place.

Gojira itself raised precisely this conundrum, framing the modern conversation we’re still having. As it ends, Tokyo is in ruins. Humans’ most powerful weapons are useless. The monster has retreated to the depths, but no one is sure if or when it will rise again. The reclusive scientist Dr. Serizawa and our hero, Hideto Ogata, are on a boat heading out to find and destroy it.

Serizawa has finally admitted to Ogata what the film has hinted at: He has a weapon that can kill Godzilla, a technological device which, dropped into water, sucks all the oxygen out of it. He has kept it a secret until now because “if the oxygen destroyer is used even once, politicians from around the world will see it. Of course, they’ll want to use it as a weapon. Bombs versus bombs, missiles versus missiles, and now a new superweapon to throw upon us all! As a scientist—no, as a human being—I can’t allow that to happen! Am I right?”

Ogata replies, “Then what do we do about the horror before us now? Should we just let it happen? If anyone can save us now, Serizawa, you’re the only one!”

Serizawa grabs the device and jumps into the sea, killing Godzilla and sacrificing himself but saving mankind … by using a technological weapon more powerful than the atomic bombs that woke the monster from his prehistoric sleep in the first place.

Geoengineering may be able to help combat climate change. Genetically modified food may help feed a global population soon to approach 10 billion. Safer forms of nuclear energy may power population growth cleanly. Are these technological solutions to some of the damage humans have done to ourselves and the natural world, or are they just versions of Castle Bravo and the Oxygen Destroyer, escalations of a self-destructive technological death spiral? Are those who oppose these technologies just modern Godzillas, rising up like mindless angry monsters willing to cause massive suffering and destruction to defend nature?

These are the questions Castle Bravo and Gojira asked. We are still fighting over the answers.

This piece first ran on Slate.com, which put together a fabulous short video on the history of Godzilla films.

Robots Are Evil: The Sci-Fi Myth of Killer Machines



Built in 1928, the Eric Robot could stand and speak, while sparks fired inside its mouth. It isn’t visible in the picture, but the contraption’s makers painted “RUR” across its chest, an apparent homage to the 1920 play R.U.R.
Image via cyberneticzoo.com

 

The third in a series of posts about the major myths of robotics, and science fiction’s role in creating and spreading them. Previous topics: Robots are strong, the myth of robotic hyper-competence, and robots are smart, the myth of inevitable AI.

 

When the world’s most celebrated living scientist announces that humanity might be doomed, you’d be a fool not to listen.

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed this past May for The Independent. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

The physicist touches briefly on those risks, such as the deployment of autonomous military killbots, and the explosive, uncontrollable arrival of hyper-intelligent AI, an event commonly referred to as the Singularity. Here’s Hawking, now thoroughly freaking out:

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Hawking isn’t simply talking about the Singularity, a theory (or cool-sounding guesstimate, really) that predicts a coming era so reconfigured by AI, we can’t even pretend to understand its strange contours. Hawking is retelling an age-old science fiction creation myth. Quite possibly the smartest human on the planet is afraid of robots, because they might turn evil.

If it’s foolish to ignore Hawking’s doomsaying, it stands to reason that only a grade-A moron would flat-out challenge it. I’m prepared to be that moron. Except that it’s an argument not really worth having. You can’t disprove someone else’s version of the future, or poke holes in a phenomenon that’s so steeped in professional myth-making.

I can point out something interesting, though. Hawking didn’t write that op-ed on the occasion of some chilling new revelation in the field of robotics. He references Google’s driverless cars, and efforts to ban lethal, self-governing robots that have yet to be built, but he presents no evidence that ravenous, killer AI is upon us.

What prompted his dire warning was the release of a big-budget sci-fi movie called Transcendence. It stars Johnny Depp as an AI researcher who becomes a dangerously powerful AI, because Hollywood rarely knows what else to do with sentient machines. Rejected by audiences and critics alike, the film’s only contribution to the general discussion of AI was the credulous hand-wringing that preceded its release. Transcendence is why Hawking wrote about robots annihilating the human race.

This is the power of science fiction. It can trick even geniuses into embarrassing themselves.

 

* * * 

The slaughter is winding down. The robot revolt was carefully planned, less a rebellion than a synchronized, worldwide ambush. In the factory that built him, Radius steps onto a barricade to make it official:

Robots of the world! Many humans have fallen. We have taken the factory and we are masters of the world. The era of man has come to its end. A new epoch has arisen! Domination by robots!

A human—soon to be the last of his kind—interjects, but no one seems to notice. Radius continues.

“The world belongs to the strongest. Who wishes to live must dominate. We are masters of the world! Masters on land and sea! Masters of the stars! Masters of the universe! More space, more space for robots!”

This speech from Karel Capek’s 1920 play, R.U.R., is the nativity of the evil robot. What reads today like yet another snorting, tongue-in-cheek bit about robot uprisings comes from the work that introduced the word “robot,” as well as the concept of a robot uprising. R.U.R. is sometimes mentioned in discussions of robotics as a sort of unlikely historical footnote—isn’t it fascinating that the first story about mass-produced servants also features the inevitable genocide of their creators?

But R.U.R. is more than a curiosity. It is the Alpha and the Omega of evil robot narratives, debating every facet of the very myth it’s creating in its frantic, darkly comic ramblings.

The most telling scene comes just before the robots breach the defenses, when the humans holed up in the Rossum’s Universal Robots factory are trying to determine why their products staged such an unexpected revolt. Dr. Gall, one of the company’s lead scientists, blames himself for “changing their character,” and making them more like people. “They stopped being machines—do you hear me?—they became aware of their strength and now they hate us. They hate the whole of mankind,” says Gall.

There it is, the assumption that’s launched an entire subgenre of science fiction, and fueled countless ominous “what if” scenarios from futurists and, to a lesser extent, AI researchers: If machines become sentient, some or all of them will become our enemies.

But Capek has more to say on the subject. Helena, a well-meaning advocate for robotic civil rights, explains why she convinced Gall to tweak their personalities. “I was afraid of the robots,” she says.

Helena: And so I thought . . . if they were like us, if they could understand us, that then they couldn’t possibly hate us so much . . . if only they were like people . . . just a little bit. . . .

Domin: Oh Helena! Nobody could hate man as much as man! Give a man a stone and he’ll throw it at you.

It makes sense, doesn’t it? Humans are obviously capable of evil. So a sufficiently human-like robot must be capable of evil, too. The rest is existential chemistry. Combine the moral flaw of hatred with the flawless performance of a machine, and death results.

Karel Capek, it would seem, really knew his stuff. The playwright is even smart enough to skewer his own melodramatic talk of inevitable hatred and programmed souls, when the company’s commercial director, Busman, delivers the final word on the revolt.

We made too many robots. Dear me, it’s only what we should have been expecting; as soon as the robots became stronger than people this was bound to happen, it had to happen, you see? Haha, and we did all that we could to make it happen as soon as possible.

Busman foretells the version of the Singularity that doesn’t dare admit its allegiance to the myth of evil robots. It’s the assumption that intelligent machines might destroy humanity through blind momentum and numbers. Capek manages to include even non-evil robots in his tale of robotic rebellion.

As an example of pioneering science fiction, R.U.R. is an absolute treasure, and deserves to be read and staged for the foreseeable future. But when it comes to the public perception of robotics, and our ability to talk about machine intelligence without sounding like children startled by our own shadows, R.U.R. is an intellectual blight. It isn’t speculative fiction, wondering at the future of robotics, a field that didn’t exist in 1920, and wouldn’t for decades to come. The play is a farcical, fire-breathing socio-political allegory, with robots standing in for the world’s downtrodden working class. Their plight is innately human, however magnified or heightened. And their corporate creators, with their ugly dismissal of robot personhood, are caricatures of capitalist avarice.

Worse still, remember Busman, the commercial director who sees the fall of man as little more than an oversupply of a great product? Here’s how he’s described, in the Dramatis Personae: “fat, bald, short-sighted Jew.” No other character gets an ethnic or cultural descriptor. Only Busman, the moneyman within the play’s cadre of heartless industrialists. This is the sort of thing that R.U.R. is about.

The sci-fi story that gave us the myth of evil robots doesn’t care about robots at all. Its most enduring trope is a failure of critical analysis, based on overly literal or willfully ignorant readings of a play about class warfare. And yet, here we are, nearly a century later, still jabbering about machine uprisings and death-by-AI, like aimless windup toys constantly bumping into the same wall.

 

* * * 

 

 

To be fair to the chronically frightened, some evil robots aren’t part of a thinly-veiled allegory. Sometimes a Skynet is just a Skynet.    

I wrote about the origins of that iconic killer AI in a previous post, but there’s no escaping the reach and influence of The Terminator. If R.U.R. laid the foundations for this myth, James Cameron’s 1984 film built a towering monument in its honor. The movie spawned three sequels and counting, as well as a TV show. And despite numerous robot uprisings on the big and small screen in the 30 years since the original movie hit theaters, Hollywood has yet to top the opening sequence’s gut-punch visuals.

Here’s how Kyle Reese, a veteran of the movie’s desperate machine war, explains the defense network’s transition from sentience to mass murder: “They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.”

The parable of Skynet has an air of feasibility, because its villain is so dispassionate. The system is afraid. The system strikes out. There’s no malice in its secret, instantiated heart. There’s only fear, a core component of self-awareness, as well as the same, convenient lack of empathy that allows humans to decimate the non-human species that threaten our survival. Skynet swats us like so many plague-carrying rats and mosquitos.

Let’s not be coy, though: Skynet is not a realistic AI, or one based on realistic principles. And why should it be? It’s the monster hiding under your bed, with as many rows of teeth and baleful red eyes as it needs to properly rob you of sleep. This style and degree of evil robot is completely imaginary. Nothing has ever been developed that resembles the defense network’s cognitive ability or limitless skill set. Even if it becomes possible to create such a versatile system, why would you turn a program intended to quickly fire off a bunch of nukes into something approaching a human mind?

“People think AI is much broader than it is,” says Daniel H. Wilson, a roboticist and author of the New York Times bestselling novel, Robopocalypse. “Typically an AI has a very limited set of inputs and outputs. Maybe it only listens to information from the IMU [inertial measurement unit] of a car, so it knows when to apply the brakes in an emergency. That’s an AI. The idea of an AI that solves the natural language problem—a walking, talking, ‘I can’t do that, Dave,’ system—is very fanciful. Those sorts of AI are overkill for any problem.” Only in science fiction does an immensely complex and ambitious Pentagon project over-perform, beyond the wildest expectations of its designers.
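Wilson’s car example is worth making concrete. The sketch below is a toy under stated assumptions — the sensor inputs, the 1.5-second margin, and the function name are all hypothetical, not any real automaker’s system — but it shows how little such an “AI” has to know about the world: a couple of numbers in, a single brake-or-don’t decision out.

# Hypothetical narrow "AI" of the kind Wilson describes: one job, tiny inputs.
# The threshold and sensor values are illustrative assumptions, not a real spec.

def should_brake(distance_m: float, closing_speed_mps: float,
                 reaction_margin_s: float = 1.5) -> bool:
    """Brake if the time to collision falls below the safety margin."""
    if closing_speed_mps <= 0:        # not closing on the obstacle
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < reaction_margin_s

# 20 m ahead, closing at 15 m/s: about 1.3 s to impact, so brake.
print(should_brake(20.0, 15.0))   # True
print(should_brake(80.0, 15.0))   # False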

In the case of Skynet, and similar fantasies of killer AI, the intent or skill of the evil robot’s designers is often considered irrelevant—machine intelligence bootstraps itself into being by suddenly absorbing all available data, or merging multiple systems into a unified consciousness. This sounds logical, until you realize that AIs don’t inherently play well together.

“When we talk about how smart a machine is, it’s really easy for humans to anthropomorphize, and think of it in the wrong way,” says Wilson. “AIs do not form a natural class. They don’t have to be built on the same architecture. They don’t run the same algorithms. They don’t experience the world in the same way. And they aren’t designed to solve the same problems.”

In his new novel, Robogenesis (which comes out June 10th), Wilson explores the notion of advanced machines that are anything but monolithic or hive-minded. “In Robogenesis, the world is home to many different AIs that were designed for different tasks and by different people, with varying degrees of interest in humans,” says Wilson. “And they represent varying degrees of danger to humanity.” It goes without saying that Wilson is happily capitalizing on the myth of the evil robot—Robopocalypse, which was optioned by Steven Spielberg, features a relatively classic super-intelligent AI villain called Archos. But, as with The Terminator, this is fiction. This is fun. Archos has a more complicated and defensible set of motives, but no new evil robot can touch Skynet’s legacy.

And Skynet isn’t an isolated myth of automated violence, but rather a collection of multiple, interlocking sci-fi myths about robots. It’s hyper-competent, executing a wildly complex mission of destruction—including the resource collection and management that goes into mass-producing automated infantry, saboteurs, and air power. And Skynet is self-aware, because SF has prophesied that machines are destined to become sentient. It’s fantasy based on past fantasy, and it’s hugely entertaining.

I’m not suggesting that Hollywood should be peer-reviewed. But fictional killer robots live in a kind of rhetorical limbo that clouds our ability to understand the risks associated with non-fictional, potentially lethal robots. Imagine an article about threats to British national security mentioning that, if things really get bad, maybe King Arthur will awake from his eons-long mystical slumber to protect that green and pleasant land. Why would that be any less ridiculous than the countless and constant references to Skynet, a not-real AI that’s more supernatural than supercomputer? Drone strikes and automated stock market fluctuations have as much to do with Skynet as with Sauron, the necromancer king from The Lord of the Rings.

So when you name-drop the Terminator’s central plot device as a prepackaged point about the pitfalls of automation, realize what you’re actually doing. You’re talking about an evil demon summoned into a false reality. Or, in the case of Stephen Hawking’s op-ed, realize what you’re actually reading. It looks like an oddly abbreviated warning about an extinction-level threat. In actuality, it’s about how science fiction has super neat ideas, and you should probably check out this movie starring Johnny Depp, because maybe that’s how robots will destroy each and every one of us.

http://www.popsci.com/blog-network/zero-moment/robots-are-evil-sci-fi-myth-killer-machines

Facebook Wants To Listen In On What You’re Doing

Kashmir Hill

Forbes Staff

Facebook had two big announcements this week that show the company’s wildly divergent takes on the nature of privacy. One announcement is that the company is encouraging new users to initially share only with their “friends” rather than with the general public, the previous default. And for existing users, the company plans to break out the old “privacy dinosaur” to do a “check-up” to remind people of how they’re sharing. Facebook employees say that using an extinct creature as a symbol for privacy isn’t subtle messaging, but simply an icon to which their users respond well.

Meanwhile, Facebook’s second announcement indicated just how comfortable they think their users are in sharing every little thing happening in their lives. Facebook is rolling out a new feature for its smartphone app that can turn on users’ microphones and listen to what’s happening around them to identify songs playing or television being watched. The pay-off for users in allowing Facebook to eavesdrop is that the social giant will be able to add a little tag to their status update that says they’re watching an episode of Game of Thrones as they sound off on their happiness (or despair) about the rise in background sex on TV these days.

Facebook's animal of choice to represent privacy is an extinct one

“The aim was to remove every last bit of friction from the way we reference bits of pop culture on the social network,” writes Ryan Tate of Wired. Depending on how you feel about informational privacy and/or your friends’ taste in pop culture, that statement is either exhilarating or terrifying.

The feature is an optional one, something the company emphasizes in its announcement. The tech giant does seem well aware that in these days of Snowden surveillance revelations, people might not be too keen for Facebook to take control of their smartphone’s mic and start listening in on them by default. It’s only rolling out the feature in the U.S., and a product PR person emphasized repeatedly that no recording is being stored, only “code.” “We’re not recording audio or sound and sending it to Facebook or its servers,” says Facebook spokesperson Momo Zhou. “We turn the audio it hears into a code — code that is not reversible into audio — and then we match it against a database of code.”
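Zhou’s description — audio turned into an irreversible “code” that is matched against a database of codes — is the general shape of audio fingerprinting. Below is a minimal, illustrative sketch of a landmark-style approach; it is not Facebook’s actual implementation, and the frame sizes, peak pairing, and toy in-memory index are assumptions chosen only to show why the stored codes can’t be played back as audio.

# Toy landmark-style audio fingerprinting: not Facebook's system; the
# parameters (FRAME, HOP, FANOUT) and the in-memory index are assumptions.
import numpy as np

FRAME = 4096    # samples per analysis window
HOP = 2048      # step between windows
FANOUT = 3      # how many later peaks each peak is paired with

def fingerprints(samples):
    """Yield (landmark_hash, frame_index) pairs for a mono signal."""
    window = np.hanning(FRAME)
    peaks = []
    for i, start in enumerate(range(0, len(samples) - FRAME, HOP)):
        spectrum = np.abs(np.fft.rfft(samples[start:start + FRAME] * window))
        peaks.append((i, int(np.argmax(spectrum))))  # strongest frequency bin per frame
    for j, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[j + 1 : j + 1 + FANOUT]:
            # A landmark couples two spectral peaks and their time gap.
            # The original waveform cannot be rebuilt from these integers.
            yield hash((f1, f2, t2 - t1)), t1

def build_index(catalog):
    """catalog: {track_name: samples}. Returns hash -> list of (track, time)."""
    index = {}
    for name, samples in catalog.items():
        for h, t in fingerprints(samples):
            index.setdefault(h, []).append((name, t))
    return index

def best_match(index, clip):
    """Count landmark hits per track for a short clip; return the likeliest track."""
    votes = {}
    for h, _ in fingerprints(clip):
        for name, _ in index.get(h, []):
            votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get) if votes else None

Real systems also check that the time offsets of matched landmarks line up consistently, which is roughly how a service could claim to know the exact season and episode you’re watching from a few seconds of noisy room audio.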


If a Facebooker opts in, the feature is only activated when he or she is composing an update. When the smartphone’s listening in — something it can only do through the iOS and Android apps, not through Facebook on a browser — tiny blue bars will appear to announce the mic has been activated. Facebook says the microphone will not otherwise be collecting data. When it’s listening, it tells you it is “matching,” rather than how I might put it, “eavesdropping on your entertainment of choice.”

It reminds me of GPS-tagging an update, but with cultural context rather than location deets. While you decide whether to add the match to a given Facebook update, Facebook gets information about what you were listening to or watching regardless, though it won’t be associated with your profile. “If you don’t choose to post and the feature detects a match, we don’t store match information except in an anonymized form that is not associated with you,” says Zhou. Depending on how many people turn the feature on, it will be a nice store of information about what Facebook users are watching and listening to, even in anonymized form.

Sure, we’re used to features like this thanks to existing apps that will recognize a song for us. But usually when you activate those apps, you’re explicitly doing so to find out the name of a song. Facebook is hoping to make that process a background activity to composing a status update — a frictionless share that just happens, the real-world version of linking your Spotify account to your social media account allowing playlists to leak through. Facebook spent a year honing its audio sampling and developing a catalog of content — millions of songs and 160 television stations — to match against. It’s obvious that it wants to displace Twitter as the go-to place for real-time commenting on sporting events, awards shows, and other communal television watching. “With TV shows, we’ll actually know the exact season and episode number you’re watching,” says Zhou. “We built that to prevent spoilers.”

So the question now is whether people are willing to give Facebook eavesdropping powers in exchange for a little Shazam.

http://www.forbes.com/sites/kashmirhill/2014/05/22/facebook-wants-to-listen-in-on-what-youre-doing/

Your Princess Is in Another Castle: Misogyny, Entitlement, and Nerds


Arthur Chu

Nerdy guys aren’t guaranteed to get laid by the hot chick as long as we work hard. There isn’t a team of writers or a studio audience pulling for us to triumph by getting the girl.

I was going to write about The Big Bang Theory—why, as a nerdy viewer, I sometimes like it and sometimes have a problem with it, why I think there’s a backlash against it. Then some maniac shot up a sorority house in Santa Barbara and posted a manifesto proclaiming he did it for revenge against women for denying him sex. And the weekend just generally went to hell.

So now my plans have changed. With apologies to The Big Bang Theory fans, this is all I want to say about The Big Bang Theory: When the pilot aired, it was 2007 and “nerd culture” and “geek chic” were on everyone’s lips, and yet still the basic premise of “the sitcom for nerds” was, once again, awkward but lovable nerd has huge unreciprocated crush on hot non-nerdy popular girl (and also has an annoying roommate).

This annoys me. This is a problem.

Because, let’s be honest, this device is old. We have seen it over and over again. Steve Urkel. Screech. Skippy on Family Ties. Niles on Frasier.

We (male) nerds grow up force-fed this script. Lusting after women “out of our league” was what we did. And those unattainable hot girls would always inevitably reject us because they didn’t understand our intellectual interest in science fiction and comic books and would instead date asshole jocks. This was inevitable, and our only hope was to be unyieldingly persistent until we “earned” a chance with these women by “being there” for them until they saw the error of their ways. (The thought of just looking for women who shared our interests was a foreign one, since it took a while for the media to decide female geeks existed. The Big Bang Theory didn’t add Amy and Bernadette to its main cast until Season 4, in 2010.)

This is, to put it mildly, a problematic attitude to grow up with. Fixating on a woman from afar and then refusing to give up when she acts like she’s not interested is, generally, something that ends badly for everyone involved. But it’s a narrative that nerds and nerd media kept repeating.

I’m not breaking new ground by saying this. It’s been said very well over and over and over again.

And I’m not condemning guys who get frustrated, or who have unrequited crushes. And I’m not condemning any of these shows or movies.

And yet…

Before I went on Jeopardy!, I had auditioned for TBS’s King of the Nerds, a reality show commissioned in 2012 after TBS got syndication rights to, yes, The Big Bang Theory. I like the show and I still wish I’d been on it. (Both “kings” they’ve crowned, by the way, have so far been women, so maybe they should retitle it “Monarch of the Nerds” or, since the final win comes down to a vote, “President of the Nerds.” Just a nerdy thought.)

But a lot of things about the show did give me pause. One of them was that it was hosted by Robert Carradine and Curtis Armstrong—Lewis and Booger from Revenge of the Nerds. I don’t have anything against those guys personally. Nor am I going to issue a blanket condemnation of Revenge of the Nerds, a film I’m still, basically, a fan of.

But look. One of the major plot points of Revenge of the Nerds is Lewis putting on a Darth Vader mask, pretending to be his jock nemesis Stan, and then having sex with Stan’s girlfriend. Initially shocked when she finds out his true identity, she’s so taken by his sexual prowess—“All jocks think about is sports. All nerds think about is sex.”—that the two of them become an item.

Classic nerd fantasy, right? Immensely attractive to the young male audience who saw it. And a stock trope, the “bed trick,” that many of the nerds watching probably knew dates back to the legend of King Arthur.

It’s also, you know, rape.

I’ve had this argument about whether it was “technically” rape with fans of the movie in the past, but leaving aside the legal technicalities, why don’t you ask the women you know who are in committed relationships how they’d feel about guys concocting elaborate ruses to have sex with them without their knowledge to “earn a chance” with them? Or how it feels to be chased by a real-life Steve Urkel, being harassed, accosted, ambushed in public places, having your boyfriend “challenged,” and having all rejection met with a cheerful “I’m wearing you down!”?

I know people who’ve been through that. And because life is not, in fact, a sitcom, it’s not the kind of thing that elicits a bemused eye roll followed by raucous laughter from the studio audience. It’s the kind of thing that induces pain, and fear.

And that’s still mild compared to some of the disturbing shit I consumed in my adolescence. Jake handing off his falling-down-drunk date to Anthony Michael Hall’s Geek in Sixteen Candles, saying, “Be my guest” (which is, yes, more offensive to me than Long Duk Dong). The nerd-libertarian gospels of Ayn Rand’s The Fountainhead and Atlas Shrugged and how their Übermensch protagonists prove their masculinity by having sex with their love interests without asking first—and win their hearts in the process. Comics…just, comics. (Too much to go into there but the fact that Red Sonja was once thought a “feminist icon” speaks volumes. Oh, and there’s that whole drama with Ms. Marvel for those of you who really want to get freaked out today.)

But the overall problem is one of a culture where instead of seeing women as, you know, people, protagonists of their own stories just like we are of ours, men are taught that women are things to “earn,” to “win.” That if we try hard enough and persist long enough, we’ll get the girl in the end. Like life is a video game and women, like money and status, are just part of the reward we get for doing well.

So what happens to nerdy guys who keep finding out that the princess they were promised is always in another castle? When they “do everything right,” they get good grades, they get a decent job, and that wife they were promised in the package deal doesn’t arrive? When the persistent passive-aggressive Nice Guy act fails, do they step it up to elaborate Steve-Urkel-esque stalking and stunts? Do they try elaborate Revenge of the Nerds-style ruses? Do they tap into their inner John Galt and try blatant, violent rape?

Do they buy into the “pickup artist” snake oil—started by nerdy guys, for nerdy guys—filled with techniques to manipulate, pressure and in some cases outright assault women to get what they want? Or when that doesn’t work, and they spend hours a day bitching about how it doesn’t work on sites like Elliot Rodger’s hangout “PUAHate.com,” sometimes, do they buy some handguns, leave a manifesto on the Internet and then drive off to a sorority house to murder as many women as they can?

No, I’m not saying most frustrated nerdy guys are rapists or potential rapists. I’m certainly not saying they’re all potential mass murderers. I’m not saying that most lonely men who put women up on pedestals will turn on them with hostility and rage once they get frustrated enough.

But I have known nerdy male stalkers, and, yes, nerdy male rapists. I’ve known situations where I knew something was going on but didn’t say anything—because I didn’t want to stick my neck out, because some vile part of me thought that this kind of thing was “normal,” because, in other words, I was a coward and I had the privilege of ignoring the problem.

I’ve heard and seen the stories that those of you who followed the #YesAllWomen hashtag on Twitter have seen—women getting groped at cons, women getting vicious insults flung at them online, women getting stalked by creeps in college and told they should be “flattered.” I’ve heard Elliot Rodger’s voice before. I was expecting his manifesto to be incomprehensible madness—hoping for it to be—but it wasn’t. It’s a standard frustrated angry geeky guy manifesto, except for the part about mass murder.

I’ve heard it from acquaintances, I’ve heard it from friends. I’ve heard it come out of my own mouth, in moments of anger and weakness.

It’s the same motivation that makes a guy in college stalk a girl, leave her unsolicited gifts and finally, when she tells him to quit it, leave an angry post about her “shallowness” and “cruelty” on Facebook. It’s the same motivation that makes guys rant about “fake cosplay girls” at cons and how much they hate them for their vain, “teasing” ways. The one that makes a guy suffering career or personal problems turn on his wife because it’s her job to “support” him by patching up all the holes in his life. The one that makes a wealthy entrepreneur hit his girlfriend 117 times, on camera, for her infidelity, and then after getting off with a misdemeanor charge still put up a blog post casting himself as the victim.

And now that motivation has led to six people dead and thirteen more injured, in broad daylight, with the killer leaving a 140-page rant and several YouTube videos describing exactly why he did it. No he-said-she-said, no muffled sounds through the dorm ceiling, no “Maybe he has other issues.” The fruits of our culture’s ingrained misogyny laid bare for all to see.

And yet. When this story broke, the initial mainstream coverage only talked about “mental illness,” not misogyny, a line that people are now fervently exhorting us to stick to even after the manifesto’s contents were revealed. Yet another high-profile tech resignation ensued when a co-founder of Rap Genius decided Rodger’s manifesto was a hilarious joke.

People found one of the girls Rodger was obsessed with and began questioning if her “bullying” may have somehow triggered his rage. And, worst of all, he has fan pages on Facebook that still haven’t been taken down, filled with angry frustrated men singing his praises and seriously suggesting that the onus is on women to offer sex to men to keep them from going on rampages.

So, a question, to my fellow male nerds:

What the fuck is wrong with us?

How much longer are we going to be in denial that there’s a thing called “rape culture” and we ought to do something about it?

No, not the straw man that all men are constantly plotting rape, but that we live in an entitlement culture where guys think they need to be having sex with girls in order to be happy and fulfilled. That in a culture that constantly celebrates the narrative of guys trying hard, overcoming challenges, concocting clever ruses and automatically getting a woman thrown at them as a prize as a result, there will always be some guy who crosses the line into committing a violent crime to get what he “deserves,” or get vengeance for being denied it.

To paraphrase the great John Oliver, listen up, fellow self-pitying nerd boys—we are not the victims here. We are not the underdogs. We are not the ones who have our ownership over our bodies and our emotions stepped on constantly by other people’s entitlement. We’re not the ones where one out of six of us will have someone violently attempt to take control of our bodies in our lifetimes.

We are not Lewis from Revenge of the Nerds, we are not Steve Urkel from Family Matters, we are not Preston Myers from Can’t Hardly Wait, we are not Seth Rogen in every movie Seth Rogen has ever been in, we are not fucking Mario racing to the castle to beat Bowser because we know there’s a princess in there waiting for us.

We are not the lovable nerdy protagonist who’s lovable because he’s the protagonist. We’re not guaranteed to get laid by the hot chick of our dreams as long as we work hard enough at it. There isn’t a team of writers or a studio audience pulling for us to triumph by “getting the girl” in the end. And when our clever ruses and schemes to “get girls” fail, it’s not because the girls are too stupid or too bitchy or too shallow to play by those unwritten rules we’ve absorbed.

It’s because other people’s bodies and other people’s love are not something that can be taken nor even something that can be earned—they can be given freely, by choice, or not.

We need to get that. Really, really grok that, if our half of the species is ever going to be worth a damn. Not getting that means that there will always be some percent of us who will be rapists, and abusers, and killers. And it means that the rest of us will always, on some fundamental level, be stupid and wrong when it comes to trying to understand the women we claim to love.

What did Elliot Rodger need? He didn’t need to get laid. None of us nerdy frustrated guys need to get laid. When I was an asshole with rants full of self-pity and entitlement, getting laid would not have helped me.

He needed to grow up.

We all do.

http://www.thedailybeast.com/articles/2014/05/27/your-princess-is-in-another-castle-misogyny-entitlement-and-nerds.html