The Killing of America’s Creative Class


A review of Scott Timberg’s fascinating new book, ‘Culture Crash.’

Some of my friends became artists, writers, and musicians to rebel against their practical parents. I went into a creative field with encouragement from my folks. It’s not too rare for Millennials to have their bohemian dreams blessed by their parents, because, as progeny of the Boomers, we were mentored by aging rebels who idolized rogue poets, iconoclast cartoonists, and scrappy musicians.

The problem, warns Scott Timberg in his new book Culture Crash: The Killing of the Creative Class, is that if parents are basing their advice on how the economy used to support creativity – record deals for musicians, book contracts for writers, staff positions for journalists – then they might be surprised when their YouTube-famous daughter still needs help paying off her student loans. A mix of economic, cultural, and technological changes emanating from a neoliberal agenda, writes Timberg, “have undermined the way that culture has been produced for the past two centuries, crippling the economic prospects of not only artists but also the many people who supported and spread their work, and nothing yet has taken its place.”


Tech vs. the Creative Class

Timberg isn’t the first to notice. The supposed economic recovery that followed the recession of 2008 did nothing to repair the damage that had been done to the middle class. Only a wealthy few bounced back, and bounced higher than ever before, many of them the elites of Silicon Valley who found a way to harvest much of the wealth generated by new technologies. In Culture Crash, however, Timberg frames the struggle of the working artist to make a living from his talents.

Besides the overall stagnation of the economy, Timberg shows how information technology has destabilized the creative class and deprofessionalized their labor, leading to an oligopoly of the megacorporations Apple, Google, and Facebook, where success is measured (and often paid) in webpage hits.

What Timberg glosses over is that if this new system is an oligopoly of tech companies, then what it replaced – or is still in the process of replacing – was a feudal system of newspapers, publishing houses, record labels, operas, and art galleries. The book is full of enough discouraging data and painful portraits of artists, though, to make this point moot. Things are definitely getting worse.

Why should these worldly worries make the Muse stutter when she is expected to sing from outside of history and without health insurance? Timberg proposes that if we are to save the “creative class” – the often young, often middle-class sector of society that generates cultural content – we need to shake this old myth. The Muse can inspire but not sustain. Members of the creative class, argues Timberg, depend not just on that original inspiration, but on an infrastructure that moves creations into the larger culture and somehow provides material support for those who make, distribute, and assess them. Today, that indispensable infrastructure is at risk…

Artists may never entirely disappear, but they are certainly vulnerable to the economic and cultural zeitgeist. Remember the Dark Ages? Timberg does, and drapes this shroud over every chapter. It comes off as alarmist at times. Culture is obviously no longer smothered by an authoritarian Catholic church.


Art as the Province of the Young and Independently Wealthy

But Timberg suggests that contemporary artists have signed away their rights in a new contract with the market. Cultural producers, no matter how important their output is to the rest of us, are expected to exhaust themselves without compensation because their work is, by definition, worthless until it’s profitable. Art is an act of passion – why not produce it for free, never mind that Apple, Google, and Facebook have the right to generate revenue from your production? “According to this way of thinking,” wrote Miya Tokumitsu describing the do-what-you-love mantra that rode out of Silicon Valley on the back of TED Talks, “labor is not something one does for compensation, but an act of self-love. If profit doesn’t happen to follow, it is because the worker’s passion and determination were insufficient.”

The fact is, when creativity becomes financially unsustainable, less is created, and that which does emerge is the product of trust-fund kids in their spare time. “If working in culture becomes something only for the wealthy, or those supported by corporate patronage, we lose the independent perspective that artistry is necessarily built on,” writes Timberg.

It would seem to be a position with many proponents except that artists have few loyal advocates on either side of the political spectrum. “A working artist is seen neither as the salt of the earth by the left, nor as a ‘job creator’ by the right – but as a kind of self-indulgent parasite by both sides,” writes Timberg.

That’s with respect to unsuccessful artists – in other words, the creative class’s 99 percent. But, as Timberg laments, “everyone loves a winner.” In their own way, both conservatives and liberals have stumbled into Voltaire’s Candide, accepting that all is for the best in the best of all possible worlds. If artists cannot make money, it’s because they are either untalented or esoteric elitists. It is the giants of pop music who are taking all the spoils, both financially and morally, in this new climate.

Timberg blames this winner-take-all attitude on the postmodernists who, beginning in the 1960s with film critic Pauline Kael, dismantled the idea that creative genius must be rescued from underneath the boots of mass appeal and replaced it with the concept of genius-as-mass-appeal. “Instead of coverage of, say, the lost recordings of pioneering bebop guitarist Charlie Christian,” writes Timberg, “we read pieces ‘in defense’ of blockbuster acts like the Eagles (the bestselling rock band in history), Billy Joel, Rush – groups whose songs…it was once impossible to get away from.”

Timberg doesn’t give enough weight to the fact that the same rebellion at the university liberated an enormous swath of art, literature, and music from the shadow of an exclusive (which is not to say unworthy) canon made up mostly of white men. In fact, many postmodernists have taken it upon themselves to look neither to the pop charts nor the Western canon for genius but, with the help of the Internet, to the broad creative class that Timberg wants to defend.


Creating in the Age of Poptimism

This doesn’t mean that today’s discovered geniuses can pay their bills, though, and Timberg is right to be shocked that, for the first time in history, pop culture is untouchable, off limits to critics and laypeople alike, whether on grounds of taste or principle. If you can’t stand pop music because of the hackneyed rhythms and indiscernible voices, you’ve failed to appreciate the wonders of crowdsourced culture – the same mystery that propels the market.

Sadly, Timberg puts himself in checkmate early on by repeatedly pitting black mega-stars like Kanye West against white indie-rockers like the Decemberists, whose ascent to the pop charts he characterizes as a rare triumph of mass taste.

But beyond his anti-hip-hop bias is an important argument: With ideological immunity, the pop charts are mimicking the stratification of our society. Under the guise of a popular carnival where a home-made YouTube video can bring a talented nobody the absurd fame of a celebrity, creative industries have nevertheless become more monotonous and inaccessible to new and disparate voices. In 1986, thirty-one chart-toppers came from twenty-nine different artists. Between 2008 and mid-2012, half of the number-one songs were property of only six stars. “Of course, it’s never been easy to land a hit record,” writes Timberg. “But recession-era rock has brought rewards to a smaller fraction of the artists than it did previously. Call it the music industry’s one percent.”

The same thing is happening with the written word. In the first decade of the new millennium, points out Timberg, citing Wired magazine, the market share of page views for the Internet’s top ten websites rose from 31 percent to 75 percent.

Timberg doesn’t mention that none of the six artists dominating the pop charts for those four years was a white man, but maybe that’s beside the point. In Borges’s “Babylon Lottery,” every citizen has the chance to be a sovereign. That doesn’t mean they were living in a democracy. Superstars are coming up from poverty, without the help of white male privilege, like never before, at the same time that poverty – for artists and for everyone else – is getting worse.

Essayists are often guilted into proposing solutions to the problems they perceive, though in many cases they would do better to leave well enough alone. Timberg wisely avoids laying out a ten-point plan to clean up the mess, but even his initial thrust toward justice – identifying the roots of the crisis – is a pastiche of sometimes contradictory liberal biases that looks to the past for temporary fixes.

Timberg puts the kibosh on corporate patronage of the arts, but pines for the days of newspapers run by wealthy families. When information technology is his target because it forces artists to distribute their work for free, removes the record store and bookstore clerks from the scene, and feeds consumer dollars to only a few Silicon Valley tsars, Timberg’s answer is to retrace our steps twenty years to the days of big record companies and Borders bookstores – since that model was slightly more compensatory to the creative class.

When his target is postmodern intellectuals who slander “middle-brow” culture as elitist, only to expend their breath in defense of super-rich pop stars, Timberg retreats fifty years to when intellectuals like Marshall McLuhan and Norman Mailer debated on network television and the word “philharmonic” excited the uncultured with awe rather than tickled them with anti-elitist mockery. Maybe television back then was more tolerable, but Timberg hardly even tries to sound uplifting. “At some point, someone will come up with a conception better than middlebrow,” he writes. “But until then, it beats the alternatives.”


The Fallacy of the Good Old Days

Timberg’s biggest mistake is that he tries to find a point in history when things were better for artists and then reroute us back there for fear of continued decline. What this translates to is a program of bipartisan moderation – a little bit more public funding here, a little more philanthropy there. Something everyone can agree on, but no one would ever get excited about.

Why not boldly state that a society is dysfunctional if there is enough food, shelter, and clothing to go around and yet an individual is forced to sacrifice these things in order to produce, out of humanistic virtue, the very thing which society has never demanded more of – culture? And if skeptics ask for a solution, why not suggest something big, a reorganization of society, from top to bottom, not just a vintage flotation device for the middle class? Rather than blame technological innovation for the poverty of artists, why not point the finger at those who own the technology and call for a system whereby efficiency doesn’t put people out of work, but allows them to work fewer hours for the same salary; whereby information is free not because an unpaid intern wrote content in a race for employment, but because we collectively pick up the tab?

This might not satisfy the TED Talk connoisseur’s taste for a clever and apolitical fix, but it definitely trumps championing a middle-ground littered with the casualties of cronyism, colonialism, racism, patriarchy, and all their siblings. And change must come soon because, if Timberg is right, “the price we ultimately pay” for allowing our creative class to remain on its crash course “is in the decline of art itself, diminishing understanding of ourselves, one another, and the eternal human spirit.”

What Makes You You?

When you say the word “me,” you probably feel pretty clear about what that means. It’s one of the things you’re clearest on in the whole world—something you’ve understood since you were a year old. You might be working on the question, “Who am I?” but what you’re figuring out is the who am part of the question—the I part is obvious. It’s just you. Easy.

But when you stop and actually think about it for a minute—about what “me” really boils down to at its core—things start to get pretty weird. Let’s give it a try.

The Body Theory

We’ll start with the first thing most people equate with what a person is—the physical body itself. The Body Theory says that that’s what makes you you. And that would make sense. It doesn’t matter what’s happening in your life—if your body stops working, you die. If Mark goes through something traumatic and his family says, “It really changed him—he’s just not the same person anymore,” they don’t literally mean Mark isn’t the same person—he’s changed, but he’s still Mark, because Mark’s body is Mark, no matter what he’s acting like. Humans believe they’re so much more than a hunk of flesh and bone, but in the end, an ant’s body is the ant, a squirrel’s body is the squirrel, and a human is its body. This is the Body Theory—let’s test it:

So what happens when you cut your fingernails? You’re changing your body, severing some of its atoms from the whole. Does that mean you’re not you anymore? Definitely not—you’re still you.

How about if you get a liver transplant? Bigger deal, but definitely still you, right?

What if you get a terrible disease and need to replace your liver, kidney, heart, lungs, blood, and facial tissue with synthetic parts, but after all the surgery, you’re fine and can live your life normally. Would your family say that you had died, because most of your physical body was gone? No, they wouldn’t. You’d still be you. None of that is needed for you to be you.

Well maybe it’s your DNA? Maybe that’s the core thing that makes you you, and none of these organ transplants matter because your remaining cells all still contain your DNA, and they’re what maintains “you.” One major problem—identical twins have identical DNA, and they’re not the same person. You are you, and your identical twin is most certainly not you. DNA isn’t the answer.

So far, the Body Theory isn’t looking too good. We keep changing major parts of the body, and you keep being you.

But how about your brain?

The Brain Theory

Let’s say a mad scientist captures both you and Bill Clinton and locks the two of you up in a room.


The scientist then performs an operation on both of you, whereby he safely removes each of your brains and switches them into the other’s head. Then he seals up your skulls and wakes you both up. You look down and you’re in a totally different body—Bill Clinton’s body. And across the room, you see your body—with Bill Clinton’s personality.


Now, are you still you? Well, my intuition says that you’re you—you still have your exact personality and all your memories—you’re just in Bill Clinton’s body now. You’d go find your family to explain what happened.



So unlike your other organs, which could be transplanted without changing your identity, when you swapped brains, it wasn’t a brain transplant—it was a body transplant. You’d still feel like you, just with a different body. Meanwhile, your old body would not be you—it would be Bill Clinton. So what makes you you must be your brain. The Brain Theory says that wherever the brain goes, you go—even if it goes into someone else’s skull.

The Data Theory

Consider this—

What if the mad scientist, after capturing you and Bill Clinton, instead of swapping your physical brains, just hooks up a computer to each of your brains, copies every single bit of data in each one, then wipes both of your brains completely clean, and then copies each of your brain data onto the other person’s physical brain? So you both wake up, both with your own physical brains in your head, but you’re not in your body—you’re in Bill Clinton’s body. After all, Bill Clinton’s brain now has all of your thoughts, memories, fears, hopes, dreams, emotions, and personality. The body and brain of Bill Clinton would still run out and go freak out about this to your family. And again, after a significant amount of convincing, they would indeed accept that you were alive, just in Bill Clinton’s body.

Philosopher John Locke’s memory theory of personal identity suggests that what makes you you is your memory of your experiences. Under Locke’s definition of you, the new Bill Clinton in this latest example is you, despite not containing any part of your physical body, not even your brain. 

This suggests a new theory we’ll call The Data Theory, which says that you’re not your physical body at all. Maybe what makes you you is your brain’s data—your memories and your personality.

We seem to be homing in on something, but the best way to get to concrete answers is by testing these theories in hypothetical scenarios. Here’s an interesting one, conceived by British philosopher Bernard Williams:

The Torture Test

Situation 1: The mad scientist kidnaps you and Clinton, switches your brain data with Clinton’s, as in the latest example, wakes you both up, and then walks over to the body of Clinton, where you supposedly reside, and says, “I’m now going to horribly torture one of you—which one should I torture?”

What’s your instinct? Mine is to point at my old body, where I no longer reside, and say, “Him.” And if I believe in the Data Theory, then I’ve made a good choice. My brain data is in Clinton’s body, so I’m now in Clinton’s body, so who cares about my body anymore? Sure, it sucks for anyone to be tortured, but if it’s between me and Bill Clinton, I’m choosing him.

Situation 2: The mad scientist captures you and Clinton, except he doesn’t do anything to your brains yet. He comes over to you—normal you with your normal brain and body—and asks you a series of questions. Here’s how I think it would play out:

Mad Scientist: Okay so here’s what’s happening. I’m gonna torture one of you. Who should I torture?

You: [pointing at Clinton] Him.

MS: Okay but there’s something else—before I torture whoever I torture, I’m going to wipe both of your brains of all memories, so when the torture is happening, neither of you will remember who you were before this. Does that change your choice?

You: Nope. Torture him.

MS: One more thing—before the torture happens, not only am I going to wipe your brains clean, I’m going to build new circuitry into your brain that will convince you that you’re Bill Clinton. By the time I’m done, you’ll think you’re Bill Clinton and you’ll have all of his memories and his full personality and anything else that he thinks or feels or knows. I’ll do the same thing to him, convincing him he’s you. Does that change your choice?

You: Um, no. Regardless of any delusion I’m going through and no matter who I think I am, I don’t want to go through the horrible pain of being tortured. Insane people still feel pain. Torture him.

So in the first situation, I think you’d choose to have your own body tortured. But in the second, I think you’d choose Bill Clinton’s body—at least I would. But the thing is—they’re the exact same example. In both cases, before any torture happens, Clinton’s brain ends up with all of your data and your brain has his—the difference is just at which point in the process you were asked to decide. In both cases, your goal is for you to not be tortured, but in the first situation, you felt that after the brain data swap, you were in Clinton’s body, with all of your personality and memories there with you—while in the second situation, if you’re like me, you didn’t care what was going to happen with the two brains’ data, you believed that you would remain with your physical brain, and body, either way.

Choosing your body to be the one tortured in the first situation is an argument for the Data Theory—you believe that where your data goes, you go. Choosing Clinton’s body to be tortured in the second situation is an argument for the Brain Theory, because you believe that regardless of what he does with your brain’s data, you will continue to be in your own body, because that’s where your physical brain is. Some might even take it a step further, and if the mad scientist told you he was even going to switch your physical brains, you’d still choose Clinton’s body, with your brain in it, to be tortured. Those that would torture a body with their own brain in it over torturing their own body believe in the Body Theory.

Not sure about you, but I’m finishing this experiment still divided. Let’s try another. Here’s my version of modern philosopher Derek Parfit’s teletransporter thought experiment, which he first described in his book Reasons and Persons.

The Teletransporter Thought Experiment

It’s the year 2700. The human race has invented all kinds of technology unimaginable in  today’s world. One of these technologies is teleportation—the ability to transport yourself to distant places at the speed of light. Here’s how it works—

You go into a Departure Chamber—a little room the size of a small cubicle.


You set your location—let’s say you’re in Boston and your destination is London—and when you’re ready to go, you press the button on the wall. The chamber walls then scan your entire body, uploading the exact molecular makeup of your body—every atom that makes up every part of you and its precise location—and as it scans, it destroys, so every cell in your body is destroyed by the scanner as it goes.


When it’s finished (the Departure Chamber is now empty after destroying all of your cells), it beams your body’s information to an Arrival Chamber in London, which has all the necessary atoms waiting there ready to go. The Arrival Chamber uses the data to re-form your entire body with its storage of atoms, and when it’s finished you walk out of the chamber in London looking and feeling exactly how you did back in Boston—you’re in the same mood, you’re hungry just like you were before, you even have the same paper cut on your thumb you got that morning.

The whole process, from the time you hit the button in the Departure Chamber to when you walk out of the Arrival Chamber in London, takes five minutes—but to you it feels instantaneous. You hit the button, things go black for a blink, and now you’re standing in London. Cool, right?

In 2700, this is common technology. Everyone you know travels by teleportation. In addition to the convenience of speed, it’s incredibly safe—no one has ever gotten hurt doing it.

But then one day, you head into the Departure Chamber in Boston for your normal morning commute to your job in London, you press the big button on the wall, and you hear the scanner turn on, but it doesn’t work.


The normal split-second blackout never happens, and when you walk out of the chamber, sure enough, you’re still in Boston. You head to the check-in counter and tell the woman working there that the Departure Chamber is broken, and you ask her if there’s another one you can use, since you have an early meeting and don’t want to be late.

She looks down at her records and says, “Hm—it looks like the scanner worked and collected its data just fine, but the cell destroyer that usually works in conjunction with the scanner has malfunctioned.”

“No,” you explain, “it couldn’t have worked, because I’m still here. And I’m late for this meeting—can you please set me up with a new Departure Chamber?”

She pulls up a video screen and says, “No, it did work—see? There you are in London—it looks like you’re gonna be right on time for your meeting.” She shows you the screen, and you see yourself walking on the street in London.

“But that can’t be me,” you say, “because I’m still here.”

At that point, her supervisor comes into the room and explains that she’s correct—the scanner worked as normal and you’re in London as planned. The only thing that didn’t work was the cell destroyer in the Departure Chamber here in Boston. “It’s not a problem, though,” he tells you, “we can just set you up in another chamber and activate its cell destroyer and finish the job.”

And even though this isn’t anything that wasn’t going to happen before—in fact, you have your cells destroyed twice every day—suddenly, you’re horrified at the prospect.

“Wait—no—I don’t want to do that—I’ll die.”

The supervisor explains, “You won’t die sir. You just saw yourself in London—you’re alive and well.”

“But that’s not me. That’s a replica of me—an imposter! I’m the real me—you can’t destroy my cells!”

The supervisor and the woman glance awkwardly at each other. “I’m really sorry sir—but we’re obligated by law to destroy your cells. We’re not allowed to form the body of a person in an Arrival Chamber without destroying the body’s cells in a Departure Chamber.”

You stare at them in disbelief and then run for the door. Two security guards come out and grab you. They drag you toward a chamber that will destroy your cells, as you kick and scream…


If you’re like me, in the first part of that story, you were pretty into the idea of teletransportation, and by the end, you were not.

The question the story poses is, “Is teletransportation, as described in this experiment, a form of traveling? Or a form of dying?”

This question might have been ambiguous when I first described it—it might have even felt like a perfectly safe way of traveling—but by the end, it felt much more like a form of dying. Which means that every day when you commute to work from Boston to London, you’re killed by the cell destroyer, and a replica of you is created. To the people who know you, you survive teletransportation just fine, the same way your wife seems just fine when she arrives home to you after her own teletransportation, talking about her day and discussing plans for next week. But is it possible that your wife was actually killed that day, and the person you’re kissing now was just created a few minutes ago?

Well again, it depends on what you are. Someone who believes in the Data Theory would posit that London you is you as much as Boston you, and that teletransportation is perfectly survivable. But we all related to Boston you’s terror at the end there—could anyone really believe that he should be fine with being obliterated just because his data is safe and alive over in London? Further, if the teletransporter could beam your data to London for reassembly, couldn’t it also beam it to 50 other cities and create 50 new versions of you? You’d be hard-pressed to argue that those were all you. To me, the teletransporter experiment is a big strike against the Data Theory.

Similarly, if there were an Ego Theory that suggests that you are simply your ego, the teletransporter does away nicely with that. Thinking about London Tim, I realize that “Tim Urban” surviving means nothing to me. The fact that my replica in London will stay friends with my friends, keep Wait But Why going with his Tuesday-ish posts, and live out the whole life I was planning for myself—the fact that no one will miss me or even realize that I’m dead, the same way in the story you never felt like you lost your wife—does almost nothing for me. I don’t care about Tim Urban surviving. I care about me surviving.

All of this seems like very good news for Body Theory and Brain Theory. But let’s not judge things yet. Here’s another experiment:

The Split Brain Experiment

A cool fact about the human brain is that the left and right hemispheres function as their own little worlds, each with their own things to worry about, but if you remove one half of someone’s brain, they can sometimes not only survive, but their remaining brain half can learn to do many of the other half’s previous jobs, allowing the person to live a normal life. That’s right—you could lose half of your brain and potentially function normally.

So say you have an identical twin sibling named Bob who develops a fatal brain defect. You decide to save him by giving him half of your brain. Doctors operate on both of you, discarding his brain and replacing it with half of yours. When you wake up, you feel normal and like yourself. Your twin (who already has your identical DNA because you’re twins) wakes up with your exact personality and memories.


When you realize this, you panic for a minute that your twin now knows all of your innermost thoughts and feelings on absolutely everything, and you’re about to make him promise not to tell anyone, when it hits you that you of course don’t have to tell him. He’s not your twin—he’s you. He’s just as intent on your privacy as you are, because it’s his privacy too.

As you look over at the guy who used to be Bob and watch him freak out that he’s in Bob’s body now instead of his own, you wonder, “Why did I stay in my body and not wake up in Bob’s? Both brain halves are me, so why am I distinctly in my body and not seeing and thinking in dual split-screen right now, from both of our points of view? And whatever part of me is in Bob’s head, why did I lose touch with it? Who is the me in Bob’s head, and how did he end up over there while I stayed here?”

Brain Theory is shitting his pants right now—it makes no sense. If people are supposed to go wherever their brains go, what happens when a brain is in two places at once? Data Theory, who was badly embarrassed by the teletransporter experiment, is doing no better in this one.

But Body Theory—who was shot down at the very beginning of the post—is suddenly all smug and thrilled with himself. Body Theory says “Of course you woke up in your own body—your body is what makes you you. Your brain is just the tool your body uses to think. Bob isn’t you—he’s Bob. He’s just now a Bob who has your thoughts and personality. There’s nothing Bob’s body can ever do to not be Bob.” This would help explain why you stayed in your body.

So a nice boost for Body Theory, but let’s take a look at a couple more things—

What we learned in the teletransporter experiment is that if your brain data is transferred to someone else’s brain, even if that person is molecularly identical to you, all it does is create a replica of you—a total stranger who happens to be just like you. There’s something distinct about Boston you that was important. When you were recreated out of different atoms in London, something critical was lost—something that made you you.

Body Theory (and Brain Theory) would point out that the only difference between Boston you and London you was that London you was made out of different atoms. London you’s body was like your body, but it was still made of different material. So is that it? Could Body Theory explain this too?

Let’s put it through two tests:

The Cell Replacement Test

Imagine I replace a cell in your arm with an identical, but foreign, replica cell. Are you not you anymore? Of course you are. But how about if, one at a time, I replace 1% of your cells with replicas? How about 10%? 30%? 60%? The London you was composed of 100% replacement cells, and we decided that that was not you—so when does the “crossover” happen? How many of your cells do we need to swap out for replicas before you “die” and what’s remaining becomes your replica?

Something feels off with this, right? Considering that the cells we’re replacing are molecularly identical to those we’re removing, and someone watching this all happen wouldn’t even notice anything change about you, it seems implausible that you’d ever die during this process, even if we eventually replaced 100% of your cells with replicas. But if your cells are eventually all replicas, how are you any different from London you?

The Body Scattering Test 

Imagine going into an Atom Scattering Chamber that completely disassembles your body’s atoms so that all that’s left in the room is a light gas of floating atoms—and then a few minutes later, it perfectly reassembles the atoms into you, and you walk out feeling totally normal.


Is that still you? Or did you die when you were disassembled and what has been reassembled is a replica of you? It doesn’t really make sense that this reassembled you would be the real you and London you would be a replica, when the only difference between the two cases is that the scattering room preserves your exact atoms and the London chamber assembles you out of different atoms. At their most basic level, atoms are identical—a hydrogen atom from your body is identical in every way to a hydrogen atom in London. Given that, I’d say that if we’re deciding London you is not you, then reassembled you is probably not you either.

The first thing these two tests illustrate is that the key distinction between Boston you and London you isn’t about the presence or absence of your actual, physical cells. The Cell Replacement Test suggests that you can gradually replace much or all of your body with replica material and still be you, and the Body Scattering Test suggests that you can go through a scatter and a reassembly, even with all of your original physical material, and be no more you than the you in London. Not looking great for Body Theory anymore.

The second thing these tests reveal is that the difference between Boston and London you might not be the nature of the particular atoms or cells involved, but about continuity. The Cell Replacement Test might have left you intact because it changed you gradually, one cell at a time. And if the Body Scattering Test were the end of you, maybe it’s because it happened all at the same time, breaking the continuity of you. This could also explain why the teletransporter might be a murder machine—London you has no continuity with your previous life.

So could it be that we’ve been off the whole time pitting the brain, the body, and the personality and memories against each other? Could it be that anytime you relocate your brain, or disassemble your atoms all at once, transfer your brain data onto a new brain, etc., you lose you, because maybe you’re not defined by any of these things on their own, but rather by a long and unbroken string of continuous existence?


A few years ago, my late grandfather, in his 90s and suffering from dementia, pointed at a picture on the wall of himself as a six-year-old. “That’s me!” he explained.

He was right. But come on. It seems ridiculous that the six-year-old in the picture and the extremely old man standing next to me could be the same person. Those two people had nothing in common. Physically, they were vastly different—almost every cell in the six-year-old’s body died decades ago. As far as their personalities—we can agree that they wouldn’t have been friends. And they shared almost no common brain data at all. Any 90-year-old man on the street is much more similar to my grandfather than that six-year-old.

But remember—maybe it’s not about similarity, but about continuity. If similarity were enough to define you, Boston you and London you, who are identical, would be the same person. The thing that my grandfather shared with the six-year-old in the picture is something he shared with no one else on Earth—they were connected to each other by a long, unbroken string of continuous existence. As an old man, he may not know anything about that six-year-old boy, but he knows something about himself as an 89-year-old, and that 89-year-old might know a bunch about himself as an 85-year-old. As a 50-year-old, he knew a ton about him as a 43-year-old, and when he was seven, he was a pro on himself as a 6-year-old. It’s a long chain of overlapping memories, personality traits, and physical characteristics.

It’s like having an old wooden boat. You may have repaired it hundreds of times over the years, replacing wood chip after wood chip, until one day, you realize that not one piece of material from the original boat is still part of it. So is that still your boat? If you named your boat Polly the day you bought it, would you change the name now? It would still be Polly, right?

In this way, what you are is not really a thing as much as a story, or a progression, or one particular theme of person. You’re a bit like a room with a bunch of things in it—some old, some new, some you’re aware of, some you aren’t—but the room is always changing, never exactly the same from week to week.

Likewise, you’re not a set of brain data, you’re a particular database whose contents are constantly changing, growing, and being updated. And you’re not a physical body of atoms, you’re a set of instructions on how to deal with and organize the atoms that bump into you.

People always say the word soul and I never really know what they’re talking about. To me, the word soul has always seemed like a poetic euphemism for a part of the brain that feels very inner to us; or an attempt to give humans more dignity than just being primal biological organisms; or a way to declare that we’re eternal. But maybe when people say the word soul what they’re talking about is whatever it is that connects my 90-year-old grandfather to the boy in the picture. As his cells and memories come and go, as every wood chip in his boat changes again and again, maybe the single common thread that ties it all together is his soul. After examining a human from every physical and mental angle throughout the post, maybe the answer this whole time has been the much less tangible Soul Theory.


It would have been pleasant to end the post there, but I just can’t do it, because I can’t quite believe in souls.

The way I actually feel right now is completely off-balance. Spending a week thinking about clones of yourself, imagining sharing your brain or merging yours with someone else’s, and wondering whether you secretly die every time you sleep and wake up as a replica will do that to you. If you’re looking for a satisfying conclusion, I’ll direct you to the sources below since I don’t even know who I am right now.

The only thing I’ll say is that I told someone about the topic I was posting on for this week, and their question was, “That’s cool, but what’s the point of trying to figure this out?” While researching, I came across this quote by Parfit: “The early Buddhist view is that much or most of the misery of human life resulted from the false view of self.” I think that’s probably very true, and that’s the point of thinking about this topic.



Very few of the ideas or thought experiments in this post are my original thinking. I read and listened to a bunch of personal identity philosophy this week and gathered my favorite parts together for the post. The two sources I drew from the most were philosopher Derek Parfit’s book Reasons and Persons and Yale professor Shelly Kagan’s fascinating philosophy course on death—the lectures are all watchable online for free.

Other Sources:
David Hume: Hume on Identity Over Time and Persons
Derek Parfit: We Are Not Human Beings
Peter Van Inwagen: Materialism and the Psychological-Continuity Account of Personal Identity
Bernard Williams: The Self and the Future
John Locke: An Essay Concerning Human Understanding (Chapter: Of Identity and Diversity)
Douglas Hofstadter: Gödel, Escher, Bach
Patrick Bailey: Concerning Theories of Personal Identity

“River’s Edge”: The darkest teen film of all time

“River’s Edge” understood ’80s kids — and what they’d do to combat that horrible feeling of emptiness


Keanu Reeves and Crispin Glover in “River’s Edge”

About a year and a half ago, I interviewed Daniel Waters, screenwriter of the enduring and dark teen comedy and media satire “Heathers,” for the book (“Twee”) I was researching at the time. The conversation was genial and funny, and I could tell he was what we used to call at my old employer Spin magazine a “quote machine.” Soon, the subject got around to the films of the late American auteur John Hughes, particularly his iconic high school trilogy of “Sixteen Candles” (1984), “The Breakfast Club” (1985) and “Pretty in Pink” (the 1986 romantic comedy that he wrote but did not direct). “I felt like Hughes was trying to coddle teenagers and almost suck up to them, idealize them,” Waters said, with almost no fear of reprisal from the many millions who hold these films (and Ferris Bueller … and even, to paraphrase Jeff Daniels in “The Squid and the Whale,” minor Hughes efforts like “Some Kind of Wonderful” and “She’s Having a Baby”) dear. “[With ‘Heathers’] I was more of a terrorist coming after John Hughes. What drove me nuts about the Hughes movies was the third act was always something about how bad adults were. When you grow up your heart dies. Hey, your heart dies when you’re 12!”

One could make an argument for Waters’ “Heathers” (directed as a gauzy, occasionally surrealist morality play by Michael Lehmann) as the darkest teen film of all time. The humor is pitch-black, there’s a body count, a monocle, corn nuts and an utter excoriation of clueless boomers who wonder, as the supremely camp Paul Lynde did a quarter of a century earlier in the film adaptation of “Bye Bye Birdie,” what (the fuck) is the matter with kids today?

But it’s not. Not even close, when compared with a film that preceded it by only three years, the Neal Jimenez-penned, Tim Hunter-directed 1986 drama “River’s Edge,” which is released this month on DVD after years of being difficult to find for home viewing. No other film captures more accurately what it’s like to be dead inside during the end of the Cold War, the height of MTV and the invasion of concerned but impotent parents. “River’s Edge” was the one film that seemed to understand that it wasn’t the rap music, heavy metal music or even drugs that made ’80s kids, it was … nothing. As in the feeling of searching your soul for what you should feel and finding it empty, and slowly, horrifyingly getting used to it to the point that at least one, maybe more of us, will do anything, even commit murder, in order to combat that horrible void. I didn’t want to kill anyone or even myself, but I wanted to disappear, or at least be frozen and wake up in art school in the early ’90s, when bands like Nirvana gave that feeling a voice, and a few anthems.

There’s a lot of Nirvana in “River’s Edge.” Most “what’s the matter with kids today?” films have their juvenile delinquents in some kind of drag: black leather jackets (“Blackboard Jungle,” “The Wild One”) or spiked hair and safety pins and pet rats (“Suburbia,” “Next Stop, Nowhere,” aka “the Punk Rock Quincy episode”). But the kids in “River’s Edge” dress in ripped jeans and T-shirts and chunky, shapeless sweaters. It’s sexless (the only sex scene takes place under a shitty maroon sleeping bag with bullfrogs croaking in the distance and a dead body being simultaneously disposed of not too far off). “The thing about a shark,” Robert Shaw famously observed during the “USS Indianapolis” speech in 1975’s “Jaws” just before all hell breaks loose, “is he’s got lifeless eyes. Black eyes. Like a doll’s eyes. When he comes at ya, he doesn’t even seem to be livin’… till he bites ya.” The kids who populate “River’s Edge,” Keanu Reeves’ Mike, Ione Skye’s Clarissa, Daniel Roebuck’s Samson, etc., don’t seem to be living, buzzed on sixers, many of which they must steal from a harried liquor store cashier (the great, recently late Taylor Negron), as they’re underage. Until they bite you. It’s hard to capture boredom on film without boring an audience (Richard Linklater’s “SubUrbia,” for one, tries and fails). What keeps viewers of “River’s Edge” on, well, edge is the sense that these black-eyed, dead creatures in inside-out heavy metal tees (Iron Maiden, Def Leppard, even the band logos are muted) might bite. It’s a sickening feeling and you cannot turn away.

The first thing we see is a preteen kid, Tim (Josh Miller), a juvenile delinquent fast in the making, with an earring, holding an actual doll. We notice, with a little required deduction as he barely reacts, that he is staring across the river at a murderer and his naked, blue-ing victim, while holding the doll he stole from his sister. All four have doll eyes: Tim, the corpse (Danyi Deats’ Jamie), the killer (Roebuck) and the doll, which Miller casually drops into the river despite knowing full well it’s his little sister’s security object and probably best friend. We are soon with Jamie and Samson after the crime has been committed. Samson is smoking. Despite the occasional feral howl that he knows nobody will hear (except Tim, which is the same thing), it feels like some kind of test for the audience. How much apathy can we weather? How many dead eyes can we stare back at? This is, of course, a testament to the young cast, all of them brilliant and committed (it can’t be easy to portray the bored and soulless, can it? You want to react, you want to break). Jamie, a stunned look on her face, lies there, in the cold, also a committed actress, and there is simply nothing like this in any other teen film, or even a teen-populated horror film. Horror films, as the “Scream” franchise would soon remind us, have rules. I wanted to enter the screen, like Jeff Daniels’ genial explorer in Woody Allen’s charming comedy from the previous year, “The Purple Rose of Cairo,” and cover her body somehow. But Hunter forces us to look, which could not be easy for him as an artist, and must have been a challenge for him to ask it of his young cast. In his review, the late Roger Ebert wrote, “The difference is that the film feels a horror that the teenagers apparently did not.”

“Where’s Jamie?” Samson’s crew asks once he leaves the crime scene (for more beer).

“I killed her,” he says.

Most don’t believe him, but Layne does (Crispin Glover, top billed and unmistakably launching his freak phase, only a year after playing Michael J. Fox’s bumbling dad in the blockbuster “Back to the Future”). Layne sees the event, the tragedy, as both fait accompli (“You’re gonna bring her back? It’s done!” he squeals in a reedy, wired voice) and a life-changing (and -saving) break in the day-in, day-out living hell; a kind of moral test. He believes Samson, he rallies around Samson, and he tries to motivate his crew to do the same. The corpse is a gift to Layne and Layne returns the favor by pledging his loyalty. He can’t help smiling when he is led to the site. “This is unreal! Completely unreal. It’s like some movie, you know?” Layne enters the movie, doing a reverse “Purple Rose …” Even Samson doesn’t want in. He wants out … of the world, and yet he becomes strangely proud when he displays the body to his group of friends, who borrow a red pickup truck to end their suspicion that they are being jerked around. Most of them instantly recoil at the sight of the corpse (still naked!) and cannot get back to the torpor (arcades, sex, beer) quickly enough. Only Reeves’ Mike is conflicted and contemplates going to the cops. Similar terrain was covered in the hit “Stand By Me,” which was released the same year. “You guys wanna see a dead body?” Jerry O’Connell’s Vern asks his pals River Phoenix, Corey Feldman and Wil Wheaton, but they are clearly spooked and remain so into adulthood (as the narrator, Richard Dreyfuss, attests). The kids of this bumfuck town go about their bumfuck business, sleeping through class, hating their non-bio broken-home inhabitants (“Motherfucker, food eater!” Reeves yells at the slob who’s moved in with his mom). They’re not stupid. They’re just … unequipped for reality that does not repeat on a loop, sun up and sun down.
Layne, in his makeup, watch cap, black clothes and muscle car, is the only one among them who wants to feel “like Starsky and Hutch!”

“River’s Edge” is based, loosely, on reality. In late 1981, a 16-year-old student, Anthony Broussard, from Milpitas High School, near San Jose, California, led a group of his friends and his 8-year-old brother into the hills to see the barely clothed body of the 14-year-old Marcy Renee Conrad, whom he’d strangled days before. “Then instead of reporting the body of their dead school chum to the police,” reporter Claire Spiegel wrote in her coverage of the case, “they went back to class or the local pinball arcade. One went home and fell asleep listening to the radio.” She added, “Their surprising apathy toward murder bothered even hardened homicide detectives.”

Jimenez, then a college student in Santa Clara, California, read about the events and was inspired to begin working on a story based on this behavior. In the age of “Serial,” it’s hard not to see “River’s Edge” as prescient, and when I listened to the podcast last year, I thought a lot about the film. But its power comes not from reality, but from its craft: the script, the performances and the cinematography by David Lynch collaborator Frederick Elmes, who shot “Blue Velvet,” another milestone ’86 release. The beauty of the exteriors (the grainy opening, the murky drink, the perfect blue and shadows when Layne half-heartedly disposes of the body in it) makes the ugliness of the behavior all the more disturbing.

Director Tim Hunter knew his way around a “youth gone fucked up” film by ’86. He was the co-writer of “Over the Edge,” known mostly as the film debut of then 14-year-old Matt Dillon, who utters the pull-away line, “A kid who tells on another kid is a dead kid.” Loaded with excellent power pop (Cheap Trick’s “Downed” and “Surrender,” especially), the film follows Dillon and his J.D. friends as they spoil the planned suburban community of “New Granada” on their dirt bikes, shooting off fireworks and BB guns. Dillon starred in Hunter’s directing debut, 1982’s “Tex,” based on a book by go-to wild, but sensitive, youth writer S.E. Hinton. Who knows why he didn’t appear in “River’s Edge.” Maybe it was too easy to see the heart beating under his flannel. Even Judd Nelson’s John Bender has a heart under his, and at the end of John Singleton’s 1991 film “Boyz n the Hood,” Ice Cube’s scowling gang member Doughboy has a monologue that provides evidence that he’s got a big one. (“Turned on the TV this morning. Had this shit on about living in a violent world. Showed all these foreign places, where foreigners live and all. Started thinking, man. Either they don’t know, don’t show, or don’t care about what’s going on in the hood.”)

The parents of New Granada are pretty pissed that their utopia has been vandalized and rally in protest, but the boomers of “River’s Edge” don’t even have the fight in them. There’s no Ms. Fleming from “Heathers” among them. Nobody will call when the shuttle lands. “Fuck you, man,” one of them rages in vain at his class. “You don’t give a damn! I don’t give a damn! No one in this classroom gives a damn that she’s dead. It gives us a chance to feel superior.” “Are we being tested on this shit?” a student asks. Even the media don’t really care. And if the kids themselves are apathetic (“I don’t give a fuck about you and I don’t give a fuck about your laws,” Samson tells Negron’s clerk before brandishing a gun), the new generation cares even less. Not even teenagers, they smoke weed, pack heat and drive big gas guzzlers they can barely see out of, when not speeding through nowheresville on their bikes or shooting trapped crayfish in a barrel, literally. Full disclosure: I was friendly with Josh Miller in Hollywood in the early ’90s. For a time, he was going to star in and produce a pretty decent screenplay I’d co-written, which eventually fell through. In person he was sweet, generous and caring, but I always, always looked at him sideways because he was also … Tim, who utters the following line: “Go get your numchucks and your dad’s car. I know where we can get a gun.”

There’s irony and black humor in “River’s Edge.” I don’t want to portray it as some kind of Fassbinder-ish downer, 90 minutes of misery. Samson promises to read Dr. Seuss to his incapacitated aunt. And there’s, of course, Layne, who doesn’t even seem to realize that nearly every line out of his mouth is absolutely ridiculous (which makes him beyond endearing, sociopath that he likely is). When he is rewarded with his sixer for chucking the corpse in the river, he complains, “You’d think I’d at least rate Michelob.” I wonder why Reeves became a star (this is only his second film, after a small part in the Rob Lowe hockey drama “Youngblood”) and Ione Skye, more briefly, a sought-after actress. Perhaps because his albeit belated actions make him as close to a hero as the film has … discounting, of course, Feck.

You know you are dealing with a dark film when its only true beating heart belongs to a crippled biker, weed dealer and fugitive murderer who is in love with a blow-up doll, having blown the head off his previous paramour. Feck lives alone. Feck, at the behest of Layne, briefly hides Samson. And, realizing he is dealing with a soulless and dangerous generation, Feck does what dozens of teachers and parents cannot, and will not do. He reacts. Perhaps it’s a testament to his skill, but Dennis Hopper the man, then in the midst of a glorious comeback (he’d appear in “Blue Velvet” and receive an Oscar nomination for the basketball film “Hoosiers”), looks genuinely heartbroken at what’s happened to the youth he fought so hard to liberate with his “Easy Rider.” It’s Feck that Samson finally opens up to (“She was dead there in front of me and I felt so fucking alive”). We don’t know why Feck shot his ex, but we do know that he maintains that he loved her. He sees none of that emotion, no emotion at all, in Samson. “I’m dead now,” Samson says. “They’re gonna fry me for sure.” Thanks to Feck, they won’t get the chance.

“River’s Edge” doesn’t end in a trial, but rather a quiet, plain, sparse church funeral and a bit of long-absent dignity returned to the victim. It somehow relieves the viewer. Sanity, as it is, has been restored. No one would call it a feel-good ending but somehow, strangely, bloodily, perversely, love wins in the end. “There was no hope for him. There was no hope at all. He didn’t love her. He didn’t feel a thing. I at least loved [mine],” Feck explains. “I cared for her.”

Released in May of ’87 in limited theaters, the movie quickly made a mark with critics, if not audiences, and began to amass a loyal cult of viewers who appreciated its unique and revolutionary qualities. It beat out Jonathan Demme’s “Swimming to Cambodia,” the acclaimed Spalding Gray monologue film, at the Independent Spirit Awards, as well as John Huston’s final film, “The Dead.” And while far from a box office hit, it effortlessly set a precedent for films about teens. They no longer had to be either good or evil or anything at all. They didn’t have to dress or look like James Dean or Droogs or get off in any way on their heroism and their villainy. “River’s Edge” made all that seem quaint. It’s a singular film that foresaw the ’90s and freed the cinema teen to be a loser … baby.

Turns out “Friends” treated fat people as punch lines and kind of had a homophobia problem

“Chandler’s treatment of his gay father is appalling”: Everything critics realized while watching “Friends” in 2015



“Friends” hit Netflix for the first time in 2015, and while it’s certainly not the first time people have had the opportunity to rewatch the show since it went off the air in 2004, it has provided a handy excuse for people to ruminate belatedly on the show’s impact — and for crazy super-fans to binge-watch all 10 seasons, obviously — and perhaps learn something new about the gang in the process. And they did! Some revelations were goofy, some light, and others pretty damning. Here’s what the Internet dug up about our favorite sitcom when viewed in the cold harsh light of 2015:

Chandler is the worst, and he’s also pretty homophobic.

As Ruth Graham wrote in Slate: “Chandler’s treatment of his gay father, a Vegas drag queen played by Kathleen Turner, is especially appalling, and it’s not clear the show knows it. It’s one thing for Chandler to recall being embarrassed as a kid, but he is actively resentful and mocking of his loving, involved father right up until his own wedding (to which his father is initially not invited!)… his continuing discomfort now reads as jarringly out of place for a supposedly hip New York thirtysomething — let alone a supposedly good person, period…. When it comes to women, Chandler turns out to be just as retrograde as Joey, but his lust comes with an undercurrent of the kind of bitter desperation that I now recognize as not only gross, but potentially menacing.”

This, of course, is not the first time the show’s homophobia has been addressed.

Refinery29, meanwhile, delved into the issues with the show’s use of “Fat Monica” as a punchline.

“In the show’s storyline, Monica loses weight in college after overhearing Chandler make fun of her size. Shamed into thinness, Fat Monica becomes just Monica — desirable and (finally) human. Monica is many things: funny, uptight, loving, competitive. Fat Monica is just fat… and always hungry. I was grateful for Fat Monica as a kid. She was proof I could overcome my disgusting plumpness and be seen as lovable, too. True, I would always bear the shame of my inflated past, just like Monica did, but I was willing to live with that if it meant I’d be a person instead of a punchline.”

The Globe and Mail’s John Doyle, meanwhile, asked if nostalgia for “Friends” is all about white privilege.

“The issues of race and ‘white privilege’ make some Americans deeply uncomfortable. Maybe, at a time when mainstream U.S. TV is finally airing shows with ensemble casts that look like the ensemble that is America, and after the shooting of Trayvon Martin, and after the shooting and rioting in Ferguson, Mo., and all the attendant questions raised, there’s an instinctive need on the part of some to return to the bubble of white-bread America that is epitomized by ‘Friends.’”

Yet while some griped about the show’s retrograde identity politics, others were able to find a feminist message.

Refinery29 picked out 21 of “Friends’” most surprising feminist moments. Meanwhile, Bustle listed the nine most feminist things about “Friends,” such as:

“When Rachel got pregnant, she turned down marriage proposals from both Joey and Ross. Being married and having a family don’t necessarily have to be connected, and Rachel was the hottest single mom on network television, and everyone respected (and applauded) her decision.”

In an interesting morsel of critical theory, Maggie Wheeler — aka Janice —  suggests her character is a stand-in for the viewer. As she tells EW:

“This crazy girl who is not particularly self-aware who still gets to be at the party. This interloper, this outsider managed to find her way into this little community of friends, and I think that was a vehicle for a lot of viewers who were sitting around in front of their televisions going, ‘Well, how do I hang out with those people?’”

There were also some novel discoveries on a more micro level — like the fact that the “Friends” intro without music is super creepy.

Perhaps the biggest revelation of all: Some geniuses at Bustle discovered the answer to the age-old question — How was the gang always able to get a seat at Central Perk?

Meanwhile, as part of their comprehensive “Friends” countdown — which is full of gems — Vulture reminded us that “Friends” actually invented the term “friend zone.”

There was much discussion about why the Netflix episodes were shorter than the DVD episodes.

Turns out that, back in 2012, co-executive producer and director Kevin S. Bright explained that the DVDs had a few minutes of extra footage: “The deleted footage was, frankly, added specifically for one home video release,” he said. “If fans are particularly interested in additional footage, those versions are still available. But for this, we wanted something that we, the creators, felt represented the show as we always wanted it to be remembered, which is the original NBC broadcast versions, which have never before been released as that, combined with fantastic new picture and sound, a new documentary and other new features.”

Still, just remember: no matter how these revelations may make you see “Friends” in a new light, it’s still okay to love the show (albeit with a grain of salt).

As Vulture’s Margaret Lyons wrote in her “Stay Tuned” TV advice column, in response to a reader expressing discomfort with the show’s homophobia: “You can still love ‘Friends,’ but why would you want to love it like you did before? Love it the way you see it now, with the things you know now and the values you have now. I love ‘Friends,’ but I do not love its body or queer politics. Those things can be true at the same time.”

Drones and the New Ethics of War

Protesters march against President Obama’s drone wars on the day of his second inauguration on January 21, 2013. (Photo: Debra Sweet/flickr/cc)

This Christmas, small drones were among the most popular gifts under the tree in the U.S., with manufacturers stating that they sold 200,000 new unmanned aerial vehicles during the holiday season. While the rapid infiltration of drones into the gaming domain clearly reflects that drones are becoming a common weapon among armed forces, their appearance in Walmart, Toys “R” Us and Amazon serves, in turn, to normalize their deployment in the military.

Drones, as Grégoire Chamayou argues in his new book, A Theory of the Drone, have a uniquely seductive power, one that attracts militaries, politicians and citizens alike. A research scholar in philosophy at the Centre National de la Recherche Scientifique in Paris, Chamayou is one of the most profound contemporary thinkers working on the deployment of violence and its ethical ramifications. And while his new book offers a concise history of drones, it focuses on how drones are changing warfare and their potential to alter the political arena of the countries that utilize them.

If Guantanamo was the icon of President George W. Bush’s anti-terror policy, drones have become the emblem of the Obama presidency.

Chamayou traces one of the central ideas informing the production and deployment of drones back to John W. Clark, an American engineer who carried out a study on “remote control in hostile environments” in 1964. In Clark’s study, space is divided into two kinds of zones—hostile and safe—while robots operated by remote control are able to relieve human beings of all perilous occupations within hostile zones. The sacrifice of miners, firefighters, or those working on skyscrapers will no longer be necessary, since the collapse of a tunnel in the mines, for example, would merely lead to the loss of several robots operated by remote control.

The same logic informed the creation of drones. They were initially utilized as part of the military’s defense system in hostile territories. After the Egyptian military shot down about 30 Israeli fighter jets in the first hours of the 1973 war, Israeli air-force commanders decided to change their tactics and send a wave of drones. As soon as the Egyptians fired their initial salvo of anti-aircraft missiles at the drones, the Israeli airplanes were able to attack as the Egyptians were reloading.

Over the years, drones have also become an important component of the intelligence revolution. Instead of sending spies or reconnaissance airplanes across enemy lines, drones can continuously fly above hostile terrain gathering information. As Chamayou explains, drones do not merely provide a constant image of the enemy, but manage to fuse together different forms of data. They carry technology that can interpret electronic communications from radios, cell phones and other devices and can link a telephone call with a particular video or provide the GPS coordinates of the person using the phone. Their target is, in other words, constantly visible.

Using drones to draw enemy fire or for reconnaissance was, of course, considered extremely important, yet military officials aspired to transform drones into lethal weapons as well. On February 16, 2001, after many years of U.S. investment in R&D, a Predator drone first successfully fired a missile and hit its target. As Chamayou puts it, the notion of turning the Predator into a predator had finally been realized. Within a year, the Predator was preying on live targets in Afghanistan.

A Humanitarian Weapon

Over the past decade, the United States has manufactured more than 6,000 drones of various kinds. Some 160 of these are Predators, which are used not only in Afghanistan but also in countries officially at peace with the US, such as Yemen, Somalia and Pakistan. In Pakistan, CIA drones carry out, on average, one strike every four days. Although exact figures are difficult to establish, estimates of the number of fatalities between 2004 and 2012 range from 2,562 to 3,325.

Chamayou underscores how drones are changing our conception of war in three major ways. First, the idea of a frontier or battlefield is rendered meaningless, as is the idea that there are particular places—like homesteads—where the deployment of violence is considered criminal. In other words, if once the legality of killing depended on where the killing was carried out, today US lawyers argue that the traditional connection between geographical spaces—such as the battlefield, home, hospital, mosque—and forms of violence is out of date. Accordingly, every place becomes a potential site of drone violence.

Second, the development of the “precise missiles” with which most drones are currently armed has led to the popular conception that drones are precise weapons. Precision, though, is a slippery concept. For one, chopping off a person’s head with a machete is much more precise than any missile, but there is no political or military support for precision of this kind in the West. Indeed, “precision” turns out to be an extremely capacious category. The U.S., for example, counts all military-age males in a strike zone as combatants unless there is explicit intelligence proving them innocent posthumously. The real ruse, then, has to do with the relation between precision and geography. As precise weapons, drones also render geographical contours irrelevant, since the ostensible precision of these weapons justifies the killing of suspected terrorists in their homes. A legal strike zone is then equated with anywhere the drone strikes. And when “legal killing” can occur anywhere, then one can execute suspects anywhere—even in zones traditionally conceived as off-limits.

Finally, drones change our conception of war because it becomes, in Chamayou’s words, a priori impossible to die as one kills. One air-force officer formulated this basic benefit in the following manner: “The real advantage of unmanned aerial systems is that they allow you to project power without projecting vulnerability.” Consequently, drones are declared to be a humanitarian weapon in two senses: they are precise vis-à-vis the enemy, and they ensure no human cost to the perpetrator.

From Conquest to Pursuit

If Guantanamo was the icon of President George W. Bush’s anti-terror policy, drones have become the emblem of the Obama presidency. Indeed, Chamayou maintains that President Barack Obama has adopted a totally different anti-terror doctrine from his predecessor: kill rather than capture, replace torture with targeted assassinations.

Citing a New York Times report, Chamayou describes the way in which deadly decisions are reached: “It is the strangest of bureaucratic rituals… Every week or so, more than 100 members of the sprawling national security apparatus gather by secure video teleconference, to pore over terrorist suspects’ biographies and to recommend to the president who should be the next to die.” In D.C., this is called “Terror Tuesday.” Once established, the list is subsequently sent to the White House, where the president gives his oral approval for each name. “With the kill list validated, the drones do the rest.”

Obama’s doctrine entails a change in the paradigm of warfare. In contrast to military theorist Carl von Clausewitz, who claimed that the fundamental structure of war is a duel of two fighters facing each other, we now have, in Chamayou’s parlance, a hunter closing in on its prey. Chamayou, who also wrote Manhunts: A Philosophical History, which examines the history of hunting humans from ancient Sparta to the modern practice of chasing undocumented migrants, recounts how according to English common law one could hunt badgers and foxes in another man’s land, “because destroying such creatures is said to be profitable to the Public.” This is precisely the kind of law that the US would like to claim for drones, he asserts.

The strategy of militarized manhunting is essentially preemptive. It is not a matter of responding to actual attacks but rather preventing the possibility of emerging threats by the early elimination of potential adversaries. According to this new logic, war is no longer based on conquest—Obama is not interested in colonizing swaths of land in northern Pakistan—but on the right of pursuit. The right to pursue the prey wherever it may be found, in turn, transforms the way we understand the basic principles of international relations since it undermines the notion of territorial integrity as well as the idea of nonintervention and the broadly accepted definition of sovereignty as the supreme authority over a given territory.

Wars without Risks

The transformation of Clausewitz’s warfare paradigm manifests itself in other ways as well. Drone wars are wars without losses or defeats, but they are also wars without victory. The combination of the two lays the ground for perpetual violence, the utopian fantasy of those profiting from the production of drones and similar weapons.

Just as importantly, drones change the ethics of war. According to the new military morality, to kill while exposing one’s life to danger is bad; to take lives without ever endangering one’s own is good. Bradley Jay Strawser, a professor of philosophy at the US Naval Postgraduate School in California, is a prominent spokesperson for the “principle of unnecessary risk.” It is, in his view, wrong to command someone to take an unnecessary risk, and consequently it becomes a moral imperative to deploy drones.

Exposing the lives of one’s troops was never considered good, but historically it was believed to be necessary. Therefore dying for one’s country was deemed to be the greatest sacrifice and those who did die were recognized as heroes. The drone wars, however, are introducing a risk-free ethics of killing. What is taking place is a switch from an ethics of “self-sacrifice and courage to one of self-preservation and more or less assumed cowardice.”

Chamayou refers to this as “necro-ethics.” Paradoxically, necro-ethics is, on the one hand, vitalist in the sense that the drone supposedly does not kill innocent bystanders while securing the life of the perpetrator. This has far-reaching implications, since the more ethical the weapon seems, the more acceptable it is and the more readily it will likely be used. On the other hand, the drone advances the doctrine of killing well, and in this sense stands in opposition to the classical ethics of living well or even dying well.

Transforming Politics in the Drone States

Drones also change politics within the states that deploy them. Because drones transform warfare into a ghostly teleguided act orchestrated from a base in Nevada or Missouri, in which soldiers no longer risk their lives, the critical attitude of the citizenry towards war is profoundly transformed, altering, as it were, the political arena within drone states.

Drones, Chamayou says, are a technological solution for the inability of politicians to mobilize support for war. In the future, politicians might not need to rally citizens, because once armies begin deploying only drones and robots there will be no need for the public even to know that a war is being waged. So while, on the one hand, drones help produce social legitimacy for warfare by reducing risk, on the other hand, they render social legitimacy irrelevant to the political decision-making process relating to war. This drastically reduces the threshold for resorting to violence, so much so that violence appears increasingly as a default option for foreign policy. Indeed, the transformation of wars into a risk-free enterprise will render them even more ubiquitous than they are today. This too will be one of Obama’s legacies.

Neve Gordon is an Israeli activist and the author of Israel’s Occupation.

Why All the Explanations for the Charlie Hebdo Attack Are Overly Simplistic

Our failure to acknowledge messy and complicated reality puts us in greater peril.

In times of crisis, those who would like us to keep just one idea in our heads at any one time are quick to the megaphones. By framing events in Manichean terms – dark versus light; good versus evil – an imposed binary morality seeks to corral us into crude camps. There are no dilemmas, only declarations. What some lack in complexity they make up for in polemical clarity and the provision of a clear enemy.

A black man kills two policemen in their car in New York, and suddenly those who protested against the police killing unarmed black people across the country and going unpunished have blood on their hands. Sony pulls a film about the fictional assassination of a real foreign leader after threats of violent reprisals, and suddenly anyone who challenged the wisdom of making such a film is channelling their inner Neville Chamberlain. Straw men are stopped and searched in case they are carrying nuance and then locked up until the crisis is over. No charges are ever brought because a trial would require questions and evidence. You’re either with us or against us.

The horrific events of the past week have provided one such crisis. From both the left and right, efforts to explain the assassinations at Charlie Hebdo magazine, a kosher supermarket and elsewhere inevitably become reductive. Most seek, with a singular linear thesis, to explain what happened and what we should do about it: it’s about Islam; it has nothing to do with Islam; it’s about foreign policy; it has nothing to do with foreign policy; it’s war; it’s criminality; it’s about freedom of speech, integration, racism, multiculturalism.

There is something to most of these. And yet not enough to any one of them to get anywhere close. Too few, it seems, are willing to concede that while the act of shooting civilians dead where they live and work is crude, the roots of such actions are deep and complex, and the motivations, to some extent, unknowable and incoherent. The bolder each claim, the more likely it is to contain a qualifying or even contradictory argument at least as plausible.

Clearly, this was an attack on free speech. Despite the bold statements of the past week any cartoonist will now think more than twice before drawing the kind of pictures for which Charlie Hebdo became notorious. This principle should be unequivocally defended. It should also be honestly defined.

Every country, including France, has limits on freedom of speech. In 2005 Le Monde was found guilty of “racist defamation” against Israel and the Jewish people. In 2008 a cartoonist at Charlie Hebdo was fired after refusing to apologise for making antisemitic remarks in a column. And two years before the Danish paper Jyllands-Posten published the cartoons of Muhammad in 2006, it rejected ones offering a light-hearted take on the resurrection of Christ for fear they would “provoke an outcry”.

Far from being “sacred”, as some have claimed, freedom of speech is always contingent. All societies draw lines – ill-defined, constantly shifting and continually debated – about what constitutes acceptable standards of public discourse when it comes to cultural, racial and religious sensitivities. The question is whether those lines count for Muslims too.

The demand that Muslims should have to answer for these killings is repugnant. Muslims can no more be held responsible for these atrocities than Jews can for the bombings in Gaza. Muslims do not form a monolithic community; nor does their religion define their politics – indeed they are the people most likely to be killed by Islamic extremists. The Paris killers shot a Muslim policeman; the next day a Muslim shop assistant hid 15 people in the freezer of a kosher deli while the shooter held hostages upstairs. Nobody elected these gunmen; they don’t represent anyone.

That said, it is simply untenable to claim that these attackers had nothing to do with Islam, any more than it would be to say the Ku Klux Klan had nothing to do with Christianity, or that India’s BJP has nothing to do with Hinduism. It is within the ranks of that religion that this particular strain of violence has found inspiration and justification. That doesn’t make the justifications valid or the inspirations less perverted. But it doesn’t render them irrelevant either.

Those who claim that Islam is “inherently” violent are more hateful, but no less nonsensical, than those who claim it is “inherently” peaceful. The insistence that these hateful acts are refuted by ancient texts makes as much sense as insisting they are supported by them. Islam, like any religion, isn’t “inherently” anything but what people make of it. A small but significant minority have decided to make it violent.

There is no need to be in denial about this. Given world events over the past decade or so, the most obvious explanation is also the most plausible: the fate of Muslims in foreign conflicts played a role in radicalising these young men. Working-class Parisians don’t go to Yemen for military training on a whim. Since their teens these young men have been raised on a nightly diet of illegal wars, torture and civilian massacres in the Gulf and the Middle East in which the victims have usually been Muslim.

In a court deposition in 2007, Chérif Kouachi, the younger of the brothers affiliated with al-Qaida who shot the journalists at Charlie Hebdo, was explicit about this. “I got this idea when I saw the injustices shown by television on what was going on over there. I am speaking about the torture that the Americans have inflicted on the Iraqis.”

In a video from beyond the grave, the other shooter, Amedy Coulibaly, claims he joined Islamic State to avenge attacks on Muslims. These grievances are real even if attempts to square them with the killers’ actions make your head hurt. France opposed the Iraq war; Isis and al-Qaida have been sworn enemies and both have massacred substantial numbers of Muslims. Not only is the morality bankrupt, but the logic is warped.

This is why describing these attacks as criminal is both axiomatic and inadequate. They were not robbing a bank or avenging a turf war. Anti-terrorism police described the assault on the magazine as “calm and determined”. They walked in, asked for people by name, and executed them. Coulibaly killed a policewoman and shot a jogger before holding up a kosher supermarket and killing four Jews. These were, for the most part, not accidental targets. Nor were they acts of insanity. They were calculated acts of political violence driven by the incoherent allegiances of damaged and dangerous young men.

They are personally responsible for what they did. But we, as a society, are collectively responsible for the conditions that produced them. And if we want others to turn out differently – less hateful, more hopeful – we will have to keep more than one idea in our heads at the same time.

Leo Tolstoy’s theory of everything

Before writing some of the greatest novels in history, Tolstoy asked some of philosophy’s hardest questions

Tolstoy’s first diary, started on March 17, 1847, at the age of eighteen, began as a clinical investigation launched under laboratory conditions: in the isolation of a hospital ward, where he was being treated for a venereal disease. A student at Kazan University, he was about to drop out due to lack of academic progress. In the clinic, freed from external influences, the young man planned to “enter into himself” for intense self-exploration (vzoiti sam v sebia; 46:3). On the first page, he wrote (then crossed out) that he was in complete agreement with Rousseau on the advantages of solitude. This act of introspection had a moral goal: to exert control over his runaway life. Following a well-established practice, the young Tolstoy approached the diary as an instrument of self-perfection.

But this was not all. For the young Tolstoy, keeping a diary (as I hope to show) was also an experimental project aimed at exploring the nature of self: the links connecting a sense of self, a moral ideal, and the temporal order of narrative.

From the very beginning there were problems. For one, the diarist obviously found it difficult to sustain the flow of narrative. To fill the pages of his first diary, beginning on day two, Tolstoy gives an account of his reading, assigned by a professor of history: Catherine the Great’s famous Instruction (Nakaz), as compared with Montesquieu’s L’Esprit des lois. This manifesto aimed at regulating the future social order, and its philosophical principles, rooted in the French Enlightenment (happy is a man in whom will rules over passions, and happy is a state in which laws serve as an instrument of such control), appealed to the young Tolstoy. But with the account of Catherine’s utopia (on March 26), Tolstoy’s first diary came to an end.

When he started again (and again), Tolstoy commented on the diary itself, its purpose and uses. In his diary, he will evaluate the course of self-improvement (46:29). He will also reflect on the purpose of human life (46:30). The diary will contain rules pertaining to his behavior in specific times and places; he will then analyze his failures to follow these rules (46:34). The diary’s other purpose is to describe himself and the world (46:35). But how? He looked in the mirror. He looked at the moon and the starry sky. “But how can one write this?” he asked. “One has to go, sit at an ink-stained desk, take coarse paper, ink . . . and trace letters on paper. Letters will make words, words—phrases, but is it possible to convey one’s feeling?” (46:65). The young diarist was in despair.

Apart from the diaries, the young Tolstoy kept separate notebooks for rules: “Rules for Developing Will” (1847), “Rules of Life” (1847), “Rules” (1847 and 1853), and “Rules in General” (1850) (46:262–76), as well as “Rules for playing music” (46:36) and “Rules for playing cards in Moscow until January 1” (46:39). There are also rules for determining “(a) what is God, (b) what is man, and (c) what are the relations between God and man” (46:263). It would seem that in these early journals, Tolstoy was actually working not on a history but on a utopia of himself: his own personal Instruction.

Yet another notebook from the early 1850s, “Journal for Weaknesses” (Zhurnal dlia slabostei)—or, as he called it, the “Franklin journal”—listed, in columns, potential weaknesses, such as laziness, mendacity, indecision, sensuality, and vanity, and Tolstoy marked (with small crosses) the qualities that he exhibited on a particular day. Here, Tolstoy was consciously following the method that Benjamin Franklin had laid out in his famous autobiography. There was also an account book devoted to financial expenditures. On the whole, on the basis of these documents, it appears that the condition of Tolstoy’s moral and monetary economy was deplorable. But another expenditure presented still graver problems: that of time.

Along with the first, hesitant diaries, for almost six months in 1847 Tolstoy kept a “Journal of Daily Occupations” (Zhurnal ezhednevnykh zaniatii; 46:245–61), the main function of which was to account for the actual expenditure of time. In the journal, each page was divided into two vertical columns: the first one, marked “The Future,” listed things he planned to do the next day; a parallel column, marked “The Past,” contained comments (made a day later) on the fulfillment of the plan. The most frequent entry was “not quite” (nesovsem). One thing catches the eye: there was no present.

The Moral Vision of Self and the Temporal Order of Narrative

Beginning in 1850, the time scheme of Tolstoy’s “Journal of Daily Occupations” and the moral accounting of the Franklin journal were incorporated into a single narrative. Each day’s entry was written from the reference point of yesterday’s entry, which ended with a detailed schedule for the next day—under tomorrow’s date. In the evening of the next day, Tolstoy reviewed what he had actually done, comparing his use of time to the plan made the previous day. He also commented on his actions, evaluating his conduct on a general scale of moral values. The entry concluded with a plan of action and a schedule for yet another day. The following entry (from March 1851) is typical for the early to mid-1850s:

24. Arose somewhat late and read, but did not have time to write. Poiret came, I fenced, and did not send him away (sloth and cowardice). Ivanov came, I spoke with him for too long (cowardice). Koloshin (Sergei) came to drink vodka, I did not escort him out (cowardice). At Ozerov’s argued about nothing (habit of arguing) and did not talk about what I should have talked about (cowardice). Did not go to Beklemishev’s (weakness of energy). During gymnastics did not walk the rope (cowardice), and did not do one thing because it hurt (sissiness).—At Gorchakov’s lied (lying). Went to the Novotroitsk tavern (lack of fierté). At home did not study English (insufficient firmness). At the Volkonskys’ was unnatural and distracted, and stayed until one in the morning (distractedness, desire to show off, and weakness of character). 25. [This is a plan for the next day, the 25th, written on the 24th—I.P.] From 10 to 11 yesterday’s diary and to read. From 11 to 12—gymnastics. From 12 to 1—English. Beklemishev and Beyer from 1 to 2. From 2 to 4—on horseback. From 4 to 6—dinner. From 6 to 8—to read. From 8 to 10—to write.—To translate something from a foreign language into Russian to develop memory and style.—To write today with all the impressions and thoughts it gives rise to.—25. Awoke late out of sloth. Wrote my diary and did gymnastics, hurrying. Did not study English out of sloth. With Begichev and with Islavin was vain. At Beklemishev’s was cowardly and lack of fierté. On Tver Boulevard wanted to show off. I did not walk on foot to the Kalymazhnyi Dvor (sissiness). Rode with a desire to show off. For the same reason rode to Ozerov’s.—Did not return to Kalymazhnyi, thoughtlessness. At the Gorchakovs’ dissembled and did not call things by their names, fooling myself. Went to L’vov’s out of insufficient energy and the habit of doing nothing. Sat around at home out of absentmindedness and read Werther inattentively, hurrying.
26 [This is a plan for the next day, the 26th, written on the 25th—I.P.] To get up at 5. Until 10—to write the history of this day. From 10 to 12—fencing and to read. From 12 to 1—English, and if something interferes, then in the evening. From 1 to 3—walking, until 4—gymnastics. From 4 to 6, dinner—to read and write.— (46:55).

An account of the present as much as a plan for the future, this diary combines the prescriptive and the descriptive. In the evening of each day, the young Tolstoy reads the present as a failure to live up to the expectations of the past, and he anticipates a future that will embody his vision of a perfect self. The next day, he again records what went wrong today with yesterday’s tomorrow. Wanting reality to live up to his moral ideal, he forces the past to meet the future.

In his attempt to create an ordered account of time, and thus a moral order, Tolstoy’s greatest difficulty remains capturing the present. Indeed, today makes its first appearance in the diary as tomorrow, embedded in the previous day and usually expressed in infinitive verb forms (“to read,” “to write,” “to translate”). On the evening of today, when Tolstoy writes his diary, today is already the past, told in the past tense. His daily account ends with a vision of another tomorrow. Since it appears under tomorrow’s date, it masquerades as today, but the infinitive forms of the verbs suggest timelessness.

In the diaries, unlike in the “Journal of Daily Occupations,” the present is accorded a place, but it is deprived of even a semblance of autonomy: The present is a space where the past and the future overlap. It appears that the narrative order of the diary simply does not allow one to account for the present. The adolescent Tolstoy’s papers contain the following excerpt, identified by the commentators of Tolstoy’s complete works as a “language exercise”: “Le passé est ce qui fut, le futur est ce qui sera et le présent est ce qui n’est pas.—C’est pour cela que la vie de l’homme ne consiste que dans le futur et le passé et c’est pour la même raison que le bonheur que nous voulons posséder n’est qu’une chimère de même que le présent” (1:217).  (The past is that which was, the future is that which will be, and the present is that which is not. That is why the life of man consists in nothing but the future and the past, and it is for the same reason that the happiness we want to possess is nothing but a chimera, just as the present is.) Whether he knew it or not, the problem that troubled the young Tolstoy, as expressed in this language exercise, was a common one, and it had a long history.

What Is Time? Cultural Precedents

It was Augustine, in the celebrated Book 11 of the Confessions, who first expressed his bewilderment: “What is time?” He argued as follows: The future is not yet here, the past is no longer here, and the present does not remain. Does time, then, have a real being? What is the present? The day? But “not even one day is entirely present.” Some hours of the day are in the future, some in the past. The hour? But “one hour is itself constituted of fugitive moments.”

Time flies quickly from future into past. In Augustine’s words, “the present occupies no space.” Thus, “time” both exists (the language speaks of it and the mind experiences it) and does not exist. The passage of time is both real and unreal (11.14.17–11.17.22). Augustine’s solution was to turn inward, placing the past and the future within the human soul (or mind), as memory and expectation. Taking his investigation further, he argues that these qualities of mind are observed in storytelling and fixed in narrative: “When I am recollecting and telling my story, I am looking on its image in present time, since it is still in my memory” (11.18.23). As images fixed in a story, both the past and the future lie within the present, which thus acquires a semblance of being. In the mind, or in the telling of one’s personal story, times exist all at once as traces of what has passed and will pass through the soul. Augustine thus linked the issue of time and the notion of self. In the end, the question “What is time?” was an extension of the fundamental question of the Confessions: “What am I, my God? What is my Nature?” (10.17.26).

For centuries philosophers continued to refine and transform these arguments. Rousseau reinterpreted Augustine’s idea in a secular perspective, focusing on the temporality of human feelings. Being attached to things outside us, “our affections” necessarily change: “they recall a past that is gone” or “anticipate a future that may never come into being.” From his own experience, Rousseau knew that the happiness for which his soul longed was not one “composed of fugitive moments” (“le bonheur que mon coeur regrette n’est point composé d’instants fugitives”) but a single and lasting state of the soul. But is there a state in which the soul can concentrate its entire being, with no need to remember the past or reach into the future? Such were Rousseau’s famous meditations on time in the fifth of his Reveries of the Solitary Walker (Rêveries du promeneur solitaire), a sequel to the Confessions. In both texts Rousseau practiced the habit of “reentering into himself,” with the express purpose of inquiring “What am I?” (“Que suis-je moi-même?”).

Since the mid-eighteenth century, after Rousseau and Laurence Sterne, time, as known through the mind of the perceiving individual, had also been the subject of narrative experiments undertaken in novels and memoirs. By the 1850s, the theme of the being and nonbeing of time in relation to human consciousness, inaugurated by Augustine and secularized by Rousseau, could serve as the topic of an adolescent’s language exercise.

In his later years, as a novelist, Tolstoy would play a decisive role in the never-ending endeavor to catch time in the act. In the 1850s, in his personal diary, the young Tolstoy was designing his first, homemade methods of managing the flow of personal time by narrative means. As we have seen, this dropout student was not without cultural resources. The young Tolstoy could hardly have known Augustine, but he did know Rousseau, whose presence in the early diaries is palpable. (In later years, when he does read Augustine, he will focus on the problem of narrating time and fully appreciate its theological meaning.)  But mostly he proceeded by way of his own narrative efforts: his diary. Fixed in the diary, the past would remain with him; planned in writing, the future was already there. Creating a future past and a present future, the diarist relieved some of the anxieties of watching life pass. But in one domain his efforts fell short of the ideal: not even one day was entirely present.

“A History of Yesterday”

In March 1851, the twenty-two-year-old Tolstoy embarked on another long-planned project: to write a complete account of a single day—a history of yesterday. His choice fell on March 24: “not because yesterday was extraordinary in any way . . . but because I have long wished to tell the innermost [zadushevnuiu] side of life in one day. God only knows how many diverse . . . impressions and thoughts . . . pass in a single day. If it were only possible to recount them all so that I could easily read myself and others could read me as I do. . . .” (1:279).

An outgrowth of the diary, “A History of Yesterday” (Istoriia vcherashnego dnia) was conceived as an experiment: Where would the process of writing take him? (Tolstoy was writing for himself alone; indeed, in his lifetime, “A History of Yesterday” remained unpublished.)

The metaphor of self, or life, as a book, an image to which Tolstoy would return throughout his life, makes its first appearance here. Rousseau, in whose footsteps Tolstoy followed in wanting to make himself into an open book, believed that self-knowledge was based on feeling and that all he had to do was “to make my soul transparent to the reader.” The young Benjamin Franklin, who was a printer, used the image in his own epitaph: He imagined a typeset book of his life and expressed his belief that it would appear once more in a new edition, “revised and corrected by the author.”

Tolstoy, in 1851, seems to have suspected that the problem lay in the narrative itself. Knowing that “there is not enough ink in the world to write such a story, or typesetters to put it into print” (1:279), he nevertheless embarked upon this project.

In the end it turned out that after about twenty-four hours of writing (spread over a three-week period), Tolstoy was still at the start of the day. Having filled what amounts to twenty-six pages of printed text, he abandoned his “History.” By that time Tolstoy was in a position to know that the enterprise was doomed, and not only because of empirical difficulties (“there would not be enough ink in the world to write it, or typesetters to put it in print”), but also because of major philosophical problems (such as the constraints inherent in the nature of narrative).

“A History of Yesterday” starts in the morning: “I arose late yesterday—at a quarter to 10.” What follows is a causal explanation that relates the given event to an earlier event, which happened on the day before yesterday: “—because I had gone to bed after midnight.” At this point, the account is interrupted by a parenthetical remark that places the second event within a system of general rules of conduct: “(It has long been my rule never to retire after midnight, yet this happens to me about 3 times a week).” The story resumes with a detailed description of those circumstances which had led to the second event and a minor moral transgression (going to bed after midnight): “I was playing cards. . . .” (1:279). The account of the action is then interrupted by another digression—the narrator’s reflections on the nature of society games.

After a page and a half, Tolstoy returns to the game of cards. The narrative proceeds, slowly and painfully, tracing not so much external actions as the webs of the protagonist/narrator’s mental activity, fusing two levels of reflections: those that accompanied the action and those that accompany the act of narration. After many digressions, the narrative follows the protagonist home, puts him to bed, and ends with an elaborate description of his dream, leaving the hero at the threshold of “yesterday.”

What, then, is time? In Tolstoy’s “History,” the day (a natural unit of time) starts in the morning, moves rapidly to the previous evening, and then slowly makes its way back towards the initial morning. Time flows backward, making a circle. In the end, Tolstoy wrote not a history of yesterday but a history of the day before yesterday.

This pattern would play itself out once again in Tolstoy’s work when, in 1856, he started working on a historical novel, the future War and Peace. As he later described it (in an explanatory note on War and Peace), Tolstoy’s original plan was to write a novel about the Decembrists. He set the action in the present, in 1856: An elderly Decembrist returns to Moscow from Siberian exile. But before Tolstoy could move any further, he felt compelled to interrupt the narrative progression: “involuntarily I passed from today to 1825” (that is, to the Decembrist uprising). In order to understand his hero in 1825, he then turned to the formative events of the war with Napoleon: “I once again discarded what I had begun and started to write from the time of 1812.” “But then for a third time I put aside what I had begun”—Tolstoy now turned to 1805 (the dawn of the Napoleonic age in Russia; 13:54). The narrative did not progress in time; it regressed. In both an early piece of personal history, “A History of Yesterday,” and the mature historical novel, War and Peace, Tolstoy saw the initial event as the end of a chain of preceding events, locked into causal dependency by the implications of the narrative order. At the time he made this comment on the writing of his novel, Tolstoy seemed to hold this principle as the inescapable logic of historical narrative.

In “A History of Yesterday,” temporal refraction does not end with a shift from the target day to the preceding day. In the description of “the day before yesterday” itself, time also does not progress: It is pulled apart to fit an array of simultaneous processes. The game of cards has come to an end. The narrator is standing by the card table involved in a (mostly silent) conversation with the hostess. It is time to leave, but taking leave does not come easily to the young man; nor is it easy to tell the story of leaving:

I looked at my watch and got up. . . . Whether she wished to end this conversation which I found so sweet, or to see how I would refuse, or whether she simply wished to continue playing, she looked at the figures which were written on the table, drew the chalk across the table—making a figure that could be classified neither as mathematical nor as pictorial—looked at her husband, then between him and me, and said: “Let’s play three more rubbers.” I was so absorbed in the contemplation not of her movements alone, but of everything that is called charme, which it is impossible to describe, that my imagination was very far away, and I did not have time to clothe my words in a felicitous form; I simply said: “No, I can’t.” Before I had finished saying this I began to regret it,—that is, not all of me, but a certain part of me. . . .

—I suppose this part spoke very eloquently and persuasively (although I cannot convey this), for I became alarmed and began to cast about for arguments.—In the first place, I said to myself, there is no great pleasure in it, you do not really like her, and you’re in an awkward position; besides, you’ve already said that you can’t stay, and you have fallen in her estimation. . . .

“Comme il est aimable, ce jeune homme.” [How pleasant he is, this young man.]

This sentence, which followed immediately after mine, interrupted my reflections. I began to make excuses, to say I couldn’t stay, but since one does not have to think to make excuses, I continued reasoning with myself: How I love to have her speak of me in the third person. In German this is rude, but I would love it even in German. . . . “Stay for supper,” said her husband.—As I was busy with my reflections on the formula of the third person, I did not notice that my body, while very properly excusing itself for not being able to stay, was putting down the hat again and sitting down quite coolly in an easy chair. It was clear that my mind was taking no part in this absurdity. (1:282–83)

Written from memory, in the past tense, this narrative nevertheless strives to imitate a notation of immediate experience—something like a stenographic transcription of a human consciousness involved in the act of apprehending itself.

Some critics see this as an early instance of what would later be called the “stream of consciousness” or even read Tolstoy’s desire to describe what lies “behind the soul” as an attempt to reach “what we now call the subconscious.” But this is a special case: a stream of consciousness with an observer. As an external observer, the narrator can only guess at what is going on in the other’s mind. As a self-narrator who describes the zadushevnuiu—“innermost,” or, translating literally, the “behind-the-soul”—side of one day’s life, he faces other difficulties.

Indeed, the narrator deals with internal multiplicity, with speech, thought, and bodily movement divided, with ambivalent desires, with the dialectical drama that stands behind a motive. There is yet another layer: the splitting of the self into a protagonist and a narrator, who operate in two different timeframes. Moreover, the narrator (even when he is lost in reverie) is involved in reflections not only on the process of narrating but also on general (meta-) problems in the “historiography” of the self. Finally, he keeps referring to the residue of that which cannot be expressed and explained. How could such multiplicity be represented in the linear order of a narrative?

Time and Narrative 

Unbeknownst to the young Tolstoy, Kant had long since deplored the limitations of narrative in The Critique of Pure Reason. In narrative representation, one event as a matter of convention follows upon another. In Kant’s words, “the apprehension of the manifold of appearance is always successive”; “the representations of the parts” succeed one another. It does not follow, however, that what we represent is also in itself successive; it is just that we “cannot arrange the apprehension otherwise than in this very succession.” This is the way “in which we are first led to construct for ourselves the concept of cause”: succession suggests causality.

As yet unfamiliar with Kant’s deductions, Tolstoy attempted to break the rule of succession—to stretch the temporality of his narrative in order to account for actions and processes that occur as if simultaneously. As a result, he extended time beyond the endurance of the narrative form: the story breaks off. The narrator who describes his own being from within knows (if only subconsciously) more than he can possibly tell. Is it humanly possible to give an account of even one day in one’s own life?

There were, of course, cultural precedents. Tolstoy’s narrative strategies were largely borrowed from Laurence Sterne, who, along with Rousseau, was among his first self-chosen mentors. In 1851, in his diary, Tolstoy called Sterne his “favorite writer” (49:82). In 1851–52, he translated A Sentimental Journey from English into Russian as an exercise.

Informed by Locke’s philosophy, Sterne’s narrative strategy was to make the consciousness of the protagonist/narrator into a locus of action. Locke, unlike Augustine, hoped that time itself could be captured: He derived the idea of time (duration) from the way in which we experience a train of ideas that constantly succeed one another in our minds. It followed that the sense of self derives from the continuity of consciousness from past to future.

Sterne followed suit by laying bare the flow of free associations in the mind of the narrator. One of his discoveries concerned time and narrative: Turning the narration inward, Sterne discovered that there is a psychic time that diverges from clock time. The splitting of time results in living, and writing, simultaneously on several levels. To be true to life, Sterne’s narrator digresses. The author confronted the necessity for interweaving movements forward and backward, which alone promised to move beyond the confines of time. The combination of progression and digression, including retrospective digression, created a narrative marked by experimentation, with the narrator openly commenting on his procedures. In the end, Sterne’s experimentation—his “realistic” representation—revealed flaws in Locke’s argument: Successive representation could not catch up with the manifold perceptions of the human mind. In brief, the narrative that attempted to represent human consciousness did not progress.

By mimicking Sterne’s narrative strategy, Tolstoy learned his first lessons in epistemology: the Cartesian shift to the point of view of the perceiving individual, the modern view on the train and succession of inner thoughts, the dependence of personal identity on the ability to extend consciousness backward to a past action, and so on. Tolstoy also confronted the restrictions that govern our apprehension and representation of time—limitations that he would continue to probe and challenge throughout his life and work, even after he had read, and fully appreciated, Kant’s Critique of Pure Reason (in 1869, as he was finishing War and Peace).

In his first diaries and in “A History of Yesterday,” proceeding by way of narrative experiments, the young Tolstoy discovered a number of things. He discovered that there was no history of today. Even in a record almost concurrent with experience, there was no present. A history was a history of yesterday. Moreover, writing a history of the individual and a self-history, he was confronted with the need to account not only for the order of events but also for a whole other domain: the inner life. Uncovering the inner life led to further temporal refraction: From an inside point of view, it appeared that behind an event or action there stood a whole array of simultaneous processes. This led to another discovery.

Excerpted from “’Who, What Am I?’: Tolstoy Struggles to Narrate the Self” by Irina Paperno. Copyright © 2014 by Irina Paperno. Reprinted by arrangement with Cornell University Press. All rights reserved.