Elegy for a Year of Death in America

If Nietzsche was right about “what does not kill me,” we’re stronger now. Facing the darkness is the way forward.

Photo Credit: Nagelfar at English Wikipedia (public domain), via Wikimedia Commons

“Peace, peace!” wrote Percy Shelley in the climactic stanza of his great poem about the death of his friend and rival, John Keats. But Shelley’s poem, “Adonaïs,” is not about peace — rather the opposite. If anything, it’s about the strife and anguish from which human life is never free.

He is not dead, he doth not sleep,
He hath awaken’d from the dream of life;
‘Tis we, who lost in stormy visions, keep
With phantoms an unprofitable strife,
And in mad trance, strike with our spirit’s knife
Invulnerable nothings. We decay
Like corpses in a charnel; fear and grief
Convulse us and consume us day by day,
And cold hopes swarm like worms within our living clay.

This is a form of consolation common to poetry and religion, one much in demand over the past 12 months as we have lost David Bowie, Muhammad Ali, Prince, Leonard Cohen, Alan Rickman, George Michael, Carrie Fisher and Debbie Reynolds (just off the top of my head) and have suffered the not-entirely-metaphorical death of our democracy, which has been sick far longer than any of those people. If you grew up amid Anglo-American pop culture of the 1970s and ’80s, and if you once held a burnished view of American tradition and American possibility — that describes, I think, a large number of people — this has been a tough year. I can’t promise I can make any of it better, but I can assure you that all the emotions we now feel have been felt before. Maybe that counts for something.

Confronting the mortality of famous people is always a way of confronting our own, I suppose, just as the tales of their marriages and divorces and affairs seem to echo and deepen our own histories of relationship success or failure. If you belong to the micro-generation that assumed that most of the people on that list would always be part of our lives, as I do, then 2016 has offered an especially pungent reminder that there is no such thing as “always,” and that our day is coming sooner than we would like. If your year was also not easy for other, more personal reasons (as mine certainly was), that seems to go with the territory.

As a child, I rushed out to the driveway for the newspaper on the morning after Ali’s big Madison Square Garden fight with Joe Frazier, and was crushed to learn that the mighty hero had fallen. A few years after that, Bowie’s late ’70s records offered me my first glimpse into a realm of bohemian adventure that actually existed, in real life and on the same continent where I lived, and not just in books about the 1920s or the 19th century. Add a few more years, and Prince emerged as the perfect distillation of white and black pop, a symbol of racial and cultural liberation sent to free us from the Reagan years. I didn’t learn to appreciate Cohen’s music until adulthood, when (again, along with many other people) I realized that he was not some folk-rock phenomenon constrained by the ’60s but something closer to a modern-day prophet.

Each of them, like the other people on that list, had a long and complicated life with many conflicting currents, and I won’t even try to do justice to that complexity here. But it did not occur to me that I would live to see them all dead, or that those deaths would all occur in a year that had so many other ways to make us mourn for lost time and lost opportunities, so many ways of reminding us that time is fleeting, and to gather our rosebuds while we may.

I didn’t have the same personal relationships with other people on that list, or with others I haven’t mentioned (Edward Albee or Elie Wiesel or George Martin or Gloria Naylor or Maurice White or Mose Allison — we could go on). But you may, and people each of us knows almost certainly do. Someone close to me was really broken up over Alan Rickman, who was one of the greatest screen and stage actors of our time, and I don’t begrudge anyone, gay or otherwise, for perceiving George Michael as a sui generis figure — a Keatsian figure, if ever there was one — who broke new ground in pop music. (“Listen Without Prejudice Vol. 1” is simply a great record, so great it seemed to have defeated its creator in some ways.)

I don’t want to dwell too much on the perhaps-terminal decline of American democracy, which this publication and everyone else in the media has been worrying over for the last year and a half, like a dog with an old mutton bone. It’s not as if people who supported the incoming president are incapable of grief and sorrow (although I suspect they are underrepresented in the Bowie and Prince fanbases). But for many of us the inexplicable political events of 2016, which remain difficult to believe, even now that they have happened, are at once the atmosphere, the subtext and the inner meaning of all this death. I was not an especially avid supporter of Hillary Clinton, but for many American women (and men) the perverse tale of how she was denied the presidency yet again in her final campaign is another of this year’s great losses. The vision of a woman president came so close to reality, but remains a dream deferred.

We have a way, as human beings, of staring into the darkness and seeing light. We’re going to need that now. In some ways, what Shelley has to tell us in “Adonaïs” is highly conventional: Whatever you believe awaits us on the other side — something or nothing, heaven or hell — at least the struggles of this life are over. Mourning is essentially a form of self-indulgence; it is we who suffer, not the dead. Shelley wrote that poem, of course, while still amid the mad trance of life, locked in unprofitable strife with phantoms: He had one eye on his dead friend and the other on posterity, and was clearly trying to go head to head with John Milton’s “Lycidas,” written nearly two centuries earlier, the first really famous pastoral elegy for a dead friend in the English tradition.

Weep no more, woeful shepherds, weep no more,
For Lycidas, your sorrow, is not dead,
Sunk though he be beneath the wat’ry floor;
So sinks the day-star in the ocean bed,
And yet anon repairs his drooping head,
And tricks his beams, and with new spangled ore
Flames in the forehead of the morning sky:
So Lycidas sunk low, but mounted high
Through the dear might of him that walk’d the waves …

Milton refers us back to Christian redemption as the reason not to feel depressed about death and loss, or at least he thinks he does. (I’m inclined to argue that he invented Romanticism without meaning to, and was constantly at war with his own faith.) But the idea at work here, that light must come out of darkness and hope can be found amid deep personal despair — the belief in literal or allegorical transcendence — is such a cultural constant across literary and religious traditions that it has to mean something. Admittedly, that “something” might just be that biology drives us onward, and those of us who find ourselves still living while others die make up reasons to keep going, because our brains are over-evolved and we can’t help thinking about these things. Cats and beetles, so far as we can tell, don’t ask themselves these questions.

Friedrich Nietzsche’s famous maxim that “what does not kill me makes me stronger” has been repurposed so much by football coaches and military strategists that its original ambiguity has gotten lost. Like most of the mad German’s pronouncements, that one is double-edged and purposefully unclear. Nietzsche knew from experience, for example, that physical illness does not make you stronger in any ordinary sense. (That passage, in fact, comes from “Twilight of the Idols,” his next-to-last major work.) I take his statement to mean that confronting death and mortality directly, as we draw nearer to our own deaths, fortifies us to better use the hours and days we have left.

Nearly everyone I know is coming out of 2016 beset by deep feelings of grief and loss. If we have been made stronger in that sense, we will be more than strong enough for whatever lies ahead: death or transformation, political or cultural or personal. Walt Whitman was thinking of something like this, in a more optimistic key, in perhaps the greatest of his poems, “When Lilacs Last in the Dooryard Bloom’d.” He imagines making friends with death, holding hands with death, and even arriving at “a sacred knowledge of death,” as a way of dealing with the assassination of Abraham Lincoln (named in the poem only as “him I loved”), a grievous loss that did not quite kill America and may, for a while, have made it stronger.

And the streets how their throbbings throbb’d, and the cities pent — lo, then and there,
Falling upon them all and among them all, enveloping me with the rest,
Appear’d the cloud, appear’d the long black trail,
And I knew death, its thought, and the sacred knowledge of death.

Then with the knowledge of death as walking one side of me,
And the thought of death close-walking the other side of me,
And I in the middle as with companions, and as holding the hands of companions,
I fled forth to the hiding receiving night that talks not,
Down to the shores of the water, the path by the swamp in the dimness,
To the solemn shadowy cedars and ghostly pines so still.


9 Weird and Wonderful Facts About Death and Funeral Practice

Your ancestors may have shared a coffin.


It might not be something you want to think about very often, but it turns out that the way we treat our dead in the modern age is heavily influenced by the way our ancestors treated theirs.

When you look at death and funeral practices through the ages, repeated patterns of behaviour emerge, making it easy to see where some of our modern ideas about death – such as keeping an urn on your mantelpiece or having a gravestone – have come from.

So here are nine surprising facts about death and funeral practices through the ages:

1. Some prehistoric societies defleshed the bones

This was done with sharp knives. And we know this because prehistoric skeletons show traces of many cut marks to the skulls, limbs and other bones.

During the medieval period, bodies that needed to be transported over long distances for burial were also defleshed – by dismembering the body and boiling the pieces. The bones were then transported, while the soft tissues were buried close to the place of death.

2. Throwing spears at the dead

During the Middle Iron Age, “speared-corpse” burials were a pretty big deal in east Yorkshire. Spears were thrown or placed into the graves of some young men – and in a couple of instances they appear to have been thrown with enough force to pierce the body. It is unclear why this was done, but it may have been a military send-off – similar to the 21-gun salute at modern military funerals.

3. The Romans introduced gravestones

As an imported practice, the first gravestones in Britain were concentrated close to Roman military forts and more urbanised Romano-British settlements.

Back then, gravestones were more frequently dedicated to women and children than Roman soldiers. This was most likely because Roman soldiers were not legally allowed to marry, so monuments to their deceased family members legitimised their relationships in death in a way they couldn’t be in life.

After the end of Roman control in Britain in the fifth century, gravestones fell out of favour and did not become widely popular again until the modern era.

4. The Anglo-Saxons preferred urns

During the early Anglo-Saxon period, cremated remains were often kept within the community for some time before burial. We know this because groups of urns were sometimes buried together. Urns were also placed in the graves of people buried unburnt – most likely relatives of the cremated.

5. Lots of people shared a coffin

During the medieval period, many parish churches had community coffins, which could be borrowed or leased to transport the deceased person from the home to the churchyard. When they arrived at the graveside, the body would be removed from the coffin and buried in a simple shroud.

6. And rosemary wasn’t just for potatoes

Sprigs of rosemary were often carried by people in the funeral procession and cast onto the coffin before burial, much as roses are today. And as an evergreen plant, rosemary was associated with eternal life. As a fragrant herb, it was also often placed inside coffins to conceal any odours that might be emerging from the corpse. This was important because bodies often lay in state for days and sometimes weeks before burial, while preparations were made and mourners travelled to attend the funeral.

7. Touching a murderer could heal

Throughout early modern times, and up until at least the mid 19th century, it was a common belief that the touch of a murderer – executed by hanging – could cure all kinds of illnesses, ranging from cancer and goitres to skin conditions. Afflicted persons would attend executions hoping to receive the “death stroke” of the executed prisoner.

8. There are still many mysteries

Archaeologists don’t really know what kinds of funeral practices were performed across much of Britain for almost a thousand years of the British Iron Age. Human remains from the period appear in only a few places – like the burials in east Yorkshire. So for much of Britain, funeral practices are almost invisible. We suspect bodies were either exposed to the elements in a practice known as “excarnation”, or cremated and the ashes scattered.

9. But the living did respect the dead

Across time, people have engaged with past monuments to the dead, and it is common for people to respect older features of the landscape when deciding where to place new burials.

Bronze Age people created new funeral monuments and buried their dead in close proximity to Neolithic funeral monuments. This can be seen in the landscape around Stonehenge, which was created as an ancestral and funeral monument – and is full of Bronze Age burial mounds known as round barrows.

And when the Anglo-Saxons arrived in Britain, they frequently buried their dead close to Bronze and Iron Age monuments. Sometimes they dug into these older monuments and reused them to bury their own dead.

Even today, green burial grounds tend to respect preexisting field boundaries. And in at least one modern cemetery, burials are placed in alignment with medieval “ridge and furrow”. These are the peaks and troughs in the landscape resulting from medieval ploughing.

This article was originally published on The Conversation.


How People Died 100+ Years Ago, and How We Die Today

Fascinating data shows life was hell just a century ago.

Photo Credit: myillusion / Shutterstock

An interesting new study compares the leading causes of death today against the leading causes of death in 1901, providing a look at how much our world has changed over the course of a century.

One thing we never learned growing up, while watching “Bonanza” or “Gunsmoke” or “Rawhide,” was how truly fragile life could be in the Old West. If we took our cowboy TV seriously, we might think the leading cause of death was a gunfight, or hanging for horse thievery, or maybe falling off a horse. The truth was much more mundane. The leading cause of death in 1901, almost 59,000 cases, was actually diarrhea, or other intestinal distresses. Not far behind was tuberculosis at 55,000. Pneumonia was third at 48,000.

Today we take medicines like antibiotics or vaccines for granted, but without them our death statistics would likely look very similar to those of 1901. Here’s the rest of the list:

4. Heart disease

5. Bright’s Disease (kidney disease)

6. Congenital disorders (birth defects)

7. Apoplexy (stroke)

8. Unknown causes

9. Premature birth

10. Convulsions

Looking at this list, only a few of these causes of death are still widespread today: heart disease, kidney disease, apoplexy (that is, stroke) and pneumonia remain leading killers, while the rest have largely disappeared. The lack of medical knowledge back then made survivable circumstances fatal.

The leading cause of death in the U.S. today is certainly not diarrhea. (For that, we just down a few tablespoons of Pepto Bismol.) Instead, heart disease has leapfrogged to number one, with 614,000 deaths. Following heart disease is cancer, with 591,000, and chronic lung disease a distant third at 147,000.

The rest of the list looks like this:

4. Accidents

5. Stroke

6. Alzheimer’s disease

7. Diabetes

8. Influenza and pneumonia

9. Kidney disease

10. Suicide

What strikes one most of all from this modern list is that most of these causes of death are age-related. The longer we live, the more likely we are to fall victim to heart disease, cancer, emphysema, stroke or Alzheimer’s. The reason most of these are not on the 1901 list is that the average life expectancy in that era was only about 50 years, a figure dragged down heavily by infant and childhood mortality. Fewer people simply lived long enough to get cancer back then. Meanwhile, most of the causes of death on the 1901 list are absent from the modern list because modern medicine has figured out treatments for them.
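As a rough cross-check, the overlap between the two top-10 lists above can be computed directly. This is only a sketch: the historical cause names are normalized by hand (Bright’s Disease counted as kidney disease, apoplexy as stroke), and “influenza and pneumonia” is deliberately kept distinct from the 1901 “pneumonia” entry, though the two arguably overlap as well.

```python
# Top-10 causes of death transcribed from the two lists above, with
# historical names normalized by hand (Bright's Disease -> kidney disease,
# apoplexy -> stroke). "Influenza and pneumonia" is kept distinct from
# plain "pneumonia", though the two arguably overlap too.
causes_1901 = {
    "diarrhea", "tuberculosis", "pneumonia", "heart disease",
    "kidney disease", "congenital disorders", "stroke",
    "unknown causes", "premature birth", "convulsions",
}
causes_today = {
    "heart disease", "cancer", "chronic lung disease", "accidents",
    "stroke", "alzheimer's disease", "diabetes",
    "influenza and pneumonia", "kidney disease", "suicide",
}

# Set intersection: which top-10 killers of 1901 are still top-10 today?
still_with_us = sorted(causes_1901 & causes_today)
print(still_with_us)  # ['heart disease', 'kidney disease', 'stroke']
```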

The modern list is also notable in that the fourth leading cause of death is accidents, something missing from the 1901 list. We can speculate that the reasons for this have much to do with technology. Back in 1901, people got from here to there mostly by horse or railroad. Today, motor vehicles take us where we want to go, often at very high speeds. A horse accident was unlikely to cause death; in fact, horses and other vehicles were only the 11th-highest cause of accidental death in 1901, with about 550 deaths. Meanwhile, over 32,000 people died in car accidents alone in 2014. The list of deaths by accident in 1901 seems almost quaint today:

1. Heat and sunstroke (lack of AC, perhaps, or working outdoors under the hot sun?)

2. Railroad accidents

3. Drowning

4. Burns and scalds

5. Bone fractures/dislocations

6. Birth injuries

7. Accidental poisonings

8. Gunshot wounds (now there’s a familiar cause of death)

9. Suffocation

10. Poisonous gases (we can speculate from mining, mostly)

11. Horses and other vehicles (very few cars back then)

12. Mine injuries

13. Chronic poisoning (probably workplace substances no one knew would kill people over time)

14. Machinery

15. Hypothermia (think of this next time you complain your radiator is too hot in the winter)

16. Lightning

Breaking down causes of death by age group, these days babies and infants under one year old are not likely to die from diarrhea, as was the case in 1901. In fact, unless a child is born with a fatal birth defect, she is likely to make it past one year of age. In 1901, those who died at age 2 likely had pneumonia as their cause of death; ages 3 to 9, diphtheria; and 10 to 54, tuberculosis—all curable diseases today.

Meanwhile, today from age 2 right up to age 44, with most of the childhood diseases conquered (although anti-vaxxers seem determined to roll back those gains), you are likely to survive unless you are the victim of an accident. In 1901, from age 55 to 79, heart disease was the killer. Today, from age 45 to 64, look out for cancer. Finally, in 1901, if you made it to 80, death came most likely from natural causes as a result of old age, while heart disease is the leading cause of death among modern-day older people, age 65 and over.

The bottom line? We may look back nostalgically on times past, but that’s mostly because modern science gives us that luxury. If we actually lived back then, we would more than likely have died well before we had a chance to be nostalgic.

Larry Schwartz is a Brooklyn-based freelance writer with a focus on health, science and American history. 

Tech oligarchs are deathly serious about buying off the grim reaper.

4 Ways the One Percent Is Trying to Buy Their Immortality

Photo Credit: Shutterstock.com

Humankind has long dreamed of immortality. Surely, somewhere, Ponce de Leon’s Fountain of Youth awaits, allowing us to escape our inevitable fate of non-existence. Not surprisingly, some very wealthy tech executives are determined to buy their way out of that inevitability. These guys are living the high life and they don’t want it to stop.

Peter Thiel, the founder of PayPal, is worth somewhere north of $2 billion, and a member in good standing of the one-percent club that is projected to control half the planet’s wealth by next year. While economic inequality does not appear to weigh on him, other forms do. “Probably the most extreme form of inequality is between people who are alive and people who are dead,” he told the New Yorker recently. His thoughts on death? “Basically, I’m against it,” he told the Telegraph newspaper.

Oracle founder Larry Ellison has similar deep thoughts. “Death has never made any sense to me.” A billionaire many times over, he has contributed over $400 million to research so far, much of it through his Ellison Medical Foundation, in his quest to live forever. Larry Page, Sergey Brin, Mark Zuckerberg, Sean Parker and Pierre Omidyar of eBay have all placed hefty bets on science, and sometimes pseudo-science, to make sure they stick around a long, long time. “The talent migrating into the field is like nothing I’ve seen in my 40 years in the field,” Ken Dychtwald, a gerontology and longevity expert, told Time magazine, “and they’re convinced there is nothing you can’t do if you can turn biotechnology into information technology.”

The ethics behind the search for immortality are dicey, to say the least. “There will be breakthroughs in the next 15 or 20 years that will have to do with aging itself—actually stopping the biological clock,” Dychtwald told Time. “And I think that really rich people are going to get access to it… Imagine a time when ten thousand really rich people get to live forever, or not have to get dementia.”

Research into life extension costs a lot of money, and it seems reasonable to assume that if there are indeed breakthroughs, those various immortality pills, injections or whatever are going to cost a lot of money, too. Which raises the question, what about the rest of us? If there seems to be a budding confrontation between the economic haves and have-nots, what can be said about a world where there are mortals and immortals? Not to mention the fact that in a world straining to provide resources for 7 billion humans having a normal lifespan, people who never die are going to put a real strain on things.

“Extending the average human lifespan is a great example of something that is individually desirable by almost everyone,” Francis Fukuyama, a political theorist and bioethicist told the Washington Post, “but collectively not a good thing. For evolutionary reasons, there is a good reason why we die when we do.” Without an incentive to adapt in order to survive, said Fukuyama, social change would grind to a halt and dictators could rule for centuries, not decades.

Undaunted, these wealthy tech barons move forward, as confident in their quest to live forever as they were in founding their digital empires. For them, the traditional pace of science moves too slowly. Institutions like the National Institutes of Health, they believe, thrive too much on consensus and are not willing to take the risks that lead to major breakthroughs. “We want to jailbreak them [scientists] from existing research institutions and set them free,” reads the manifesto for Breakout Labs, Peter Thiel’s grant-making group. “It’s those who have an optimism about what can be done that will shape the future…I believe that evolution is a true account of nature but I think we should try to escape it or transcend it in our society,” Thiel told the Post.

Not every fabulously wealthy tech giant agrees with the goals of these immortalists. Bill Gates wrote on the website Reddit that he believes it is egocentric to seek to live forever when so many people on Earth lack basic needs and are dying of curable diseases like malaria and tuberculosis. Gates feels the resources would be better spent helping millions of people live normal lifespans rather than extending the lives of a relatively few multi-millionaires and billionaires.

Here are four prominent proposed pathways to immortality whose research is being supported by the modern tech oligarchy.

1. Avatars and artificial brains. Russian scientist Dmitry Itskov, 31, believes that immortality can be achieved by the year 2045. On his website, 2045.com, Itskov solicits money from the wealthy and in return promises the end of death within 30 years. He states the goal is “to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.” His plan involves four steps. First, the development of a robot or “avatar” that is linked to our present selves by way of a computer chip. This is to be achieved by 2020. Following this, the human brain will be transplanted into the avatar. The target date for this achievement is 2025. By the year 2035, human consciousness will be transferred or downloaded into an artificial brain within the avatar, replacing the biological brain. And finally, by 2045, consciousness will evolve into a sort of Internet of global consciousness, with no need for a physical presence at all. Robots that have developed their own artificial intelligence will carry out necessary physical tasks. Humanity will have transformed into something else entirely.

If it sounds farfetched, that’s because it is. But the world’s tech sector has bought into the idea that humanity’s knowledge, because of the technology they have developed, will continue to grow exponentially, and what we know now will be dwarfed by what we know in five or 10 or 30 years.

2. Ending death by disease. Brian Singerman, a venture capitalist and partner in the Founders Fund, and his partners, including Peter Thiel and Sean Parker, are placing their faith and their wallets in the hands of biotech companies that are busily studying ways to cheat death. These companies are looking for ways to cure cancer and end the scourge of aging. Says Singerman in Inc. Magazine: “We have a company that’s charged with curing all viral disease, we have a company that’s charged with curing several types of cancer. These are not things that are incremental approaches. It’s all fine and good to have a drug that extends life by a certain amount of months or makes living with a disease easier. That’s not what we’re looking for. We are not looking for incremental change. We are looking for absolute cures in anything we do.”

Singerman believes that within 10 years all viral disease will be curable, and within that same time frame we will have a clearer understanding of what aging is, what causes it, and how to begin to stop it.

3. Genetic tinkering. Human beings are not roundworms. But breakthroughs in human life extension might have started with the lowly roundworm. In 2001, scientists burrowed down to the cellular level and added an extra gene (known as SIR2) to the roundworm. Normally, the roundworm’s lifespan is about two weeks. The roundworms with the extra gene lived for three weeks. This result mimicked the results obtained with calorie restriction, another way scientists have prolonged life in test animals. That may not sound like much, but it is a 50 percent extension; if scientists managed to add a comparable increase to the normal human lifespan, we’d all live half again as long. And that’s just the beginning. Ultimately, there are scientists who believe we can genetically turn off the aging process altogether.
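The proportional reasoning above can be made explicit with a few lines of arithmetic. This is only an illustration: the human baseline of 79 years is an assumed round figure, and nothing guarantees that a worm result scales to humans at all.

```python
# Illustrative arithmetic only. The SIR2 roundworms lived about 3 weeks
# instead of 2, a 50% extension. Applying the same *proportional* gain to
# an assumed human lifespan of 79 years is pure speculation.
worm_normal_weeks = 2.0
worm_extended_weeks = 3.0
extension = worm_extended_weeks / worm_normal_weeks - 1.0  # 0.5, i.e. 50%

human_lifespan_years = 79.0  # assumed baseline
extended_years = human_lifespan_years * (1.0 + extension)

print(f"extension: {extension:.0%}")                 # extension: 50%
print(f"extended lifespan: {extended_years} years")  # extended lifespan: 118.5 years
```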

4. Cryonics. No, Walt Disney is not frozen in some secret vault, waiting for the day when he can lead his Mickey Mouse empire again. But Ted Williams, baseball legend, is. Called cryonics (“cryogenics,” strictly speaking, is the physics of very low temperatures), the theory is that your body is frozen (or, if you are feeling a bit short of change, since the process can cost up to $200,000, just your head) until science discovers a cure for what killed you, at which point the body is thawed out, revived and cured. Critics of this method point out that at our current level of technology, freezing the body damages the cells beyond repair. Imagine, for instance, a bag of frozen strawberries. Once defrosted, the formerly plump, firm fruit is soft and mushy. That is pretty much how the defrosted body would turn out.

But true believers dissent. Alcor, a leading cryonics firm (and one of the recipients of Peter Thiel’s largesse), states on its website, “We believe medical technology will advance further in coming decades than it has in the past several centuries, enabling it to heal damage at the cellular and molecular levels and to restore full physical and mental health.”

Larry Schwartz is a Brooklyn-based freelance writer with a focus on health, science and American history. 

What Good Is Thinking About Death?

We’re all going to die and we all know it. This can be both a burden and a blessing.

Benjamin Vander Steen / Flickr

In the heart of every parent lives the tightly coiled nightmare that his child will die. It might spring at logical times—when a toddler runs into the street, say—or it might sneak up in quieter moments. The fear is a helpful evolutionary motivation for parents to protect their children, but it’s haunting nonetheless.

The ancient Stoic philosopher Epictetus advised parents to indulge that fear. “What harm is it, just when you are kissing your little child, to say: Tomorrow you will die?” he wrote in his Discourses.

Some might say Epictetus was an asshole. William Irvine thinks he was on to something.

“The Stoics had the insight that the prospect of death can actually make our lives much happier than they would otherwise be,” he says. “You’re supposed to allow yourself to have a flickering thought that someday you’re going to die, and someday the people you love are going to die. I’ve tried it, and it’s incredibly powerful. Well, I am a 21st-century practicing Stoic.”

He’s a little late to the party. Stoicism as a school of philosophy rose to prominence in the 3rd century B.C. in Greece, then migrated to the Roman Empire, and hung around there through the reign of emperor Marcus Aurelius, who died in 180 A.D. “That Stoicism has seen better days is obvious,” Irvine, a professor of philosophy at Wright State University, writes in his book A Guide to the Good Life: The Ancient Art of Stoic Joy. He stumbled across the philosophy when researching a book on Zen Buddhism—“I thought I wanted to be a Zen Buddhist,” he says, “but Stoicism just had a much more rational approach.”

Though the word “stoic” in modern parlance is associated with a lack of feeling, in his book, Irvine argues that the philosophy offers a recipe for happiness, in part by thinking about bad things that might happen to you. The big one, obviously, is death—both yours and that of people you love.

“We can do it on a daily basis, simply by imagining how things can be worse than they are,” he says. “Then when they aren’t that way, isn’t that just wonderful? Isn’t it simply wonderful that I get another day to get this right?”

For Irvine and the Stoics, thoughts of death inspire gratitude. For many others, thinking about The End inspires fear or anxiety. In fact, the latter may be the natural human condition.

* * *

“We are different from other animals in that we are uniquely aware of our own mortality,” says Ken Vail, an assistant professor of psychology at Cleveland State University. “Certainly other animals recognize they can die—if a cheetah chases an impala, or chases us, both us and the impala are going to run away. We recognize that as an immediate threat of mortality. But the impala doesn’t sit in the safety of its office aware of the fact that it will eventually die. And we do.”

This is the price we pay for the nice things consciousness has given us—self-reflection, art, engineering, long-term planning, cooking our food and adding spices to it instead of just chomping raw meat straight off the bones of another animal, etc. We’re all going to die and we all know it.

But we’re not always actively thinking about it. When people are reminded of death, they employ a variety of strategies to cope—not all of which are as well-adjusted as Stoic gratitude. That many kinds of human behavior stem from a fear of death is the basis of one of the most prominent theories in modern social psychology—terror-management theory.

Terror-management theory exists because one day, some 30-odd years ago, Sheldon Solomon was perusing the library at Skidmore College, where he’s a professor of psychology, and he happened to pick up The Birth and Death of Meaning, by Ernest Becker. “This is nothing to be proud of, but the cover is white with green splotches on it, and I was like ‘Ooh, what an interesting color,’” Solomon says. “Then I liked that it was a short book with big print. Again, nothing to be proud of, but true. And that’s why I reached for it.”

Once he opened the book, though, Solomon was taken by its central question—Why do people do what they do?—and how it was presented, without “turgid academic jargon,” he says. Becker offered an answer to that question: People do a lot of the things that they do to quell their fear of death. So Solomon and two of his friends from grad school, Jeff Greenberg and Tom Pyszczynski, set out to test that idea empirically.

* * *

The only antidote to death is immortality. And so, terror-management theory holds, when faced with the idea of death, people turn to things they believe will give them immortality, literal or otherwise. The hope of true immortality can be found in religion’s promises of heaven or reincarnation, or in some of science’s more dubious life-extension promises (Just freeze your dead body! They’ll wake it up later!).

More often though, it’s the hope of symbolic immortality that calms the frightened rabbits of death-fearing hearts—the idea that people are a part of something that will last longer than they do. Their culture, their country, their family, their work. When thinking of death, people cling more intensely to the institutions they’re a part of, and the worldviews they hold.

What that actually means in terms of behavior is trickier. The research shows that what people do when they’re feeling aware of their mortality depends on the person, the situation she’s in, and whether she’s focusing on death or it’s just in the back of her mind. (The TMT literature, which details a wide range of effects, is now fairly substantial. A 2010 metareview found 238 TMT studies, and a page on the University of Missouri website lists nearly 600, though it doesn’t seem to have been updated since 2012.)

When death is in the front of your mind—when you pass by a cemetery, when someone you know is sick (or when, in a lab, a researcher has just asked you about it)—the tendency, according to TMT, is to want to push those thoughts away. You might suppress the thoughts, distract yourself with something else, or comfort yourself with the idea that your death is a long way away, and anyway, you’re definitely going to go to the gym tomorrow.

A couple of studies have shown that conscious thoughts of death do increase health intentions, for exercise and medical screenings, though whether people actually follow through on those intentions is unclear. Promising yourself you’ll eat better may just be a strategy to get death off your mind.

When death is on people’s conscious minds, “they can wield logic to deal with it,” Vail says. “This would be similar to your mom saying, ‘Put on your seatbelt, you don’t want to die.’ So you think about that and recognize, yes, she’s right, you don’t want to bite it on the way to the grocery store, so you put on your seatbelt.”

According to Solomon, even young children use versions of these same strategies. His new book, written with Greenberg and Pyszczynski, The Worm at the Core: On the Role of Death in Life, cites the story of 5-year-old Richard, from a series of interviews the psychologist Sylvia Anthony conducted in the 60s and 70s:

“He swam up and down in his bath [and] he played with the possibility of never dying: ‘I don’t want to be dead, ever; I don’t want to die.’ … After his mother told 5-year-old Richard that he wouldn’t die for a long time, the little boy smiled and said, ‘That’s all right. I’ve been worried, and now I can get happy.’ Then he said he would like to dream about ‘going shopping and buying things.’”

Classic distraction move, Richard. Though at times, our own coping mechanisms may not be much more sophisticated. “Americans are arguably the best in the world at burying existential anxieties under a mound of French fries and a trip to Walmart to save a nickel on a lemon and a flamethrower,” Solomon says.

But shopping excursions can only distract you so much. Even once you stop actively thinking about it, death is still prominent in your nonconscious mind. “One metaphor is the file drawer,” Vail says. “You pull out a file and read it, then you get distracted, now you’re thinking about dinner. You put [the file] back in the drawer, you pull out dinner, now you’re looking at dinner, but whatever you were thinking about previously is now on the top of the file. It’s the closest thing to your conscious awareness.”

This is when, the research shows, people’s attitudes and behaviors are most affected—when you’ve recently been reminded of death, but it’s moved to the back of your mind.

Unfortunately, a lot of what death brings out when it’s sitting at the top of the file drawer is not humanity’s most sterling qualities. If people feel motivated to uphold their own cultures and worldviews in the face of death, it stands to reason that they might be less friendly toward other worldviews and the people who hold them.

The very first terror-management study involved “22 municipal-court judges in Tucson, Arizona,” according to The Worm at the Core. The judges were tasked with setting bail for alleged prostitutes, but first they were asked to take a survey. Some of them just answered personality questions, but some were also asked two questions about death: “Please briefly describe the emotions that the thought of your own death arouses in you,” and “Jot down, as specifically as you can, what you think will happen to you as you physically die, and once you are physically dead.” The standard bail at the time was $50, set by judges who didn’t take the survey. The ones who did take the survey set the bail an average of nine times higher.

“The results showed that the judges who thought about their own mortality reacted by trying to do the right thing as prescribed by their culture,” the book reads. “Accordingly, they upheld the law more vigorously than their colleagues who were not reminded of death.”

But, Solomon says, the researchers later repeated that study with students, and found that only those who thought prostitution was “morally reprehensible” opted to set a harsher bail. The logic goes that those students wanted to uphold their values, and punish transgressors. Since then, more studies have shown this tendency: When mortality’s on their minds, people prefer others in their (cultural/racial/national/religious) group to those outside it. This dynamic has manifested in silly ways (in one study, liberals were more likely to make conservatives eat a gross hot sauce after a death reminder, and vice versa) and in more serious ones: Reminders of mortality have been shown to make people more likely to stereotype others.

While wanting to promote your own worldviews can mean putting others’ down, that isn’t the only way people seek to feel like part of something greater than themselves—searching for that symbolic immortality. Looming mortality can also lead people to help others, donate to charity, and want to invest in caring families and relationships. (And studies have backed up that people do these things when reminded of death.)

These reactions have also been observed outside the lab, after the terrorist attacks of September 11, 2001, when death was likely top of mind for many Americans for quite a while. Comparisons of survey answers before and two months after 9/11 found increases in kindness, love, hope, spirituality, gratitude, leadership, and teamwork, which persisted (though to a slightly lesser degree) 10 months after the attacks. But Solomon, Greenberg, and Pyszczynski point out in their book that there was also a lot of fear and derogation by Americans of the “other” after 9/11, specifically Muslim and Arab others.

“It’s not the case that awareness of mortality and the ensuing terror-management process is an inherently negative one that causes prejudice and closed-mindedness and hostility but instead it appears to be simply rather a neutral process,” Vail says. “It’s one that motivates people to indiscriminately uphold and defend their cultural worldviews.”

How you manage your terror, then, depends on what’s already important to you—and that’s what you’ll turn to when confronted with mortality. In one study, empathetic people were more likely to forgive transgressions after a death reminder; in another, fundamentalist religious people were more compassionate after thinking of their own mortality—but only when compassionate values were framed in a religious context, such as excerpts from the Bible or Koran.

* * *

Terror-management theory contends that there’s something different about our fear of death, compared to other fears. Every other threat is survivable, after all. And in research, thinking about death has produced just as strong an effect whether the alternative was something neutral or another threat like rejection or pain. So a fear of death is not just like a fear of rejection, except more.

Except Steven Heine, a professor of psychology at the University of British Columbia, doesn’t think death is necessarily such a unique threat. In 2006, he and fellow researchers Travis Proulx and Kathleen Vohs developed the Meaning Maintenance Model, which says yes, thinking about death can inspire these attitudes and behaviors, but for a different reason. Death, according to their theory, is a threat to the way we understand the world, similar to uncertainty, being rejected by a friend, or even—Heine’s example—finding a red queen of spades in a deck of cards. All these things interrupt what Heine calls “meaning frameworks,” our understandings of how the world works. “When we think about the fact that we’re going to die, it calls all of those assumptions into question. All these things I’m trying to do, I won’t be able to succeed, my relationships will be severed, the way I think I fit into the world, ultimately I no longer will. This is bothersome.”

But perhaps not more bothersome than other threats to meaning. Heine says Meaning Maintenance Model studies have found that thinking about death does not have a noticeably larger effect on people’s attitudes and behaviors than, say, watching a surreal movie. A metareview of TMT studies also notes that the effects of thinking about death are less significant when compared with thinking about something else that threatens someone’s sense of meaning.

Thoughts of death still lead people to uphold their worldviews according to this theory, but it’s because, when faced with an idea as confounding as one’s own mortality, people turn to the other things in their lives that still make sense to them. While the two theories have a lot in common, Heine says MMM can explain one thing that TMT cannot: suicide.

“TMT would argue that while we want to have a sense of meaning as a way of keeping away thoughts of death, one of the key motivators of suicide is feeling that your life isn’t very meaningful, wanting death when you feel like you don’t have sufficient meaning in your life,” he says.

The thing that makes death different, Heine says, is that it’s not solvable. With other meaning threats, you can try to fix the problem, or adjust your worldview to accommodate the new information. “The fact that we’re going to die is a problem that we can never fully resolve throughout our lives,” he says.

But maybe that’s for the best.

* * *

“I know we’re supposed to be super afraid of death. But it’s good, isn’t it?” asks Laura King, Curators’ Professor of Psychological Sciences at the University of Missouri, Columbia. “If life never ended, think about it, right? Isn’t that like every vampire story or sci-fi movie? If you live too long, after a while, you just lose it. Life no longer has any meaning, because it’s commonplace.”

King did a study in 2009 that offers an alternative, economic perspective on death and meaning. She showed that after reminders of death, people valued life more highly—and conversely, reading a passage that placed a high monetary value on the human body increased the number of death thoughts people had. This is the scarcity principle, plain and simple—the less you have of something, the more you value it.

But “most of us don’t live like we’re aware that life is a finite commodity,” King says. She describes an exercise she has her students do, in which they write down their life goals, and then write what they’d do if they only had three weeks to live. “Then you say, ‘Why aren’t you doing those things?’ They say, ‘Get real, hello, we have a future to plan for.’”

“Live every day as though it’s your last” is nice but profoundly unhelpful advice, when you know that today is probably not your last day. I’m not sure what I’d do if I was going to die tomorrow—round up all my loved ones and fly them to Paris? Or maybe just throw them a really nice dinner party, the kind where everyone ends up sprawled out on couches, overstuffed and warm from the wine.

Either way, I can’t do that today. I have to go to work.

“Everybody always says life is too short,” King says, “but it’s really long. It’s really, really long.”

Once people’s days truly are numbered, their priorities do seem to shift. According to research done on socioemotional selectivity theory, older people are more present-oriented than younger people, and are more selective about whom they spend time with, sticking mostly with family and old, close friends. Other studies have shown them to also be more forgiving, and to care more for others and less about enhancing themselves.

This all fits in well with Irvine’s Stoic philosophy. Rather than pulling curtains over the darkness on the other side of the window, you stare straight into it, so when you turn away you’re thankful for the light.

Irvine gives the mundane example of buying a lawn mower. “As I’m doing it, I have the realization that this is conceivably the last lawn mower I will ever buy,” he says. “I don’t like mowing the lawn, don’t get me wrong, but I’ve only got X number of times it’s going to happen. Some day, this moment, right now, is going to count as the good old days.”

* * *

Unfortunately, Western culture isn’t exactly death-friendly. Death is kept largely out of sight, out of mind, the details left to hospitals and funeral parlors. Though most Americans say they want to die at home, few actually do—only about 25 percent, according to the Centers for Disease Control and Prevention. Most other people die in hospitals, nursing homes, or other facilities.

This is why, in 2011, the mortician Caitlin Doughty founded The Order of the Good Death, a self-described “group of funeral industry professionals, academics, and artists exploring ways to prepare a death-phobic culture for inevitable mortality.” She has also written a book about working in a crematory, Smoke Gets in Your Eyes, and hosts the “Ask a Mortician” web series.

“Death doesn’t go away just because we hide it,” Doughty wrote to me in an email. “Hiding life’s truths doesn’t mean they disappear. It means they are forced into darker parts of our consciousness … Death is the most natural thing in the world, and treating it as deviant isn’t doing our culture any favors … We don’t control nature. We aren’t higher-ranking than nature.”

This is terror management writ large, a culture that pushes death away as best it can. Even though, ultimately, it can’t.

More people are coming around to Doughty’s way of thinking. “Death salons” and “death cafes,” where people gather to talk about their mortality, have sprung up across the U.S., and many doctors, like the Being Mortal author Atul Gawande, are working to advance the conversation around end-of-life care, getting patients involved in planning for their deaths.

But the research shows the effects of thinking about death aren’t all grace and gratitude—so would bringing death out into the open ultimately help or hurt humanity?

“At first, thinking about death regularly made me move up and down and way up and way down the emotional spectrum,” Doughty writes. “But over time thinking about death moves you closer to magnanimity. You realize that you will have to give your body, your atoms and molecules, back to the universe when you’re done with them.”

She also points out that TMT studies are isolated instances, and don’t look at what happens when people think about death regularly, over time.

Maybe the key, then, is being deliberate. Not letting thoughts of death sneak up on you, but actively engaging with them, even if it’s hard. In one 2010 study, people who were more mindful were less defensive of their worldviews after being reminded of death, suggesting that “mindfulness can potentially disrupt some of these kinds of processes that go into terror management,” says Vail, the Cleveland State University psychologist.

Solomon, too, is hopeful. “I like to think there comes a moment where sustained efforts to come to terms with death pay off.” Vail suggests that freeing oneself from the psychological reactions to death might get rid of the good effects along with the bad, but Solomon’s willing to take the trade. “If you look at the problems that currently befall humanity—we can’t get along with each other, we’re pissing on the environment, [there’s] rampant economic instability by virtue of mindless conspicuous consumption—they’re all malignant manifestations of death anxiety running amok.”

It’s probably not possible to erase all fear of death—animals have a drive to survive, and we are animals, even with all that consciousness. Even if being mindful about death means getting rid of the good along with the bad consequences of death anxiety, people can be generous and love each other without being scared into it.

“Death destroys a man, but the idea of death saves him,” E.M. Forster once wrote. I don’t know if there’s really any salvation, but if we accept death, maybe we can just live.


Curing the fear of death

How “tripping out” could change everything

A chemical called “psilocybin” shows remarkable therapeutic promise. Only problem? It comes from magic mushrooms



The second time I ate psychedelic mushrooms I was at a log cabin on a lake in northern Maine, and afterwards I sat in a grove of spruce trees for three and a half hours, saying over and over, “There’s so much to see!”

The mushrooms converted my worldview from an uninspired blur to childlike wonderment at everything I glimpsed. And now, according to recent news, some cancer patients are having the same experience. Psilocybin, the active ingredient in psychedelic mushrooms, is being administered on a trial basis to participating cancer patients to help them cope with their terminal diagnoses and enjoy the final months of their lives. The provisional results show remarkable success, with implications that may be much, much bigger.

As Michael Pollan notes in a recent New Yorker piece, this research is still in its early stages. Psychedelic mushrooms are presently classified as a Schedule I drug, meaning, from the perspective of our federal government, they have no accepted medical use and are prohibited. But the scientific community is taking some steps that – over time, and after much deliberation – could eventually change that.

Here’s how it works: In a controlled setting, cancer patients receive psilocybin plus coaching to help them make the most of the experience. Then they trip, an experience that puts ordinary life, including their cancer, in a new perspective. And that changed outlook stays with them over time. This last part might seem surprising, but at my desk I keep a picture of the spot where I had my own transcendental experience several years ago; it reminds me that my daily tribulations are not all there is to existence, nor are they what actually matter.

The preliminary research findings are convincing. You could even call them awe-inspiring. In one experiment, an astounding two-thirds of participants said the trip was “among the top five most spiritually significant experiences of their lives.” Pollan describes one cancer patient in detail, a man whose psilocybin session was followed by months that were “the happiest in his life” — even though they were also his last. Said the man’s wife: “[After his trip] it was about being with people, enjoying his sandwich and the walk on the promenade. It was as if we lived a lifetime in a year.”

Which made me do a fist pump for science: Great work, folks. Keep this up! Researchers point out that these studies are small and there’s plenty they don’t know. They also stress the difference between taking psilocybin in a clinical setting — one that’s structured and facilitated by experts — and taking the drug recreationally. (By a lake in Maine, say.) Pollan suggests that the only commonality between the two is the molecules being ingested. My (admittedly anecdotal) experience suggests matters aren’t quite that clear-cut. But even that distinction misses a larger point, which is the potential for this research to help a great many people, with cancer or without, to access a deeper sense of joy in their lives. The awe I felt by that lake in Maine — and the satisfaction and peacefulness that Pollan’s cancer patient felt while eating his sandwich and walking on the promenade — is typically absent from regular life. But that doesn’t mean it has to be.

The growing popularity of mindfulness and meditation suggests that many of us would like to inject a bit more wonder into our lives. As well we should. Not to be a damp towel or anything, but we’re all going to die. “We’re all terminal,” as one researcher said to Pollan. While it’s possible that you’ll live to be 100, and hit every item on your bucket list, life is and always will be uncertain. On any given day, disaster could strike. You could go out for some vigorous exercise and suffer a fatal heart attack, like my dad did. There’s just no way to know.

In the meantime, most of us are caught in the drudgery of to-do lists and unread emails. Responsibility makes us focus on the practical side of things — the rent isn’t going to pay itself, after all — while the force of routine makes it seem like there isn’t anything dazzling to experience anyhow. Even if we’d like to call carpe diem our motto, what we actually do is more along the lines of the quotidian: Work, commute, eat, and nod off to sleep.

With that for a backdrop, it’s not surprising that many of us experience angst about our life’s purpose, not to mention a deep-seated dread over the unavoidable fact of our mortality. It can be a wrenching experience, one that sometimes results in panic attacks or depression. We seek out remedies to ease the discomfort: Some people meditate, others drink. If you seek formal treatment, though, you’ll find that the medical establishment doesn’t necessarily consider existential dread to be a disorder. That’s because it’s normal for us to question our existence and fear our demise. In the case of debilitating angst, though, a doctor is likely to recommend the regimen for generalized anxiety — some combo of therapy and meds.

Both of these can be essential in certain cases, of course; meds tend to facilitate acceptance of the way things are, while therapy can help us, over a long stretch of time, change the things that we can to some degree control. But psychedelics are different from either of these. They seem to open a door to a different way of experiencing life. Pollan quotes one source, a longtime advocate for the therapeutic use of psilocybin, who identifies the drug’s potential for “the betterment of well people.” Psychedelics may help ordinary people, who are wrestling with ordinary angst about death and the meaning of life, to really key into, and treasure, the various experiences of their finite existence.

In other words, psychedelics could possibly help us to be more like kids.

Small children often view the world around them with mystic wonder — pushing aside blades of grass to inspect a tiny bug that’s hidden underneath, or perhaps looking wide-eyed at a bright yellow flower poking through a crack in the sidewalk. (Nothing but a common dandelion, says the adult.) Maybe the best description of psilocybin’s effect is a reversion to that childlike awe at the complexity of the world around us, to the point that we can actually relish our lives.

What’s just as remarkable is that we’re not talking about a drug that needs to be administered on a daily or weekly or even monthly basis in order to be effective. These studies gave psilocybin to cancer patients a single time. Then, for months afterward, or longer, the patients reaped enormous benefit.

(The fact that psychedelics only need to be administered once could actually make it less likely that the research will receive ample funding, because pharmaceutical companies don’t see dollar signs in a drug that’s dispensed so sparingly. But that’s another matter.)

Of course, some skepticism may be warranted. Recreational use of psychedelics has been associated with psychotic episodes. That’s a good reason for caution. And a potential criticism here is that psilocybin is doing nothing more than playing a hoax on the brain — a hoax that conjures up a mystical experience and converts us into spellbound kids. You might reasonably ask, “Do I even want to wander around awe-struck at a dandelion the same way a 3-year-old might?”

So caution is reasonably advised. But what the research demonstrates is nonetheless remarkable: the way the experience seems to shake something loose in participants’ consciousness, something that lets them see beyond the dull gray of routine, or the grimness of cancer, to the joy in being with loved ones, the sensory pleasure of a good meal, or the astounding pink visuals of the sunset.


The myth of pure science

It’s all about political, economic, religious interests

Scientific research can flourish only in alliance with some ideology. Even Darwin couldn’t have done it alone

Excerpted from “Sapiens”

The Ideal of Progress

Until the Scientific Revolution most human cultures did not believe in progress. They thought the golden age was in the past, and that the world was stagnant, if not deteriorating. Strict adherence to the wisdom of the ages might perhaps bring back the good old times, and human ingenuity might conceivably improve this or that facet of daily life. However, it was considered impossible for human know-how to overcome the world’s fundamental problems. If even Muhammad, Jesus, Buddha and Confucius – who knew everything there is to know – were unable to abolish famine, disease, poverty and war from the world, how could we expect to do so?

Many faiths believed that some day a messiah would appear and end all wars, famines and even death itself. But the notion that humankind could do so by discovering new knowledge and inventing new tools was worse than ludicrous – it was hubris. The story of the Tower of Babel, the story of Icarus, the story of the Golem and countless other myths taught people that any attempt to go beyond human limitations would inevitably lead to disappointment and disaster.

When modern culture admitted that there were many important things that it still did not know, and when that admission of ignorance was married to the idea that scientific discoveries could give us new powers, people began suspecting that real progress might be possible after all. As science began to solve one unsolvable problem after another, many became convinced that humankind could overcome any and every problem by acquiring and applying new knowledge. Poverty, sickness, wars, famines, old age and death itself were not the inevitable fate of humankind. They were simply the fruits of our ignorance.

A famous example is lightning. Many cultures believed that lightning was the hammer of an angry god, used to punish sinners. In the middle of the eighteenth century, in one of the most celebrated experiments in scientific history, Benjamin Franklin flew a kite during a lightning storm to test the hypothesis that lightning is simply an electric current. Franklin’s empirical observations, coupled with his knowledge about the qualities of electrical energy, enabled him to invent the lightning rod and disarm the gods.

Poverty is another case in point. Many cultures have viewed poverty as an inescapable part of this imperfect world. According to the New Testament, shortly before the crucifixion a woman anointed Christ with precious oil worth 300 denarii. Jesus’ disciples scolded the woman for wasting such a huge sum of money instead of giving it to the poor, but Jesus defended her, saying that ‘The poor you will always have with you, and you can help them any time you want. But you will not always have me’ (Mark 14:7). Today, fewer and fewer people, including fewer and fewer Christians, agree with Jesus on this matter. Poverty is increasingly seen as a technical problem amenable to intervention. It’s common wisdom that policies based on the latest findings in agronomy, economics, medicine and sociology can eliminate poverty.

And indeed, many parts of the world have already been freed from the worst forms of deprivation. Throughout history, societies have suffered from two kinds of poverty: social poverty, which withholds from some people the opportunities available to others; and biological poverty, which puts the very lives of individuals at risk due to lack of food and shelter. Perhaps social poverty can never be eradicated, but in many countries around the world biological poverty is a thing of the past.

Until recently, most people hovered very close to the biological poverty line, below which a person lacks enough calories to sustain life for long. Even small miscalculations or misfortunes could easily push people below that line, into starvation. Natural disasters and man-made calamities often plunged entire populations over the abyss, causing the death of millions. Today most of the world’s people have a safety net stretched below them. Individuals are protected from personal misfortune by insurance, state-sponsored social security and a plethora of local and international NGOs. When calamity strikes an entire region, worldwide relief efforts are usually successful in preventing the worst. People still suffer from numerous degradations, humiliations and poverty-related illnesses, but in most countries nobody is starving to death. In fact, in many societies more people are in danger of dying from obesity than from starvation.

The Gilgamesh Project

Of all mankind’s ostensibly insoluble problems, one has remained the most vexing, interesting and important: the problem of death itself. Before the late modern era, most religions and ideologies took it for granted that death was our inevitable fate. Moreover, most faiths turned death into the main source of meaning in life. Try to imagine Islam, Christianity or the ancient Egyptian religion in a world without death. These creeds taught people that they must come to terms with death and pin their hopes on the afterlife, rather than seek to overcome death and live for ever here on earth. The best minds were busy giving meaning to death, not trying to escape it.

That is the theme of the most ancient myth to come down to us – the Gilgamesh myth of ancient Sumer. Its hero is the strongest and most capable man in the world, King Gilgamesh of Uruk, who could defeat anyone in battle. One day, Gilgamesh’s best friend, Enkidu, died. Gilgamesh sat by the body and observed it for many days, until he saw a worm dropping out of his friend’s nostril. At that moment Gilgamesh was gripped by a terrible horror, and he resolved that he himself would never die. He would somehow find a way to defeat death. Gilgamesh then undertook a journey to the end of the universe, killing lions, battling scorpion-men and finding his way into the underworld. There he shattered the mysterious “stone things” of Urshanabi, the ferryman of the river of the dead, and found Utnapishtim, the last survivor of the primordial flood. Yet Gilgamesh failed in his quest. He returned home empty-handed, as mortal as ever, but with one new piece of wisdom. When the gods created man, Gilgamesh had learned, they set death as man’s inevitable destiny, and man must learn to live with it.

Disciples of progress do not share this defeatist attitude. For men of science, death is not an inevitable destiny, but merely a technical problem. People die not because the gods decreed it, but due to various technical failures – a heart attack, cancer, an infection. And every technical problem has a technical solution. If the heart flutters, it can be stimulated by a pacemaker or replaced by a new heart. If cancer rampages, it can be killed with drugs or radiation. If bacteria proliferate, they can be subdued with antibiotics. True, at present we cannot solve all technical problems. But we are working on them. Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself.

Until recently, you would not have heard scientists, or anyone else, speak so bluntly. ‘Defeat death?! What nonsense! We are only trying to cure cancer, tuberculosis and Alzheimer’s disease,’ they insisted. People avoided the issue of death because the goal seemed too elusive. Why create unreasonable expectations? We’re now at a point, however, where we can be frank about it. The leading project of the Scientific Revolution is to give humankind eternal life. Even if killing death seems a distant goal, we have already achieved things that were inconceivable a few centuries ago. In 1199, King Richard the Lionheart was struck by an arrow in his left shoulder. Today we’d say he incurred a minor injury. But in 1199, in the absence of antibiotics and effective sterilisation methods, this minor flesh wound became infected and gangrene set in. The only way to stop the spread of gangrene in twelfth-century Europe was to cut off the infected limb, impossible when the infection was in a shoulder. The gangrene spread through the Lionheart’s body and no one could help the king. He died in great agony two weeks later.

As recently as the nineteenth century, the best doctors still did not know how to prevent infection and stop the putrefaction of tissues. In field hospitals doctors routinely cut off the hands and legs of soldiers who received even minor limb injuries, fearing gangrene. These amputations, as well as all other medical procedures (such as tooth extraction), were done without any anaesthetics. The first anaesthetics – ether, chloroform and morphine – entered regular usage in Western medicine only in the middle of the nineteenth century. Before the advent of chloroform, four soldiers had to hold down a wounded comrade while the doctor sawed off the injured limb. On the morning after the battle of Waterloo (1815), heaps of sawn-off hands and legs could be seen adjacent to the field hospitals. In those days, carpenters and butchers who enlisted in the army were often sent to serve in the medical corps, because surgery required little more than knowing your way around knives and saws.

In the two centuries since Waterloo, things have changed beyond recognition. Pills, injections and sophisticated operations save us from a spate of illnesses and injuries that once dealt an inescapable death sentence. They also protect us against countless daily aches and ailments, which premodern people simply accepted as part of life. Average life expectancy jumped from a premodern twenty-five to forty years to around sixty-seven years in the entire world, and to around eighty years in the developed world.

Death suffered its worst setbacks in the arena of child mortality. Until the twentieth century, between a quarter and a third of the children of agricultural societies never reached adulthood. Most succumbed to childhood diseases such as diphtheria, measles and smallpox. In seventeenth-century England, 150 out of every 1,000 newborns died during their first year, and a third of all children were dead before they reached fifteen. Today, only five out of 1,000 English babies die during their first year, and only seven out of 1,000 die before age fifteen.

We can better grasp the full impact of these figures by setting aside statistics and telling some stories. A good example is the family of King Edward I of England (1237–1307) and his wife, Queen Eleanor (1241–90). Their children enjoyed the best conditions and the most nurturing surroundings that could be provided in medieval Europe. They lived in palaces, ate as much food as they liked, had plenty of warm clothing, well-stocked fireplaces, the cleanest water available, an army of servants and the best doctors. The sources mention sixteen children that Queen Eleanor bore between 1255 and 1284:

1. An anonymous daughter, born in 1255, died at birth.

2. A daughter, Catherine, died either at age one or age three.

3. A daughter, Joan, died at six months.

4. A son, John, died at age five.

5. A son, Henry, died at age six.

6. A daughter, Eleanor, died at age twenty-nine.

7. An anonymous daughter died at five months.

8. A daughter, Joan, died at age thirty-five.

9. A son, Alphonso, died at age ten.

10. A daughter, Margaret, died at age fifty-eight.

11. A daughter, Berengeria, died at age two.

12. An anonymous daughter died shortly after birth.

13. A daughter, Mary, died at age fifty-three.

14. An anonymous son died shortly after birth.

15. A daughter, Elizabeth, died at age thirty-four.

16. A son, Edward.

The youngest, Edward, was the first of the boys to survive the dangerous years of childhood, and at his father’s death he ascended the English throne as King Edward II. In other words, it took Eleanor sixteen tries to carry out the most fundamental mission of an English queen – to provide her husband with a male heir. Edward II’s mother must have been a woman of exceptional patience and fortitude. Not so the woman Edward chose for his wife, Isabella of France. She had him murdered when he was forty-three.

To the best of our knowledge, Eleanor and Edward I were a healthy couple and passed no fatal hereditary illnesses on to their children. Nevertheless, ten out of the sixteen – 62 per cent – died during childhood. Only six managed to live beyond the age of eleven, and only three – just 18 per cent – lived beyond the age of forty. In addition to these births, Eleanor most likely had a number of pregnancies that ended in miscarriage. On average, Edward and Eleanor lost a child every three years, ten children one after another. It’s nearly impossible for a parent today to imagine such loss.
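The percentages in the paragraph above can be checked with a short tally. The sketch below is purely illustrative: ages at death are taken from the list of the sixteen children, using the later of Catherine's two reported ages and counting infants who died at or shortly after birth as zero.

```python
# Ages at death (in years) of the sixteen children of Edward I and Eleanor,
# in the order listed above. Fractional values approximate deaths in infancy.
ages_at_death = [
    0,      # anonymous daughter, died at birth (1255)
    3,      # Catherine (taking the later of the two reported ages)
    0.5,    # Joan, died at six months
    5,      # John
    6,      # Henry
    29,     # Eleanor
    5 / 12, # anonymous daughter, died at five months
    35,     # Joan
    10,     # Alphonso
    58,     # Margaret
    2,      # Berengeria
    0,      # anonymous daughter, died shortly after birth
    53,     # Mary
    0,      # anonymous son, died shortly after birth
    34,     # Elizabeth
    43,     # Edward II, murdered at forty-three
]

total = len(ages_at_death)
died_in_childhood = sum(1 for a in ages_at_death if a <= 10)
past_eleven = sum(1 for a in ages_at_death if a > 11)
past_forty = sum(1 for a in ages_at_death if a > 40)

# 10 of 16 died in childhood: 62.5 per cent, the text's "62 per cent".
pct_childhood = 100 * died_in_childhood / total
# 3 of 16 lived past forty: 18.75 per cent, the text's "just 18 per cent".
pct_past_forty = 100 * past_forty / total

print(died_in_childhood, past_eleven, past_forty)
```

The tally reproduces the chapter's figures: ten childhood deaths, six children surviving past eleven, and three living beyond forty.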

How long will the Gilgamesh Project – the quest for immortality – take to complete? A hundred years? Five hundred years? A thousand years? When we recall how little we knew about the human body in 1900, and how much knowledge we have gained in a single century, there is cause for optimism. Genetic engineers have recently managed to double the average life expectancy of Caenorhabditis elegans worms. Could they do the same for Homo sapiens? Nanotechnology experts are developing a bionic immune system composed of millions of nano-robots that would inhabit our bodies, open blocked blood vessels, fight viruses and bacteria, eliminate cancerous cells and even reverse ageing processes. A few serious scholars suggest that by 2050, some humans will become a-mortal (not immortal, because they could still die of some accident, but a-mortal, meaning that in the absence of fatal trauma their lives could be extended indefinitely).

Whether or not the Gilgamesh Project succeeds, from a historical perspective it is fascinating to see that most late-modern religions and ideologies have already taken death and the afterlife out of the equation. Until the eighteenth century, religions considered death and its aftermath central to the meaning of life. Beginning in the eighteenth century, religions and ideologies such as liberalism, socialism and feminism lost all interest in the afterlife. What, exactly, happens to a Communist after he or she dies? What happens to a capitalist? What happens to a feminist? It is pointless to look for the answer in the writings of Marx, Adam Smith or Simone de Beauvoir. The only modern ideology that still awards death a central role is nationalism. In its more poetic and desperate moments, nationalism promises that whoever dies for the nation will forever live in its collective memory. Yet this promise is so fuzzy that even most nationalists do not really know what to make of it.

The Sugar Daddy of Science

We are living in a technical age. Many are convinced that science and technology hold the answers to all our problems. We should just let the scientists and technicians go on with their work, and they will create heaven here on earth. But science is not an enterprise that takes place on some superior moral or spiritual plane above the rest of human activity. Like all other parts of our culture, it is shaped by economic, political and religious interests.

Science is a very expensive affair. A biologist seeking to understand the human immune system requires laboratories, test tubes, chemicals and electron microscopes, not to mention lab assistants, electricians, plumbers and cleaners. An economist seeking to model credit markets must buy computers, set up giant databanks and develop complicated data-processing programs. An archaeologist who wishes to understand the behaviour of archaic hunter-gatherers must travel to distant lands, excavate ancient ruins and date fossilised bones and artefacts. All of this costs money.

During the past 500 years modern science has achieved wonders thanks largely to the willingness of governments, businesses, foundations and private donors to channel billions of dollars into scientific research. These billions have done much more to chart the universe, map the planet and catalogue the animal kingdom than did Galileo Galilei, Christopher Columbus and Charles Darwin. If these particular geniuses had never been born, their insights would probably have occurred to others. But had the proper funding been unavailable, no intellectual brilliance could have compensated for its absence. If Darwin had never been born, for example, we’d today attribute the theory of evolution to Alfred Russel Wallace, who came up with the idea of evolution via natural selection independently of Darwin and just a few years later. But if the European powers had not financed geographical, zoological and botanical research around the world, neither Darwin nor Wallace would have had the necessary empirical data to develop the theory of evolution. It is likely that they would not even have tried.

Why did the billions start flowing from government and business coffers into labs and universities? In academic circles, many are naive enough to believe in pure science. They believe that government and business altruistically give them money to pursue whatever research projects strike their fancy. But this hardly describes the realities of science funding.

Most scientific studies are funded because somebody believes they can help attain some political, economic or religious goal. For example, in the sixteenth century, kings and bankers channelled enormous resources to finance geographical expeditions around the world but not a penny for studying child psychology. This is because kings and bankers surmised that the discovery of new geographical knowledge would enable them to conquer new lands and set up trade empires, whereas they couldn’t see any profit in understanding child psychology.

In the 1940s the governments of America and the Soviet Union channelled enormous resources to the study of nuclear physics rather than underwater archaeology. They surmised that studying nuclear physics would enable them to develop nuclear weapons, whereas underwater archaeology was unlikely to help win wars. Scientists themselves are not always aware of the political, economic and religious interests that control the flow of money; many scientists do, in fact, act out of pure intellectual curiosity. However, only rarely do scientists dictate the scientific agenda.

Even if we wanted to finance pure science unaffected by political, economic or religious interests, it would probably be impossible. Our resources are limited, after all. Ask a congressman to allocate an additional million dollars to the National Science Foundation for basic research, and he’ll justifiably ask whether that money wouldn’t be better used to fund teacher training or to give a needed tax break to a troubled factory in his district. To channel limited resources we must answer questions such as ‘What is more important?’ and ‘What is good?’ And these are not scientific questions. Science can explain what exists in the world, how things work, and what might be in the future. By definition, it has no pretensions to knowing what should be in the future. Only religions and ideologies seek to answer such questions.

Consider the following quandary: two biologists from the same department, possessing the same professional skills, have both applied for a million-dollar grant to finance their current research projects. Professor Slughorn wants to study a disease that infects the udders of cows, causing a 10 per cent decrease in their milk production. Professor Sprout wants to study whether cows suffer mentally when they are separated from their calves. Assuming that the amount of money is limited, and that it is impossible to finance both research projects, which one should be funded?

There is no scientific answer to this question. There are only political, economic and religious answers. In today’s world, it is obvious that Slughorn has a better chance of getting the money. Not because udder diseases are scientifically more interesting than bovine mentality, but because the dairy industry, which stands to benefit from the research, has more political and economic clout than the animal-rights lobby.

Perhaps in a strict Hindu society, where cows are sacred, or in a society committed to animal rights, Professor Sprout would have a better shot. But as long as she lives in a society that values the commercial potential of milk and the health of its human citizens over the feelings of cows, she’d best write up her research proposal so as to appeal to those assumptions. For example, she might write that ‘Depression leads to a decrease in milk production. If we understand the mental world of dairy cows, we could develop psychiatric medication that will improve their mood, thus raising milk production by up to 10 per cent. I estimate that there is a global annual market of $250 million for bovine psychiatric medications.’

Science is unable to set its own priorities. It is also incapable of determining what to do with its discoveries. For example, from a purely scientific viewpoint it is unclear what we should do with our increasing understanding of genetics. Should we use this knowledge to cure cancer, to create a race of genetically engineered supermen, or to engineer dairy cows with super-sized udders? It is obvious that a liberal government, a Communist government, a Nazi government and a capitalist business corporation would use the very same scientific discovery for completely different purposes, and there is no scientific reason to prefer one usage over others.

In short, scientific research can flourish only in alliance with some religion or ideology. The ideology justifies the costs of the research. In exchange, the ideology influences the scientific agenda and determines what to do with the discoveries. Hence in order to comprehend how humankind has reached Alamogordo and the moon – rather than any number of alternative destinations – it is not enough to survey the achievements of physicists, biologists and sociologists. We have to take into account the ideological, political and economic forces that shaped physics, biology and sociology, pushing them in certain directions while neglecting others.

Two forces in particular deserve our attention: imperialism and capitalism. The feedback loop between science, empire and capital has arguably been history’s chief engine for the past 500 years. The following chapters analyse its workings. First we’ll look at how the twin turbines of science and empire were latched to one another, and then learn how both were hitched up to the money pump of capitalism.

Excerpted from “Sapiens: A Brief History of Humankind” by Yuval Noah Harari. Published by Harper. Copyright 2015 by Yuval Noah Harari. Reprinted with permission of the publisher. All rights reserved.