Who Gives More of Their Money to Charity?

People Who Make More or Less Than $200k a Year?

Philanthropy and income have an inverse relationship.

Billionaire CEO Nicholas Woodman, news reports trumpeted earlier this month, has set aside $450 million worth of his GoPro stock to set up a brand-new charitable foundation.

“We wake up every morning grateful for the opportunities life has given us,” Woodman and his wife Jill noted in a joint statement. “We hope to return the favor as best we can.”

Stories about charitable billionaires have long been a media staple. The defenders of our economic order love them — and regularly trot them out to justify America’s ever more top-heavy concentration of income and wealth.

Our charities depend, the argument goes, on the generosity of the rich. The richer the rich, the better off our charitable enterprises will be.

But this defense of inequality, analysts have understood for quite some time, holds precious little water. Low- and middle-income people, the research shows, give a greater share of their incomes to charity than people of decidedly more ample means.

The Chronicle of Philanthropy, the nation’s top monitor of everything charitable, last week dramatically added to this research.

Between 2006 and 2012, a new Chronicle analysis of IRS tax return data reveals, Americans who make over $200,000 a year decreased the share of their income they devote to charity by 4.6 percent.

Over those same years, a time of recession and limited recovery, these same affluent Americans saw their own incomes increase. For the nation’s top 5 percent of income earners, that increase averaged 9.9 percent.

By contrast, those Americans making less than $100,000 actually increased their giving between 2006 and 2012. The most generous Americans of all? Those making less than $25,000. Amid the hard times of recent years, low-income Americans devoted 16.6 percent more of their meager incomes to charity.

Overall, those making under $100,000 increased their giving by 4.5 percent.

In the half-dozen years this new study covers, the Chronicle of Philanthropy concludes, “poor and middle class Americans dug deeper into their wallets to give to charity, even though they were earning less.”

America’s affluent do still remain, in absolute terms, the nation’s largest givers to charity. In 2012, the Chronicle analysis shows, those earning under $100,000 handed charities $57.3 billion. Americans making over $200,000 gave away $77.5 billion.

But that $77.5 billion pales against how much more the rich could — rather painlessly — be giving. Between 2006 and 2012, the combined wealth of the Forbes 400 alone increased by $1.04 trillion.

What the rich do give to charity often does people truly in need no good at all. Wealthy people do the bulk of their giving to colleges and cultural institutions, notes Chronicle of Philanthropy editor Stacy Palmer. Food banks and other social service charities “depend more on lower income Americans.”

Low- and middle-income people, adds Palmer, “know people who lost their jobs or are homeless.” They’ve been sacrificing “to help their neighbors.”

America’s increasing economic segregation, meanwhile, has left America’s rich less and less exposed to “neighbors” struggling to get by. That’s opening up, says Vox policy analyst Danielle Kurtzleben, an “empathy gap.”

“After all,” she explains, “if I can’t see you, I’m less likely to help you.”

The more wealth concentrates, the more nonprofits chase after these less-than-empathetic rich for donations. The priorities of these rich, notes Kurtzleben, become the priorities for more and more nonprofits.

The end result? Elite universities get mega-million-dollar donations to build mahogany-appointed student dorms. Art museums get new wings. Hospitals get windfalls to tackle the diseases that spook the high-end set.

Some in that set do seem to sense the growing disconnect between real need and real resources. Last week billionaire hedge fund manager David Einhorn announced a $50 million gift to help Cornell University set students up in “real-world experiences” that address the challenges hard-pressed communities face.

“When you go out beyond the classroom and into the community and find problems and have to deal with people in the real world,” says Einhorn, “you develop skills for empathy.”

True enough — but in a society growing ever more unequal and separate, not enough. In that society — our society — the privileged will continue to go “blind to how people outside their own class are living,” as Danielle Kurtzleben puts it.

We need, in short, much more than Empathy 101. We need more equality.

Labor journalist Sam Pizzigati, an Institute for Policy Studies associate fellow, writes widely about inequality. His latest book is “The Rich Don’t Always Win: The Forgotten Triumph over Plutocracy that Created the American Middle Class, 1900-1970.”

 

http://www.alternet.org/economy/guess-who-gives-more-their-money-charity-people-who-make-more-or-less-200k-year?akid=12386.265072.PjWDq0&rd=1&src=newsletter1023920&t=15&paging=off&current_page=1#bookmark

How technology shrunk America forever

The end of the Old World:

The 19th century saw an explosion of changes in America. The way people saw the world would never be the same


It has become customary to mark the beginning of the Industrial Revolution in eighteenth-century England. Historians usually identify two or sometimes three phases of the Industrial Revolution, which are associated with different sources of energy and related technologies. In preindustrial Europe, the primary energy sources were human, animal, and natural (wind, water, and fire).

By the middle of the eighteenth century, much of Europe had been deforested to supply wood for domestic and industrial consumption. J.R. McNeill points out that the combination of energy sources, machines, and ways of organizing production came together to form “clusters” that determined the course of industrialization and, by extension, shaped economic and social developments. A later cluster did not immediately replace its predecessor; rather, different regimes overlapped, though often they were not integrated. With each new cluster, however, the speed of production increased, leading to differential rates of production. The first phase of the Industrial Revolution began around 1750 with the shift from human and animal labor to machine-based production. This change was brought about by the use of water power and later steam engines in the textile mills of Great Britain.

The second phase dates from the 1820s, when there was a shift to fossil fuels—primarily coal. By the middle of the nineteenth century, another cluster emerged from the integration of coal, iron, steel, and railroads. The fossil fuel regime was not, of course, limited to coal. Edwin L. Drake drilled the first commercially successful well in Titusville, Pennsylvania, in 1859 and the big gushers erupted first in the 1870s in Baku on the Caspian Sea and later in Spindletop, Texas (1901). Oil, however, did not replace coal as the main source of fuel in transportation until the 1930s. Coal, of course, is still widely used in manufacturing today because it remains one of the cheapest sources of energy. Though global consumption of coal has leveled off since 2000, its use continues to increase in China. Indeed, China currently uses almost as much coal as the rest of the world and reliable sources predict that by 2017, India will be importing as much coal as China.



The third phase of the Industrial Revolution began in the closing decades of the nineteenth century. The development of technologies for producing and distributing electricity cheaply and efficiently further transformed industrial processes and created the possibility for new systems of communication as well as the unprecedented capability for the production and dissemination of new forms of entertainment, media, and information. The impact of electrification can be seen in four primary areas.

First, the availability of electricity made the assembly line and mass production possible. When Henry Ford adapted technology used in Chicago’s meatpacking houses to produce cars (1913), he set in motion changes whose effects are still being felt. Second, the introduction of the incandescent light bulb (1881) transformed private and public space. As early as the late 1880s, electrical lighting was used in homes, factories, and on streets. Assembly lines and lights inevitably led to the acceleration of urbanization. Third, the invention of the telegraph (ca. 1840) and telephone (1876) enabled the communication and transmission of information across greater distances at faster rates of speed than ever before. Finally, electronic tabulating machines, invented by Herman Hollerith in 1889, made it possible to collect and manage data in new ways. Though his contributions have not been widely acknowledged, Hollerith actually forms a bridge between the Industrial Revolution and the so-called post-industrial information age. The son of German immigrants, Hollerith graduated from Columbia University’s School of Mines and went on to found the Tabulating Machine Company (1896). He created the first automatic card-feed mechanism and key-punch system with which an operator using a keyboard could process as many as three hundred cards an hour. Under the direction of Thomas J. Watson, Hollerith’s company merged with three others in 1911 to form the Computing-Tabulating-Recording Company. In 1924, the company was renamed International Business Machines Corporation (IBM).

There is much to be learned from such periodizations, but they have serious limitations. The developments I have identified overlap and interact in ways that subvert any simple linear narrative. Instead of thinking merely in terms of resources, products, and periods, it is also important to think in terms of networks and flows. The foundation for today’s wired world was laid more than two centuries ago. Beginning in the early nineteenth century, local communities, then states and nations, and finally the entire globe became increasingly connected. Though varying from time to time and place to place, there were two primary forms of networks: those that directed material flows (fuels, commodities, products, people), and those that channeled immaterial flows (communications, information, data, images, and currencies). From the earliest stages of development, these networks were inextricably interconnected. There would have been no telegraph network without railroads and no railroad system without the telegraph network, and neither could have existed without coal and iron. Networks, in other words, are never separate but form networks of networks in which material and immaterial flows circulate. As these networks continued to expand, and became more and more complex, there was a steady increase in the importance of immaterial flows, even for material processes. The combination of expanding connectivity and the growing importance of information technologies led to the acceleration of both material and immaterial flows. This emerging network of networks created positive feedback loops in which the rate of acceleration increased.

While developments in transportation, communications, information, and management were all important, industrialization as we know it is inseparable from the transportation revolution that trains created. In his foreword to Wolfgang Schivelbusch’s informative study “The Railway Journey: The Industrialization of Time and Space in the 19th Century,” Alan Trachtenberg writes, “Nothing else in the nineteenth century seemed as vivid and dramatic a sign of modernity as the railroad. Scientists and statesmen joined capitalists in promoting the locomotive as the engine of ‘progress,’ a promise of imminent Utopia.”

In England, railway technology developed as an extension of coal mining. The shift from human and natural sources of energy to fossil fuels created a growing demand for coal. While steam engines had been used since the second half of the eighteenth century in British mines to run fans and pumps like those my great-grandfather had operated in the Pennsylvania coalfields, it was not until 1801, when Oliver Evans invented a high-pressure, mobile steam engine, that locomotives were produced. By the beginning of the nineteenth century, the coal mined in the area around Newcastle was being transported throughout England on rail lines. It did not take long for this new rapid transit system to develop—by the 1820s, railroads had expanded to carry passengers, and half a century later rail networks spanned all of Europe.

What most impressed people about this new transportation network was its speed. The average speed of early railways in England was twenty to thirty miles per hour, which was approximately three times faster than stagecoaches. The increase in speed transformed the experience of time and space. Countless writers from this era use the same words to describe train travel as Karl Marx had used to describe emerging global financial markets. Trains, like capital, “annihilate space with time.”

Traveling on the recently opened Paris-Rouen-Orléans railway line in 1843, the German poet, journalist, and literary critic Heinrich Heine wrote: “What changes must now occur, in our way of looking at things, in our notions! Even the elementary concepts of time and space have begun to vacillate. Space is killed by the railways, and we are left with time alone. . . . Now you can travel to Orléans in four and a half hours, and it takes no longer to get to Rouen. Just imagine what will happen when the lines to Belgium and Germany are completed and connected up with their railways! I feel as if the mountains and forests of all countries were advancing on Paris. Even now, I can smell the German linden trees; the North Sea’s breakers are rolling against my door.” This new experience of space and time that speed brought about had profound psychological effects that I will consider later.

Throughout the nineteenth century, the United States lagged behind Great Britain in terms of industrial capacity: in 1869, England was the source of 20 percent of the world’s industrial production, while the United States contributed just 7 percent. By the start of World War I, however, America’s industrial capacity surpassed that of England: that is, by 1913, the scales had tipped—32 percent came from the United States and only 14 percent from England. While England had a long history before the Industrial Revolution, the history of the United States effectively begins with the Industrial Revolution. There are other important differences as well. Whereas in Great Britain the transportation revolution grew out of the industrialization of manufacturing primarily, but not exclusively, in textile factories, in the United States mechanization began in agriculture and spread to transportation before it transformed manufacturing. In other words, in Great Britain, the Industrial Revolution in manufacturing came first and the transportation revolution second, while in the United States, this order was reversed.

When the Industrial Revolution began in the United States, most of the country beyond the Eastern Seaboard was largely undeveloped. Settling this uncharted territory required the development of an extensive transportation network. Throughout the early decades of the nineteenth century, the transportation system consisted of a network of rudimentary roads connecting towns and villages with the countryside. New England, Boston, New York, Philadelphia, Baltimore, and Washington were joined by highways suitable for stagecoach travel. Inland travel was largely confined to rivers and waterways. The completion of the Erie Canal (1817–25) marked the first stage in the development of an extensive network linking rivers, lakes, canals, and waterways along which produce and people flowed. Like so much else in America, the railroad system began in Boston. By 1840, only 18,181 miles of track had been laid. During the following decade, however, there was an explosive expansion of the nation’s rail system financed by securities and bonds traded on stock markets in America and London. By the 1860s, the railroad network east of the Mississippi River was using routes roughly similar to those employed today.

Where some saw loss, others saw gain. In 1844, inveterate New Englander Ralph Waldo Emerson associated the textile loom with the railroad when he reflected, “Not only is distance annihilated, but when, as now, the locomotive and the steamboat, like enormous shuttles, shoot every day across the thousand various threads of national descent and employment, and bind them fast in one web, an hourly assimilation goes forward, and there is no danger that local peculiarities and hostilities should be preserved.” Gazing at tracks vanishing in the distance, Emerson saw a new world opening that, he believed, would overcome the parochialisms of the past. For many people in the nineteenth century, this new world promising endless resources and endless opportunity was the American West. A transcontinental railroad had been proposed as early as 1820 but was not completed until 1869.

On May 10, 1869, Leland Stanford, who had served as governor of California and would, in 1891, open Stanford University, drove the final spike in the railroad that joined east and west. Nothing would ever be the same again. This event was not merely local, but also, as Emerson had surmised, global. Like the California gold and Nevada silver spike that Leland had driven to join the rails, the material transportation network and immaterial communication network intersected at that moment to create what Rebecca Solnit correctly identifies as “the first live national media event.” The spike “had been wired to connect to the telegraph lines that ran east and west along the railroad tracks. The instant Stanford struck the spike, a signal would go around the nation. . . . The signal set off cannons in San Francisco and New York. In the nation’s capital the telegraph signal caused a ball to drop, one of the balls that visibly signaled the exact time in observatories in many places then (of which the ball dropped in New York’s Times Square at the stroke of the New Year is a last relic). The joining of the rails would be heard in every city equipped with fire-alarm telegraphs, in Philadelphia, Omaha, Buffalo, Chicago, and Sacramento. Celebrations would be held all over the nation.” This carefully orchestrated spectacle, which was made possible by the convergence of multiple national networks, was worthy of the future Hollywood and the technological wizards of Silicon Valley whose relentless innovation Stanford’s university would later nourish. What most impressed people at the time was the speed of global communication, which now is taken for granted.

Flickering Images—Changing Minds

Industrialization not only changes systems of production and distribution of commodities and products, but also imposes new disciplinary practices that transform bodies and change minds. During the early years of train travel, bodily acceleration had an enormous psychological effect that some people found disorienting and others found exhilarating. The mechanization of movement created what Ann Friedberg describes as the “mobile gaze,” which transforms one’s surroundings and alters both the content and, more important, the structure of perception. This mobile gaze takes two forms: the person can move and the surroundings remain immobile (train, bicycle, automobile, airplane, elevator), or the person can remain immobile and the surroundings move (panorama, kinetoscope, film).

When considering the impact of trains on the mobilization of the gaze, it is important to note that different designs for railway passenger cars had different perceptual and psychological effects. Early European passenger cars were modeled on stagecoaches in which individuals had seats in separate compartments; early American passenger cars, by contrast, were modeled on steamboats in which people shared a common space and were free to move around. The European design tended to reinforce social and economic hierarchies that the American design tried to break down. Eventually, American railroads adopted the European model of fixed individual seating but had separate rows facing in the same direction rather than different compartments. As we will see, the resulting compartmentalization of perception anticipates the cellularization of attention that accompanies today’s distributed high-speed digital networks.

During the early years, there were numerous accounts of the experience of railway travel by ordinary people, distinguished writers, and even physicians, in which certain themes recur. The most common complaint is the sense of disorientation brought about by the experience of unprecedented speed. There are frequent reports of the dispersion and fragmentation of attention that are remarkably similar to contemporary personal and clinical descriptions of attention-deficit hyperactivity disorder (ADHD). With the landscape incessantly rushing by faster than it could be apprehended, people suffered overstimulation, which created a sense of psychological exhaustion and physical distress. Some physicians went so far as to maintain that the experience of speed caused “neurasthenia, neuralgia, nervous dyspepsia, early tooth decay, and even premature baldness.”

In 1892, Sir James Crichton-Browne attributed the significant increase in the mortality rate between 1859 and 1888 to “the tension, excitement, and incessant mobility of modern life.” Commenting on these statistics, Max Nordau might well be describing the harried pace of life today. “Every line we read or write, every human face we see, every conversation we carry on, every scene we perceive through the window of the flying express, sets in activity our sensory nerves and our brain centers. Even the little shocks of railway travelling, not perceived by consciousness, the perpetual noises and the various sights in the streets of a large town, our suspense pending the sequel of progressing events, the constant expectation of the newspaper, of the postman, of visitors, cost our brains wear and tear.” During the years around the turn of the last century, a sense of what Stephen Kern aptly describes as “cultural hypochondria” pervaded society. Like today’s parents concerned about the psychological and physical effects of their kids playing video games, nineteenth-century physicians worried about the effect of people sitting in railway cars for hours watching the world rush by in a stream of images that seemed to be detached from real people and actual things.

In addition to the experience of disorientation, dispersion, fragmentation, and fatigue, rapid train travel created a sense of anxiety. People feared that with the increase in speed, machinery would spin out of control, resulting in serious accidents. An 1829 description of a train ride expresses the anxiety that speed created. “It is really flying, and it is impossible to divest yourself of the notion of instant death to all upon the least accident happening.” A decade and a half later, an anonymous German explained that the reason for such anxiety is the always “close possibility of an accident, and the inability to exercise any influence on the running of the cars.” When several serious accidents actually occurred, anxiety spread like a virus. Anxiety, however, is always a strange experience—it not only repels, it also attracts; danger and the anxiety it brings are always part of speed’s draw.

Perhaps this was a reason that not everyone found trains so distressing. For some people, the experience of speed was “dreamlike” and bordered on ecstasy. In 1843, Emerson wrote in his Journals, “Dreamlike travelling on the railroad. The towns which I pass between Philadelphia and New York make no distinct impression. They are like pictures on a wall.” The movement of the train creates a loss of focus that blurs the mobile gaze. A few years earlier, Victor Hugo’s description of train travel sounds like an acid trip as much as a train trip. In either case, the issue is speed. “The flowers by the side of the road are no longer flowers but flecks, or rather streaks, of red or white; there are no longer any points, everything becomes a streak; grain fields are great shocks of yellow hair; fields of alfalfa, long green tresses; the towns, the steeples, and the trees perform a crazy mingling dance on the horizon; from time to time, a shadow, a shape, a specter appears and disappears with lightning speed behind the window; it’s a railway guard.” The flickering images fleeting past train windows are like a film running too fast to comprehend.

Transportation was not the only thing accelerating in the nineteenth century—the pace of life itself was speeding up as never before. Listening to the whistle of the train headed to Boston in his cabin beside Walden Pond, Thoreau mused, “The startings and arrivals of the cars are now the epochs in the village day. They go and come with such regularity and precision, and their whistle can be heard so far, that the farmers set their clocks by them, and thus one well conducted institution regulates a whole country. Have not men improved somewhat in punctuality since the railroad was invented? Do they not talk and think faster in the depot than they did in the stage office? There is something electrifying in the atmosphere of the former place. I have been astonished by some of the miracles it has wrought.” And yet Thoreau, more than others, knew that these changes also had a dark side.

The transition from agricultural to industrial capitalism brought with it a massive migration from the country, where life was slow and governed by natural rhythms, to the city, where life was fast and governed by mechanical, standardized time. The convergence of industrialization, transportation, and electrification made urbanization inevitable. The faster that cities expanded, the more some writers and poets idealized rustic life in the country. Nowhere is such idealization more evident than in the writings of British romantics. The rapid swirl of people, machines, and commodities created a sense of vertigo as disorienting as train travel. Wordsworth writes in The Prelude,

Oh, blank confusion! True epitome
Of what the mighty City is herself
To thousands upon thousands of her sons,
Living among the same perpetual whirl
Of trivial objects, melted and reduced
To one identity, by differences
That have no law, no meaning, no end—

By 1850, fifteen cities in the United States had a population exceeding 50,000. New York was the largest (1,080,330), followed by Philadelphia (565,529), Baltimore (212,418), and Boston (177,840). Increasing domestic trade that resulted from the railroad and growing foreign trade that accompanied improved ocean travel contributed significantly to this growth. While commerce was prevalent in early cities, manufacturing expanded rapidly during the latter half of the eighteenth century. The most important factor contributing to nineteenth-century urbanization was the rapid development of the money economy. Once again, it is a matter of circulating flows, not merely of human bodies but of mobile commodities. Money and cities formed a positive feedback loop—as the money supply grew, cities expanded, and as cities expanded, the money supply grew.

The fast pace of urban life was as disorienting for many people as the speed of the train. In his seminal essay “The Metropolis and Mental Life,” Georg Simmel observes, “The psychological foundation upon which the metropolitan individuality is erected, is the intensification of emotional life due to the swift and continuous shift of external and internal stimuli. Man is a creature whose existence is dependent on differences, i.e., his mind is stimulated by the difference between present impressions and those which have preceded. . . . To the extent that the metropolis creates these psychological conditions—with every crossing of the street, with the tempo and multiplicity of economic, occupational and social life—it creates the sensory foundations of mental life, and in the degree of awareness necessitated by our organization as creatures dependent on differences, a deep contrast with the slower, more habitual, more smooth flowing rhythm of the sensory-mental phase of small town and rural existence.” The expansion of the money economy created a fundamental contradiction at the heart of metropolitan life. On the one hand, cities brought together different people from all backgrounds and walks of life, and on the other hand, emerging industrial capitalism leveled these differences by disciplining bodies and programming minds. “Money,” Simmel continues, “is concerned only with what is common to all, i.e., with the exchange value which reduces all quality and individuality to a purely quantitative level.” The migration from country to city that came with the transition from agricultural to industrial capitalism involved a shift from homogeneous communities to heterogeneous assemblages of different people, qualitative to quantitative methods of assessment and evaluation, as well as concrete to abstract networks of exchange of goods and services, and a slow to fast pace of life. I will consider further aspects of these disciplinary practices in Chapter 3; for now, it is important to understand the implications of the mechanization or industrialization of perception.

I have already noted similarities between the experience of looking through a window on a speeding train and the experience of watching a film that is running too fast. During the latter half of the nineteenth century a remarkable series of inventions transformed not only what people experienced in the world but how they experienced it: photography (Louis-Jacques-Mandé Daguerre, ca. 1837), the telegraph (Samuel F. B. Morse, ca. 1840), the stock ticker (Thomas Alva Edison, 1869), the telephone (Alexander Graham Bell, 1876), the chronophotographic gun (Étienne-Jules Marey, 1882), the kinetoscope (Edison, 1894), the zoopraxiscope (Eadweard Muybridge, 1893), the phantoscope (Charles Jenkins, 1894), and cinematography (Auguste and Louis Lumière, 1895). The way in which human beings perceive and conceive the world is not hardwired in the brain but changes with new technologies of production and reproduction.

Just as the screens of today’s TVs, computers, video games, and mobile devices are restructuring how we process experience, so too did new technologies at the end of the nineteenth century change the world by transforming how people apprehended it. While each innovation had a distinctive effect, there is a discernible overall trajectory to these developments. Industrial technologies of production and reproduction extended processes of dematerialization that eventually led first to consumer capitalism and then to today’s financial capitalism. The crucial variable in these developments is the way in which material and immaterial networks intersect to produce a progressive detachment of images, representations, information, and data from concrete objects and actual events. Marveling at what he regarded as the novelty of photographs, Oliver Wendell Holmes commented, “Form is henceforth divorced from matter. In fact, matter as a visible object is of no great use any longer, except as the mould on which form is shaped. Give us a few negatives of a thing worth seeing, taken from different points of view, and that is all we want of it. Pull it down or burn it up, if you please. . . . Matter in large masses must always be fixed and dear, form is cheap and transportable. We have got the fruit of creation now, and need not trouble ourselves about the core.”

Technologies for the reproduction and transmission of images and information expand the process of abstraction initiated by the money economy to create a play of freely floating signs without anything to ground, certify, or secure them. With new networks made possible by the combination of electrification and the invention of the telegraph, telephone, and stock ticker, communication was liberated from the strictures imposed by physical means of conveyance. In previous energy regimes, messages could be sent no faster than people, horses, carriages, trains, ships, or automobiles could move. Dematerialized words, sounds, information, and eventually images, by contrast, could be transmitted across great distances at high speed. With this dematerialization and acceleration, Marx’s prediction—that “everything solid melts into air”—was realized. But this was just the beginning. It would take more than a century for electrical currents to become virtual currencies whose transmission would approach the speed limit.

Excerpted from “Speed Limits: Where Time Went and Why We Have So Little Left,” by Mark C. Taylor, published October 2014 by Yale University Press. Copyright ©2014 by Mark C. Taylor. Reprinted by permission of Yale University Press.

http://www.salon.com/2014/10/19/the_end_of_the_old_world_how_technology_shrunk_america_forever/?source=newsletter

Friedrich Nietzsche on Why a Fulfilling Life Requires Embracing Rather than Running from Difficulty


A century and a half before our modern fetishism of failure, a seminal philosophical case for its value.

German philosopher, poet, composer, and writer Friedrich Nietzsche (October 15, 1844–August 25, 1900) is among humanity’s most enduring, influential, and oft-cited minds — and he seemed remarkably confident that he would end up that way. Nietzsche famously called the populace of philosophers “cabbage-heads,” lamenting: “It is my fate to have to be the first decent human being. I have a terrible fear that I shall one day be pronounced holy.” In one letter, he considered the prospect of posterity enjoying his work: “It seems to me that to take a book of mine into his hands is one of the rarest distinctions that anyone can confer upon himself. I even assume that he removes his shoes when he does so — not to speak of boots.”

A century and a half later, Nietzsche’s healthy ego has proven largely right — for a surprising and surprisingly modern reason: the assurance he offers that life’s greatest rewards spring from our brush with adversity. More than a century before our present celebration of “the gift of failure” and our fetishism of failure as a conduit to fearlessness, Nietzsche extolled these values with equal parts pomp and perspicuity.

In one particularly emblematic specimen from his many aphorisms, penned in 1887 and published in the posthumous selection from his notebooks, The Will to Power (public library), Nietzsche writes under the heading “Types of my disciples”:

To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities — I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them, because I wish them the only thing that can prove today whether one is worth anything or not — that one endures.

(Half a century later, Willa Cather echoed this sentiment poignantly in a troubled letter to her brother: “The test of one’s decency is how much of a fight one can put up after one has stopped caring.”)

With his signature blend of wit and wisdom, Alain de Botton — who contemplates such subjects as the psychological functions of art and what literature does for the soul — writes in the altogether wonderful The Consolations of Philosophy (public library):

Alone among the cabbage-heads, Nietzsche had realized that difficulties of every sort were to be welcomed by those seeking fulfillment.

Not only that, but Nietzsche also believed that hardship and joy operated in a kind of osmotic relationship — diminishing one would diminish the other — or, as Anaïs Nin memorably put it, “great art was born of great terrors, great loneliness, great inhibitions, instabilities, and it always balances them.” In The Gay Science (public library), the book in which his famous “God is dead” proclamation first appeared, he wrote:

What if pleasure and displeasure were so tied together that whoever wanted to have as much as possible of one must also have as much as possible of the other — that whoever wanted to learn to “jubilate up to the heavens” would also have to be prepared for “depression unto death”?

You have the choice: either as little displeasure as possible, painlessness in brief … or as much displeasure as possible as the price for the growth of an abundance of subtle pleasures and joys that have rarely been relished yet? If you decide for the former and desire to diminish and lower the level of human pain, you also have to diminish and lower the level of their capacity for joy.

He was convinced that the most notable human lives reflected this osmosis:

Examine the lives of the best and most fruitful people and peoples and ask yourselves whether a tree that is supposed to grow to a proud height can dispense with bad weather and storms; whether misfortune and external resistance, some kinds of hatred, jealousy, stubbornness, mistrust, hardness, avarice, and violence do not belong among the favorable conditions without which any great growth even of virtue is scarcely possible.

De Botton distills Nietzsche’s convictions and their enduring legacy:

The most fulfilling human projects appeared inseparable from a degree of torment, the sources of our greatest joys lying awkwardly close to those of our greatest pains…

Why? Because no one is able to produce a great work of art without experience, nor achieve a worldly position immediately, nor be a great lover at the first attempt; and in the interval between initial failure and subsequent success, in the gap between who we wish one day to be and who we are at present, must come pain, anxiety, envy and humiliation. We suffer because we cannot spontaneously master the ingredients of fulfillment.

Nietzsche was striving to correct the belief that fulfillment must come easily or not at all, a belief ruinous in its effects, for it leads us to withdraw prematurely from challenges that might have been overcome if only we had been prepared for the savagery legitimately demanded by almost everything valuable.

(Or, as F. Scott Fitzgerald put it in his atrociously, delightfully ungrammatical proclamation, “Nothing any good isn’t hard.”)

Nietzsche arrived at these ideas the roundabout way. As a young man, he was heavily influenced by Schopenhauer. At the age of twenty-one, he chanced upon Schopenhauer’s masterwork The World as Will and Representation and later recounted this seminal life turn:

I took it in my hand as something totally unfamiliar and turned the pages. I do not know which demon was whispering to me: ‘Take this book home.’ In any case, it happened, which was contrary to my custom of otherwise never rushing into buying a book. Back at the house I threw myself into the corner of a sofa with my new treasure, and began to let that dynamic, dismal genius work on me. Each line cried out with renunciation, negation, resignation. I was looking into a mirror that reflected the world, life and my own mind with hideous magnificence.

And isn’t that what the greatest books do for us, why we read and write at all? But Nietzsche eventually came to disagree with Schopenhauer’s defeatism and slowly blossomed into his own ideas on the value of difficulty. In an 1876 letter to Cosima Wagner — the second wife of the famed composer Richard Wagner, whom Nietzsche had befriended — he professed, more than a decade after encountering Schopenhauer:

Would you be amazed if I confess something that has gradually come about, but which has more or less suddenly entered my consciousness: a disagreement with Schopenhauer’s teaching? On virtually all general propositions I am not on his side.

This turning point is how Nietzsche arrived at the conviction that hardship is the springboard for happiness and fulfillment. De Botton captures this beautifully:

Because fulfillment is an illusion, the wise must devote themselves to avoiding pain rather than seeking pleasure, living quietly, as Schopenhauer counseled, ‘in a small fireproof room’ — advice that now struck Nietzsche as both timid and untrue, a perverse attempt to dwell, as he was to put it pejoratively several years later, ‘hidden in forests like shy deer.’ Fulfillment was to be reached not by avoiding pain, but by recognizing its role as a natural, inevitable step on the way to reaching anything good.

And this, perhaps, is the reason why nihilism in general, and Nietzsche in particular, has had a recent resurgence in pop culture — the subject of a fantastic recent Radiolab episode. The wise and wonderful Jad Abumrad elegantly captures the allure of such teachings:

All this pop-nihilism around us is not about tearing down power structures or embracing nothingness — it’s just, “Look at me! Look how brave I am!”

Quoting Nietzsche, in other words, is a way for us to signal others that we’re unafraid, that difficulty won’t break us, that adversity will only assure us.

And perhaps there is nothing wrong with that. After all, Viktor Frankl was the opposite of a nihilist, and yet we flock to him for the same reason — to be assured, to be consoled, to feel like we can endure.

The Will to Power remains indispensable and The Consolations of Philosophy is excellent in its totality. Complement them with a lighter serving of Nietzsche — his ten rules for writers, penned in a love letter.

 

http://www.brainpickings.org/2014/10/15/nietzsche-on-difficulty/


Google makes us all dumber

…the neuroscience of search engines

As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions


In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world for every person alive to be given three times as much as was held in Alexandria’s entire collection — and nearly all of it is available to anyone with an internet connection.

This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent.

Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Google’s head of search, Amit Singhal, asked if people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.”

Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear.

I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

The Internet can make us feel omniscient. But it’s the feeling of not knowing which inspires the desire to learn. The psychologist George Loewenstein gave us the simplest and most powerful definition of curiosity, describing it as the response to an “information gap.” When you know just enough to know that you don’t know everything, you experience the itch to know more. Loewenstein pointed out that a person who knows the capitals of three out of 50 American states is likely to think of herself as knowing something (“I know three state capitals”). But a person who has learned the names of 47 state capitals is likely to think of herself as not knowing three state capitals, and thus more likely to make the effort to learn those other three.



That word “effort” is important. It’s hardly surprising that we love the ease and fluency of the modern web: our brains are designed to avoid anything that seems like hard work. The psychologists Susan Fiske and Shelley Taylor coined the term “cognitive miser” to describe the stinginess with which the brain allocates limited attention, and its in-built propensity to seek mental short-cuts. The easier it is for us to acquire information, however, the less likely it is to stick. Difficulty and frustration — the very friction that Google aims to eliminate — ensure that our brain integrates new information more securely. Robert Bjork, of the University of California, uses the phrase “desirable difficulties” to describe the counterintuitive notion that we learn better when the learning is hard. Bjork recommends, for instance, spacing teaching sessions further apart so that students have to make more effort to recall what they learned last time.

A great question should launch a journey of exploration. Instant answers can leave us idling at base camp. When a question is given time to incubate, it can take us to places we hadn’t planned to visit. Left unanswered, it acts like a searchlight ranging across the landscape of different possibilities, the very consideration of which makes our thinking deeper and broader. Searching for an answer in a printed book is inefficient, and takes longer than in its digital counterpart. But while flicking through those pages your eye may alight on information that you didn’t even know you wanted to know.

The gap between question and answer is where creativity thrives and scientific progress is made. When we celebrate our greatest thinkers, we usually focus on their ingenious answers. But the thinkers themselves tend to see it the other way around. “Looking back,” said Charles Darwin, “I think it was more difficult to see what the problems were than to solve them.” The writer Anton Chekhov declared, “The role of the artist is to ask questions, not answer them.” The very definition of a bad work of art is one that insists on telling its audience the answers, and a scientist who believes she has all the answers is not a scientist.

According to the great physicist James Clerk Maxwell, “thoroughly conscious ignorance is the prelude to every real advance in science.” Good questions induce this state of conscious ignorance, focusing our attention on what we don’t know. The neuroscientist Stuart Firestein teaches a course on ignorance at Columbia University, because, he says, “science produces ignorance at a faster rate than it produces knowledge.” Raising a toast to Einstein, George Bernard Shaw remarked, “Science is always wrong. It never solves a problem without creating ten more.”

Humans are born consciously ignorant. Compared to other mammals, we are pushed out into the world prematurely, and stay dependent on elders for much longer. Endowed with so few answers at birth, children are driven to question everything. In 2007, Michelle Chouinard, a psychology professor at the University of California, analyzed recordings of four children interacting with their respective caregivers for two hours at a time, for a total of more than two hundred hours. She found that, on average, the children posed more than a hundred questions every hour.

Very small children use questions to elicit information — “What is this called?” But as they grow older, their questions become more probing. They start looking for explanations and insight, to ask “Why?” and “How?”. Extrapolating from Chouinard’s data, the Harvard professor Paul Harris estimates that between the ages of 3 and 5, children ask 40,000 such questions. The numbers are impressive, but what’s really amazing is the ability to ask such a question at all. Somehow, children instinctively know there is a vast amount they don’t know, and they need to dig beneath the world of appearances.

In a 1984 study by British researchers Barbara Tizard and Martin Hughes, four-year-old girls were recorded talking to their mothers at home. When the researchers analyzed the tapes, they found that some children asked more “How” and “Why” questions than others, and engaged in longer passages of “intellectual search” — a series of linked questions, each following from the other. (In one such conversation, four-year-old Rosy engaged her mother in a long exchange about why the window cleaner was given money.) The more confident questioners weren’t necessarily the children who got more answers from their parents, but the ones who got more questions. Parents who threw questions back to their children — “I don’t know, what do you think?” — raised children who asked more questions of them. Questioning, it turns out, is contagious.

Childish curiosity only gets us so far, however. To ask good questions, it helps if you have built your own library of answers. It’s been proposed that the Internet relieves us of the onerous burden of memorizing information. Why cram our heads with facts, like the date of the French Revolution, when they can be summoned up in a swipe and a couple of clicks? But knowledge doesn’t just fill the brain up; it makes it work better. To see what I mean, try memorizing the following string of fourteen digits in five seconds:

74830582894062

Hard, isn’t it? Virtually impossible. Now try memorizing this string of letters:

lucy in the sky with diamonds

This time, you barely needed a second. The contrast is so striking that it seems like a completely different problem, but fundamentally, it’s the same. The only difference is that one string of symbols triggers a set of associations with knowledge you have stored deep in your memory. Without thinking, you can group the letters into words, the words into a sentence you understand as grammatical — and the sentence is one you recognize as the title of a song by the Beatles. The knowledge you’ve gathered over years has made your brain’s central processing unit more powerful.

This tells us something about the idea that we should outsource our memories to the web: it’s a short-cut to stupidity. The less we know, the worse we are at processing new information, and the slower we are to arrive at pertinent inquiry. You’re unlikely to ask a truly penetrating question about the presidency of Richard Nixon if you have just had to look up who he is. According to researchers who study innovation, the average age at which scientists and inventors make breakthroughs is increasing over time. As knowledge accumulates across generations, it takes longer for individuals to acquire it, and thus longer to be in a position to ask the questions which, in Susan Sontag’s phrase, “destroy the answers”.

My argument isn’t with technology, but the way we use it. It’s not that the Internet is making us stupid or incurious. Only we can do that. It’s that we will only realize the potential of technology and humans working together when each is focused on its strengths — and that means we need to consciously cultivate effortful curiosity. Smart machines are taking over more and more of the tasks assumed to be the preserve of humans. But no machine, however sophisticated, can yet be said to be curious. The technology visionary Kevin Kelly succinctly defines the appropriate division of labor: “Machines are for answers; humans are for questions.”

The practice of asking perceptive, informed, curious questions is a cultural habit we should inculcate at every level of society. In school, students are generally expected to answer questions rather than ask them. But educational researchers have found that students learn better when they’re gently directed towards the lacunae in their knowledge, allowing their questions to bubble up through the gaps. Wikipedia and Google are best treated as starting points rather than destinations, and we should recognize that human interaction will always play a vital role in fueling the quest for knowledge. After all, Google never says, “I don’t know — what do you think?”

The Internet has the potential to be the greatest tool for intellectual exploration ever invented, but only if it is treated as a complement to our talent for inquiry rather than a replacement for it. In a world awash in ready-made answers, the ability to pose difficult, even unanswerable questions is more important than ever.

Picasso was half-right: computers are useless without truly curious humans.

Ian Leslie is the author of “Curious: The Desire To Know and Why Your Future Depends On It.” He writes on psychology, trends and politics for The Economist, The Guardian, Slate and Granta. He lives in London. Follow him on Twitter at @mrianleslie.

http://www.salon.com/2014/10/12/google_makes_us_all_dumber_the_neuroscience_of_search_engines/?source=newsletter

Pen & Ink: An Illustrated Collection of Unusual, Deeply Human Stories Behind People’s Tattoos


Stories that “speak of lives you’ll never live and experiences you know precisely.”

We wear the stories of our lives — sometimes through our clothes, sometimes even more deeply, through the innermost physical membrane that separates self from world. More than mere acts of creative self-mutilation, tattoos have long served a number of unusual purposes, from celebrating science to asserting the power structures of Russia’s prison system to offering a lens on the psychology of regret.

In Pen & Ink: Tattoos and the Stories Behind Them (public library), based on their popular Tumblr of the same title, illustrator and visual storyteller Wendy MacNaughton — she of extraordinary sensitivity to the human experience — and editor Isaac Fitzgerald catalog the wild, wicked, wonderfully human stories behind people’s tattoos.

From a librarian’s Sendak-like depiction of a Norwegian folktale her grandfather used to tell her, to a writer who gets a tattoo for each novel he writes, to a journalist who immortalized the first tenet of the Karen revolution for Burma’s independence, the stories — sometimes poetic, sometimes political, always deeply personal — brim with the uncontainable, layered humanity that is MacNaughton’s true medium.

The people’s titles are as interesting as the stories themselves — amalgamations of the many selves we each contain and spend our lives trying to reconcile, the stuff of Whitman’s multitudes — from a “pedicab operator and journalist” to an “actress / director / BDSM educator” to “cartoonist and bouncer.”

The inimitable Cheryl Strayed — who knows a great deal about the tiny beautiful things of which life is made and whose own inked piece of personal history is among the stories — writes in the introduction:

As long as I live I’ll never tire of people-watching. On city buses and park benches. In small-town cafes and crowded elevators. At concerts and swimming pools. To people-watch is to glimpse the mysterious and the banal, the public face and the private gesture, the strangest other and the most familiar self. It’s to wonder how and why and what and who and hardly ever find out.

This book is the answer to those questions. It’s an intimate collection of portraits and stories behind the images we carry on our flesh in the form of tattoos.

Each of the stories is like being let in on sixty-three secrets by sixty-three strangers who passed you on the street or sat across from you on the train. They’re raw and real and funny and sweet. They speak of lives you’ll never live and experiences you know precisely. Together, they do the work of great literature — gathering a force so true they ultimately tell a story that includes all.

Chris Colin, writer

For writer Chris Colin, the tattoo serves as a sort of personal cartography of time, as well as a reminder of how transient our selves are:

I got this tattoo because I suspected one day I would think it would be stupid. I wanted to mark time, or mark the me that thought it was a good idea. Seventeen years later, I hardly remember it’s there. But when I do, it reminds me that whatever I think now I probably won’t think later.

Yuri Allison, student

For student Yuri Allison, it’s a symbolic reminder of her own inability to remember, a meta-monument to memory, that vital yet enormously flawed human faculty:

I have an episodic memory disorder. I don’t have any long-term memory. My childhood is completely blank, as is my schooling until high school. Technically I can’t recall anything that’s beyond three years in the past. I find it very difficult to talk about, simply because I still can’t wrap my head around the idea myself, so when someone talks to me about a memory we are supposed to share I simply smile and say that I don’t remember. Just like my memories, lip tattoos are known to fade with time.

Roxane Gay, writer and professor

For writer, educator, and “bad feminist” Roxane Gay, it is a deliberate editing of what Paul Valéry called “the three-body problem”:

I hardly remember not hating my body. I got most of my seven arm tattoos when I was nineteen. I wanted to be able to look at my body and see something I didn’t loathe, that was part of my body by my choosing entirely. Really, that’s all I ever wanted.

Morgan English, research director

For research director Morgan English, the tattoo is a depiction of “a series of childhood moments” strung together to capture her grandmother’s singular spirit in an abstract way:

My grandma died in a freak accident in May of last year. She was healthy as an ox — traveling the world with her boyfriend well into her 80s — then she broke her foot, which created a blood clot that traveled to her brain. Three days later, she was gone.

The respect and admiration I have for her is difficult to articulate. Here was a woman who endured two depressions (post-WWI Weimar Germany, from which she escaped to the U.S. in 1929, just before our stock market crashed) followed by a series of traumatic events (incestuous rape, a violent husband, the suicide of her only son). You’d think these things would break a person, or at least harden them, but she only grew more focused. She once told me, “Fix your eyes on the solution, it’s the only way things get solved! Just keep moving and you’ll become the woman you’ve always wanted to be.”

Thao Nguyen, musician

The hardships, joys, and complexities of family are a running theme. Thao Nguyen, one of my favorite musicians, writes:

I moved across the country from my family, not to be far away, but with no concern for being close.

I was a taciturn family friend. Not a sister. Not a daughter. But no matter the distance, a part of me was always certain I would come back to be an aunt.

One week after my nephew, Sullivan, was born, I had his name on my wrist. There’s plenty of space for any of his siblings who might follow.

It’s been almost two years now and I go home to visit when I can, not just to pass through. I listen, I ask questions, I commit my family to memory, how they lighten up, how they grimace. I hate the time I wasted, and I fear the rate of everyone’s disappearance. Now when I leave, the distance between us is not nearly as expansive. Often it is no more than my eyes to my arm. Should I forget that I belong to people, I have Sullivan to remind me.

Caroline Paul, writer

Writer Caroline Paul — incidentally, MacNaughton’s partner and co-author of the excellent Lost Cat, one of the best books of 2013 — inks a kinship of ideals:

My brother had a secret life for twenty years as a member of the Animal Liberation Front. He was finally caught and sentenced to four years for burning down a horse slaughterhouse. I got this tattoo for him, while he was in prison. It’s my only tattoo.

It says “My heroes rescue animals.”

Mac McClelland, journalist

Journalist Mac McClelland, author of For Us Surrender Is Out of the Question: A Story from Burma’s Never-Ending War, immortalizes the dermis-deep commitment to a different kind of rights cause:

The first tenet of the Karen revolution for independence from the Burmese junta is “For us to surrender is out of the question.” Little kids wear T-shirts emblazoned with it; adults bring it up, drunk and patriotic at parties. After I came home from living with Karen refugees on the Thai/Burma border in 2006, and before I wrote a book of the same title, I got the first tenet and the fourth — “We must decide our own political destiny” — tattooed on each side of my rib cage so I wouldn’t ever forget what some people were fighting for.

Mona Eltahawy, writer and public speaker

An undercurrent of political and humanitarian commitment runs throughout the book. Writer and public speaker Mona Eltahawy shares the harrowing story of her inked indignation:

I lost something the night the Egyptian riot police beat me and sexually assaulted me. I was detained for six hours at the Interior Ministry and another six by military intelligence, where I was interrogated while I was blindfolded. During my time at the Interior Ministry I’d been able to surreptitiously use an activist’s smart phone to tweet “Beaten, arrested, Interior Ministry.” About a minute later the phone’s battery died. I won’t allow myself to imagine what could have happened if I hadn’t been able to send out that tweet. After I was finally released, I found out that within fifteen minutes of the tweet #freemona was trending globally, Al Jazeera and The Guardian reported my detention, and the State Department tweeted me back to tell me they were on the case. I knew I was lucky. If it wasn’t for my name, my fame, my tweet, my double citizenship, and so many other privileges, I might be dead.

Sekhmet. The goddess of retribution and sex. The head of a lioness. Tits and hips. The key of life in one hand, the staff of power in the other. That paradoxical — or perhaps they’re two sides of one coin — mix of pain and pleasure. Retribution and sex. I’d never wanted a tattoo before, but as sadness washed away and my anger and the Vicodin wore off, it became important to both celebrate my survival and make a mark on my body of my own choosing.

Michelle Crouch, public radio intern

But what makes the book so immeasurably wonderful is its perfectly balanced dance across the spectrum of human experience, where the dark and the luminous are given equal share. Public radio intern Michelle Crouch shares one of the sweetest stories, inspired by artist Steven Powers’s graffiti love letters to the city:

I used to ride the Market-Frankford line [in Philadelphia] all the way west to get to work. After 46th Street the train runs on an elevated track and as I rode to this job I hated, colorful murals began popping up at eye level. They said things like “YOUR EVERAFTER IS ALL I’M AFTER” and “HOLD TIGHT” and “WHAT’S MINE IS YOURS.” They cheered me up. Once, on my day off, I walked from 46th to 63rd Street on a sort of pilgrimage and met the artist who greeted me from a crane as he painted the letters “W-A-N-T” on a brick wall. When I heard he was designing a series of tattoos based on the love letter murals, I decided to get one. A guy I’d just started dating accompanied me to the tattoo shop. I picked out “WHAT’S MINE IS YOURS.” The words remind me to be generous. I try to live them every day.

Now I have a job I like and I’m married to that boy I had just started dating. Marriage strikes me as being a lot like the tattoo — another way of making generosity permanent.

Pen & Ink is absolutely delightful from cover to cover. Supplement it with the project’s ongoing online incarnation, then treat yourself to MacNaughton’s spectacular Meanwhile and her Brain Pickings artist series contributions.

Images courtesy of Wendy MacNaughton / Bloomsbury

http://www.brainpickings.org/2014/10/07/pen-and-ink-tattoos-wendy-macnaughton/

When death is a life-affirming choice

Brittany Maynard’s bold final campaign for death with dignity

 

Brittany Maynard (Credit: CNN)

Like every one of us, Brittany Maynard is going to die. But like very few of us, she knows exactly when. In just a short time, the 29-year-old’s life will come to an end.

In a powerful essay for CNN, the volunteer advocate for Compassion and Choices writes this week of how earlier this year, “after months of suffering from debilitating headaches,” she got the nightmare diagnosis of brain cancer. In April, her doctors told her that she likely had only six months or so to live. She’d been married less than a year. She and her husband had been trying to start a family. Now, she and her loved ones are trying to help her die. The former Bay Area resident acknowledges that “There is no treatment that would save my life, and the recommended treatments would have destroyed the time I had left.” And with only five states in the US with death with dignity laws, Maynard and her husband moved to Portland and established residency there so that she could obtain what she refers to simply as “the medication” that will help her in her final act. She plans to celebrate her husband’s birthday on October 26. She plans on making her exit soon after.

Maynard’s choice is a brave and difficult one. And by speaking out about it, she’s part of a growing collection of patients who’ve made the often-taboo conversation around end-of-life decisions more humane and, with luck, compassionate. Earlier this year, Toronto criminal lawyer Edward Hung shared the story of his “terminal incurable” ALS, and his decision to go to Switzerland for the “assisted death” that Canadian law forbids. Last year, writer Jane Lotter went viral with her eloquent self-penned obituary, and its postscript that with the help of “powdered barbiturates, provided by hospice officials… Jane took advantage of Washington state’s compassionate Death With Dignity Act.”



It is a generous thing to use one’s few remaining days to speak out in the hope of raising awareness and helping others. Not many of us want to leave this world at all. But to the extent that we have any say in the matter, wouldn’t most of us choose to do it as peacefully and lovingly as possible? For far too many of us, the final laps around the track are wracked with suffering and unnecessary fighting over the fulfillment of even our own most explicit wishes. For far too many of us, death is framed in terms of “losing a battle,” as if anything but going down gasping and swinging is an admission of defeat. A report on “Dying In America” issued last month by the Institute of Medicine revealed that “Most Americans who indicate their end-of-life wishes say they want to die at home, with a focus on alleviating pain and suffering. That’s not happening.” We need sweeping and uniform change, across every state. We need Maynard’s choice to be available to anyone facing a terminal diagnosis and a desire to be as free of needless pain and intervention as possible. We need that for ourselves and for our loved ones. However much time we’re given, we all deserve a death that honors the life that came before it. We deserve a death that, like the great Geoffrey Holder’s, can be choreographed to our own unique tune. And the chance to do what Maynard says she hopes for, to be able to say to the people we care about, “I love you; come be by my side, and come say goodbye as I pass into whatever’s next.”

Mary Elizabeth Williams is a staff writer for Salon and the author of “Gimme Shelter: My Three Years Searching for the American Dream.” Follow her on Twitter: @embeedub.

http://www.salon.com/2014/10/08/when_death_is_a_life_affirming_choice/?source=newsletter