The end of the Old World: How technology shrunk America forever

The 19th century saw an explosion of changes in America. The way people saw the world would never be the same.

(Credit: AP/Library of Congress)

It has become customary to mark the beginning of the Industrial Revolution in eighteenth-century England. Historians usually identify two or sometimes three phases of the Industrial Revolution, which are associated with different sources of energy and related technologies. In preindustrial Europe, the primary energy sources were human, animal, and natural (wind, water, and fire).

By the middle of the eighteenth century, much of Europe had been deforested to supply wood for domestic and industrial consumption. J.R. McNeill points out that the combination of energy sources, machines, and ways of organizing production came together to form “clusters” that determined the course of industrialization and, by extension, shaped economic and social developments. A later cluster did not immediately replace its predecessor; rather, different regimes overlapped, though often they were not integrated. With each new cluster, however, the speed of production increased, leading to differential rates of production. The first phase of the Industrial Revolution began around 1750 with the shift from human and animal labor to machine-based production. This change was brought about by the use of water power and later steam engines in the textile mills of Great Britain.

The second phase dates from the 1820s, when there was a shift to fossil fuels—primarily coal. By the middle of the nineteenth century, another cluster emerged from the integration of coal, iron, steel, and railroads. The fossil fuel regime was not, of course, limited to coal. Edwin L. Drake drilled the first commercially successful well in Titusville, Pennsylvania, in 1859, and the big gushers erupted first in the 1870s in Baku on the Caspian Sea and later in Spindletop, Texas (1901). Oil, however, did not replace coal as the main source of fuel in transportation until the 1930s. Coal, of course, is still widely used in manufacturing today because it remains one of the cheapest sources of energy. Though global consumption of coal has leveled off since 2000, its use continues to increase in China. Indeed, China currently uses almost as much coal as the rest of the world, and reliable sources predict that by 2017, India will be importing as much coal as China.



The third phase of the Industrial Revolution began in the closing decades of the nineteenth century. The development of technologies for producing and distributing electricity cheaply and efficiently further transformed industrial processes and created the possibility for new systems of communication as well as the unprecedented capability for the production and dissemination of new forms of entertainment, media, and information. The impact of electrification can be seen in four primary areas.

First, the availability of electricity made the assembly line and mass production possible. When Henry Ford adapted technology used in Chicago’s meatpacking houses to produce cars (1913), he set in motion changes whose effects are still being felt. Second, the introduction of the incandescent light bulb (1881) transformed private and public space. As early as the late 1880s, electrical lighting was used in homes, factories, and on streets. Assembly lines and lights inevitably led to the acceleration of urbanization. Third, the invention of the telegraph (ca. 1840) and telephone (1876) enabled the communication and transmission of information across greater distances at faster rates of speed than ever before. Finally, electronic tabulating machines, invented by Herman Hollerith in 1889, made it possible to collect and manage data in new ways. Though his contributions have not been widely acknowledged, Hollerith actually forms a bridge between the Industrial Revolution and the so-called post-industrial information age. The son of German immigrants, Hollerith graduated from Columbia University’s School of Mines and went on to found the Tabulating Machine Company (1896). He created the first automatic card-feed mechanism and key-punch system, with which an operator using a keyboard could process as many as three hundred cards an hour. Hollerith’s company merged with three others in 1911 to form the Computing-Tabulating-Recording Company, which, under the direction of Thomas J. Watson, was renamed International Business Machines Corporation (IBM) in 1924.

There is much to be learned from such periodizations, but they have serious limitations. The developments I have identified overlap and interact in ways that subvert any simple linear narrative. Instead of thinking merely in terms of resources, products, and periods, it is also important to think in terms of networks and flows. The foundation for today’s wired world was laid more than two centuries ago. Beginning in the early nineteenth century, local communities, then states and nations, and finally the entire globe became increasingly connected. Though varying from time to time and place to place, there were two primary forms of networks: those that directed material flows (fuels, commodities, products, people), and those that channeled immaterial flows (communications, information, data, images, and currencies). From the earliest stages of development, these networks were inextricably interconnected. There would have been no telegraph network without railroads and no railroad system without the telegraph network, and neither could have existed without coal and iron. Networks, in other words, are never separate but form networks of networks in which material and immaterial flows circulate. As these networks continued to expand, and became more and more complex, there was a steady increase in the importance of immaterial flows, even for material processes. The combination of expanding connectivity and the growing importance of information technologies led to the acceleration of both material and immaterial flows. This emerging network of networks created positive feedback loops in which the rate of acceleration increased.

While developments in transportation, communications, information, and management were all important, industrialization as we know it is inseparable from the transportation revolution that trains created. In his foreword to Wolfgang Schivelbusch’s informative study “The Railway Journey: The Industrialization of Time and Space in the 19th Century,” Alan Trachtenberg writes, “Nothing else in the nineteenth century seemed as vivid and dramatic a sign of modernity as the railroad. Scientists and statesmen joined capitalists in promoting the locomotive as the engine of ‘progress,’ a promise of imminent Utopia.”

In England, railway technology developed as an extension of coal mining. The shift from human and natural sources of energy to fossil fuels created a growing demand for coal. While steam engines had been used since the second half of the eighteenth century in British mines to run fans and pumps like those my great-grandfather had operated in the Pennsylvania coalfields, it was not until 1801, when Oliver Evans invented a high-pressure, mobile steam engine, that locomotives were produced. By the beginning of the nineteenth century, the coal mined in the area around Newcastle was being transported throughout England on rail lines. It did not take long for this new rapid transit system to develop—by the 1820s, railroads had expanded to carry passengers, and half a century later rail networks spanned all of Europe.

What most impressed people about this new transportation network was its speed. The average speed of early railways in England was twenty to thirty miles per hour, which was approximately three times faster than stagecoaches. The increase in speed transformed the experience of time and space. Countless writers from this era use the same words to describe train travel as Karl Marx had used to describe emerging global financial markets. Trains, like capital, “annihilate space with time.”

Traveling on the recently opened Paris-Rouen-Orléans railway line in 1843, the German poet, journalist, and literary critic Heinrich Heine wrote: “What changes must now occur, in our way of looking at things, in our notions! Even the elementary concepts of time and space have begun to vacillate. Space is killed by the railways, and we are left with time alone. . . . Now you can travel to Orléans in four and a half hours, and it takes no longer to get to Rouen. Just imagine what will happen when the lines to Belgium and Germany are completed and connected up with their railways! I feel as if the mountains and forests of all countries were advancing on Paris. Even now, I can smell the German linden trees; the North Sea’s breakers are rolling against my door.” This new experience of space and time that speed brought about had profound psychological effects that I will consider later.

Throughout the nineteenth century, the United States lagged behind Great Britain in terms of industrial capacity: in 1869, England was the source of 20 percent of the world’s industrial production, while the United States contributed just 7 percent. By the start of World War I, however, America’s industrial capacity surpassed that of England: that is, by 1913, the scales had tipped—32 percent came from the United States and only 14 percent from England. While England had a long history before the Industrial Revolution, the history of the United States effectively begins with the Industrial Revolution. There are other important differences as well. Whereas in Great Britain the transportation revolution grew out of the industrialization of manufacturing primarily, but not exclusively, in textile factories, in the United States mechanization began in agriculture and spread to transportation before it transformed manufacturing. In other words, in Great Britain, the Industrial Revolution in manufacturing came first and the transportation revolution second, while in the United States, this order was reversed.

When the Industrial Revolution began in the United States, most of the country beyond the Eastern Seaboard was largely undeveloped. Settling this uncharted territory required the development of an extensive transportation network. Throughout the early decades of the nineteenth century, the transportation system consisted of a network of rudimentary roads connecting towns and villages with the countryside. New England, Boston, New York, Philadelphia, Baltimore, and Washington were joined by highways suitable for stagecoach travel. Inland travel was largely confined to rivers and waterways. The completion of the Erie Canal (1817–25) marked the first stage in the development of an extensive network linking rivers, lakes, canals, and waterways along which produce and people flowed. Like so much else in America, the railroad system began in Boston. By 1840, only 18,181 miles of track had been laid. During the following decade, however, there was an explosive expansion of the nation’s rail system, financed by securities and bonds traded on stock markets in America and London. By the 1860s, the railroad network east of the Mississippi River was using routes roughly similar to those employed today.

Where some saw loss, others saw gain. In 1844, inveterate New Englander Ralph Waldo Emerson associated the textile loom with the railroad when he reflected, “Not only is distance annihilated, but when, as now, the locomotive and the steamboat, like enormous shuttles, shoot every day across the thousand various threads of national descent and employment, and bind them fast in one web, an hourly assimilation goes forward, and there is no danger that local peculiarities and hostilities should be preserved.” Gazing at tracks vanishing in the distance, Emerson saw a new world opening that, he believed, would overcome the parochialisms of the past. For many people in the nineteenth century, this new world promising endless resources and endless opportunity was the American West. A transcontinental railroad had been proposed as early as 1820 but was not completed until 1869.

On May 10, 1869, Leland Stanford, a former governor of California and, in 1891, founder of Stanford University, drove the final spike in the railroad that joined east and west. Nothing would ever be the same again. This event was not merely local, but also, as Emerson had surmised, global. Like the California gold and Nevada silver spike that Leland had driven to join the rails, the material transportation network and immaterial communication network intersected at that moment to create what Rebecca Solnit correctly identifies as “the first live national media event.” The spike “had been wired to connect to the telegraph lines that ran east and west along the railroad tracks. The instant Stanford struck the spike, a signal would go around the nation. . . . The signal set off cannons in San Francisco and New York. In the nation’s capital the telegraph signal caused a ball to drop, one of the balls that visibly signaled the exact time in observatories in many places then (of which the ball dropped in New York’s Times Square at the stroke of the New Year is a last relic). The joining of the rails would be heard in every city equipped with fire-alarm telegraphs, in Philadelphia, Omaha, Buffalo, Chicago, and Sacramento. Celebrations would be held all over the nation.” This carefully orchestrated spectacle, which was made possible by the convergence of multiple national networks, was worthy of the future Hollywood and the technological wizards of Silicon Valley whose relentless innovation Stanford’s university would later nourish. What most impressed people at the time was the speed of global communication, which now is taken for granted.

Flickering Images—Changing Minds

Industrialization not only changes systems of production and distribution of commodities and products, but also imposes new disciplinary practices that transform bodies and change minds. During the early years of train travel, bodily acceleration had an enormous psychological effect that some people found disorienting and others found exhilarating. The mechanization of movement created what Anne Friedberg describes as the “mobile gaze,” which transforms one’s surroundings and alters both the content and, more important, the structure of perception. This mobile gaze takes two forms: the person can move while the surroundings remain immobile (train, bicycle, automobile, airplane, elevator), or the person can remain immobile while the surroundings move (panorama, kinetoscope, film).

When considering the impact of trains on the mobilization of the gaze, it is important to note that different designs for railway passenger cars had different perceptual and psychological effects. Early European passenger cars were modeled on stagecoaches, in which individuals had seats in separate compartments; early American passenger cars, by contrast, were modeled on steamboats, in which people shared a common space and were free to move around. The European design tended to reinforce social and economic hierarchies that the American design tried to break down. Eventually, American railroads adopted the European model of fixed individual seating but had separate rows facing in the same direction rather than different compartments. As we will see, the resulting compartmentalization of perception anticipates the cellularization of attention that accompanies today’s distributed high-speed digital networks.

During the early years, there were numerous accounts of the experience of railway travel by ordinary people, distinguished writers, and even physicians, in which certain themes recur. The most common complaint is the sense of disorientation brought about by the experience of unprecedented speed. There are frequent reports of the dispersion and fragmentation of attention that are remarkably similar to contemporary personal and clinical descriptions of attention-deficit hyperactivity disorder (ADHD). With the landscape incessantly rushing by faster than it could be apprehended, people suffered overstimulation, which created a sense of psychological exhaustion and physical distress. Some physicians went so far as to maintain that the experience of speed caused “neurasthenia, neuralgia, nervous dyspepsia, early tooth decay, and even premature baldness.”

In 1892, Sir James Crichton-Browne attributed the significant increase in the mortality rate between 1859 and 1888 to “the tension, excitement, and incessant mobility of modern life.” Commenting on these statistics, Max Nordau might well be describing the harried pace of life today: “Every line we read or write, every human face we see, every conversation we carry on, every scene we perceive through the window of the flying express, sets in activity our sensory nerves and our brain centers. Even the little shocks of railway travelling, not perceived by consciousness, the perpetual noises and the various sights in the streets of a large town, our suspense pending the sequel of progressing events, the constant expectation of the newspaper, of the postman, of visitors, cost our brains wear and tear.” During the years around the turn of the last century, a sense of what Stephen Kern aptly describes as “cultural hypochondria” pervaded society. Like today’s parents concerned about the psychological and physical effects of their kids playing video games, nineteenth-century physicians worried about the effect of people sitting in railway cars for hours watching the world rush by in a stream of images that seemed to be detached from real people and actual things.

In addition to the experience of disorientation, dispersion, fragmentation, and fatigue, rapid train travel created a sense of anxiety. People feared that with the increase in speed, machinery would spin out of control, resulting in serious accidents. An 1829 description of a train ride expresses the anxiety that speed created: “It is really flying, and it is impossible to divest yourself of the notion of instant death to all upon the least accident happening.” A decade and a half later, an anonymous German explained that the reason for such anxiety is the always “close possibility of an accident, and the inability to exercise any influence on the running of the cars.” When several serious accidents actually occurred, anxiety spread like a virus. Anxiety, however, is always a strange experience—it not only repels, it also attracts; danger and the anxiety it brings are always part of speed’s draw.

Perhaps this is why not everyone found trains so distressing. For some people, the experience of speed was “dreamlike” and bordered on ecstasy. In 1843, Emerson wrote in his Journals, “Dreamlike travelling on the railroad. The towns which I pass between Philadelphia and New York make no distinct impression. They are like pictures on a wall.” The movement of the train creates a loss of focus that blurs the mobile gaze. A few years earlier, Victor Hugo had described train travel in terms that sound as much like an acid trip as a train trip. In either case, the issue is speed: “The flowers by the side of the road are no longer flowers but flecks, or rather streaks, of red or white; there are no longer any points, everything becomes a streak; grain fields are great shocks of yellow hair; fields of alfalfa, long green tresses; the towns, the steeples, and the trees perform a crazy mingling dance on the horizon; from time to time, a shadow, a shape, a specter appears and disappears with lightning speed behind the window; it’s a railway guard.” The flickering images fleeting past train windows are like a film running too fast to comprehend.

Transportation was not the only thing accelerating in the nineteenth century—the pace of life itself was speeding up as never before. Listening to the whistle of the train headed to Boston in his cabin beside Walden Pond, Thoreau mused, “The startings and arrivals of the cars are now the epochs in the village day. They go and come with such regularity and precision, and their whistle can be heard so far, that the farmers set their clocks by them, and thus one well conducted institution regulates a whole country. Have not men improved somewhat in punctuality since the railroad was invented? Do they not talk and think faster in the depot than they did in the stage office? There is something electrifying in the atmosphere of the former place. I have been astonished by some of the miracles it has wrought.” And yet Thoreau, more than others, knew that these changes also had a dark side.

The transition from agricultural to industrial capitalism brought with it a massive migration from the country, where life was slow and governed by natural rhythms, to the city, where life was fast and governed by mechanical, standardized time. The convergence of industrialization, transportation, and electrification made urbanization inevitable. The faster that cities expanded, the more some writers and poets idealized rustic life in the country. Nowhere is such idealization more evident than in the writings of British romantics. The rapid swirl of people, machines, and commodities created a sense of vertigo as disorienting as train travel. Wordsworth writes in The Prelude,

Oh, blank confusion! True epitome
Of what the mighty City is herself
To thousands upon thousands of her sons,
Living among the same perpetual whirl
Of trivial objects, melted and reduced
To one identity, by differences
That have no law, no meaning, no end—

By 1850, fifteen cities in the United States had a population exceeding 50,000. New York was the largest (1,080,330), followed by Philadelphia (565,529), Baltimore (212,418), and Boston (177,840). Increasing domestic trade that resulted from the railroad and growing foreign trade that accompanied improved ocean travel contributed significantly to this growth. While commerce was prevalent in early cities, manufacturing expanded rapidly during the latter half of the nineteenth century. The most important factor contributing to nineteenth-century urbanization was the rapid development of the money economy. Once again, it is a matter of circulating flows, not merely of human bodies but of mobile commodities. Money and cities formed a positive feedback loop—as the money supply grew, cities expanded, and as cities expanded, the money supply grew.

The fast pace of urban life was as disorienting for many people as the speed of the train. In his seminal essay “The Metropolis and Mental Life,” Georg Simmel observes, “The psychological foundation upon which the metropolitan individuality is erected, is the intensification of emotional life due to the swift and continuous shift of external and internal stimuli. Man is a creature whose existence is dependent on differences, i.e., his mind is stimulated by the difference between present impressions and those which have preceded. . . . To the extent that the metropolis creates these psychological conditions—with every crossing of the street, with the tempo and multiplicity of economic, occupational and social life—it creates the sensory foundations of mental life, and in the degree of awareness necessitated by our organization as creatures dependent on differences, a deep contrast with the slower, more habitual, more smooth flowing rhythm of the sensory-mental phase of small town and rural existence.” The expansion of the money economy created a fundamental contradiction at the heart of metropolitan life. On the one hand, cities brought together different people from all backgrounds and walks of life; on the other hand, emerging industrial capitalism leveled these differences by disciplining bodies and programming minds. “Money,” Simmel continues, “is concerned only with what is common to all, i.e., with the exchange value which reduces all quality and individuality to a purely quantitative level.” The migration from country to city that came with the transition from agricultural to industrial capitalism involved a shift from homogeneous communities to heterogeneous assemblages of different people, from qualitative to quantitative methods of assessment and evaluation, from concrete to abstract networks for the exchange of goods and services, and from a slow to a fast pace of life.
I will consider further aspects of these disciplinary practices in Chapter 3; for now, it is important to understand the implications of the mechanization or industrialization of perception.

I have already noted similarities between the experience of looking through a window on a speeding train and the experience of watching a film that is running too fast. During the latter half of the nineteenth century, a remarkable series of inventions transformed not only what people experienced in the world but how they experienced it: photography (Louis-Jacques-Mandé Daguerre, ca. 1837), the telegraph (Samuel F. B. Morse, ca. 1840), the stock ticker (Thomas Alva Edison, 1869), the telephone (Alexander Graham Bell, 1876), the chronophotographic gun (Étienne-Jules Marey, 1882), the kinetoscope (Edison, 1894), the zoopraxiscope (Eadweard Muybridge, 1893), the phantoscope (Charles Jenkins, 1894), and cinematography (Auguste and Louis Lumière, 1895). The way in which human beings perceive and conceive the world is not hardwired in the brain but changes with new technologies of production and reproduction.

Just as the screens of today’s TVs, computers, video games, and mobile devices are restructuring how we process experience, so too did new technologies at the end of the nineteenth century change the world by transforming how people apprehended it. While each innovation had a distinctive effect, there is a discernible overall trajectory to these developments. Industrial technologies of production and reproduction extended processes of dematerialization that eventually led first to consumer capitalism and then to today’s financial capitalism. The crucial variable in these developments is the way in which material and immaterial networks intersect to produce a progressive detachment of images, representations, information, and data from concrete objects and actual events. Marveling at what he regarded as the novelty of photographs, Oliver Wendell Holmes commented, “Form is henceforth divorced from matter. In fact, matter as a visible object is of no great use any longer, except as the mould on which form is shaped. Give us a few negatives of a thing worth seeing, taken from different points of view, and that is all we want of it. Pull it down or burn it up, if you please. . . . Matter in large masses must always be fixed and dear, form is cheap and transportable. We have got the fruit of creation now, and need not trouble ourselves about the core.”

Technologies for the reproduction and transmission of images and information expand the process of abstraction initiated by the money economy to create a play of freely floating signs without anything to ground, certify, or secure them. With new networks made possible by the combination of electrification and the invention of the telegraph, telephone, and stock ticker, communication was liberated from the strictures imposed by physical means of conveyance. In previous energy regimes, messages could be sent no faster than people, horses, carriages, trains, ships, or automobiles could move. Dematerialized words, sounds, information, and eventually images, by contrast, could be transmitted across great distances at high speed. With this dematerialization and acceleration, Marx’s prediction—that “everything solid melts into air”—was realized. But this was just the beginning. It would take more than a century for electrical currents to become virtual currencies whose transmission would approach the speed limit.

Excerpted from “Speed Limits: Where Time Went and Why We Have So Little Left,” by Mark C. Taylor, published October 2014 by Yale University Press. Copyright ©2014 by Mark C. Taylor. Reprinted by permission of Yale University Press.

http://www.salon.com/2014/10/19/the_end_of_the_old_world_how_technology_shrunk_america_forever/

Friedrich Nietzsche on Why a Fulfilling Life Requires Embracing Rather than Running from Difficulty

A century and a half before our modern fetishism of failure, a seminal philosophical case for its value.

German philosopher, poet, composer, and writer Friedrich Nietzsche (October 15, 1844–August 25, 1900) is among humanity’s most enduring, influential, and oft-cited minds — and he seemed remarkably confident that he would end up that way. Nietzsche famously called the populace of philosophers “cabbage-heads,” lamenting: “It is my fate to have to be the first decent human being. I have a terrible fear that I shall one day be pronounced holy.” In one letter, he considered the prospect of posterity enjoying his work: “It seems to me that to take a book of mine into his hands is one of the rarest distinctions that anyone can confer upon himself. I even assume that he removes his shoes when he does so — not to speak of boots.”

A century and a half later, Nietzsche’s healthy ego has proven largely right — for a surprising and surprisingly modern reason: the assurance he offers that life’s greatest rewards spring from our brush with adversity. More than a century before our present celebration of “the gift of failure” and our fetishism of failure as a conduit to fearlessness, Nietzsche extolled these values with equal parts pomp and perspicuity.

In one particularly emblematic specimen from his many aphorisms, penned in 1887 and published in the posthumous selection from his notebooks, The Will to Power (public library), Nietzsche writes under the heading “Types of my disciples”:

To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities — I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them, because I wish them the only thing that can prove today whether one is worth anything or not — that one endures.

(Half a century later, Willa Cather echoed this sentiment poignantly in a troubled letter to her brother: “The test of one’s decency is how much of a fight one can put up after one has stopped caring.”)

With his signature blend of wit and wisdom, Alain de Botton — who contemplates such subjects as the psychological functions of art and what literature does for the soul — writes in the altogether wonderful The Consolations of Philosophy (public library):

Alone among the cabbage-heads, Nietzsche had realized that difficulties of every sort were to be welcomed by those seeking fulfillment.

Not only that, but Nietzsche also believed that hardship and joy operated in a kind of osmotic relationship — diminishing one would diminish the other — or, as Anaïs Nin memorably put it, “great art was born of great terrors, great loneliness, great inhibitions, instabilities, and it always balances them.” In The Gay Science (public library), his treatise on poetry in which his famous “God is dead” proclamation first appeared, he wrote:

What if pleasure and displeasure were so tied together that whoever wanted to have as much as possible of one must also have as much as possible of the other — that whoever wanted to learn to “jubilate up to the heavens” would also have to be prepared for “depression unto death”?

You have the choice: either as little displeasure as possible, painlessness in brief … or as much displeasure as possible as the price for the growth of an abundance of subtle pleasures and joys that have rarely been relished yet? If you decide for the former and desire to diminish and lower the level of human pain, you also have to diminish and lower the level of their capacity for joy.

He was convinced that the most notable human lives reflected this osmosis:

Examine the lives of the best and most fruitful people and peoples and ask yourselves whether a tree that is supposed to grow to a proud height can dispense with bad weather and storms; whether misfortune and external resistance, some kinds of hatred, jealousy, stubbornness, mistrust, hardness, avarice, and violence do not belong among the favorable conditions without which any great growth even of virtue is scarcely possible.

De Botton distills Nietzsche’s convictions and their enduring legacy:

The most fulfilling human projects appeared inseparable from a degree of torment, the sources of our greatest joys lying awkwardly close to those of our greatest pains…

Why? Because no one is able to produce a great work of art without experience, nor achieve a worldly position immediately, nor be a great lover at the first attempt; and in the interval between initial failure and subsequent success, in the gap between who we wish one day to be and who we are at present, must come pain, anxiety, envy and humiliation. We suffer because we cannot spontaneously master the ingredients of fulfillment.

Nietzsche was striving to correct the belief that fulfillment must come easily or not at all, a belief ruinous in its effects, for it leads us to withdraw prematurely from challenges that might have been overcome if only we had been prepared for the savagery legitimately demanded by almost everything valuable.

(Or, as F. Scott Fitzgerald put it in his atrociously, delightfully ungrammatical proclamation, “Nothing any good isn’t hard.”)

Nietzsche arrived at these ideas the roundabout way. As a young man, he was heavily influenced by Schopenhauer. At the age of twenty-one, he chanced upon Schopenhauer’s masterwork The World as Will and Representation and later recounted this seminal life turn:

I took it in my hand as something totally unfamiliar and turned the pages. I do not know which demon was whispering to me: ‘Take this book home.’ In any case, it happened, which was contrary to my custom of otherwise never rushing into buying a book. Back at the house I threw myself into the corner of a sofa with my new treasure, and began to let that dynamic, dismal genius work on me. Each line cried out with renunciation, negation, resignation. I was looking into a mirror that reflected the world, life and my own mind with hideous magnificence.

And isn’t that what the greatest books do for us, why we read and write at all? But Nietzsche eventually came to disagree with Schopenhauer’s defeatism and slowly blossomed into his own ideas on the value of difficulty. In an 1876 letter to Cosima Wagner — the second wife of the famed composer Richard Wagner, whom Nietzsche had befriended — he professed, more than a decade after encountering Schopenhauer:

Would you be amazed if I confess something that has gradually come about, but which has more or less suddenly entered my consciousness: a disagreement with Schopenhauer’s teaching? On virtually all general propositions I am not on his side.

It was from this turning point that Nietzsche arrived at the conviction that hardship is the springboard for happiness and fulfillment. De Botton captures this beautifully:

Because fulfillment is an illusion, the wise must devote themselves to avoiding pain rather than seeking pleasure, living quietly, as Schopenhauer counseled, ‘in a small fireproof room’ — advice that now struck Nietzsche as both timid and untrue, a perverse attempt to dwell, as he was to put it pejoratively several years later, ‘hidden in forests like shy deer.’ Fulfillment was to be reached not by avoiding pain, but by recognizing its role as a natural, inevitable step on the way to reaching anything good.

And this, perhaps, is the reason why nihilism in general, and Nietzsche in particular, has had a recent resurgence in pop culture — the subject of a fantastic recent Radiolab episode. The wise and wonderful Jad Abumrad elegantly captures the allure of such teachings:

All this pop-nihilism around us is not about tearing down power structures or embracing nothingness — it’s just, “Look at me! Look how brave I am!”

Quoting Nietzsche, in other words, is a way for us to signal to others that we’re unafraid, that difficulty won’t break us, that adversity will only strengthen us.

And perhaps there is nothing wrong with that. After all, Viktor Frankl was the opposite of a nihilist, and yet we flock to him for the same reason — to be assured, to be consoled, to feel like we can endure.

The Will to Power remains indispensable and The Consolations of Philosophy is excellent in its totality. Complement them with a lighter serving of Nietzsche — his ten rules for writers, penned in a love letter.


http://www.brainpickings.org/2014/10/15/nietzsche-on-difficulty/

A “silent majority” of young people without college degrees and decent jobs are on a downwardly-mobile slide.


A Majority of Millennials Don’t Have a College Degree—That’s Going to Cost Everybody


There’s a lot of hoopla in the media about how Millennials are the best-educated generation in history, blah, blah, blah. But according to a Pew survey, that’s a distortion of reality. In fact, two-thirds of Millennials between ages 25 and 32 don’t have a bachelor’s degree. The education gap among this generation is higher than for any other in history in terms of how those with a college degree will fare compared to those without. Reflecting a trend that has been gaining momentum in the rest of America, Millennials are rapidly getting sorted into winners and losers. Most of them are losing. That’s going to cost this generation a lot —and the rest of society, too.

According to Pew, young college graduates are ahead of their less-educated peers on just about every measure of economic well-being and how they are faring in the course of their careers. Their parents and grandparents’ generations did not take as big of a hit by not going to college, but for Millennials, the blow is severe. Without serious intervention, its effects will be permanent.

Young college grads working full-time are earning an eye-popping $17,500 more per year than those with only a high school diploma. To put this in perspective, in 1979 when the first Baby Boomers were the same age that Millennials are today, a high school graduate could earn around three-quarters (77 percent) of what his or her college-educated peer took in. But Millennials with only a high school diploma earn only 62 percent of what the college grads earn.

According to Pew, young people with a college degree are also more likely to have full-time jobs, much more likely to have a job of any kind, and more likely to believe that their job will lead to a fulfilling career. But forty-two percent of those with a high school diploma or less see their work as “just a job to get by.” In stark contrast, only 14 percent of college grads have such a negative assessment of their jobs.

Granted, college is expensive. But nine out of 10 Millennials say it’s worth it — even those who have had to borrow to foot the bill. They seem to have absorbed the fact that in a precarious economy, a college diploma is the bare minimum for security and stability.

Why are those with less education doing so badly? The Great Recession is part of the answer. There has also been a trend in which jobs, when they return after a financial crisis, are worse than those that were lost. After the recession of the 80s, for example, unionized labor never again found jobs as good as the ones they’d had before the downturn. The same thing has happened this time, only even more dramatically. The jobs that are returning are often part-time, underpaid, lacking in benefits and short on opportunities to advance. It’s great to embark on a career as an engineer at Apple, not so great to work in an Apple retail store, where pay is low and the hope for a career is minimal. The Great Recession amplified a trend of McJobs that had been gaining strength for decades, stoked by the decline in unions, deregulation, outsourcing, and poor corporate governance. These forces have tilted the balance of power away from employees to such a degree that many young people now expect exploitation and poor conditions on the job simply as a matter of course, with no experience of how things could be any different.

All this is not to say that having a college degree gives you a free pass: This generation of college-educated adults is doing slightly worse on certain measures, like the percentage without jobs, than Gen Xers, Baby Boomers or members of the silent generation when they were in their mid-20s and early 30s. But today’s young people who don’t go to college are doing much worse than those in similar situations in the generations that came before.

Poverty is one of the biggest threats to Millennials without college degrees. Nearly a quarter (22 percent) of young people ages 25 to 32 without a college degree live in poverty today, whereas only 6 percent of the college-educated fall into this camp. When Baby Boomers were the same age as today’s Millennials, only 7 percent of those with only a high school diploma were living in poverty.

It’s true that more Millennials than past generations have college degrees, and it’s also true that the value of those diplomas has increased. Given those facts, you might think that the Millennial generation should be earning more than earlier generations of young adults. You would be wrong — and that’s because it’s more costly not to have a college education than ever before. So the education have-nots are pulling the average of the whole generation down. The typical high school graduate’s earnings dropped by more than $3,000, from $31,384 in 1965 to $28,000 in 2013.

There are also more Millennials who don’t even have a high school diploma than previous generations: Some have taken to calling Millennials “Generation Dropout.” A 2013 article in the Atlantic Monthly noted that compared to other countries, the newest wave of employees is actually less educated than their parents because of the lower number completing high school. A recent program on NPR called the 25- to 32-year-old cohort without college degrees and decent jobs the “Silent Majority.”

In 1965, young college graduates earned $7,499 more than those with a high school diploma. But the earnings gap by educational attainment has steadily widened since then, and today it has more than doubled to $17,500 among Millennials ages 25 to 32.

All of this is alarming because it means that less-educated workers are going to have a really hard time. Compared to the Silent Generation, those with high school or less are three times more likely to be jobless.

When you look at the length of time the typical job seeker spends looking for work, less educated Millennials are again faring poorly. In 2013 the average unemployed college-educated Millennial had been looking for work for 27 weeks—more than double the time it took an unemployed college-educated 25- to 32-year-old in 1979 to find a job (12 weeks). And again, today’s young high school graduates do worse on this measure compared to the college-educated or their peers in earlier generations. Millennial high school graduates spend, on average, four weeks longer looking for work than college graduates (31 weeks vs. 27 weeks).

These young people are ending up in dire straits — stuck in debt, unable to set up their own households, and having to put off starting families (recent research shows that many women who face economic hard times in their 20s will never end up having kids). It’s not that they don’t want to grow up, it’s that they don’t have access to the things that make independence possible, like a good education, a good job, a strong social safety net, affordable childcare, and so on.

How much is this going to cost America as a nation? It’s too early to say for sure, but Millennial underemployment, which is directly linked to undereducation, is already costing $25 billion a year, largely because of the lost tax revenue. But what about the other costs? The increased rates of alcoholism and substance abuse? The broken relationships? The depression? The long list of physical ailments that go along with the stress of not being able to gain and keep a financial foothold?

Once upon a time, more forward-thinking politicians and politicos recognized that young people who have the bad luck to try to launch into adulthood in the wake of an economic crisis not of their own making need real help. They need jobs programs, training and decent work conditions that could improve not only their individual lives but the health of the whole society and economy. We have the blueprint of how to do this from the New Deal. It’s going to cost everyone if America leaves these young people to suffer this cruel fate.

Lynn Parramore is an AlterNet senior editor. She is cofounder of Recessionwire, founding editor of New Deal 2.0, and author of “Reading the Sphinx: Ancient Egypt in Nineteenth-Century Literary Culture.” She received her Ph.D. in English and cultural theory from NYU. She is the director of AlterNet’s New Economic Dialogue Project. Follow her on Twitter @LynnParramore.

http://www.alternet.org/education/surprise-majority-millennials-dont-have-college-degree-thats-going-cost-everybody?akid=12378.265072.6qEBLL&rd=1&src=newsletter1023736&t=7&paging=off&current_page=1#bookmark

Chomsky: There’s an Overt Corporate Effort to Indoctrinate American Children | Alternet

History teacher Dan Falcone and English teacher Saul Isaacson spoke with Noam Chomsky in his Cambridge office on September 16, 2014, about education and indoctrination, the 1960s, the Powell memorandum, democracy, the creation of ISIS, the media and the way “capitalism” actually works in the United States.


10 Facts About Being Homeless in the USA

As the Crisis Deepens, the Government is Doing Less to Help

by BILL QUIGLEY

Three True Stories

Renee Delisle was one of over 3500 homeless people in Santa Cruz when she found out she was pregnant.  The Santa Cruz Sentinel reported she was turned away from a shelter because they did not have space for her.  While other homeless people slept in cars or under culverts, Renee ended up living in an abandoned elevator shaft until her water broke.

Jerome Murdough, 56, a homeless former Marine, was arrested for trespass in New York because he was found sleeping in a public housing stairwell on a cold night.  The New York Times reported that one week later, Jerome died of hypothermia in a jail cell heated to over 100 degrees.

Paula Corb and her two daughters lost their home and have lived in their minivan for four years.  They did laundry in a church annex, went to the bathroom at gas stations, and did their studies under street lamps, according to America Tonight.


Fact One.  Over half a million people are homeless

On any given night, there are over 600,000 homeless people in the US according to the US Department of Housing and Urban Development (HUD).  Most people are either spending the night in homeless shelters or in some sort of short term transitional housing.  Slightly more than a third are living in cars, under bridges or in some other way living unsheltered.

Fact Two.  One quarter of homeless people are children

HUD reports that on any given night over 138,000 of the homeless in the US are children under the age of 18. Thousands of these homeless children are unaccompanied according to HUD.  Another federal program, No Child Left Behind, defines homeless children more broadly and includes not just those living in shelters or transitional housing but also those who are sharing the housing of other persons due to economic hardship, living in cars, parks, bus or train stations, or awaiting foster care placement.  Under this definition, the National Center for Homeless Education reported in September 2014 that local school districts reported there are over one million homeless children in public schools.

Fact Three.  Tens of thousands of veterans are homeless

Over 57,000 veterans are homeless each night.  Sixty percent of them were in shelters, the rest unsheltered.  Nearly 5000 are female.

Fact Four.  Domestic violence is a leading cause of homelessness in women

More than 90% of homeless women are victims of severe physical or sexual abuse and escaping that abuse is a leading cause of their homelessness.

Fact Five. Many people are homeless because they cannot afford rent

The lack of affordable housing is a primary cause of homelessness according to the National Law Center on Homelessness and Poverty.  HUD has seen its budget slashed by over 50% in recent decades resulting in the loss of 10,000 units of subsidized low income housing each and every year.

Fact Six.  There are fewer places for poor people to rent than before

One eighth of the nation’s supply of low income housing has been permanently lost since 2001.  The US needs at least 7 million more affordable apartments for low income families and as a result millions of families spend more than half their monthly income on rent.

Fact Seven.  In the last few years millions have lost their homes

Over five million homes have been foreclosed on since 2008, one out of every ten homes with a mortgage.  This has caused even more people to search for affordable rental property.

Fact Eight.  The Government does not help as much as you think

There is enough public rental assistance to help about one out of every four extremely low income households.  Those who do not receive help are on multi-year waiting lists.  For example, Charlotte just opened up their applications for public housing assistance for the first time in 14 years and over 10,000 people applied.

Fact Nine.  One in five homeless people suffer from untreated severe mental illness

While about 6% of the general population suffers from severe mental illness, 20 to 25% of the homeless suffer from severe mental illness according to government studies.  Half of this population self-medicate and are at further risk of addiction and poor physical health.  A University of Pennsylvania study tracking nearly 5000 homeless people for two years discovered that investing in comprehensive health support and treatment of physical and mental illnesses is less costly than incarceration, shelter and hospital services for the untreated homeless.

Fact Ten.  Cities are increasingly making homelessness a crime

A 2014 survey of 187 cities by the National Law Center on Homelessness & Poverty found: 24% make it a city-wide crime to beg in public; 33% make it illegal to stand around or loiter anyplace in the city; 18% make it a crime to sleep anywhere in public; 43% make it illegal to sleep in your car; and 53% make it illegal to sit or lie down in particular public places. And the number of cities criminalizing homelessness is steadily increasing.

For more information look to the National Law Center on Homelessness & Poverty, the National Center for Homeless Education and the National Coalition on the Homeless.

Bill Quigley teaches law at Loyola University New Orleans.  You can reach Bill at quigley77@gmail.com


Google makes us all dumber

…the neuroscience of search engines

As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions

Google makes us all dumber: The neuroscience of search engines
(Credit: Ollyy via Shutterstock)

In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world for every person alive to be given three times as much as was held in Alexandria’s entire collection —and nearly all of it is available to anyone with an internet connection.

This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent.

Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Google’s head of search, Amit Singhal, asked if people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.”

Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear.

I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

The Internet can make us feel omniscient. But it’s the feeling of not knowing which inspires the desire to learn. The psychologist George Loewenstein gave us the simplest and most powerful definition of curiosity, describing it as the response to an “information gap.” When you know just enough to know that you don’t know everything, you experience the itch to know more. Loewenstein pointed out that a person who knows the capitals of three out of 50 American states is likely to think of herself as knowing something (“I know three state capitals”). But a person who has learned the names of 47 state capitals is likely to think of herself as not knowing three state capitals, and thus more likely to make the effort to learn those other three.



That word “effort” is important. It’s hardly surprising that we love the ease and fluency of the modern web: our brains are designed to avoid anything that seems like hard work. The psychologists Susan Fiske and Shelley Taylor coined the term “cognitive miser” to describe the stinginess with which the brain allocates limited attention, and its in-built propensity to seek mental short-cuts. The easier it is for us to acquire information, however, the less likely it is to stick. Difficulty and frustration — the very friction that Google aims to eliminate — ensure that our brain integrates new information more securely. Robert Bjork, of the University of California, uses the phrase “desirable difficulties” to describe the counterintuitive notion that we learn better when the learning is hard. Bjork recommends, for instance, spacing teaching sessions further apart so that students have to make more effort to recall what they learned last time.

A great question should launch a journey of exploration. Instant answers can leave us idling at base camp. When a question is given time to incubate, it can take us to places we hadn’t planned to visit. Left unanswered, it acts like a searchlight ranging across the landscape of different possibilities, the very consideration of which makes our thinking deeper and broader. Searching for an answer in a printed book is inefficient, and takes longer than in its digital counterpart. But while flicking through those pages your eye may alight on information that you didn’t even know you wanted to know.

The gap between question and answer is where creativity thrives and scientific progress is made. When we celebrate our greatest thinkers, we usually focus on their ingenious answers. But the thinkers themselves tend to see it the other way around. “Looking back,” said Charles Darwin, “I think it was more difficult to see what the problems were than to solve them.” The writer Anton Chekhov declared, “The role of the artist is to ask questions, not answer them.” The very definition of a bad work of art is one that insists on telling its audience the answers, and a scientist who believes she has all the answers is not a scientist.

According to the great physicist James Clerk Maxwell, “thoroughly conscious ignorance is the prelude to every real advance in science.” Good questions induce this state of conscious ignorance, focusing our attention on what we don’t know. The neuroscientist Stuart Firestein teaches a course on ignorance at Columbia University, because, he says, “science produces ignorance at a faster rate than it produces knowledge.” Raising a toast to Einstein, George Bernard Shaw remarked, “Science is always wrong. It never solves a problem without creating ten more.”

Humans are born consciously ignorant. Compared to other mammals, we are pushed out into the world prematurely, and stay dependent on elders for much longer. Endowed with so few answers at birth, children are driven to question everything. In 2007, Michelle Chouinard, a psychology professor at the University of California, analyzed recordings of four children interacting with their respective caregivers for two hours at a time, for a total of more than two hundred hours. She found that, on average, the children posed more than a hundred questions every hour.

Very small children use questions to elicit information — “What is this called?” But as they grow older, their questions become more probing. They start looking for explanations and insight, to ask “Why?” and “How?”. Extrapolating from Chouinard’s data, the Harvard professor Paul Harris estimates that between the ages of 3 and 5, children ask 40,000 such questions. The numbers are impressive, but what’s really amazing is the ability to ask such a question at all. Somehow, children instinctively know there is a vast amount they don’t know, and they need to dig beneath the world of appearances.

In a 1984 study by British researchers Barbara Tizard and Martin Hughes, four-year-old girls were recorded talking to their mothers at home. When the researchers analyzed the tapes, they found that some children asked more “How” and “Why” questions than others, and engaged in longer passages of “intellectual search” — a series of linked questions, each following from the other. (In one such conversation, four-year-old Rosy engaged her mother in a long exchange about why the window cleaner was given money.) The more confident questioners weren’t necessarily the children who got more answers from their parents, but the ones who got more questions. Parents who threw questions back to their children — “I don’t know, what do you think?” — raised children who asked more questions of them. Questioning, it turns out, is contagious.

Childish curiosity only gets us so far, however. To ask good questions, it helps if you have built your own library of answers. It’s been proposed that the Internet relieves us of the onerous burden of memorizing information. Why cram our heads with facts, like the date of the French revolution, when they can be summoned up in a swipe and a couple of clicks? But knowledge doesn’t just fill the brain up; it makes it work better. To see what I mean, try memorizing the following string of fourteen digits in five seconds:

74830582894062

Hard, isn’t it? Virtually impossible. Now try memorizing this string of letters:

lucy in the sky with diamonds

This time, you barely needed a second. The contrast is so striking that it seems like a completely different problem, but fundamentally, it’s the same. The only difference is that one string of symbols triggers a set of associations with knowledge you have stored deep in your memory. Without thinking, you can group the letters into words, the words into a sentence you understand as grammatical — and the sentence is one you recognize as the title of a song by the Beatles. The knowledge you’ve gathered over years has made your brain’s central processing unit more powerful.

This tells us something about the idea we should outsource our memories to the web: it’s a short-cut to stupidity. The less we know, the worse we are at processing new information, and the slower we are to arrive at pertinent inquiry. You’re unlikely to ask a truly penetrating question about the presidency of Richard Nixon if you have just had to look up who he is. According to researchers who study innovation, the average age at which scientists and inventors make breakthroughs is increasing over time. As knowledge accumulates across generations, it takes longer for individuals to acquire it, and thus longer to be in a position to ask the questions which, in Susan Sontag’s phrase, “destroy the answers”.

My argument isn’t with technology, but the way we use it. It’s not that the Internet is making us stupid or incurious. Only we can do that. It’s that we will only realize the potential of technology and humans working together when each is focused on its strengths — and that means we need to consciously cultivate effortful curiosity. Smart machines are taking over more and more of the tasks assumed to be the preserve of humans. But no machine, however sophisticated, can yet be said to be curious. The technology visionary Kevin Kelly succinctly defines the appropriate division of labor: “Machines are for answers; humans are for questions.”

The practice of asking perceptive, informed, curious questions is a cultural habit we should inculcate at every level of society. In school, students are generally expected to answer questions rather than ask them. But educational researchers have found that students learn better when they’re gently directed towards the lacunae in their knowledge, allowing their questions to bubble up through the gaps. Wikipedia and Google are best treated as starting points rather than destinations, and we should recognize that human interaction will always play a vital role in fueling the quest for knowledge. After all, Google never says, “I don’t know — what do you think?”

The Internet has the potential to be the greatest tool for intellectual exploration ever invented, but only if it is treated as a complement to our talent for inquiry rather than a replacement for it. In a world awash in ready-made answers, the ability to pose difficult, even unanswerable questions is more important than ever.

Picasso was half-right: computers are useless without truly curious humans.

Ian Leslie is the author of “Curious: The Desire To Know and Why Your Future Depends On It.” He writes on psychology, trends and politics for The Economist, The Guardian, Slate and Granta. He lives in London. Follow him on Twitter at @mrianleslie.

http://www.salon.com/2014/10/12/google_makes_us_all_dumber_the_neuroscience_of_search_engines/?source=newsletter

Joan Didion Answers the Proust Questionnaire


“Misery is feeling estranged from people I love. Misery is also not working. The two seem to go together.”

In the 1880s, long before he claimed his status as one of the greatest authors of all time, teenage Marcel Proust (July 10, 1871–November 18, 1922) filled out an English-language questionnaire given to him by his friend Antoinette, the daughter of France’s then-president, as part of her “confession album” — a Victorian version of today’s popular personality tests, designed to reveal the answerer’s tastes, aspirations, and sensibility in a series of simple questions. Proust’s original manuscript, titled “by Marcel Proust himself,” wasn’t discovered until 1924, two years after his death. Decades later, the French television host Bernard Pivot, whose work inspired James Lipton’s Inside the Actor’s Studio, saw in the questionnaire an excellent lubricant for his interviews and began administering it to his guests in the 1970s and 1980s. In 1993, Vanity Fair resurrected the tradition and started publishing various public figures’ answers to the Proust Questionnaire on the last page of each issue.

In 2009, the magazine released Vanity Fair’s Proust Questionnaire: 101 Luminaries Ponder Love, Death, Happiness, and the Meaning of Life (public library) — a compendium of answers by such cultural icons as Jane Goodall, David Bowie, Allen Ginsberg, Hedy Lamarr, Gore Vidal, and Julia Child.

Unsurprisingly, some of the most wonderful answers come from 69-year-old Joan Didion — a woman who has endured more personal tragedy than most and has written about it with great dignity and grace, extracting from her experience wisdom on such subtle and monumental aspects of existence as grief, self-respect, and keeping a notebook.

Portrait of Joan Didion by Robert Risko for Vanity Fair

Didion’s answers are particularly poignant for their timing — she answered The Proust Questionnaire in October of 2003, several weeks before her husband died of a heart attack while her only daughter lay comatose in the ICU; though Didion’s daughter did recover from the coma, acute pancreatitis took her life eighteen months later.

What is your greatest fear?

I have an irrational fear of snakes. When my husband and I moved to a part of Los Angeles County with many rattlesnakes, I tried to desensitize myself by driving every day to a place called Hermosa Reptile Import-Export and forcing myself to watch the anacondas. This seemed to work, but a few years later, when we were living in Malibu and I had a Corvette, a king snake (a “good” snake, not poisonous, by no means anaconda-like) dropped from a garage rafter into my car. My daughter, then four, brought it to show me. I am ashamed to say I ran away. I still think about what would have happened had I driven to the market and noticed my passenger, the snake, on the Pacific Coast Highway.

What is the trait you most deplore in yourself?

I find “speaking one’s mind” pretty overrated, in that it usually turns out to be a way of aggrandizing the speaker at the expense of the helpless listener.

What is your favorite journey?

A long time ago, before they showed movies on airplanes and decided to make you close the blinds, I used to love flying west and watching the country open up, the checkerboarded farms of the Midwest giving way to the vast stretches of nothing. I also loved flying over the Pole from Europe to Los Angeles during the day, when you could see ice floes and islands in the sea change almost imperceptibly to lakes in the land. This shift in perception was very thrilling to me.

On what occasion do you lie?

I probably lie constantly, if the definition of lying includes white lies, social lies, lies to ease a situation or make someone feel better. My mother was incapable of lying. I remember her driving into a blinding storm to vote for an acquaintance in an S.P.C.A. election. “I told Dorothy I would,” she said when I tried to dissuade her. “How will Dorothy know?” I asked. “That’s not the point,” my mother said. I’m sorry to report that this was amazing to me.

What do you dislike most about your appearance?

For a while there I disliked being short, but I got used to it. Which is not to say I wouldn’t have preferred to be five-ten and get sent clothes by designers.

Which words or phrases do you most overuse?

Most people who write find themselves overusing certain words or constructions (if they worked once, they get hardwired), so much so that a real part of the exercise is getting those repetitions out.

When and where were you happiest?

Once, in a novel, Democracy, I had the main character, Inez Victor, consider this very question, which was hard for her. She drinks her coffee, she smokes a cigarette, she thinks it over, she comes to a conclusion: “In retrospect she seemed to have been most happy in borrowed houses, and at lunch. She recalled being extremely happy eating lunch by herself in a hotel room in Chicago, once when snow was drifting on the window ledges. There was a lunch in Paris that she remembered in detail: a late lunch with Harry and the twins at Pré Catelan in the rain.” These lunches and borrowed houses didn’t come from nowhere.

What talent would you most like to have?

I long to be fluent in languages other than English. I am resigned to the fact that this will not happen. A lot of things get in the way, not least a stubborn fear of losing my only real asset since childhood, the ability to put English sentences together.

If you could change one thing about yourself, what would it be?

I’m afraid that “one thing” would just lead to another thing, making this a question only the truly greedy would try to answer.

What is your most treasured possession?

I treasure things my daughter has given me, for example (I think of this because it is always on my desk), a picture book called Baby Animals and Their Mothers.

What do you regard as the lowest depth of misery?

Misery is feeling estranged from people I love. Misery is also not working. The two seem to go together.

Where would you like to live?

I want to live somewhere else every month or so. Right now I would like to be living on Kailua Beach, on the windward side of Oahu. Around November, I’m quite sure I will want to be living in Paris, preferably in the Hotel Bristol. I like hotels a lot. When we were living in houses in Los Angeles I used to make charts showing how we could save money by living in a bungalow in Bel-Air, but my husband never bought it.

What is your favorite occupation?

I like making gumbo. I like gardening. I like writing, at least when it’s going well, maybe because it seems to be exactly as tactile a thing to do as making gumbo or gardening.

What is your most marked characteristic?

If I listened to other people, I would think my most marked characteristic was being thin. What strikes me about myself, however, is not my thinness but a certain remoteness. I tune out a lot.

Who is your favorite hero of fiction?

Axel Heyst in Joseph Conrad’s Victory has always attracted me as a character. Standing out on that dock in, I think (I may be wrong, because I have no memory), Sumatra. His great venture, the Tropical Belt Coal Company, gone to ruin behind him. And then he does something so impossibly brave that he can only be doing it because he has passed entirely beyond concern for himself.


http://www.brainpickings.org/2014/10/02/joan-didion-proust-questionnaire/