Mugshots of female Nazi concentration camp guards

The ordinary faces of evil:
10.22.2014

Frieda Walter: sentenced to three years imprisonment.

Though their actions were monstrous, they are not monsters. There are no horns, no sharp teeth, no demonic eyes, no number of the Beast. They are just ordinary women. Mothers, sisters, grandmothers, aunts, widows, spinsters. Ordinary women, ordinary human beings.

In the photographs they look ashamed, guilty, scared, brazen, stupid, cunning, disappointed, desperate, confused. These women were Nazi guards at the Bergen-Belsen concentration camp during the Second World War, and were all tried and found guilty of carrying out horrendous crimes against their fellow human beings—mothers, fathers, sisters, brothers, daughters, sons. Interesting how “evil” looks just like you and me.

Hilde Liesewitz: sentenced to one year imprisonment.

Gertrude Feist: sentenced to five years imprisonment.

Gertrude Saurer: sentenced to ten years imprisonment.

Anna Hempel: sentenced to ten years imprisonment.

Herta Bothe accompanied a death march of women from central Poland to the Bergen-Belsen concentration camp. She was sentenced to ten years imprisonment but was released early from prison on December 22, 1951.

Hildegard Lohbauer: sentenced to ten years imprisonment.

Ilse Forster: sentenced to ten years imprisonment.

Helene Kopper: sentenced to fifteen years imprisonment.

Herta Ehlert: sentenced to fifteen years imprisonment.

Elizabeth Volkenrath, Head Wardress at Bergen-Belsen: sentenced to death. She was hanged on 13 December 1945.

Juana Bormann: sentenced to death.

Via Vintage Everyday

http://dangerousminds.net/comments/the_ordinary_faces_of_evil_mugshots_of_female_nazi

The Impulse Society

How Our Growing Desperation for Instant Connection Is Ruining Us

Consumer culture does everything in its power to persuade us that adversity has no place in our lives.

The following is an excerpt from Paul Roberts’ new book, The Impulse Society: America in the Age of Instant Gratification (Bloomsbury, 2014). Reprinted here with permission.

The metaphor of the expanding fragile modern self is quite apt. To personalize is, in effect, to reject the world “as is,” and instead to insist on bending it to our preferences, as if mastery and dominance were our only mode. But humans aren’t meant only for mastery. We’re also meant to adapt to something larger. Our large brains are specialized for cooperation and compromise and negotiation—with other individuals, but also with the broader world, which, for most of history, did not cater to our preferences or likes. For all our ancestors’ tremendous skills at modifying and improving their environment, daily survival depended as much on their capacity to conform themselves and their expectations to the world as they found it. Indeed, it was only by enduring adversity and disappointment that we humans gained the strength and knowledge and perspective that are essential to sustainable mastery.

Virtually every traditional culture understood this and regarded adversity as inseparable from, and essential to, the formation of strong, self-sufficient individuals. Yet the modern conception of “character” now leaves little space for discomfort or real adversity. To the contrary, under the Impulse Society, consumer culture does everything in its considerable power to persuade us that adversity and difficulty and even awkwardness have no place in our lives (or belong only in discrete, self-enhancing moments, such as ropes courses or really hard ab workouts). Discomfort, difficulty, anxiety, suffering, depression, rejection, uncertainty, or ambiguity—in the Impulse Society, these aren’t opportunities to mature and toughen or become. Rather, they represent errors and inefficiencies, and thus opportunities to correct—nearly always with more consumption and self-expression.

So rather than having to wait a few days for a package, we have it overnighted. Or we pay for same-day service. Or we pine for the moment when Amazon launches drone delivery and can get us our package in thirty minutes. And as the system gets faster at gratifying our desires, the possibility that we might actually be more satisfied by waiting and enduring a delay never arises. Just as nature abhors a vacuum, the efficient consumer market abhors delay and adversity, and by extension, it cannot abide the strength of character that delay and adversity and inefficiency generally might produce. To the efficient market, “character” and “virtue” are themselves inefficiencies—impediments to the volume-based, share-price-maximizing economy. Once some new increment of self-expressive, self-gratifying, self-promoting capability is made available, the unstated but overriding assumption of contemporary consumer culture is that this capability can and should be put to use. Which means we now allow the efficient market and the treadmills and the relentless cycles of capital and innovation to determine how, and how far, we will take our self-expression and, by extension, our selves—even when doing so leaves us in a weaker state.

Consider the way our social relationships, and the larger processes of community, are changing under the relentless pressure of our new efficiencies. We know how important community is for individual development. It’s in the context of community that we absorb the social rules and prerequisites for interaction and success. It’s here that we come to understand and, ideally, to internalize, the need for limits and self-control, for patience and persistence and long-term commitments; the pressure of community is one way society persuades us to control our myopia and selfishness. (Or as economists Sam Bowles and Herbert Gintis have put it, community is the vehicle through which “society’s ‘oughts’ become its members’ ‘wants.’ ”) But community’s function isn’t simply to say “no.” It’s in the context of our social relationships where we discover our capacities and strengths. It’s here that we gain our sense of worth as individuals, as citizens and as social producers—active participants who don’t merely consume social goods, but contribute something the community needs.

But community doesn’t simply teach us to be productive citizens. People with strong social connections generally have a much better time. We enjoy better physical and mental health, recover faster from sickness or injury, and are less likely to suffer eating or sleeping disorders. We report being happier and rank our quality of life as higher—and do so even when the community that we’re connected to isn’t particularly well off or educated. Indeed, social connectedness is actually more important than affluence: regular social activities such as volunteering, church attendance, entertaining friends, or joining a club provide us with the same boost to happiness as does a doubling of personal income. As Harvard’s Robert Putnam notes, “The single most common finding from a half century’s research on the correlates of life satisfaction, not only in the United States but around the world, is that happiness is best predicted by the breadth and depth of one’s social connections.”

Unfortunately, for all the importance of social connectedness, we haven’t done a terribly good job of preserving it under the Impulse Society. Under the steady pressure of commercial and technological efficiencies, many of the tight social structures of the past have been eliminated or replaced with entirely new social arrangements. True, many of these new arrangements are clearly superior—even in ostensibly free societies, traditional communities left little room for individual growth or experimentation or happiness. Yet our new arrangements, which invariably seek to give individuals substantially more control over how they connect, exact a price. More and more, social connection becomes just another form of consumption, one we expect to tailor to our personal preferences and schedules—almost as if community was no longer a necessity or an obligation, but a matter of personal style, something to engage as it suits our mood or preference. And while such freedom has its obvious attractions, it clearly has downsides. In gaining so much control over the process of social connection, we may be depriving ourselves of some of the robust give-and-take of traditional interaction that is essential to becoming a functional, fulfilled individual.

Consider our vaunted and increasing capacity to communicate and connect digitally. In theory, our smartphones and social media allow us the opportunity to be more social than at any time in history. And yet, because there are few natural limits to this format—we can, in effect, communicate incessantly, posting every conceivable life event, expressing every thought no matter how incompletely formed or inappropriate or mundane—we may be diluting the value of the connection.

Studies suggest, for example, that the efficiency with which we can respond to an online provocation routinely leads to escalations that can destroy offline relationships. “People seem aware that these kinds of crucial conversations should not take place on social media,” notes Joseph Grenny, whose firm, VitalSmarts, surveys online behavior. “Yet there seems to be a compulsion to resolve emotions right now and via the convenience of these channels.”

Even when our online communications are entirely friendly, the ease with which we can reach out often undermines the very connection we seek to create. Sherry Turkle, a sociologist and clinical psychologist who has spent decades researching digital interactions, argues that because it is now possible to be in virtually constant contact with others, we tend to communicate so excessively that even a momentary lapse can leave us feeling isolated or abandoned. Where people in the pre-digital age did not think it alarming to go hours or days or even weeks without hearing from someone, the digital mind can become uncomfortable and anxious without instant feedback. In her book Alone Together, Turkle describes a social world of collapsing time horizons. College students text their parents daily, and even hourly, over the smallest matters—and feel anxious if they can’t get a quick response. Lovers break up over the failure to reply instantly to a text; friendships sour when posts aren’t “liked” fast enough. Parents call 911 if Junior doesn’t respond immediately to a text or a phone call—a degree of panic that was simply unknown before constant digital contact. Here, too, is a world made increasingly insecure by its own capabilities and its own accelerating efficiencies.

This same efficiency-driven insecurity now lurks just below the surface in nearly all digital interactions. Whatever the relationship (romantic, familial, professional), the very nature of our technology inclines us to a constant state of emotional suspense. Thanks to the casual, abbreviated nature of digital communication, we converse in fragments of thoughts and feelings that can be completed only through more interaction—we are always waiting to know how the story ends. The result, Turkle says, is a communication style, and a relationship style, that allow us to “express emotions while they are being formed” and in which “feelings are not fully experienced until they are communicated.” In other words, what was once primarily an interior process—thoughts were formed and feelings experienced before we expressed them—has now become a process that is external and iterative and public. Identity itself comes to depend on iterative interaction—giving rise to what Turkle calls the “collaborative self.” Meanwhile, our skills as a private, self-contained person vanish. “What is not being cultivated here,” Turkle writes, “is the ability to be alone and reflect on one’s emotions in private.” For all the emphasis on independence and individual freedom under the Impulse Society, we may be losing the capacity to truly be on our own.

In a culture obsessed with individual self-interest, such an incapacity is surely one of the greatest ironies of the Impulse Society. Yet in many ways it was inevitable. Herded along by a consumer culture that is both solicitous and manipulative, one that proposes absolute individual liberty while enforcing absolute material dependence—we rely completely on the machine of the marketplace—it is all too easy to emerge with a self-image, and a sense of self, that are both wildly inflated and fundamentally weak and insecure. Unable to fully experience the satisfactions of genuine independence and individuality, we compensate with more personalized self-expression and gratification, which only push us further from the real relationships that might have helped us to a stable, fulfilling existence.

 

http://www.alternet.org/books/impulse-society-how-our-growing-desperation-instant-connection-ruining-us?akid=12390.265072.bjTHr8&rd=1&src=newsletter1024073&t=9&paging=off&current_page=1#bookmark

 

Obama Is a Republican

He’s the heir to Richard Nixon, not Saul Alinsky.

illustration by Michael Hogue

Back in 2008, Boston University professor Andrew Bacevich wrote an article for this magazine making a conservative case for Barack Obama. While much of it was based on disgust with the warmongering and budgetary profligacy of the Republican Party under George W. Bush, which he expected to continue under 2008 Republican nominee Sen. John McCain, Bacevich thought Obama at least represented hope for ending the Iraq War and shrinking the national-security state.

I wrote a piece for the New Republic soon afterward about the Obamacon phenomenon—prominent conservatives and Republicans who were openly supporting Obama. Many saw in him a classic conservative temperament: someone who avoided lofty rhetoric, an ambitious agenda, and a Utopian vision that would conflict with human nature, real-world barriers to radical reform, and the American system of government.

Among the Obamacons were Ken Duberstein, Ronald Reagan’s chief of staff; Charles Fried, Reagan’s solicitor general; Ken Adelman, director of the Arms Control and Disarmament Agency for Reagan; Jeffrey Hart, longtime senior editor of National Review; Colin Powell, Reagan’s national security adviser and secretary of state for George W. Bush; and Scott McClellan, Bush’s press secretary. There were many others as well.

According to exit polls in 2008, Obama ended up with 20 percent of the conservative vote. Even in 2012, after four years of relentless conservative attacks, he still got 17 percent of the conservative vote, with 11 percent of Tea Party supporters saying they cast their ballots for Obama.

They were not wrong. In my opinion, Obama has governed as a moderate conservative—essentially as what used to be called a liberal Republican before all such people disappeared from the GOP. He has been conservative to exactly the same degree that Richard Nixon basically governed as a moderate liberal, something no conservative would deny today. (Ultra-leftist Noam Chomsky recently called Nixon “the last liberal president.”)

Here’s the proof:

Iraq/Afghanistan/ISIS

One of Obama’s first decisions after the election was to keep national-security policy essentially on automatic pilot from the Bush administration. He signaled this by announcing on November 25, 2008, that he planned to keep Robert M. Gates on as secretary of defense. Arguably, Gates had more to do with shaping Republican foreign and defense policy between the two Bush presidencies than any other individual, serving successively as deputy national security adviser in the White House, director of Central Intelligence, and secretary of defense.

Another early indication of Obama’s hawkishness was naming his rival for the Democratic nomination, Sen. Hillary Clinton, as secretary of state. During the campaign, Clinton ran well to his right on foreign policy, so much so that she earned the grudging endorsement of prominent neoconservatives such as Bill Kristol and David Brooks.

Obama, Kristol told the Washington Post in August 2007, “is becoming the antiwar candidate, and Hillary Clinton is becoming the responsible Democrat who could become commander in chief in a post-9/11 world.” Writing in the New York Times on February 5, 2008, Brooks praised Clinton for hanging tough on Iraq “through the dark days of 2005.”

Right-wing columnist Ann Coulter found Clinton more acceptable on national-security policy than even the eventual Republican nominee, Senator McCain. Clinton, Coulter told Fox’s Sean Hannity on January 31, 2008, was “more conservative than he [McCain] is. I think she would be stronger in the war on terrorism.” Coulter even said she would campaign for Clinton over McCain in a general election matchup.

After Obama named Clinton secretary of state, there was “a deep sigh” of relief among Republicans throughout Washington, according to reporting by The Daily Beast’s John Batchelor. He noted that not a single Republican voiced any public criticism of her appointment.

By 2011, Republicans were so enamored with Clinton’s support for their policies that Dick Cheney even suggested publicly that she run against Obama in 2012. The irony is that as secretary of state, Clinton was generally well to Obama’s left, according to Vali Nasr’s book The Dispensable Nation. This may simply reflect her assumption of state’s historical role as the dovish voice in every administration. Or it could mean that Obama is far more hawkish than conservatives have given him credit for.

Although Obama followed through on George W. Bush’s commitment to pull U.S. troops out of Iraq in 2011, in 2014 he announced a new campaign against ISIS, an Islamic militant group based in Syria and Iraq.

Stimulus/Deficit

With the economy collapsing, the first major issue confronting Obama in 2009 was some sort of economic stimulus. Christina Romer, chair of the Council of Economic Advisers, whose academic work at the University of California, Berkeley, frequently focused on the Great Depression, estimated that the stimulus needed to be in the range of $1.8 trillion, according to Noam Scheiber’s book The Escape Artists.

The American Recovery and Reinvestment Act was enacted in February 2009 with a gross cost of $816 billion. Although this legislation was passed without a single Republican vote, it is foolish to assume that the election of McCain would have resulted in savings of $816 billion. There is no doubt that he would have put forward a stimulus plan of roughly the same order of magnitude, but tilted more toward Republican priorities.

A Republican stimulus would undoubtedly have had more tax cuts and less spending, even though every serious study has shown that tax cuts are the least effective method of economic stimulus in a recession. Even so, tax cuts made up 35 percent of the budgetary cost of the stimulus bill—$291 billion—despite an estimate from Obama’s Council of Economic Advisers that tax cuts raised gross domestic product by barely $1 for every $1 of tax cut. By contrast, every $1 of government purchases raised GDP by an estimated $1.55. Obama also extended the Bush tax cuts for two years in 2010.

It’s worth remembering as well that Bush did not exactly bequeath Obama a good fiscal hand. Fiscal year 2009 began on October 1, 2008, and one third of it was baked in the cake the day Obama took the oath of office. On January 7, 2009, the Congressional Budget Office projected significant deficits without considering any Obama initiatives. It estimated a deficit of $1.186 trillion for 2009 with no change in policy. The Office of Management and Budget estimated in November of that year that Bush-era policies, such as Medicare Part D, were responsible for more than half of projected deficits over the next decade.

Republicans give no credit to Obama for the significant deficit reduction that has occurred on his watch—just as they ignore the fact that Bush inherited a projected budget surplus of $5.6 trillion over the following decade, which he turned into an actual deficit of $6.1 trillion, according to a CBO study—but the improvement is real.


Republicans would have us believe that their tight-fisted approach to spending is what brought down the deficit. But in fact, Obama has been very conservative, fiscally, since day one, to the consternation of his own party. According to reporting by the Washington Post and New York Times, Obama actually endorsed much deeper cuts in spending and the deficit than did the Republicans during the 2011 budget negotiations, but Republicans walked away.

Obama’s economic conservatism extends to monetary policy as well. His Federal Reserve appointments have all been moderate to conservative, well within the economic mainstream. He even reappointed Republican Ben Bernanke as chairman in 2009. Many liberals have faulted Obama for not appointing board members willing to be more aggressive in using monetary policy to stimulate the economy and reduce unemployment.

Obama’s other economic appointments, such as Larry Summers at the National Economic Council and Tim Geithner at Treasury, were also moderate to conservative. Summers served on the Council of Economic Advisers staff in Reagan’s White House. Geithner joined the Treasury during the Reagan administration and served throughout the George H.W. Bush administration.

Health Reform

Contrary to rants that Obama’s 2010 health reform, the Patient Protection and Affordable Care Act (ACA), is the most socialistic legislation in American history, the reality is that it is virtually textbook Republican health policy, with a pedigree from the Heritage Foundation and Massachusetts Gov. Mitt Romney, among others.

It’s important to remember that historically the left-Democratic approach to healthcare reform was always based on a fully government-run system such as Medicare or Medicaid. During debate on health reform in 2009, this approach was called “single payer,” with the government being the single payer. One benefit of this approach is cost control: the government could use its monopsony buying power to force down prices just as Walmart does with its suppliers.

Conservatives wanted to avoid too much government control and were adamantly opposed to single-payer. But they recognized that certain problems required more than a pure free-market solution. One problem in particular is covering people with pre-existing conditions, one of the most popular provisions in ACA. The difficulty is that people may wait until they get sick before buying insurance and then expect full coverage for their conditions. Obviously, this free-rider problem would bankrupt the health-insurance system unless there was a fix.

The conservative solution was the individual mandate—forcing people to buy private health insurance, with subsidies for the poor. This approach was first put forward by Heritage Foundation economist Stuart Butler in a 1989 paper, “A Framework for Reform,” published in a Heritage Foundation book, A National Health System for America. In it, Butler said the number one element of a conservative health system was this: “Every resident of the U.S. must, by law, be enrolled in an adequate health care plan to cover major health costs.” He went on to say:

Under this arrangement, all households would be required to protect themselves from major medical costs by purchasing health insurance or enrolling in a prepaid health plan. The degree of financial protection can be debated, but the principle of mandatory family protection is central to a universal health care system in America.

In 1991, prominent conservative health economist Mark V. Pauly also endorsed the individual mandate as central to healthcare reform. In an article in the journal Health Affairs, Pauly said:

All citizens should be required to obtain a basic level of health insurance. Not having health insurance imposes a risk of delaying medical care; it also may impose costs on others, because we as a society provide care to the uninsured. … Permitting individuals to remain uninsured results in inefficient use of medical care, inequity in the incidence of costs of uncompensated care, and tax-related distortions.

In 2004, Senate Majority Leader Bill Frist (R-Tenn.) endorsed an individual mandate in a speech to the National Press Club. “I believe higher-income Americans today do have a societal and personal responsibility to cover in some way themselves and their children,” he said. Even libertarian Ron Bailey, writing in Reason, conceded the necessity of a mandate in a November 2004 article titled, “Mandatory Health Insurance Now!” Said Bailey: “Why shouldn’t we require people who now get health care at the expense of the rest of us pay for their coverage themselves? … Mandatory health insurance would not be unlike the laws that require drivers to purchase auto insurance or pay into state-run risk pools.”

Among those enamored with the emerging conservative health reform based on an individual mandate was Mitt Romney, who was elected governor of Massachusetts in 2002. In 2004, he put forward a state health reform plan to which he later added an individual mandate. As Romney explained in June 2005, “No more ‘free riding,’ if you will, where an individual says: ‘I’m not going to pay, even though I can afford it. I’m not going to get insurance, even though I can afford it. I’m instead going to just show up and make the taxpayers pay for me’.”

The following month, Romney emphasized his point: “We can’t have as a nation 40 million people—or, in my state, half a million—saying, ‘I don’t have insurance, and if I get sick, I want someone else to pay’.”

In 2006, Governor Romney signed the Massachusetts health reform into law, including the individual mandate. Defending his legislation in a Wall Street Journal article, he said:

I proposed that everyone must either purchase a product of their choice or demonstrate that they can pay for their own health care. It’s a personal responsibility principle.

Some of my libertarian friends balk at what looks like an individual mandate. But remember, someone has to pay for the health care that must, by law, be provided: Either the individual pays or the taxpayers pay. A free ride on government is not libertarian.

As late as 2008, Robert Moffitt of the Heritage Foundation was still defending the individual mandate as reasonable, non-ideological, and nonpartisan in an article for the Harvard Health Policy Review.

So what changed just a year later, when Obama put forward a health-reform plan that was almost a carbon copy of those previously endorsed by the Heritage Foundation, Mitt Romney, and other Republicans? The only thing is that it was now supported by a Democratic president that Republicans vowed to fight on every single issue, according to Robert Draper’s book Do Not Ask What Good We Do.

Senior Obama adviser David Axelrod later admitted that Romney’s Massachusetts plan was the “template” for Obama’s plan. “That work inspired our own health plan,” he said in 2011. But no one in the White House said so back in 2009. I once asked a senior Obama aide why. His answer was that once Republicans refused to negotiate on health reform and Obama had to win only with Democratic votes, it would have been counterproductive, politically, to point out the Obama plan’s Republican roots.

The left wing of the House Democratic caucus was dubious enough about Obama’s plan as it was, preferring a single-payer plan. Thus it was necessary for Obama to portray his plan as more liberal than it really was to get the Democratic votes needed for passage, which of course played right into the Republicans’ hands. But the reality is that ACA remains a very modest reform based on Republican and conservative ideas.

Other Rightward Policies 

Below are a few other issues on which Obama has consistently tilted rightward:

Drugs: Although it has become blindingly obvious that throwing people in jail for marijuana use is insane policy and a number of states have moved to decriminalize its use, Obama continued the harsh anti-drug policy of previous administrations, and his Department of Justice continues to treat marijuana as a dangerous drug. As Time put it in 2012: “The Obama Administration is cracking down on medical marijuana dispensaries and growers just as harshly as the Administration of George W. Bush did.”

National-security leaks: At least since Nixon, a hallmark of Republican administrations has been an obsession with leaks of unauthorized information, and pushing the envelope on government snooping. By all accounts, Obama’s penchant for secrecy and withholding information from the press is on a par with the worst Republican offenders. Journalist Dan Froomkin charges that Obama has essentially institutionalized George W. Bush’s policies. Nixon operative Roger Stone thinks Obama has actually gone beyond what his old boss tried to do.

Race: I think almost everyone, including me, thought the election of our first black president would lead to new efforts to improve the dismal economic condition of African-Americans. In fact, Obama has seldom touched on the issue of race, and when he has he has emphasized the conservative themes of responsibility and self-help. Even when Republicans have suppressed minority voting, in a grotesque campaign to fight nonexistent voter fraud, Obama has said and done nothing.

Gay marriage: Simply stating public support for gay marriage would seem to have been a no-brainer for Obama, but it took him two long years to speak out on the subject and only after being pressured to do so.

Corporate profits: Despite Republican harping about Obama being anti-business, corporate profits and the stock market have risen to record levels during his administration. Even those progressives who defend Obama against critics on the left concede that he has bent over backward to protect corporate profits. As Theda Skocpol and Lawrence Jacobs put it: “In practice, [Obama] helped Wall Street avert financial catastrophe and furthered measures to support businesses and cater to mainstream public opinion. …  He has always done so through specific policies that protect and further opportunities for businesses to make profits.”

I think Cornel West nailed it when he recently charged that Obama has never been a real progressive in the first place. “He posed as a progressive and turned out to be counterfeit,” West said. “We ended up with a Wall Street presidency, a drone presidency, a national security presidency.”

I don’t expect any conservatives to recognize the truth of Obama’s fundamental conservatism for at least a couple of decades—perhaps only after a real progressive presidency. In any case, today they are too invested in painting him as the devil incarnate in order to frighten grassroots Republicans into voting to keep Obama from confiscating all their guns, throwing them into FEMA re-education camps, and other nonsense that is believed by many Republicans. But just as they eventually came to appreciate Bill Clinton’s core conservatism, Republicans will someday see that Obama was no less conservative.

Bruce Bartlett is the author of The Benefit and the Burden: Tax Reform—Why We Need It and What It Will Take.

http://www.theamericanconservative.com/articles/obama-is-a-republican/

How technology shrunk America forever

The end of the Old World:

The 19th century saw an explosion of changes in America. The way people saw the world would never be the same.

(Credit: AP/Library of Congress)

It has become customary to mark the beginning of the Industrial Revolution in eighteenth-century England. Historians usually identify two or sometimes three phases of the Industrial Revolution, which are associated with different sources of energy and related technologies. In preindustrial Europe, the primary energy sources were human, animal, and natural (wind, water, and fire).

By the middle of the eighteenth century, much of Europe had been deforested to supply wood for domestic and industrial consumption. J.R. McNeill points out that the combination of energy sources, machines, and ways of organizing production came together to form “clusters” that determined the course of industrialization and, by extension, shaped economic and social developments. A later cluster did not immediately replace its predecessor; rather, different regimes overlapped, though often they were not integrated. With each new cluster, however, the speed of production increased, leading to differential rates of production. The first phase of the Industrial Revolution began around 1750 with the shift from human and animal labor to machine-based production. This change was brought about by the use of water power and later steam engines in the textile mills of Great Britain.

The second phase dates from the 1820s, when there was a shift to fossil fuels—primarily coal. By the middle of the nineteenth century, another cluster emerged from the integration of coal, iron, steel, and railroads. The fossil fuel regime was not, of course, limited to coal. Edwin L. Drake drilled the first commercially successful well in Titusville, Pennsylvania, in 1859, and the big gushers erupted first in the 1870s in Baku on the Caspian Sea and later at Spindletop, Texas (1901). Oil, however, did not replace coal as the main source of fuel in transportation until the 1930s. Coal, of course, is still widely used in manufacturing today because it remains one of the cheapest sources of energy. Though global consumption of coal has leveled off since 2000, its use continues to increase in China. Indeed, China currently uses almost as much coal as the rest of the world, and reliable sources predict that by 2017, India will be importing as much coal as China.



The third phase of the Industrial Revolution began in the closing decades of the nineteenth century. The development of technologies for producing and distributing electricity cheaply and efficiently further transformed industrial processes and created the possibility for new systems of communication as well as the unprecedented capability for the production and dissemination of new forms of entertainment, media, and information. The impact of electrification can be seen in four primary areas.

First, the availability of electricity made the assembly line and mass production possible. When Henry Ford adapted technology used in Chicago’s meatpacking houses to produce cars (1913), he set in motion changes whose effects are still being felt. Second, the introduction of the incandescent light bulb (1881) transformed private and public space. As early as the late 1880s, electrical lighting was used in homes, factories, and on streets. Assembly lines and lights inevitably led to the acceleration of urbanization. Third, the invention of the telegraph (ca. 1840) and telephone (1876) enabled the communication and transmission of information across greater distances at faster rates of speed than ever before. Finally, electronic tabulating machines, invented by Herman Hollerith in 1889, made it possible to collect and manage data in new ways. Though his contributions have not been widely acknowledged, Hollerith actually forms a bridge between the Industrial Revolution and the so-called post-industrial information age. The son of German immigrants, Hollerith graduated from Columbia University’s School of Mines and went on to found the Tabulating Machine Company (1896). He created the first automatic card-feed mechanism and key-punch system with which an operator using a keyboard could process as many as three hundred cards an hour. Under the direction of Thomas J. Watson, Hollerith’s company merged with three others in 1911 to form the Computing-Tabulating-Recording Company. In 1924, the company was renamed International Business Machines Corporation (IBM).

There is much to be learned from such periodizations, but they have serious limitations. The developments I have identified overlap and interact in ways that subvert any simple linear narrative. Instead of thinking merely in terms of resources, products, and periods, it is also important to think in terms of networks and flows. The foundation for today’s wired world was laid more than two centuries ago. Beginning in the early nineteenth century, local communities, then states and nations, and finally the entire globe became increasingly connected. Though varying from time to time and place to place, there were two primary forms of networks: those that directed material flows (fuels, commodities, products, people), and those that channeled immaterial flows (communications, information, data, images, and currencies). From the earliest stages of development, these networks were inextricably interconnected. There would have been no telegraph network without railroads and no railroad system without the telegraph network, and neither could have existed without coal and iron. Networks, in other words, are never separate but form networks of networks in which material and immaterial flows circulate. As these networks continued to expand, and became more and more complex, there was a steady increase in the importance of immaterial flows, even for material processes. The combination of expanding connectivity and the growing importance of information technologies led to the acceleration of both material and immaterial flows. This emerging network of networks created positive feedback loops in which the rate of acceleration increased.

While developments in transportation, communications, information, and management were all important, industrialization as we know it is inseparable from the transportation revolution that trains created. In his foreword to Wolfgang Schivelbusch’s informative study “The Railway Journey: The Industrialization of Time and Space in the 19th Century,” Alan Trachtenberg writes, “Nothing else in the nineteenth century seemed as vivid and dramatic a sign of modernity as the railroad. Scientists and statesmen joined capitalists in promoting the locomotive as the engine of ‘progress,’ a promise of imminent Utopia.”

In England, railway technology developed as an extension of coal mining. The shift from human and natural sources of energy to fossil fuels created a growing demand for coal. While steam engines had been used since the second half of the eighteenth century in British mines to run fans and pumps like those my great-grandfather had operated in the Pennsylvania coalfields, it was not until 1801, when Oliver Evans invented a high-pressure, mobile steam engine, that locomotives were produced. By the beginning of the nineteenth century, the coal mined in the area around Newcastle was being transported throughout England on rail lines. It did not take long for this new rapid transit system to develop—by the 1820s, railroads had expanded to carry passengers, and half a century later rail networks spanned all of Europe.

What most impressed people about this new transportation network was its speed. The average speed of early railways in England was twenty to thirty miles per hour, which was approximately three times faster than stagecoaches. The increase in speed transformed the experience of time and space. Countless writers from this era use the same words to describe train travel as Karl Marx had used to describe emerging global financial markets. Trains, like capital, “annihilate space with time.”

Traveling on the recently opened Paris-Rouen-Orléans railway line in 1843, the German poet, journalist, and literary critic Heinrich Heine wrote: “What changes must now occur, in our way of looking at things, in our notions! Even the elementary concepts of time and space have begun to vacillate. Space is killed by the railways, and we are left with time alone. . . . Now you can travel to Orléans in four and a half hours, and it takes no longer to get to Rouen. Just imagine what will happen when the lines to Belgium and Germany are completed and connected up with their railways! I feel as if the mountains and forests of all countries were advancing on Paris. Even now, I can smell the German linden trees; the North Sea’s breakers are rolling against my door.” This new experience of space and time that speed brought about had profound psychological effects that I will consider later.

Throughout the nineteenth century, the United States lagged behind Great Britain in terms of industrial capacity: in 1869, England was the source of 20 percent of the world’s industrial production, while the United States contributed just 7 percent. By the start of World War I, however, America’s industrial capacity surpassed that of England: that is, by 1913, the scales had tipped—32 percent came from the United States and only 14 percent from England. While England had a long history before the Industrial Revolution, the history of the United States effectively begins with the Industrial Revolution. There are other important differences as well. Whereas in Great Britain the transportation revolution grew out of the industrialization of manufacturing primarily, but not exclusively, in textile factories, in the United States mechanization began in agriculture and spread to transportation before it transformed manufacturing. In other words, in Great Britain, the Industrial Revolution in manufacturing came first and the transportation revolution second, while in the United States, this order was reversed.

When the Industrial Revolution began in the United States, most of the country beyond the Eastern Seaboard was largely undeveloped. Settling this uncharted territory required the development of an extensive transportation network. Throughout the early decades of the nineteenth century, the transportation system consisted of a network of rudimentary roads connecting towns and villages with the countryside. New England, Boston, New York, Philadelphia, Baltimore, and Washington were joined by highways suitable for stagecoach travel. Inland travel was largely confined to rivers and waterways. The completion of the Erie Canal (1817–25) marked the first stage in the development of an extensive network linking rivers, lakes, canals, and waterways along which produce and people flowed. Like so much else in America, the railroad system began in Boston. By 1840, only 18,181 miles of track had been laid. During the following decade, however, there was an explosive expansion of the nation’s rail system financed by securities and bonds traded on stock markets in America and London. By the 1860s, the railroad network east of the Mississippi River was using routes roughly similar to those employed today.

Where some saw loss, others saw gain. In 1844, inveterate New Englander Ralph Waldo Emerson associated the textile loom with the railroad when he reflected, “Not only is distance annihilated, but when, as now, the locomotive and the steamboat, like enormous shuttles, shoot every day across the thousand various threads of national descent and employment, and bind them fast in one web, an hourly assimilation goes forward, and there is no danger that local peculiarities and hostilities should be preserved.” Gazing at tracks vanishing in the distance, Emerson saw a new world opening that, he believed, would overcome the parochialisms of the past. For many people in the nineteenth century, this new world promising endless resources and endless opportunity was the American West. A transcontinental railroad had been proposed as early as 1820 but was not completed until 1869.

On May 10, 1869, Leland Stanford, who would become the governor of California and, in 1891, founder of Stanford University, drove the final spike in the railroad that joined east and west. Nothing would ever be the same again. This event was not merely local, but also, as Emerson had surmised, global. Like the California gold and Nevada silver spike that Leland had driven to join the rails, the material transportation network and immaterial communication network intersected at that moment to create what Rebecca Solnit correctly identifies as “the first live national media event.” The spike “had been wired to connect to the telegraph lines that ran east and west along the railroad tracks. The instant Stanford struck the spike, a signal would go around the nation. . . . The signal set off cannons in San Francisco and New York. In the nation’s capital the telegraph signal caused a ball to drop, one of the balls that visibly signaled the exact time in observatories in many places then (of which the ball dropped in New York’s Times Square at the stroke of the New Year is a last relic). The joining of the rails would be heard in every city equipped with fire-alarm telegraphs, in Philadelphia, Omaha, Buffalo, Chicago, and Sacramento. Celebrations would be held all over the nation.” This carefully orchestrated spectacle, which was made possible by the convergence of multiple national networks, was worthy of the future Hollywood and the technological wizards of Silicon Valley whose relentless innovation Stanford’s university would later nourish. What most impressed people at the time was the speed of global communication, which now is taken for granted.

Flickering Images—Changing Minds

Industrialization not only changes systems of production and distribution of commodities and products, but also imposes new disciplinary practices that transform bodies and change minds. During the early years of train travel, bodily acceleration had an enormous psychological effect that some people found disorienting and others found exhilarating. The mechanization of movement created what Anne Friedberg describes as the “mobile gaze,” which transforms one’s surroundings and alters both the content and, more important, the structure of perception. This mobile gaze takes two forms: the person can move and the surroundings remain immobile (train, bicycle, automobile, airplane, elevator), or the person can remain immobile and the surroundings move (panorama, kinetoscope, film).

When considering the impact of trains on the mobilization of the gaze, it is important to note that different designs for railway passenger cars had different perceptual and psychological effects. Early European passenger cars were modeled on stagecoaches in which individuals had seats in separate compartments; early American passenger cars, by contrast, were modeled on steamboats in which people shared a common space and were free to move around. The European design tended to reinforce social and economic hierarchies that the American design tried to break down. Eventually, American railroads adopted the European model of fixed individual seating but had separate rows facing in the same direction rather than different compartments. As we will see, the resulting compartmentalization of perception anticipates the cellularization of attention that accompanies today’s distributed high-speed digital networks.

During the early years, there were numerous accounts of the experience of railway travel by ordinary people, distinguished writers, and even physicians, in which certain themes recur. The most common complaint is the sense of disorientation brought about by the experience of unprecedented speed. There are frequent reports of the dispersion and fragmentation of attention that are remarkably similar to contemporary personal and clinical descriptions of attention-deficit hyperactivity disorder (ADHD). With the landscape incessantly rushing by faster than it could be apprehended, people suffered overstimulation, which created a sense of psychological exhaustion and physical distress. Some physicians went so far as to maintain that the experience of speed caused “neurasthenia, neuralgia, nervous dyspepsia, early tooth decay, and even premature baldness.”

In 1892, Sir James Crichton-Browne attributed the significant increase in the mortality rate between 1859 and 1888 to “the tension, excitement, and incessant mobility of modern life.” Commenting on these statistics, Max Nordau might well be describing the harried pace of life today. “Every line we read or write, every human face we see, every conversation we carry on, every scene we perceive through the window of the flying express, sets in activity our sensory nerves and our brain centers. Even the little shocks of railway travelling, not perceived by consciousness, the perpetual noises and the various sights in the streets of a large town, our suspense pending the sequel of progressing events, the constant expectation of the newspaper, of the postman, of visitors, cost our brains wear and tear.” During the years around the turn of the last century, a sense of what Stephen Kern aptly describes as “cultural hypochondria” pervaded society. Like today’s parents concerned about the psychological and physical effects of their kids playing video games, nineteenth-century physicians worried about the effect of people sitting in railway cars for hours watching the world rush by in a stream of images that seemed to be detached from real people and actual things.

In addition to the experience of disorientation, dispersion, fragmentation, and fatigue, rapid train travel created a sense of anxiety. People feared that with the increase in speed, machinery would spin out of control, resulting in serious accidents. An 1829 description of a train ride expresses the anxiety that speed created. “It is really flying, and it is impossible to divest yourself of the notion of instant death to all upon the least accident happening.” A decade and a half later, an anonymous German explained that the reason for such anxiety is the always “close possibility of an accident, and the inability to exercise any influence on the running of the cars.” When several serious accidents actually occurred, anxiety spread like a virus. Anxiety, however, is always a strange experience—it not only repels, it also attracts; danger and the anxiety it brings are always part of speed’s draw.

Perhaps this was a reason that not everyone found trains so distressing. For some people, the experience of speed was “dreamlike” and bordered on ecstasy. In 1843, Emerson wrote in his Journals, “Dreamlike travelling on the railroad. The towns which I pass between Philadelphia and New York make no distinct impression. They are like pictures on a wall.” The movement of the train creates a loss of focus that blurs the mobile gaze. A few years earlier, Victor Hugo’s description of train travel sounds like an acid trip as much as a train trip. In either case, the issue is speed. “The flowers by the side of the road are no longer flowers but flecks, or rather streaks, of red or white; there are no longer any points, everything becomes a streak; grain fields are great shocks of yellow hair; fields of alfalfa, long green tresses; the towns, the steeples, and the trees perform a crazy mingling dance on the horizon; from time to time, a shadow, a shape, a specter appears and disappears with lightning speed behind the window; it’s a railway guard.” The flickering images fleeting past train windows are like a film running too fast to comprehend.

Transportation was not the only thing accelerating in the nineteenth century—the pace of life itself was speeding up as never before. Listening to the whistle of the train headed to Boston in his cabin beside Walden Pond, Thoreau mused, “The startings and arrivals of the cars are now the epochs in the village day. They go and come with such regularity and precision, and their whistle can be heard so far, that the farmers set their clocks by them, and thus one well conducted institution regulates a whole country. Have not men improved somewhat in punctuality since the railroad was invented? Do they not talk and think faster in the depot than they did in the stage office? There is something electrifying in the atmosphere of the former place. I have been astonished by some of the miracles it has wrought.” And yet Thoreau, more than others, knew that these changes also had a dark side.

The transition from agricultural to industrial capitalism brought with it a massive migration from the country, where life was slow and governed by natural rhythms, to the city, where life was fast and governed by mechanical, standardized time. The convergence of industrialization, transportation, and electrification made urbanization inevitable. The faster that cities expanded, the more some writers and poets idealized rustic life in the country. Nowhere is such idealization more evident than in the writings of British romantics. The rapid swirl of people, machines, and commodities created a sense of vertigo as disorienting as train travel. Wordsworth writes in The Prelude,

Oh, blank confusion! True epitome
Of what the mighty City is herself
To thousands upon thousands of her sons,
Living among the same perpetual whirl
Of trivial objects, melted and reduced
To one identity, by differences
That have no law, no meaning, no end—

By 1850, fifteen cities in the United States had a population exceeding 50,000. New York was the largest (1,080,330), followed by Philadelphia (565,529), Baltimore (212,418), and Boston (177,840). Increasing domestic trade that resulted from the railroad and growing foreign trade that accompanied improved ocean travel contributed significantly to this growth. While commerce was prevalent in early cities, manufacturing expanded rapidly during the latter half of the eighteenth century. The most important factor contributing to nineteenth-century urbanization was the rapid development of the money economy. Once again, it is a matter of circulating flows, not merely of human bodies but of mobile commodities. Money and cities formed a positive feedback loop—as the money supply grew, cities expanded, and as cities expanded, the money supply grew.

The fast pace of urban life was as disorienting for many people as the speed of the train. In his seminal essay “The Metropolis and Mental Life,” Georg Simmel observes, “The psychological foundation upon which the metropolitan individuality is erected, is the intensification of emotional life due to the swift and continuous shift of external and internal stimuli. Man is a creature whose existence is dependent on differences, i.e., his mind is stimulated by the difference between present impressions and those which have preceded. . . . To the extent that the metropolis creates these psychological conditions—with every crossing of the street, with the tempo and multiplicity of economic, occupational and social life—it creates the sensory foundations of mental life, and in the degree of awareness necessitated by our organization as creatures dependent on differences, a deep contrast with the slower, more habitual, more smooth flowing rhythm of the sensory-mental phase of small town and rural existence.” The expansion of the money economy created a fundamental contradiction at the heart of metropolitan life. On the one hand, cities brought together different people from all backgrounds and walks of life, and on the other hand, emerging industrial capitalism leveled these differences by disciplining bodies and programming minds. “Money,” Simmel continues, “is concerned only with what is common to all, i.e., with the exchange value which reduces all quality and individuality to a purely quantitative level.” The migration from country to city that came with the transition from agricultural to industrial capitalism involved a shift from homogeneous communities to heterogeneous assemblages of different people, qualitative to quantitative methods of assessment and evaluation, as well as concrete to abstract networks of exchange of goods and services, and a slow to fast pace of life. I will consider further aspects of these disciplinary practices in Chapter 3; for now, it is important to understand the implications of the mechanization or industrialization of perception.

I have already noted similarities between the experience of looking through a window on a speeding train and the experience of watching a film that is running too fast. During the latter half of the nineteenth century a remarkable series of inventions transformed not only what people experienced in the world but how they experienced it: photography (Louis-Jacques-Mandé Daguerre, ca. 1837), the telegraph (Samuel F. B. Morse, ca. 1840), the stock ticker (Thomas Alva Edison, 1869), the telephone (Alexander Graham Bell, 1876), the chronophotographic gun (Étienne-Jules Marey, 1882), the kinetoscope (Edison, 1894), the zoopraxiscope (Eadweard Muybridge, 1893), the phantoscope (Charles Jenkins, 1894), and cinematography (Auguste and Louis Lumière, 1895). The way in which human beings perceive and conceive the world is not hardwired in the brain but changes with new technologies of production and reproduction.

Just as the screens of today’s TVs, computers, video games, and mobile devices are restructuring how we process experience, so too did new technologies at the end of the nineteenth century change the world by transforming how people apprehended it. While each innovation had a distinctive effect, there is a discernible overall trajectory to these developments. Industrial technologies of production and reproduction extended processes of dematerialization that eventually led first to consumer capitalism and then to today’s financial capitalism. The crucial variable in these developments is the way in which material and immaterial networks intersect to produce a progressive detachment of images, representations, information, and data from concrete objects and actual events. Marveling at what he regarded as the novelty of photographs, Oliver Wendell Holmes commented, “Form is henceforth divorced from matter. In fact, matter as a visible object is of no great use any longer, except as the mould on which form is shaped. Give us a few negatives of a thing worth seeing, taken from different points of view, and that is all we want of it. Pull it down or burn it up, if you please. . . . Matter in large masses must always be fixed and dear, form is cheap and transportable. We have got the fruit of creation now, and need not trouble ourselves about the core.”

Technologies for the reproduction and transmission of images and information expand the process of abstraction initiated by the money economy to create a play of freely floating signs without anything to ground, certify, or secure them. With new networks made possible by the combination of electrification and the invention of the telegraph, telephone, and stock ticker, communication was liberated from the strictures imposed by physical means of conveyance. In previous energy regimes, messages could be sent no faster than people, horses, carriages, trains, ships, or automobiles could move. Dematerialized words, sounds, information, and eventually images, by contrast, could be transmitted across great distances at high speed. With this dematerialization and acceleration, Marx’s prediction—that “everything solid melts into air”—was realized. But this was just the beginning. It would take more than a century for electrical currents to become virtual currencies whose transmission would approach the speed limit.

Excerpted from “Speed Limits: Where Time Went and Why We Have So Little Left,” by Mark C. Taylor, published October 2014 by Yale University Press. Copyright ©2014 by Mark C. Taylor. Reprinted by permission of Yale University Press.

http://www.salon.com/2014/10/19/the_end_of_the_old_world_how_technology_shrunk_america_forever/?source=newsletter

“We’ve Created a Generation of People Who Hate America”


Filmmaker Laura Poitras on Our Surveillance State

Back to that Hong Kong hotel room with Snowden.


Here’s a Ripley’s Believe It or Not! stat from our new age of national security. How many Americans have security clearances? The answer: 5.1 million, a figure that reflects the explosive growth of the national security state in the post-9/11 era. Imagine the kind of system needed just to vet that many people for access to our secret world (to the tune of billions of dollars). We’re talking here about the total population of Norway and significantly more people than you can find in Costa Rica, Ireland, or New Zealand. And yet it’s only about 1.6% of the American population, while on ever more matters, the unvetted 98.4% of us are meant to be left in the dark.

For our own safety, of course. That goes without saying.

All of this offers a new definition of democracy in which we, the people, are to know only what the national security state cares to tell us.  Under this system, ignorance is the necessary, legally enforced prerequisite for feeling protected.  In this sense, it is telling that the only crime for which those inside the national security state can be held accountable in post-9/11 Washington is not potential perjury before Congress, or the destruction of evidence of a crime, or torture, or kidnapping, or assassination, or the deaths of prisoners in an extralegal prison system, but whistleblowing; that is, telling the American people something about what their government is actually doing.  And that crime, and only that crime, has been prosecuted to the full extent of the law (and beyond) with a vigor unmatched in American history.  To offer a single example, the only American to go to jail for the CIA’s Bush-era torture program was John Kiriakou, a CIA whistleblower who revealed the name of an agent involved in the program to a reporter.

In these years, as power drained from Congress, an increasingly imperial White House has launched various wars (redefined by its lawyers as anything but), as well as a global assassination campaign in which the White House has its own “kill list” and the president himself decides on global hits.  Then, without regard for national sovereignty or the fact that someone is an American citizen (and upon the secret invocation of legal mumbo-jumbo), the drones are sent off to do the necessary killing.

And yet that doesn’t mean that we, the people, know nothing.  Against increasing odds, there has been some fine reporting in the mainstream media by the likes of James Risen and Barton Gellman on the security state’s post-legal activities, and above all, despite the Obama administration’s regular use of the World War I-era Espionage Act, whistleblowers have stepped forward from within the government to offer us sometimes staggering amounts of information about the system that has been set up in our name but without our knowledge.

Among them, one young man, whose name is now known worldwide, stands out.  In June of last year, thanks to journalist Glenn Greenwald and filmmaker Laura Poitras, Edward Snowden, a contractor for the NSA and previously the CIA, stepped into our lives from a hotel room in Hong Kong.  With a treasure trove of documents that are still being released, he changed the way just about all of us view our world.  He has been charged under the Espionage Act.  If indeed he was a “spy,” then the spying he did was for us, for the American people and for the world.  What he revealed to a stunned planet was a global surveillance state whose reach and ambitions were unique, a system based on a single premise: that privacy was no more and that no one was, in theory (and to a remarkable extent in practice), unsurveillable.

Its builders imagined only one exemption: themselves.  This was undoubtedly at least part of the reason why, when Snowden let us peek in on them, they reacted with such over-the-top venom.  Whatever they felt at a policy level, it’s clear that they also felt violated, something that, as far as we can tell, left them with no empathy whatsoever for the rest of us.  One thing that Snowden proved, however, was that the system they built was ready-made for blowback.

Sixteen months after his NSA documents began to be released by the Guardian and the Washington Post, I think it may be possible to speak of the Snowden Era.  And now, a remarkable new film, Citizenfour, which had its premiere at the New York Film Festival on October 10th and will open in select theaters nationwide on October 24th, offers us a window into just how it all happened.  It is already being mentioned as a possible Oscar winner.

Director Laura Poitras, like reporter Glenn Greenwald, is now known almost as widely as Snowden himself, for helping facilitate his entry into the world.  Her new film, the last in a trilogy she’s completed (the previous two being My Country, My Country on the Iraq War and The Oath on Guantanamo), takes you back to June 2013 and locks you in that Hong Kong hotel room with Snowden, Greenwald, Ewen MacAskill of the Guardian, and Poitras herself for eight days that changed the world.  It’s a riveting, surprisingly unclaustrophobic, and unforgettable experience.

Before that moment, we were quite literally in the dark.  After it, we have a better sense, at least, of the nature of the darkness that envelops us. Having seen her film in a packed house at the New York Film Festival, I sat down with Poitras in a tiny conference room at the Loews Regency Hotel in New York City to discuss just how our world has changed and her part in it.

Tom Engelhardt: Could you start by laying out briefly what you think we’ve learned from Edward Snowden about how our world really works?

Laura Poitras: The most striking thing Snowden has revealed is the depth of what the NSA and the Five Eyes countries [Australia, Canada, New Zealand, Great Britain, and the U.S.] are doing, their hunger for all data, for total bulk dragnet surveillance where they try to collect all communications and do it all sorts of different ways. Their ethos is “collect it all.” I worked on a story with Jim Risen of the New York Times about a document — a four-year plan for signals intelligence — in which they describe the era as being “the golden age of signals intelligence.”  For them, that’s what the Internet is: the basis for a golden age to spy on everyone.

This focus on bulk, dragnet, suspicionless surveillance of the planet is certainly what’s most staggering.  There were many programs that did that.  In addition, you have both the NSA and the GCHQ [British intelligence] doing things like targeting engineers at telecoms.  There was an article published at The Intercept that cited an NSA document Snowden provided, part of which was titled “I Hunt Sysadmins” [systems administrators].  They try to find the custodians of information, the people who are the gateway to customer data, and target them.  So there’s this passive collection of everything, and then things that they can’t get that way, they go after in other ways.

 I think one of the most shocking things is how little our elected officials knew about what the NSA was doing.  Congress is learning from the reporting and that’s staggering.  Snowden and [former NSA employee] William Binney, who’s also in the film as a whistleblower from a different generation, are technical people who understand the dangers.  We laypeople may have some understanding of these technologies, but they really grasp the dangers of how they can be used.  One of the most frightening things, I think, is the capacity for retroactive searching, so you can go back in time and trace who someone is in contact with and where they’ve been.  Certainly, when it comes to my profession as a journalist, that allows the government to trace what you’re reporting, who you’re talking to, and where you’ve been.  So no matter whether or not I have a commitment to protect my sources, the government may still have information that might allow them to identify whom I’m talking to.

TE: To ask the same question another way, what would the world be like without Edward Snowden?  After all, it seems to me that, in some sense, we are now in the Snowden era.

LP: I agree that Snowden has presented us with choices on how we want to move forward into the future.  We’re at a crossroads and we still don’t quite know which path we’re going to take.  Without Snowden, just about everyone would still be in the dark about the amount of information the government is collecting. I think that Snowden has changed consciousness about the dangers of surveillance.  We see lawyers who take their phones out of meetings now.  People are starting to understand that the devices we carry with us reveal our location, who we’re talking to, and all kinds of other information.  So you have a genuine shift of consciousness post the Snowden revelations.

TE: There’s clearly been no evidence of a shift in governmental consciousness, though.

LP: Those who are experts in the fields of surveillance, privacy, and technology say that there need to be two tracks: a policy track and a technology track.  The technology track is encryption.  It works and if you want privacy, then you should use it.  We’ve already seen shifts happening in some of the big companies — Google, Apple — that now understand how vulnerable their customer data is, and that if it’s vulnerable, then their business is, too, and so you see a beefing up of encryption technologies.  At the same time, no programs have been dismantled at the governmental level, despite international pressure.

TE: In Citizenfour, we spend what must be an hour essentially locked in a room in a Hong Kong hotel with Snowden, Glenn Greenwald, Ewen MacAskill, and you, and it’s riveting.  Snowden is almost preternaturally prepossessing and self-possessed.  I think of a novelist whose dream character just walks into his or her head.  It must have been like that with you and Snowden.  But what if he’d been a graying guy with the same documents and far less intelligent things to say about them?  In other words, how exactly did who he was make your movie and remake our world?

LP: Those are two questions.  One is: What was my initial experience?  The other: How do I think it impacted the movie?  We’ve been editing it and showing it to small groups, and I had no doubt that he’s articulate and genuine on screen.  But to see him in a full room [at the New York Film Festival premiere on the night of October 10th], I’m like, wow!  He really commands the screen! And I experienced the film in a new way with a packed house.

TE: But how did you experience him the first time yourself?  I mean you didn’t know who you were going to meet, right?

LP: So I was in correspondence with an anonymous source for about five months and in the process of developing a dialogue you build ideas, of course, about who that person might be.  My idea was that he was in his late forties, early fifties.  I figured he must be Internet generation because he was super tech-savvy, but I thought that, given the level of access and information he was able to discuss, he had to be older.  And so my first experience was that I had to do a reboot of my expectations.  Like fantastic, great, he’s young and charismatic and I was like wow, this is so disorienting, I have to reboot.  In retrospect, I can see that it’s really powerful that somebody so smart, so young, and with so much to lose risked so much.

He was so at peace with the choice he had made, knowing that the consequences could mean the end of his life and that this was still the right decision.  He believed in it, and whatever the consequences, he was willing to accept them.  To meet somebody who has made those kinds of decisions is extraordinary.  And to be able to document that and also how Glenn [Greenwald] stepped in and pushed for this reporting to happen in an aggressive way changed the narrative. Because Glenn and I come at it from an outsider’s perspective, the narrative unfolded in a way that nobody quite knew how to respond to.  That’s why I think the government was initially on its heels.  You know, it’s not every day that a whistleblower is actually willing to be identified.

TE: My guess is that Snowden has given us the feeling that we now grasp the nature of the global surveillance state that is watching us, but I always think to myself, well, he was just one guy coming out of one of 17 interlocked intelligence outfits. Given the remarkable way your film ends — the punch line, you might say — with another source or sources coming forward from somewhere inside that world to reveal, among other things, information about the enormous watchlist that you yourself are on, I’m curious: What do you think is still to be known?  I suspect that if whistleblowers were to emerge from the top five or six agencies, the CIA, the DIA, the National Geospatial Intelligence Agency, and so on, with similar documentation to Snowden’s, we would simply be staggered by the system that’s been created in our name.

LP: I can’t speculate on what we don’t know, but I think you’re right in terms of the scale and scope of things and the need for that information to be made public. I mean, just consider the CIA and its effort to suppress the Senate’s review of its torture program. Take in the fact that we live in a country that a) legalized torture and b) where no one was ever held to account for it, and now the government’s internal look at what happened is being suppressed by the CIA.  That’s a frightening landscape to be in.

In terms of sources coming forward, I really reject this idea of talking about one, two, three sources.  There are many sources that have informed the reporting we’ve done and I think that Americans owe them a debt of gratitude for taking the risk they do.  From a personal perspective, because I’m on a watchlist and went through years of trying to find out why, of having the government refuse to confirm or deny the very existence of such a list, it’s so meaningful to have its existence brought into the open so that the public knows there is a watchlist, and so that the courts can now address the legality of it.  I mean, the person who revealed this has done a huge public service and I’m personally thankful.

TE: You’re referring to the unknown leaker who’s mentioned visually and elliptically at the end of your movie and who revealed that the major watchlist you’re on has more than 1.2 million names on it.  In that context, what’s it like to travel as Laura Poitras today?  How do you embody the new national security state?

LP: In 2012, I was ready to edit and I chose to leave the U.S. because I didn’t feel I could protect my source footage when I crossed the U.S. border.  The decision was based on six years of being stopped and questioned every time I returned to the United States.  And I just did the math and realized that the risks were too high to edit in the U.S., so I started working in Berlin in 2012.  And then, in January 2013, I got the first email from Snowden.

TE: So you were protecting…

LP: …other footage.  I had been filming with NSA whistleblower William Binney, with Julian Assange, with Jacob Appelbaum of the Tor Project, people who have also been targeted by the U.S., and I felt that this material I had was not safe.  I was put on a watchlist in 2006.  I was detained and questioned at the border returning to the U.S. probably around 40 times.  If I counted domestic stops and every time I was stopped at European transit points, you’re probably getting closer to 80 to 100 times. It became a regular thing, being asked where I’d been and who I’d met with. I found myself caught up in a system you can’t ever seem to get out of, this Kafkaesque watchlist that the U.S. doesn’t even acknowledge.

TE: Were you stopped this time coming in?

LP: I was not. The detentions stopped in 2012 after a pretty extraordinary incident.

I was coming back in through Newark Airport and I was stopped.  I took out my notebook because I always take notes on what time I’m stopped and who the agents are and stuff like that.  This time, they threatened to handcuff me for taking notes.  They said, “Put the pen down!” They claimed my pen could be a weapon and hurt someone.

“Put the pen down! The pen is dangerous!” And I’m like, you’re not… you’ve got to be crazy. Several people yelled at me every time I moved my pen down to take notes as if it were a knife. After that, I decided this has gotten crazy, I’d better do something and I called Glenn. He wrote a piece about my experiences. In response to his article, they actually backed off.

TE:  Snowden has told us a lot about the global surveillance structure that’s been built.  We know a lot less about what they are doing with all this information.  I’m struck at how poorly they’ve been able to use such information in, for example, their war on terror.  I mean, they always seem to be a step behind in the Middle East — not just behind events but behind what I think someone using purely open source information could tell them.  This I find startling.  What sense do you have of what they’re doing with the reams, the yottabytes, of data they’re pulling in?

LP: Snowden and many other people, including Bill Binney, have said that this mentality — of trying to suck up everything they can — has left them drowning in information and so they miss what would be considered more obvious leads.  In the end, the system they’ve created doesn’t lead to what they describe as their goal, which is security, because they have too much information to process.

I don’t quite know how to fully understand it.  I think about this a lot because I made a film about the Iraq War and one about Guantanamo.  From my perspective, in response to the 9/11 attacks, the U.S. took a small, very radical group of terrorists and engaged in activities that have created two generations of anti-American sentiment motivated by things like Guantanamo and Abu Ghraib.  Instead of figuring out a way to respond to a small group of people, we’ve created generations of people who are really angry and hate us.  And then I think, if the goal is security, how do these two things align, because there are more people who hate the United States right now, more people intent on doing us harm?  So either the goal that they proclaim is not the goal or they’re just unable to come to terms with the fact that we’ve made huge mistakes in how we’ve responded.

TE: I’m struck by the fact that failure has, in its own way, been a launching pad for success.  I mean, the building of an unparalleled intelligence apparatus and the greatest explosion of intelligence gathering in history came out of the 9/11 failure.  Nobody was held accountable, nobody was punished, nobody was demoted or anything, and every similar failure, including the one on the White House lawn recently, simply leads to the bolstering of the system.

LP: So how do you understand that?

TE: I don’t think that these are people who are thinking: we need to fail to succeed. I’m not conspiratorial in that way, but I do think that, strangely, failure has built the system and I find that odd. More than that I don’t know.

LP: I don’t disagree. The fact that the CIA knew that two of the 9/11 hijackers were entering the United States and didn’t notify the FBI and that nobody lost their job is shocking.  Instead, we occupied Iraq, which had nothing to do with 9/11.  I mean, how did those choices get made?

John Pilger

“The major western democracies are moving towards corporatism. Democracy has become a business plan, with a bottom line for every human activity, every dream, every decency, every hope. The main parliamentary parties are now devoted to the same economic policies – socialism for the rich, capitalism for the poor – and the same foreign policy of servility to endless war. This is not democracy. It is to politics what McDonalds is to food.”

- John Pilger

 

Friedrich Nietzsche on Why a Fulfilling Life Requires Embracing Rather than Running from Difficulty

A century and a half before our modern fetishism of failure, a seminal philosophical case for its value.

German philosopher, poet, composer, and writer Friedrich Nietzsche (October 15, 1844–August 25, 1900) is among humanity’s most enduring, influential, and oft-cited minds — and he seemed remarkably confident that he would end up that way. Nietzsche famously called the populace of philosophers “cabbage-heads,” lamenting: “It is my fate to have to be the first decent human being. I have a terrible fear that I shall one day be pronounced holy.” In one letter, he considered the prospect of posterity enjoying his work: “It seems to me that to take a book of mine into his hands is one of the rarest distinctions that anyone can confer upon himself. I even assume that he removes his shoes when he does so — not to speak of boots.”

A century and a half later, Nietzsche’s healthy ego has proven largely right — for a surprising and surprisingly modern reason: the assurance he offers that life’s greatest rewards spring from our brush with adversity. More than a century before our present celebration of “the gift of failure” and our fetishism of failure as a conduit to fearlessness, Nietzsche extolled these values with equal parts pomp and perspicuity.

In one particularly emblematic specimen from his many aphorisms, penned in 1887 and published in the posthumous selection from his notebooks, The Will to Power (public library), Nietzsche writes under the heading “Types of my disciples”:

To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities — I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them, because I wish them the only thing that can prove today whether one is worth anything or not — that one endures.

(Half a century later, Willa Cather echoed this sentiment poignantly in a troubled letter to her brother: “The test of one’s decency is how much of a fight one can put up after one has stopped caring.”)

With his signature blend of wit and wisdom, Alain de Botton — who contemplates such subjects as the psychological functions of art and what literature does for the soul — writes in the altogether wonderful The Consolations of Philosophy (public library):

Alone among the cabbage-heads, Nietzsche had realized that difficulties of every sort were to be welcomed by those seeking fulfillment.

Not only that, but Nietzsche also believed that hardship and joy operated in a kind of osmotic relationship — diminishing one would diminish the other — or, as Anaïs Nin memorably put it, “great art was born of great terrors, great loneliness, great inhibitions, instabilities, and it always balances them.” In The Gay Science (public library), the book in which his famous “God is dead” proclamation first appeared, he wrote:

What if pleasure and displeasure were so tied together that whoever wanted to have as much as possible of one must also have as much as possible of the other — that whoever wanted to learn to “jubilate up to the heavens” would also have to be prepared for “depression unto death”?

You have the choice: either as little displeasure as possible, painlessness in brief … or as much displeasure as possible as the price for the growth of an abundance of subtle pleasures and joys that have rarely been relished yet? If you decide for the former and desire to diminish and lower the level of human pain, you also have to diminish and lower the level of their capacity for joy.

He was convinced that the most notable human lives reflected this osmosis:

Examine the lives of the best and most fruitful people and peoples and ask yourselves whether a tree that is supposed to grow to a proud height can dispense with bad weather and storms; whether misfortune and external resistance, some kinds of hatred, jealousy, stubbornness, mistrust, hardness, avarice, and violence do not belong among the favorable conditions without which any great growth even of virtue is scarcely possible.

De Botton distills Nietzsche’s convictions and their enduring legacy:

The most fulfilling human projects appeared inseparable from a degree of torment, the sources of our greatest joys lying awkwardly close to those of our greatest pains…

Why? Because no one is able to produce a great work of art without experience, nor achieve a worldly position immediately, nor be a great lover at the first attempt; and in the interval between initial failure and subsequent success, in the gap between who we wish one day to be and who we are at present, must come pain, anxiety, envy and humiliation. We suffer because we cannot spontaneously master the ingredients of fulfillment.

Nietzsche was striving to correct the belief that fulfillment must come easily or not at all, a belief ruinous in its effects, for it leads us to withdraw prematurely from challenges that might have been overcome if only we had been prepared for the savagery legitimately demanded by almost everything valuable.

(Or, as F. Scott Fitzgerald put it in his atrociously, delightfully ungrammatical proclamation, “Nothing any good isn’t hard.”)

Nietzsche arrived at these ideas in a roundabout way. As a young man, he was heavily influenced by Schopenhauer. At the age of twenty-one, he chanced upon Schopenhauer’s masterwork The World as Will and Representation and later recounted this seminal life turn:

I took it in my hand as something totally unfamiliar and turned the pages. I do not know which demon was whispering to me: ‘Take this book home.’ In any case, it happened, which was contrary to my custom of otherwise never rushing into buying a book. Back at the house I threw myself into the corner of a sofa with my new treasure, and began to let that dynamic, dismal genius work on me. Each line cried out with renunciation, negation, resignation. I was looking into a mirror that reflected the world, life and my own mind with hideous magnificence.

And isn’t that what the greatest books do for us, why we read and write at all? But Nietzsche eventually came to disagree with Schopenhauer’s defeatism and slowly blossomed into his own ideas on the value of difficulty. In an 1876 letter to Cosima Wagner — the second wife of the famed composer Richard Wagner, whom Nietzsche had befriended — he professed, more than a decade after encountering Schopenhauer:

Would you be amazed if I confess something that has gradually come about, but which has more or less suddenly entered my consciousness: a disagreement with Schopenhauer’s teaching? On virtually all general propositions I am not on his side.

This turning point led Nietzsche to the conviction that hardship is the springboard for happiness and fulfillment. De Botton captures this beautifully:

Because fulfillment is an illusion, the wise must devote themselves to avoiding pain rather than seeking pleasure, living quietly, as Schopenhauer counseled, ‘in a small fireproof room’ — advice that now struck Nietzsche as both timid and untrue, a perverse attempt to dwell, as he was to put it pejoratively several years later, ‘hidden in forests like shy deer.’ Fulfillment was to be reached not by avoiding pain, but by recognizing its role as a natural, inevitable step on the way to reaching anything good.

And this, perhaps, is the reason why nihilism in general, and Nietzsche in particular, has had a recent resurgence in pop culture — the subject of a fantastic recent Radiolab episode. The wise and wonderful Jad Abumrad elegantly captures the allure of such teachings:

All this pop-nihilism around us is not about tearing down power structures or embracing nothingness — it’s just, “Look at me! Look how brave I am!”

Quoting Nietzsche, in other words, is a way for us to signal to others that we’re unafraid, that difficulty won’t break us, that adversity will only assure us.

And perhaps there is nothing wrong with that. After all, Viktor Frankl was the opposite of a nihilist, and yet we flock to him for the same reason — to be assured, to be consoled, to feel like we can endure.

The Will to Power remains indispensable and The Consolations of Philosophy is excellent in its totality. Complement them with a lighter serving of Nietzsche — his ten rules for writers, penned in a love letter.

 

http://www.brainpickings.org/2014/10/15/nietzsche-on-difficulty/
