Who Gives More to Charity: People Who Make More or Less Than $200k a Year?
Billionaire CEO Nicholas Woodman, news reports trumpeted earlier this month, has set aside $450 million worth of his GoPro stock to set up a brand-new charitable foundation.
“We wake up every morning grateful for the opportunities life has given us,” Woodman and his wife Jill noted in a joint statement. “We hope to return the favor as best we can.”
Stories about charitable billionaires have long been a media staple. The defenders of our economic order love them — and regularly trot them out to justify America’s ever more top-heavy concentration of income and wealth.
Our charities depend, the argument goes, on the generosity of the rich. The richer the rich, the better off our charitable enterprises will be.
But this defense of inequality, analysts have understood for quite some time, holds precious little water. Low- and middle-income people, the research shows, give a greater share of their incomes to charity than people of decidedly more ample means.
The Chronicle of Philanthropy, the nation’s top monitor of everything charitable, last week dramatically added to this research.
Between 2006 and 2012, a new Chronicle analysis of IRS tax return data reveals, Americans who make over $200,000 a year decreased the share of their income they devote to charity by 4.6 percent.
Over those same years, a time of recession and limited recovery, these same affluent Americans saw their own incomes increase. For the nation’s top 5 percent of income earners, that increase averaged 9.9 percent.
By contrast, those Americans making less than $100,000 actually increased their giving between 2006 and 2012. The most generous Americans of all? Those making less than $25,000. Amid the hard times of recent years, low-income Americans devoted 16.6 percent more of their meager incomes to charity.
Overall, those making under $100,000 increased their giving by 4.5 percent.
In the half-dozen years this new study covers, the Chronicle of Philanthropy concludes, “poor and middle class Americans dug deeper into their wallets to give to charity, even though they were earning less.”
America’s affluent do still remain, in absolute terms, the nation’s largest givers to charity. In 2012, the Chronicle analysis shows, those earning under $100,000 handed charities $57.3 billion. Americans making over $200,000 gave away $77.5 billion.
But that $77.5 billion pales against how much more the rich could — rather painlessly — be giving. Between 2006 and 2012, the combined wealth of the Forbes 400 alone increased by $1.04 trillion.
What the rich do give to charity often does people truly in need no good at all. Wealthy people do the bulk of their giving to colleges and cultural institutions, notes Chronicle of Philanthropy editor Stacy Palmer. Food banks and other social service charities “depend more on lower income Americans.”
Low- and middle-income people, adds Palmer, “know people who lost their jobs or are homeless.” They’ve been sacrificing “to help their neighbors.”
America’s increasing economic segregation, meanwhile, has left America’s rich less and less exposed to “neighbors” struggling to get by. That’s opening up, says Vox policy analyst Danielle Kurtzleben, an “empathy gap.”
“After all,” she explains, “if I can’t see you, I’m less likely to help you.”
The more wealth concentrates, the more nonprofits chase after these less-than-empathetic rich for donations. The priorities of these rich, notes Kurtzleben, become the priorities for more and more nonprofits.
The end result? Elite universities get mega-million-dollar donations to build mahogany-appointed student dorms. Art museums get new wings. Hospitals get windfalls to tackle the diseases that spook the high-end set.
Some in that set do seem to sense the growing disconnect between real need and real resources. Last week billionaire hedge fund manager David Einhorn announced a $50 million gift to help Cornell University set students up in “real-world experiences” that address the challenges hard-pressed communities face.
“When you go out beyond the classroom and into the community and find problems and have to deal with people in the real world,” says Einhorn, “you develop skills for empathy.”
True enough — but in a society growing ever more unequal and separate, not enough. In that society — our society — the privileged will continue to go “blind to how people outside their own class are living,” as Danielle Kurtzleben puts it.
We need, in short, much more than Empathy 101. We need more equality.
He’s the heir to Richard Nixon, not Saul Alinsky.
Back in 2008, Boston University professor Andrew Bacevich wrote an article for this magazine making a conservative case for Barack Obama. While much of it was based on disgust with the warmongering and budgetary profligacy of the Republican Party under George W. Bush, which he expected to continue under 2008 Republican nominee Sen. John McCain, Bacevich thought Obama at least represented hope for ending the Iraq War and shrinking the national-security state.
I wrote a piece for the New Republic soon afterward about the Obamacon phenomenon—prominent conservatives and Republicans who were openly supporting Obama. Many saw in him a classic conservative temperament: someone who avoided lofty rhetoric, an ambitious agenda, and a Utopian vision that would conflict with human nature, real-world barriers to radical reform, and the American system of government.
Among the Obamacons were Ken Duberstein, Ronald Reagan’s chief of staff; Charles Fried, Reagan’s solicitor general; Ken Adelman, director of the Arms Control and Disarmament Agency for Reagan; Jeffrey Hart, longtime senior editor of National Review; Colin Powell, Reagan’s national security adviser and secretary of state for George W. Bush; and Scott McClellan, Bush’s press secretary. There were many others as well.
According to exit polls in 2008, Obama ended up with 20 percent of the conservative vote. Even in 2012, after four years of relentless conservative attacks, he still got 17 percent of the conservative vote, with 11 percent of Tea Party supporters saying they cast their ballots for Obama.
They were not wrong. In my opinion, Obama has governed as a moderate conservative—essentially as what used to be called a liberal Republican before all such people disappeared from the GOP. He has been conservative to exactly the same degree that Richard Nixon basically governed as a moderate liberal, something no conservative would deny today. (Ultra-leftist Noam Chomsky recently called Nixon “the last liberal president.”)
Here’s the proof:
One of Obama’s first decisions after the election was to keep national-security policy essentially on automatic pilot from the Bush administration. He signaled this by announcing on November 25, 2008, that he planned to keep Robert M. Gates on as secretary of defense. Arguably, Gates had more to do with shaping Republican foreign and defense policy between the two Bush presidencies than any other individual, serving successively as deputy national security adviser in the White House, director of Central Intelligence, and secretary of defense.
Another early indication of Obama’s hawkishness was naming his rival for the Democratic nomination, Sen. Hillary Clinton, as secretary of state. During the campaign, Clinton ran well to his right on foreign policy, so much so that she earned the grudging endorsement of prominent neoconservatives such as Bill Kristol and David Brooks.
Obama, Kristol told the Washington Post in August 2007, “is becoming the antiwar candidate, and Hillary Clinton is becoming the responsible Democrat who could become commander in chief in a post-9/11 world.” Writing in the New York Times on February 5, 2008, Brooks praised Clinton for hanging tough on Iraq “through the dark days of 2005.”
Right-wing columnist Ann Coulter found Clinton more acceptable on national-security policy than even the eventual Republican nominee, Senator McCain. Clinton, Coulter told Fox’s Sean Hannity on January 31, 2008, was “more conservative than he [McCain] is. I think she would be stronger in the war on terrorism.” Coulter even said she would campaign for Clinton over McCain in a general-election matchup.
After Obama named Clinton secretary of state, there was “a deep sigh” of relief among Republicans throughout Washington, according to reporting by The Daily Beast’s John Batchelor. He noted that not a single Republican voiced any public criticism of her appointment.
By 2011, Republicans were so enamored with Clinton’s support for their policies that Dick Cheney even suggested publicly that she run against Obama in 2012. The irony is that as secretary of state, Clinton was generally well to Obama’s left, according to Vali Nasr’s book The Dispensable Nation. This may simply reflect her assumption of the State Department’s historical role as the dovish voice in every administration. Or it could mean that Obama is far more hawkish than conservatives have given him credit for.
Although Obama followed through on George W. Bush’s commitment to pull U.S. troops out of Iraq in 2011, in 2014 he announced a new campaign against ISIS, an Islamic militant group based in Syria and Iraq.
With the economy collapsing, the first major issue confronting Obama in 2009 was some sort of economic stimulus. Christina Romer, chair of the Council of Economic Advisers, whose academic work at the University of California, Berkeley, frequently focused on the Great Depression, estimated that the stimulus needed to be in the range of $1.8 trillion, according to Noam Scheiber’s book The Escape Artists.
The American Recovery and Reinvestment Act was enacted in February 2009 with a gross cost of $816 billion. Although this legislation was passed without a single Republican vote, it is foolish to assume that the election of McCain would have resulted in savings of $816 billion. There is no doubt that he would have put forward a stimulus plan of roughly the same order of magnitude, but tilted more toward Republican priorities.
A Republican stimulus would undoubtedly have had more tax cuts and less spending, even though every serious study has shown that tax cuts are the least effective method of economic stimulus in a recession. Even so, tax cuts made up 35 percent of the budgetary cost of the stimulus bill—$291 billion—despite an estimate from Obama’s Council of Economic Advisers that tax cuts raised gross domestic product by barely $1 for every $1 of tax cut. By contrast, every $1 of government purchases raised GDP by $1.55. Obama also extended the Bush tax cuts for two years in 2010.
It’s worth remembering as well that Bush did not exactly bequeath Obama a good fiscal hand. Fiscal year 2009 began on October 1, 2008, and one third of it was baked in the cake the day Obama took the oath of office. On January 7, 2009, the Congressional Budget Office projected significant deficits without considering any Obama initiatives. It estimated a deficit of $1.186 trillion for 2009 with no change in policy. The Office of Management and Budget estimated in November of that year that Bush-era policies, such as Medicare Part D, were responsible for more than half of projected deficits over the next decade.
Republicans give no credit to Obama for the significant deficit reduction that has occurred on his watch—just as they ignore the fact that Bush inherited a projected budget surplus of $5.6 trillion over the following decade, which he turned into an actual deficit of $6.1 trillion, according to a CBO study—but the improvement is real.
Republicans would have us believe that their tight-fisted approach to spending is what brought down the deficit. But in fact, Obama has been very conservative, fiscally, since day one, to the consternation of his own party. According to reporting by the Washington Post and New York Times, Obama actually endorsed much deeper cuts in spending and the deficit than did the Republicans during the 2011 budget negotiations, but Republicans walked away.
Obama’s economic conservatism extends to monetary policy as well. His Federal Reserve appointments have all been moderate to conservative, well within the economic mainstream. He even reappointed Republican Ben Bernanke as chairman in 2009. Many liberals have faulted Obama for not appointing board members willing to be more aggressive in using monetary policy to stimulate the economy and reduce unemployment.
Obama’s other economic appointments, such as Larry Summers at the National Economic Council and Tim Geithner at Treasury, were also moderate to conservative. Summers served on the Council of Economic Advisers staff in Reagan’s White House. Geithner joined the Treasury during the Reagan administration and served throughout the George H.W. Bush administration.
Contrary to rants that Obama’s 2010 health reform, the Patient Protection and Affordable Care Act (ACA), is the most socialistic legislation in American history, the reality is that it is virtually textbook Republican health policy, with a pedigree from the Heritage Foundation and Massachusetts Gov. Mitt Romney, among others.
It’s important to remember that historically the left-Democratic approach to healthcare reform was always based on a fully government-run system such as Medicare or Medicaid. During debate on health reform in 2009, this approach was called “single payer,” with the government being the single payer. One benefit of this approach is cost control: the government could use its monopsony buying power to force down prices just as Walmart does with its suppliers.
Conservatives wanted to avoid too much government control and were adamantly opposed to single-payer. But they recognized that certain problems required more than a pure free-market solution. One problem in particular is covering people with pre-existing conditions, one of the most popular provisions in ACA. The difficulty is that people may wait until they get sick before buying insurance and then expect full coverage for their conditions. Obviously, this free-rider problem would bankrupt the health-insurance system unless there was a fix.
The conservative solution was the individual mandate—forcing people to buy private health insurance, with subsidies for the poor. This approach was first put forward by Heritage Foundation economist Stuart Butler in a 1989 paper, “A Framework for Reform,” published in a Heritage Foundation book, A National Health System for America. In it, Butler said the number one element of a conservative health system was this: “Every resident of the U.S. must, by law, be enrolled in an adequate health care plan to cover major health costs.” He went on to say:
Under this arrangement, all households would be required to protect themselves from major medical costs by purchasing health insurance or enrolling in a prepaid health plan. The degree of financial protection can be debated, but the principle of mandatory family protection is central to a universal health care system in America.
In 1991, prominent conservative health economist Mark V. Pauly also endorsed the individual mandate as central to healthcare reform. In an article in the journal Health Affairs, Pauly said:
All citizens should be required to obtain a basic level of health insurance. Not having health insurance imposes a risk of delaying medical care; it also may impose costs on others, because we as a society provide care to the uninsured. … Permitting individuals to remain uninsured results in inefficient use of medical care, inequity in the incidence of costs of uncompensated care, and tax-related distortions.
In 2004, Senate Majority Leader Bill Frist (R-Tenn.) endorsed an individual mandate in a speech to the National Press Club. “I believe higher-income Americans today do have a societal and personal responsibility to cover in some way themselves and their children,” he said. Even libertarian Ron Bailey, writing in Reason, conceded the necessity of a mandate in a November 2004 article titled, “Mandatory Health Insurance Now!” Said Bailey: “Why shouldn’t we require people who now get health care at the expense of the rest of us pay for their coverage themselves? … Mandatory health insurance would not be unlike the laws that require drivers to purchase auto insurance or pay into state-run risk pools.”
Among those enamored with the emerging conservative health reform based on an individual mandate was Mitt Romney, who was elected governor of Massachusetts in 2002. In 2004, he put forward a state health reform plan to which he later added an individual mandate. As Romney explained in June 2005, “No more ‘free riding,’ if you will, where an individual says: ‘I’m not going to pay, even though I can afford it. I’m not going to get insurance, even though I can afford it. I’m instead going to just show up and make the taxpayers pay for me’.”
The following month, Romney emphasized his point: “We can’t have as a nation 40 million people—or, in my state, half a million—saying, ‘I don’t have insurance, and if I get sick, I want someone else to pay’.”
In 2006, Governor Romney signed the Massachusetts health reform into law, including the individual mandate. Defending his legislation in a Wall Street Journal article, he said:
I proposed that everyone must either purchase a product of their choice or demonstrate that they can pay for their own health care. It’s a personal responsibility principle.
Some of my libertarian friends balk at what looks like an individual mandate. But remember, someone has to pay for the health care that must, by law, be provided: Either the individual pays or the taxpayers pay. A free ride on government is not libertarian.
As late as 2008, Robert Moffitt of the Heritage Foundation was still defending the individual mandate as reasonable, non-ideological and nonpartisan in an article for the Harvard Health Policy Review.
So what changed just a year later, when Obama put forward a health-reform plan that was almost a carbon copy of those previously endorsed by the Heritage Foundation, Mitt Romney, and other Republicans? The only difference was that the plan was now supported by a Democratic president whom Republicans had vowed to fight on every single issue, according to Robert Draper’s book Do Not Ask What Good We Do.
Senior Obama adviser David Axelrod later admitted that Romney’s Massachusetts plan was the “template” for Obama’s plan. “That work inspired our own health plan,” he said in 2011. But no one in the White House said so back in 2009. I once asked a senior Obama aide why. His answer was that once Republicans refused to negotiate on health reform and Obama had to win only with Democratic votes, it would have been counterproductive, politically, to point out the Obama plan’s Republican roots.
The left wing of the House Democratic caucus was dubious enough about Obama’s plan as it was, preferring a single-payer plan. Thus it was necessary for Obama to portray his plan as more liberal than it really was to get the Democratic votes needed for passage, which of course played right into the Republicans’ hands. But the reality is that ACA remains a very modest reform based on Republican and conservative ideas.
Other Rightward Policies
Below are a few other issues on which Obama has consistently tilted rightward:
Drugs: Although it has become blindingly obvious that throwing people in jail for marijuana use is insane policy and a number of states have moved to decriminalize its use, Obama continued the harsh anti-drug policy of previous administrations, and his Department of Justice continues to treat marijuana as a dangerous drug. As Time put it in 2012: “The Obama Administration is cracking down on medical marijuana dispensaries and growers just as harshly as the Administration of George W. Bush did.”
National-security leaks: At least since Nixon, a hallmark of Republican administrations has been an obsession with leaks of unauthorized information, and pushing the envelope on government snooping. By all accounts, Obama’s penchant for secrecy and withholding information from the press is on a par with the worst Republican offenders. Journalist Dan Froomkin charges that Obama has essentially institutionalized George W. Bush’s policies. Nixon operative Roger Stone thinks Obama has actually gone beyond what his old boss tried to do.
Race: I think almost everyone, including me, thought the election of our first black president would lead to new efforts to improve the dismal economic condition of African-Americans. In fact, Obama has seldom touched on the issue of race, and when he has he has emphasized the conservative themes of responsibility and self-help. Even when Republicans have suppressed minority voting, in a grotesque campaign to fight nonexistent voter fraud, Obama has said and done nothing.
Gay marriage: Simply stating public support for gay marriage would seem to have been a no-brainer for Obama, but it took him two long years to speak out on the subject and only after being pressured to do so.
Corporate profits: Despite Republican harping about Obama being anti-business, corporate profits and the stock market have risen to record levels during his administration. Even those progressives who defend Obama against critics on the left concede that he has bent over backward to protect corporate profits. As Theda Skocpol and Lawrence Jacobs put it: “In practice, [Obama] helped Wall Street avert financial catastrophe and furthered measures to support businesses and cater to mainstream public opinion. … He has always done so through specific policies that protect and further opportunities for businesses to make profits.”
I think Cornel West nailed it when he recently charged that Obama has never been a real progressive in the first place. “He posed as a progressive and turned out to be counterfeit,” West said. “We ended up with a Wall Street presidency, a drone presidency, a national security presidency.”
I don’t expect any conservatives to recognize the truth of Obama’s fundamental conservatism for at least a couple of decades—perhaps only after a real progressive presidency. In any case, today they are too invested in painting him as the devil incarnate in order to frighten grassroots Republicans into voting to keep Obama from confiscating all their guns, throwing them into FEMA re-education camps, and other nonsense that is believed by many Republicans. But just as they eventually came to appreciate Bill Clinton’s core conservatism, Republicans will someday see that Obama was no less conservative.
Bruce Bartlett is the author of The Benefit and the Burden: Tax Reform—Why We Need It and What It Will Take.
The bottom line is that the global financial meltdown of 2008-’09 and the European debt crisis of 2010-’12 have never truly been resolved. After governments disbursed record bailouts in the wake of the Wall Street crash, the world’s leading central banks simply papered over the remaining weaknesses by subsidizing essentially defunct financial institutions to the tune of trillions of dollars, buying up swaths of toxic assets and providing loans at negative real interest rates in the hope of reviving the credit system and saving the banks.
But instead of fixing the underlying problems of structural indebtedness, record unemployment, rampant inequality and a seemingly never-ending recession, these measures have only made matters worse. For one, they have fed an enormous credit bubble that dwarfs even the previous one, which nearly sank the world economy back in 2008. The latest Geneva report by the International Center for Monetary and Banking Studies notes that total world debt — excluding that of the financial sector — has shot up 38% since the collapse of Lehman Brothers, reaching new historic highs. Last year, global public and private debt stood at 212% of global output, up from 180% in 2008.
With this tidal wave of cheap credit sloshing through the world financial system, investors went looking for the highest yields. Since the US housing market and European bond markets were still reeling from the last crisis, they turned towards the stock exchange. Between mid-2013 and mid-2014, the average global return on equity rose to a whopping 18 percent. Yale economist Robert Shiller has shown that “the gap between stock prices and corporate earnings is now larger than it was in the previous pre-crisis periods,” and “if markets were to return to their normal earning levels, the average stock market in the world should fall by about 30 per cent.”
At the same time, trillions of dollars found their way into the housing markets of emerging economies. In Brazil, for instance, foreign investment in urban transformation projects for the World Cup and the Olympics led to a speculative housing bubble that saw residential property prices rise by more than 80% between 2007 and 2013. In Turkey, too, a massive influx of foreign credit fed a construction boom that has dramatically transformed the urban landscape, with skyscrapers, shopping malls and infrastructural mega-projects mushrooming across the Istanbul skyline. In both countries the resultant social displacements have led to sustained protest and social unrest.
The mother of all credit bubbles, however, has been quietly building up elsewhere, in China, where the government — in a desperate bid to ward off the spillover effects of the Great Recession — has pumped over $13 trillion worth of credit into the economy. This has in turn given rise to a monstrous $4.4 trillion shadow banking system and a housing bubble of truly epic proportions, leaving ghost towns sprawled across the country. It has also turned China into one of the most indebted developing countries in the world, with total public and private debt (excluding financial institutions) skyrocketing to 217% of GDP last year, up from 147% of GDP in 2008.
The shadow banking system is not just a Chinese problem. In the United States and Europe, debt creation is also increasingly the product of off-balance sheet lending by non-bank financial institutions like hedge funds, insurance companies, private equity funds and broker dealers. The shadow banking system remains largely unregulated, allowing lenders to take much greater risks than ordinary banks could. For this reason, the IMF has warned that the world’s $70 trillion shadow banking system poses a major threat to global financial stability. In the US, shadow banking activities already amount to 2.5 times the size of conventional bank activity.
Meanwhile, troubling signs are emerging in Europe, where four years of austerity have trapped the world’s largest economy in a debilitating deflationary spiral. Even Germany, the EU’s economic powerhouse, is falling back into recession, while Greece returned to the eye of the storm after its stock market went into free fall last week. Greek bonds are now trading far above the 7% mark, which back in 2011 was widely considered to be “the point of no return.” Investors appear to be concerned over the health of Greek banks, the rising popularity of the anti-austerity party SYRIZA, and government plans to exit the bailout early in order to stem SYRIZA’s rise in the polls.
Add to this the growing geopolitical instability in Ukraine and the Middle East, and the conditions appear to be ripe for another round of market panic. Sooner or later, one of the bubbles is bound to pop — and the consequences will not be pretty. What is different this time around is that total debt levels are now even more unmanageable than they were back in 2008, while governments — having already used up most of their fiscal and monetary firepower over the past six years — are even less capable of mounting a proper response. We do not yet know when or where the next crisis will strike, but when it does it will be big. This time we better come prepared.
Jerome Roos is a PhD researcher in International Political Economy at the European University Institute, and founding editor of ROAR Magazine. This article was written as part of his regular column for TeleSUR English.
The end of the Old World:
The 19th century saw an explosion of changes in America. The way people saw the world would never be the same
It has become customary to mark the beginning of the Industrial Revolution in eighteenth-century England. Historians usually identify two or sometimes three phases of the Industrial Revolution, which are associated with different sources of energy and related technologies. In preindustrial Europe, the primary energy sources were human, animal, and natural (wind, water, and fire).
By the middle of the eighteenth century, much of Europe had been deforested to supply wood for domestic and industrial consumption. J.R. McNeill points out that the combination of energy sources, machines, and ways of organizing production came together to form “clusters” that determined the course of industrialization and, by extension, shaped economic and social developments. A later cluster did not immediately replace its predecessor; rather, different regimes overlapped, though often they were not integrated. With each new cluster, however, the speed of production increased, leading to differential rates of production. The first phase of the Industrial Revolution began around 1750 with the shift from human and animal labor to machine-based production. This change was brought about by the use of water power and later steam engines in the textile mills of Great Britain.
The second phase dates from the 1820s, when there was a shift to fossil fuels—primarily coal. By the middle of the nineteenth century, another cluster emerged from the integration of coal, iron, steel, and railroads. The fossil-fuel regime was not, of course, limited to coal. Edwin L. Drake drilled the first commercially successful oil well in Titusville, Pennsylvania, in 1859, and the big gushers erupted first in the 1870s in Baku on the Caspian Sea and later at Spindletop, Texas (1901). Oil, however, did not replace coal as the main source of fuel in transportation until the 1930s. Coal, of course, is still widely used in manufacturing today because it remains one of the cheapest sources of energy. Though global consumption of coal has leveled off since 2000, its use continues to increase in China. Indeed, China currently uses almost as much coal as the rest of the world, and reliable sources predict that by 2017 India will be importing as much coal as China.
The third phase of the Industrial Revolution began in the closing decades of the nineteenth century. The development of technologies for producing and distributing electricity cheaply and efficiently further transformed industrial processes and created the possibility for new systems of communication as well as the unprecedented capability for the production and dissemination of new forms of entertainment, media, and information. The impact of electrification can be seen in four primary areas.
First, the availability of electricity made the assembly line and mass production possible. When Henry Ford adapted technology used in Chicago’s meatpacking houses to produce cars (1913), he set in motion changes whose effects are still being felt. Second, the introduction of the incandescent light bulb (1881) transformed private and public space. As early as the late 1880s, electrical lighting was used in homes, factories, and on streets. Assembly lines and lights inevitably led to the acceleration of urbanization. Third, the invention of the telegraph (ca. 1840) and telephone (1876) enabled the communication and transmission of information across greater distances at faster rates of speed than ever before. Finally, electronic tabulating machines, invented by Herman Hollerith in 1889, made it possible to collect and manage data in new ways. Though his contributions have not been widely acknowledged, Hollerith actually forms a bridge between the Industrial Revolution and the so-called post-industrial information age. The son of German immigrants, Hollerith graduated from Columbia University’s School of Mines and went on to found the Tabulating Machine Company (1896). He created the first automatic card-feed mechanism and key-punch system with which an operator using a keyboard could process as many as three hundred cards an hour. Under the direction of Thomas J. Watson, Hollerith’s company merged with three others in 1911 to form the Computing-Tabulating-Recording Company. In 1924, the company was renamed International Business Machines Corporation (IBM).
There is much to be learned from such periodizations, but they have serious limitations. The developments I have identified overlap and interact in ways that subvert any simple linear narrative. Instead of thinking merely in terms of resources, products, and periods, it is also important to think in terms of networks and flows. The foundation for today’s wired world was laid more than two centuries ago. Beginning in the early nineteenth century, local communities, then states and nations, and finally the entire globe became increasingly connected. Though varying from time to time and place to place, there were two primary forms of networks: those that directed material flows (fuels, commodities, products, people), and those that channeled immaterial flows (communications, information, data, images, and currencies). From the earliest stages of development, these networks were inextricably interconnected. There would have been no telegraph network without railroads and no railroad system without the telegraph network, and neither could have existed without coal and iron. Networks, in other words, are never separate but form networks of networks in which material and immaterial flows circulate. As these networks continued to expand, and became more and more complex, there was a steady increase in the importance of immaterial flows, even for material processes. The combination of expanding connectivity and the growing importance of information technologies led to the acceleration of both material and immaterial flows. This emerging network of networks created positive feedback loops in which the rate of acceleration increased.
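The "positive feedback loops in which the rate of acceleration increased" can be sketched with a toy simulation. This is my own illustration, not a model from the text: let a "material" stock M and an "immaterial" stock I each grow in proportion to the other's current size, so that every increment enlarges the next one.

```python
# Toy illustration (not from the source text): two coupled stocks,
# M ("material" flows: rails, coal, goods) and I ("immaterial" flows:
# messages, data, prices). Each one's growth is proportional to the
# other's current size, so the increments themselves keep growing --
# a positive feedback loop in which the rate of change accelerates.

def simulate(steps=5, m0=1.0, i0=1.0, k=0.5):
    m, i = m0, i0
    history = []
    for _ in range(steps):
        dm = k * i  # material growth driven by informational reach
        di = k * m  # informational growth driven by material reach
        m, i = m + dm, i + di
        history.append((m, i))
    return history

growth = simulate()
increments = [b[0] - a[0] for a, b in zip(growth, growth[1:])]
print(growth)      # both stocks grow geometrically
print(increments)  # and each step's increment exceeds the last
```

With symmetric starting values the coupling reduces to geometric growth; the point of the sketch is simply that mutual reinforcement yields accelerating, not linear, expansion.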
While developments in transportation, communications, information, and management were all important, industrialization as we know it is inseparable from the transportation revolution that trains created. In his foreword to Wolfgang Schivelbusch’s informative study “The Railway Journey: The Industrialization of Time and Space in the 19th Century,” Alan Trachtenberg writes, “Nothing else in the nineteenth century seemed as vivid and dramatic a sign of modernity as the railroad. Scientists and statesmen joined capitalists in promoting the locomotive as the engine of ‘progress,’ a promise of imminent Utopia.”
In England, railway technology developed as an extension of coal mining. The shift from human and natural sources of energy to fossil fuels created a growing demand for coal. While steam engines had been used since the second half of the eighteenth century in British mines to run fans and pumps like those my great-grandfather had operated in the Pennsylvania coalfields, it was not until 1801, when Oliver Evans invented a high-pressure, mobile steam engine, that locomotives were produced. By the beginning of the nineteenth century, the coal mined in the area around Newcastle was being transported throughout England on rail lines. It did not take long for this new rapid transit system to develop—by the 1820s, railroads had expanded to carry passengers, and half a century later rail networks spanned all of Europe.
What most impressed people about this new transportation network was its speed. The average speed of early railways in England was twenty to thirty miles per hour, which was approximately three times faster than stagecoaches. The increase in speed transformed the experience of time and space. Countless writers from this era use the same words to describe train travel as Karl Marx had used to describe emerging global financial markets. Trains, like capital, “annihilate space with time.”
Traveling on the recently opened Paris-Rouen-Orléans railway line in 1843, the German poet, journalist, and literary critic Heinrich Heine wrote: “What changes must now occur, in our way of looking at things, in our notions! Even the elementary concepts of time and space have begun to vacillate. Space is killed by the railways, and we are left with time alone. . . . Now you can travel to Orléans in four and a half hours, and it takes no longer to get to Rouen. Just imagine what will happen when the lines to Belgium and Germany are completed and connected up with their railways! I feel as if the mountains and forests of all countries were advancing on Paris. Even now, I can smell the German linden trees; the North Sea’s breakers are rolling against my door.” This new experience of space and time that speed brought about had profound psychological effects that I will consider later.
Throughout the nineteenth century, the United States lagged behind Great Britain in terms of industrial capacity: in 1869, England was the source of 20 percent of the world’s industrial production, while the United States contributed just 7 percent. By the start of World War I, however, America’s industrial capacity surpassed that of England: that is, by 1913, the scales had tipped—32 percent came from the United States and only 14 percent from England. While England had a long history before the Industrial Revolution, the history of the United States effectively begins with the Industrial Revolution. There are other important differences as well. Whereas in Great Britain the transportation revolution grew out of the industrialization of manufacturing primarily, but not exclusively, in textile factories, in the United States mechanization began in agriculture and spread to transportation before it transformed manufacturing. In other words, in Great Britain, the Industrial Revolution in manufacturing came first and the transportation revolution second, while in the United States, this order was reversed.
When the Industrial Revolution began in the United States, most of the country beyond the Eastern Seaboard was largely undeveloped. Settling this uncharted territory required the development of an extensive transportation network. Throughout the early decades of the nineteenth century, the transportation system consisted of a network of rudimentary roads connecting towns and villages with the countryside. In New England, Boston, New York, Philadelphia, Baltimore, and Washington were joined by highways suitable for stagecoach travel. Inland travel was largely confined to rivers and waterways. The completion of the Erie Canal (1817–25) marked the first stage in the development of an extensive network linking rivers, lakes, canals, and waterways along which produce and people flowed. Like so much else in America, the railroad system began in Boston. By 1840, only 18,181 miles of track had been laid. During the following decade, however, there was an explosive expansion of the nation’s rail system financed by securities and bonds traded on stock markets in America and London. By the 1860s, the railroad network east of the Mississippi River was using routes roughly similar to those employed today.
Where some saw loss, others saw gain. In 1844, inveterate New Englander Ralph Waldo Emerson associated the textile loom with the railroad when he reflected, “Not only is distance annihilated, but when, as now, the locomotive and the steamboat, like enormous shuttles, shoot every day across the thousand various threads of national descent and employment, and bind them fast in one web, an hourly assimilation goes forward, and there is no danger that local peculiarities and hostilities should be preserved.” Gazing at tracks vanishing in the distance, Emerson saw a new world opening that, he believed, would overcome the parochialisms of the past. For many people in the nineteenth century, this new world promising endless resources and endless opportunity was the American West. A transcontinental railroad had been proposed as early as 1820 but was not completed until 1869.
On May 10, 1869, Leland Stanford, a former governor of California and, in 1891, the founder of Stanford University, drove the final spike in the railroad that joined east and west. Nothing would ever be the same again. This event was not merely local but also, as Emerson had surmised, global. Like the California gold and Nevada silver spike that Stanford had driven to join the rails, the material transportation network and immaterial communication network intersected at that moment to create what Rebecca Solnit correctly identifies as “the first live national media event.” The spike “had been wired to connect to the telegraph lines that ran east and west along the railroad tracks. The instant Stanford struck the spike, a signal would go around the nation. . . . The signal set off cannons in San Francisco and New York. In the nation’s capital the telegraph signal caused a ball to drop, one of the balls that visibly signaled the exact time in observatories in many places then (of which the ball dropped in New York’s Times Square at the stroke of the New Year is a last relic). The joining of the rails would be heard in every city equipped with fire-alarm telegraphs, in Philadelphia, Omaha, Buffalo, Chicago, and Sacramento. Celebrations would be held all over the nation.” This carefully orchestrated spectacle, which was made possible by the convergence of multiple national networks, was worthy of the future Hollywood and the technological wizards of Silicon Valley whose relentless innovation Stanford’s university would later nourish. What most impressed people at the time was the speed of global communication, which now is taken for granted.
Flickering Images—Changing Minds
Industrialization not only changes systems of production and distribution of commodities and products, but also imposes new disciplinary practices that transform bodies and change minds. During the early years of train travel, bodily acceleration had an enormous psychological effect that some people found disorienting and others found exhilarating. The mechanization of movement created what Ann Friedberg describes as the “mobile gaze,” which transforms one’s surroundings and alters both the content and, more important, the structure of perception. This mobile gaze takes two forms: the person can move while the surroundings remain immobile (train, bicycle, automobile, airplane, elevator), or the person can remain immobile while the surroundings move (panorama, kinetoscope, film).
When considering the impact of trains on the mobilization of the gaze, it is important to note that different designs for railway passenger cars had different perceptual and psychological effects. Early European passenger cars were modeled on stagecoaches in which individuals had seats in separate compartments; early American passenger cars, by contrast, were modeled on steamboats in which people shared a common space and were free to move around. The European design tended to reinforce social and economic hierarchies that the American design tried to break down. Eventually, American railroads adopted the European model of fixed individual seating but had separate rows facing in the same direction rather than different compartments. As we will see, the resulting compartmentalization of perception anticipates the cellularization of attention that accompanies today’s distributed high-speed digital networks.
During the early years, there were numerous accounts of the experience of railway travel by ordinary people, distinguished writers, and even physicians, in which certain themes recur. The most common complaint is the sense of disorientation brought about by the experience of unprecedented speed. There are frequent reports of the dispersion and fragmentation of attention that are remarkably similar to contemporary personal and clinical descriptions of attention-deficit hyperactivity disorder (ADHD). With the landscape incessantly rushing by faster than it could be apprehended, people suffered overstimulation, which created a sense of psychological exhaustion and physical distress. Some physicians went so far as to maintain that the experience of speed caused “neurasthenia, neuralgia, nervous dyspepsia, early tooth decay, and even premature baldness.”
In 1892, Sir James Crichton-Browne attributed the significant increase in the mortality rate between 1859 and 1888 to “the tension, excitement, and incessant mobility of modern life.” Commenting on these statistics, Max Nordau might well be describing the harried pace of life today. “Every line we read or write, every human face we see, every conversation we carry on, every scene we perceive through the window of the flying express, sets in activity our sensory nerves and our brain centers. Even the little shocks of railway travelling, not perceived by consciousness, the perpetual noises and the various sights in the streets of a large town, our suspense pending the sequel of progressing events, the constant expectation of the newspaper, of the postman, of visitors, cost our brains wear and tear.” During the years around the turn of the last century, a sense of what Stephen Kern aptly describes as “cultural hypochondria” pervaded society. Like today’s parents concerned about the psychological and physical effects of their kids playing video games, nineteenth-century physicians worried about the effect of people sitting in railway cars for hours watching the world rush by in a stream of images that seemed to be detached from real people and actual things.
In addition to the experience of disorientation, dispersion, fragmentation, and fatigue, rapid train travel created a sense of anxiety. People feared that with the increase in speed, machinery would spin out of control, resulting in serious accidents. An 1829 description of a train ride expresses the anxiety that speed created. “It is really flying, and it is impossible to divest yourself of the notion of instant death to all upon the least accident happening.” A decade and a half later, an anonymous German explained that the reason for such anxiety is the always “close possibility of an accident, and the inability to exercise any influence on the running of the cars.” When several serious accidents actually occurred, anxiety spread like a virus. Anxiety, however, is always a strange experience—it not only repels, it also attracts; danger and the anxiety it brings are always part of speed’s draw.
Perhaps this was a reason that not everyone found trains so distressing. For some people, the experience of speed was “dreamlike” and bordered on ecstasy. In 1843, Emerson wrote in his Journals, “Dreamlike travelling on the railroad. The towns which I pass between Philadelphia and New york make no distinct impression. They are like pictures on a wall.” The movement of the train creates a loss of focus that blurs the mobile gaze. A few years earlier, Victor Hugo’s description of train travel sounds like an acid trip as much as a train trip. In either case, the issue is speed. “The flowers by the side of the road are no longer flowers but flecks, or rather streaks, of red or white; there are no longer any points, everything becomes a streak; grain fields are great shocks of yellow hair; fields of alfalfa, long green tresses; the towns, the steeples, and the trees perform a crazy mingling dance on the horizon; from time to time, a shadow, a shape, a specter appears and disappears with lightning speed behind the window; it’s a railway guard.” The flickering images fleeting past train windows are like a film running too fast to comprehend.
Transportation was not the only thing accelerating in the nineteenth century—the pace of life itself was speeding up as never before. Listening to the whistle of the train headed to Boston from his cabin beside Walden Pond, Thoreau mused, “The startings and arrivals of the cars are now the epochs in the village day. They go and come with such regularity and precision, and their whistle can be heard so far, that the farmers set their clocks by them, and thus one well conducted institution regulates a whole country. Have not men improved somewhat in punctuality since the railroad was invented? Do they not talk and think faster in the depot than they did in the stage office? There is something electrifying in the atmosphere of the former place. I have been astonished by some of the miracles it has wrought.” And yet Thoreau, more than others, knew that these changes also had a dark side.
The transition from agricultural to industrial capitalism brought with it a massive migration from the country, where life was slow and governed by natural rhythms, to the city, where life was fast and governed by mechanical, standardized time. The convergence of industrialization, transportation, and electrification made urbanization inevitable. The faster that cities expanded, the more some writers and poets idealized rustic life in the country. Nowhere is such idealization more evident than in the writings of British romantics. The rapid swirl of people, machines, and commodities created a sense of vertigo as disorienting as train travel. Wordsworth writes in The Prelude,
Oh, blank confusion! True epitome
Of what the mighty City is herself
To thousands upon thousands of her sons,
Living among the same perpetual whirl
Of trivial objects, melted and reduced
To one identity, by differences
That have no law, no meaning, no end—
By 1850, fifteen cities in the United States had a population exceeding 50,000. New York was the largest (1,080,330), followed by Philadelphia (565,529), Baltimore (212,418), and Boston (177,840). Increasing domestic trade that resulted from the railroad and growing foreign trade that accompanied improved ocean travel contributed significantly to this growth. While commerce was prevalent in early cities, manufacturing expanded rapidly during the latter half of the eighteenth century. The most important factor contributing to nineteenth-century urbanization was the rapid development of the money economy. Once again, it is a matter of circulating flows, not merely of human bodies but of mobile commodities. Money and cities formed a positive feedback loop—as the money supply grew, cities expanded, and as cities expanded, the money supply grew.
The fast pace of urban life was as disorienting for many people as the speed of the train. In his seminal essay “The Metropolis and Mental Life,” Georg Simmel observes, “The psychological foundation upon which the metropolitan individuality is erected, is the intensification of emotional life due to the swift and continuous shift of external and internal stimuli. Man is a creature whose existence is dependent on differences, i.e., his mind is stimulated by the difference between present impressions and those which have preceded. . . . To the extent that the metropolis creates these psychological conditions—with every crossing of the street, with the tempo and multiplicity of economic, occupational and social life—it creates the sensory foundations of mental life, and in the degree of awareness necessitated by our organization as creatures dependent on differences, a deep contrast with the slower, more habitual, more smooth flowing rhythm of the sensory-mental phase of small town and rural existence.” The expansion of the money economy created a fundamental contradiction at the heart of metropolitan life. On the one hand, cities brought together different people from all backgrounds and walks of life; on the other hand, emerging industrial capitalism leveled these differences by disciplining bodies and programming minds. “Money,” Simmel continues, “is concerned only with what is common to all, i.e., with the exchange value which reduces all quality and individuality to a purely quantitative level.” The migration from country to city that came with the transition from agricultural to industrial capitalism involved a shift from homogeneous communities to heterogeneous assemblages of different people, from qualitative to quantitative methods of assessment and evaluation, from concrete to abstract networks for the exchange of goods and services, and from a slow to a fast pace of life.
I will consider further aspects of these disciplinary practices in Chapter 3; for now, it is important to understand the implications of the mechanization or industrialization of perception.
I have already noted similarities between the experience of looking through a window on a speeding train and the experience of watching a film that is running too fast. During the latter half of the nineteenth century a remarkable series of inventions transformed not only what people experienced in the world but how they experienced it: photography (Louis-Jacques-Mandé Daguerre, ca. 1837), the telegraph (Samuel F. B. Morse, ca. 1840), the stock ticker (Thomas Alva Edison, 1869), the telephone (Alexander Graham Bell, 1876), the chronophotographic gun (Étienne-Jules Marey, 1882), the kinetoscope (Edison, 1894), the zoopraxiscope (Eadweard Muybridge, 1893), the phantoscope (Charles Jenkins, 1894), and cinematography (Auguste and Louis Lumière, 1895). The way in which human beings perceive and conceive the world is not hardwired in the brain but changes with new technologies of production and reproduction.
Just as the screens of today’s TVs, computers, video games, and mobile devices are restructuring how we process experience, so too did new technologies at the end of the nineteenth century change the world by transforming how people apprehended it. While each innovation had a distinctive effect, there is a discernible overall trajectory to these developments. Industrial technologies of production and reproduction extended processes of dematerialization that eventually led first to consumer capitalism and then to today’s financial capitalism. The crucial variable in these developments is the way in which material and immaterial networks intersect to produce a progressive detachment of images, representations, information, and data from concrete objects and actual events. Marveling at what he regarded as the novelty of photographs, Oliver Wendell Holmes commented, “Form is henceforth divorced from matter. In fact, matter as a visible object is of no great use any longer, except as the mould on which form is shaped. Give us a few negatives of a thing worth seeing, taken from different points of view, and that is all we want of it. Pull it down or burn it up, if you please. . . . Matter in large masses must always be fixed and dear, form is cheap and transportable. We have got the fruit of creation now, and need not trouble ourselves about the core.”
Technologies for the reproduction and transmission of images and information expand the process of abstraction initiated by the money economy to create a play of freely floating signs without anything to ground, certify, or secure them. With new networks made possible by the combination of electrification and the invention of the telegraph, telephone, and stock ticker, communication was liberated from the strictures imposed by physical means of conveyance. In previous energy regimes, messages could be sent no faster than people, horses, carriages, trains, ships, or automobiles could move. Dematerialized words, sounds, information, and eventually images, by contrast, could be transmitted across great distances at high speed. With this dematerialization and acceleration, Marx’s prediction—that “everything solid melts into air”—was realized. But this was just the beginning. It would take more than a century for electrical currents to become virtual currencies whose transmission would approach the speed limit.
Excerpted from “Speed Limits: Where Time Went and Why We Have So Little Left,” by Mark C. Taylor, published October 2014 by Yale University Press. Copyright ©2014 by Mark C. Taylor. Reprinted by permission of Yale University Press.
Filmmaker Laura Poitras on Our Surveillance State
Here’s a Ripley’s Believe It or Not! stat from our new age of national security. How many Americans have security clearances? The answer: 5.1 million, a figure that reflects the explosive growth of the national security state in the post-9/11 era. Imagine the kind of system needed just to vet that many people for access to our secret world (to the tune of billions of dollars). We’re talking here about the total population of Norway and significantly more people than you can find in Costa Rica, Ireland, or New Zealand. And yet it’s only about 1.6% of the American population, while on ever more matters, the unvetted 98.4% of us are meant to be left in the dark.
For our own safety, of course. That goes without saying.
All of this offers a new definition of democracy in which we, the people, are to know only what the national security state cares to tell us. Under this system, ignorance is the necessary, legally enforced prerequisite for feeling protected. In this sense, it is telling that the only crime for which those inside the national security state can be held accountable in post-9/11 Washington is not potential perjury before Congress, or the destruction of evidence of a crime, or torture, or kidnapping, or assassination, or the deaths of prisoners in an extralegal prison system, but whistleblowing; that is, telling the American people something about what their government is actually doing. And that crime, and only that crime, has been prosecuted to the full extent of the law (and beyond) with a vigor unmatched in American history. To offer a single example, the only American to go to jail for the CIA’s Bush-era torture program was John Kiriakou, a CIA whistleblower who revealed the name of an agent involved in the program to a reporter.
In these years, as power drained from Congress, an increasingly imperial White House has launched various wars (redefined by its lawyers as anything but), as well as a global assassination campaign in which the White House has its own “kill list” and the president himself decides on global hits. Then, without regard for national sovereignty or the fact that someone is an American citizen (and upon the secret invocation of legal mumbo-jumbo), the drones are sent off to do the necessary killing.
And yet that doesn’t mean that we, the people, know nothing. Against increasing odds, there has been some fine reporting in the mainstream media by the likes of James Risen and Barton Gellman on the security state’s post-legal activities, and above all, despite the Obama administration’s regular use of the World War I-era Espionage Act, whistleblowers have stepped forward from within the government to offer us sometimes staggering amounts of information about the system that has been set up in our name but without our knowledge.
Among them, one young man, whose name is now known worldwide, stands out. In June of last year, thanks to journalist Glenn Greenwald and filmmaker Laura Poitras, Edward Snowden, a contractor for the NSA and previously the CIA, stepped into our lives from a hotel room in Hong Kong. With a treasure trove of documents that are still being released, he changed the way just about all of us view our world. He has been charged under the Espionage Act. If indeed he was a “spy,” then the spying he did was for us, for the American people and for the world. What he revealed to a stunned planet was a global surveillance state whose reach and ambitions were unique, a system based on a single premise: that privacy was no more and that no one was, in theory (and to a remarkable extent in practice), unsurveillable.
Its builders imagined only one exemption: themselves. This was undoubtedly at least part of the reason why, when Snowden let us peek in on them, they reacted with such over-the-top venom. Whatever they felt at a policy level, it’s clear that they also felt violated, something that, as far as we can tell, left them with no empathy whatsoever for the rest of us. One thing that Snowden proved, however, was that the system they built was ready-made for blowback.
Sixteen months after his NSA documents began to be released by the Guardian and the Washington Post, I think it may be possible to speak of the Snowden Era. And now, a remarkable new film, Citizenfour, which had its premiere at the New York Film Festival on October 10th and will open in select theaters nationwide on October 24th, offers us a window into just how it all happened. It is already being mentioned as a possible Oscar winner.
Director Laura Poitras, like reporter Glenn Greenwald, is now known almost as widely as Snowden himself, for helping facilitate his entry into the world. Her new film, the last in a trilogy she’s completed (the previous two being My Country, My Country on the Iraq War and The Oath on Guantanamo), takes you back to June 2013 and locks you in that Hong Kong hotel room with Snowden, Greenwald, Ewen MacAskill of the Guardian, and Poitras herself for eight days that changed the world. It’s a riveting, surprisingly unclaustrophobic, and unforgettable experience.
Before that moment, we were quite literally in the dark. After it, we have a better sense, at least, of the nature of the darkness that envelops us. Having seen her film in a packed house at the New York Film Festival, I sat down with Poitras in a tiny conference room at the Loews Regency Hotel in New York City to discuss just how our world has changed and her part in it.
Tom Engelhardt: Could you start by laying out briefly what you think we’ve learned from Edward Snowden about how our world really works?
Laura Poitras: The most striking thing Snowden has revealed is the depth of what the NSA and the Five Eyes countries [Australia, Canada, New Zealand, Great Britain, and the U.S.] are doing, their hunger for all data, for total bulk dragnet surveillance where they try to collect all communications and do it all sorts of different ways. Their ethos is “collect it all.” I worked on a story with Jim Risen of the New York Times about a document — a four-year plan for signals intelligence — in which they describe the era as being “the golden age of signals intelligence.” For them, that’s what the Internet is: the basis for a golden age to spy on everyone.
This focus on bulk, dragnet, suspicionless surveillance of the planet is certainly what’s most staggering. There were many programs that did that. In addition, you have both the NSA and the GCHQ [British intelligence] doing things like targeting engineers at telecoms. There was an article published at The Intercept that cited an NSA document Snowden provided, part of which was titled “I Hunt Sysadmins” [systems administrators]. They try to find the custodians of information, the people who are the gateway to customer data, and target them. So there’s this passive collection of everything, and then things that they can’t get that way, they go after in other ways.
I think one of the most shocking things is how little our elected officials knew about what the NSA was doing. Congress is learning from the reporting and that’s staggering. Snowden and [former NSA employee] William Binney, who’s also in the film as a whistleblower from a different generation, are technical people who understand the dangers. We laypeople may have some understanding of these technologies, but they really grasp the dangers of how they can be used. One of the most frightening things, I think, is the capacity for retroactive searching, so you can go back in time and trace who someone is in contact with and where they’ve been. Certainly, when it comes to my profession as a journalist, that allows the government to trace what you’re reporting, who you’re talking to, and where you’ve been. So no matter whether or not I have a commitment to protect my sources, the government may still have information that might allow them to identify whom I’m talking to.
TE: To ask the same question another way, what would the world be like without Edward Snowden? After all, it seems to me that, in some sense, we are now in the Snowden era.
LP: I agree that Snowden has presented us with choices on how we want to move forward into the future. We’re at a crossroads and we still don’t quite know which path we’re going to take. Without Snowden, just about everyone would still be in the dark about the amount of information the government is collecting. I think that Snowden has changed consciousness about the dangers of surveillance. We see lawyers who take their phones out of meetings now. People are starting to understand that the devices we carry with us reveal our location, who we’re talking to, and all kinds of other information. So you have a genuine shift of consciousness post the Snowden revelations.
TE: There’s clearly been no evidence of a shift in governmental consciousness, though.
LP: Those who are experts in the fields of surveillance, privacy, and technology say that there need to be two tracks: a policy track and a technology track. The technology track is encryption. It works and if you want privacy, then you should use it. We’ve already seen shifts happening in some of the big companies — Google, Apple — that now understand how vulnerable their customer data is, and that if it’s vulnerable, then their business is, too, and so you see a beefing up of encryption technologies. At the same time, no programs have been dismantled at the governmental level, despite international pressure.
TE: In Citizenfour, we spend what must be an hour essentially locked in a room in a Hong Kong hotel with Snowden, Glenn Greenwald, Ewen MacAskill, and you, and it’s riveting. Snowden is almost preternaturally prepossessing and self-possessed. I think of a novelist whose dream character just walks into his or her head. It must have been like that with you and Snowden. But what if he’d been a graying guy with the same documents and far less intelligent things to say about them? In other words, how exactly did who he was make your movie and remake our world?
LP: Those are two questions. One is: What was my initial experience? The other: How do I think it impacted the movie? We’ve been editing it and showing it to small groups, and I had no doubt that he’s articulate and genuine on screen. But to see him in a full room [at the New York Film Festival premiere on the night of October 10th], I’m like, wow! He really commands the screen! And I experienced the film in a new way with a packed house.
TE: But how did you experience him the first time yourself? I mean you didn’t know who you were going to meet, right?
LP: So I was in correspondence with an anonymous source for about five months and in the process of developing a dialogue you build ideas, of course, about who that person might be. My idea was that he was in his late forties, early fifties. I figured he must be Internet generation because he was super tech-savvy, but I thought that, given the level of access and information he was able to discuss, he had to be older. And so my first experience was that I had to do a reboot of my expectations. Like fantastic, great, he’s young and charismatic and I was like wow, this is so disorienting, I have to reboot. In retrospect, I can see that it’s really powerful that somebody so smart, so young, and with so much to lose risked so much.
He was so at peace with the choice he had made and knowing that the consequences could mean the end of his life and that this was still the right decision. He believed in it, and whatever the consequences, he was willing to accept them. To meet somebody who has made those kinds of decisions is extraordinary. And to be able to document that and also how Glenn [Greenwald] stepped in and pushed for this reporting to happen in an aggressive way changed the narrative. Because Glenn and I come at it from an outsider’s perspective, the narrative unfolded in a way that nobody quite knew how to respond to. That’s why I think the government was initially on its heels. You know, it’s not every day that a whistleblower is actually willing to be identified.
TE: My guess is that Snowden has given us the feeling that we now grasp the nature of the global surveillance state that is watching us, but I always think to myself, well, he was just one guy coming out of one of 17 interlocked intelligence outfits. Given the remarkable way your film ends — the punch line, you might say — with another source or sources coming forward from somewhere inside that world to reveal, among other things, information about the enormous watchlist that you yourself are on, I’m curious: What do you think is still to be known? I suspect that if whistleblowers were to emerge from the top five or six agencies, the CIA, the DIA, the National Geospatial-Intelligence Agency, and so on, with similar documentation to Snowden’s, we would simply be staggered by the system that’s been created in our name.
LP: I can’t speculate on what we don’t know, but I think you’re right in terms of the scale and scope of things and the need for that information to be made public. I mean, just consider the CIA and its effort to suppress the Senate’s review of its torture program. Take in the fact that we live in a country that a) legalized torture and b) where no one was ever held to account for it, and now the government’s internal look at what happened is being suppressed by the CIA. That’s a frightening landscape to be in.
In terms of sources coming forward, I really reject this idea of talking about one, two, three sources. There are many sources that have informed the reporting we’ve done and I think that Americans owe them a debt of gratitude for taking the risk they do. From a personal perspective, because I’m on a watchlist and went through years of trying to find out why, of having the government refuse to confirm or deny the very existence of such a list, it’s so meaningful to have its existence brought into the open so that the public knows there is a watchlist, and so that the courts can now address the legality of it. I mean, the person who revealed this has done a huge public service and I’m personally thankful.
TE: You’re referring to the unknown leaker who’s mentioned visually and elliptically at the end of your movie and who revealed that the major watchlist you’re on has more than 1.2 million names on it. In that context, what’s it like to travel as Laura Poitras today? How do you embody the new national security state?
LP: In 2012, I was ready to edit and I chose to leave the U.S. because I didn’t feel I could protect my source footage when I crossed the U.S. border. The decision was based on six years of being stopped and questioned every time I returned to the United States. And I just did the math and realized that the risks were too high to edit in the U.S., so I started working in Berlin in 2012. And then, in January 2013, I got the first email from Snowden.
TE: So you were protecting…
LP: …other footage. I had been filming with NSA whistleblower William Binney, with Julian Assange, with Jacob Appelbaum of the Tor Project, people who have also been targeted by the U.S., and I felt that this material I had was not safe. I was put on a watchlist in 2006. I was detained and questioned at the border returning to the U.S. probably around 40 times. If I counted domestic stops and every time I was stopped at European transit points, you’re probably getting closer to 80 to 100 times. It became a regular thing, being asked where I’d been and who I’d met with. I found myself caught up in a system you can’t ever seem to get out of, this Kafkaesque watchlist that the U.S. doesn’t even acknowledge.
TE: Were you stopped this time coming in?
LP: I was not. The detentions stopped in 2012 after a pretty extraordinary incident.
I was coming back in through Newark Airport and I was stopped. I took out my notebook because I always take notes on what time I’m stopped and who the agents are and stuff like that. This time, they threatened to handcuff me for taking notes. They said, “Put the pen down!” They claimed my pen could be a weapon and hurt someone.
“Put the pen down! The pen is dangerous!” And I’m like, you’re not… you’ve got to be crazy. Several people yelled at me every time I moved my pen down to take notes as if it were a knife. After that, I decided this has gotten crazy, I’d better do something and I called Glenn. He wrote a piece about my experiences. In response to his article, they actually backed off.
TE: Snowden has told us a lot about the global surveillance structure that’s been built. We know a lot less about what they are doing with all this information. I’m struck at how poorly they’ve been able to use such information in, for example, their war on terror. I mean, they always seem to be a step behind in the Middle East — not just behind events but behind what I think someone using purely open source information could tell them. This I find startling. What sense do you have of what they’re doing with the reams, the yottabytes, of data they’re pulling in?
LP: Snowden and many other people, including Bill Binney, have said that this mentality — of trying to suck up everything they can — has left them drowning in information and so they miss what would be considered more obvious leads. In the end, the system they’ve created doesn’t lead to what they describe as their goal, which is security, because they have too much information to process.
I don’t quite know how to fully understand it. I think about this a lot because I made a film about the Iraq War and one about Guantanamo. From my perspective, in response to the 9/11 attacks, the U.S. took a small, very radical group of terrorists and engaged in activities that have created two generations of anti-American sentiment motivated by things like Guantanamo and Abu Ghraib. Instead of figuring out a way to respond to a small group of people, we’ve created generations of people who are really angry and hate us. And then I think, if the goal is security, how do these two things align, because there are more people who hate the United States right now, more people intent on doing us harm? So either the goal that they proclaim is not the goal or they’re just unable to come to terms with the fact that we’ve made huge mistakes in how we’ve responded.
TE: I’m struck by the fact that failure has, in its own way, been a launching pad for success. I mean, the building of an unparalleled intelligence apparatus and the greatest explosion of intelligence gathering in history came out of the 9/11 failure. Nobody was held accountable, nobody was punished, nobody was demoted or anything, and every similar failure, including the one on the White House lawn recently, simply leads to the bolstering of the system.
LP: So how do you understand that?
TE: I don’t think that these are people who are thinking: we need to fail to succeed. I’m not conspiratorial in that way, but I do think that, strangely, failure has built the system and I find that odd. More than that I don’t know.
LP: I don’t disagree. The fact that the CIA knew that two of the 9/11 hijackers were entering the United States and didn’t notify the FBI and that nobody lost their job is shocking. Instead, we occupied Iraq, which had nothing to do with 9/11. I mean, how did those choices get made?