Blackwater mercenaries convicted for role in 2007 Iraq massacre


By Thomas Gaist
23 October 2014

A federal court jury convicted four former Blackwater Worldwide mercenaries on charges of murder and manslaughter Wednesday for their role in the 2007 massacre in Nisour Square in Baghdad, which left 17 unarmed Iraqi civilians dead and another 20 wounded.

Blackwater earned global notoriety for the massacre, which was one expression of a brutal US war and occupation that has left hundreds of thousands of Iraqis dead and laid waste to an entire society. The Nisour Square massacre stands alongside similar atrocities carried out by US forces in Haditha, Fallujah, the Abu Ghraib prison facility and elsewhere.

Former Blackwater sniper Nicholas Slatten was convicted of first-degree murder. Evan Liberty, Paul Slough and Dustin Heard were all found guilty of voluntary manslaughter and using a machine gun to carry out a violent crime. The convictions carry minimum sentences of 30 years in prison for Liberty, Slough and Heard and a potential life sentence for Slatten.

The decision is subject to appeal, which could take a year or more, and the verdicts could be overturned in the process.

After 28 days of deliberation following an 11-week trial, the jury in a federal district court in Washington decisively rejected the defense team’s arguments that the mercenaries had fired on the crowd in self-defense. This story had already been thoroughly debunked by an Iraqi government study and independent investigations by reporters at the New York Times and Washington Post.

On September 16, 2007, the security contractors opened fire with machine guns and grenade launchers on stopped traffic, before turning their sights on crowds of civilians seeking to flee the scene. The Blackwater forces suffered virtually no damage during the incident.

Civilian vehicles were riddled with dozens of bullets. One woman was shot as she held her dead son in her arms; the vehicle she was in was then incinerated. Blackwater helicopters also fired into cars from overhead.

Jurors were reportedly overwhelmed by the gruesome details supplied in testimony by witnesses. One juror was excused after informing the judge that testimony from a father about the death of his 9-year-old son caused her to suffer from bouts of insomnia.

The massacre occurred amidst the massive wave of sectarian and ethnic bloodletting, fomented in 2007 by the US as part of the “surge,” which forced hundreds of thousands of Iraqis to flee their homes in a matter of months, bringing the total number of refugees produced by the US invasion to some 3.7 million.

The Obama administration, which prosecuted the case, has sought to spin the guilty verdict as an example of the US government’s supposed democratic values.

“This verdict is a resounding affirmation of the commitment of the American people to the rule of law, even in times of war,” US Attorney Ronald Machen said in an official statement. “Today’s verdict demonstrates the FBI’s dedication to investigating violations of US law no matter where they occur,” said top FBI official Andrew McCabe.

In reality, while the Blackwater mercenaries are guilty of horrendous crimes, these crimes flowed from the overarching crime: the illegal 2003 invasion of Iraq by the US government with the aim of extending its control over the oil-rich country.

For this crime, the entire political and military establishment stands guilty, and none of the principal architects have been prosecuted. This includes the top officials in the Bush administration: former president George W. Bush, former defense secretary Donald Rumsfeld, former vice president Dick Cheney and many others. The preparation and launching of the war of aggression was aided and abetted by Democrats and Republicans in the Congress, along with the mass media, which propagated the lies used to justify the war.

While shielding Bush-era war criminals from prosecution, the Obama administration has continued and extended the global program of war and violence of the US military. The invasions of Iraq and Afghanistan were followed by the war in Libya, the stoking of civil war in Syria, and a massive program of murder through drone warfare, with the populations of Yemen, Libya and Somalia subject to regular volleys of cruise missiles and laser-guided weapons.

Now the Obama administration has launched a new war in the Middle East, with troops returning to Iraq and preparations being put in place for a direct war in Syria. At the same time, the US military is increasingly turning its attention to larger threats to the interests of the American ruling class, including China and Russia.

The Blackwater verdict gives expression to growing popular revulsion against the neocolonial war policies of the government and the prominent role of fascistic mercenary forces. While the Justice Department brought the case, the verdict was undoubtedly received with a mixture of shock and apprehension by the Obama administration and the military.

Contrary to numerous reports in the corporate media portraying the convictions as a long-standing goal of US policy, in reality the military, political establishment and court system made strenuous efforts to protect the Blackwater agents from prosecution. The State Department granted the mercenaries partial immunity, and a federal judge dismissed the case against them in 2009 before it was later reinstated. The US also blocked efforts by Iraq to try the men in Baghdad.

Meanwhile, Blackwater, since renamed Xe and now Academi, remains a favored instrument of US foreign policy, with hundreds of its private gunmen serving as shock troops for the US-backed regime in Kiev in its terror war against the civilian population of eastern Ukraine. Supported by US intelligence, Blackwater operators have played a leadership role in the operations of neo-Nazi Right Sector militias and fascistic forces responsible for ongoing atrocities.

 

http://www.wsws.org/en/articles/2014/10/23/blac-o23.html

Mugshots of female Nazi concentration camp guards

The ordinary faces of evil:
10.22.2014

Frieda Walter: sentenced to three years imprisonment.

Though their actions were monstrous, they are not monsters. There are no horns, no sharp teeth, no demonic eyes, no number of the Beast. They are just ordinary women. Mothers, sisters, grandmothers, aunts, widows, spinsters. Ordinary women, ordinary human beings.

In the photographs they look ashamed, guilty, scared, brazen, stupid, cunning, disappointed, desperate, confused. These women were Nazi guards at the Bergen-Belsen concentration camp during the Second World War, and were all tried and found guilty of carrying out horrendous crimes against their fellow human beings—mothers, fathers, sisters, brothers, daughters, sons. Interesting how “evil” looks just like you and me.

Hilde Liesewitz: sentenced to one year imprisonment.

Gertrude Feist: sentenced to five years imprisonment.

Gertrude Saurer: sentenced to ten years imprisonment.

Anna Hempel: sentenced to ten years imprisonment.

Herta Bothe accompanied a death march of women from central Poland to Bergen-Belsen concentration camp. She was sentenced to ten years imprisonment but was released early from prison on December 22nd, 1951.

Hildegard Lohbauer: sentenced to ten years imprisonment.

Ilse Forster: sentenced to ten years imprisonment.

Helene Kopper: sentenced to fifteen years imprisonment.

Herta Ehlert: sentenced to fifteen years imprisonment.

Elizabeth Volkenrath, head wardress at Bergen-Belsen: sentenced to death. She was hanged on 13 December 1945.

Juana Bormann: sentenced to death.

Via Vintage Everyday

http://dangerousminds.net/comments/the_ordinary_faces_of_evil_mugshots_of_female_nazi

Census report: Half of Americans poor or near poor


By Andre Damon
22 October 2014

Forty-seven percent of Americans have incomes below twice the official poverty line, making half of the country either poor or near-poor, according to figures released last week by the Census Bureau.

These figures are based on the Census Bureau’s Supplemental Poverty Measure (SPM), which takes into account government transfers and the regional cost-of-living in calculating the poverty rate. According to that calculation, there were 48.7 million people in poverty in the United States, three million higher than the official census figures released last month. The US poverty rate, according to the SPM, was 15.5 percent.

Data from the Census Bureau report

The release of these figures, as well as last month’s official poverty figures, has been greeted with silence in the media, despite the fact that the US is a mere two weeks away from a midterm election. As with every major social and political question, the issues of poverty and social inequality are being totally excluded from debate and discussion in the elections and ignored by the two big business parties.

The figures follow the release of a series of reports and studies documenting the growth of social inequality in the United States. Last week, Credit Suisse reported that the top one percent of the world’s population controls nearly half of all wealth, and that the United States has nearly ten times more super-wealthy people than any other country.

The census figures “show that poverty is still a major problem in the US,” said Christopher Wimer, Co-Director of the Center on Poverty and Social Policy at Columbia University, in a telephone interview Tuesday.

He said the SPM begins with a slightly higher poverty threshold than the official poverty figure, and then adjusts it based on the local cost of living and the prices of necessities of life.

As a result, both the poverty rate and the number of people in poverty are slightly higher than under the official poverty figure. But the biggest difference is that the more sophisticated supplemental measure shows the extent to which a much broader section of the population is struggling to make ends meet. “Because the supplemental poverty measure subtracts non-discretionary income, you get a lot more people hovering close to the poverty line,” Dr. Wimer said.

The official poverty threshold is calculated as “three times the cost of a minimum food diet in 1963,” adjusted for inflation. By that calculation, the poverty threshold for an adult living alone is $11,888, and for an adult with two children it is $18,769, both of which are absurdly low.
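To make the difference between the two measures concrete, here is a minimal sketch of how the same household can fall on opposite sides of the two poverty lines. It uses a deliberately simplified SPM: the official thresholds are the figures quoted above, while the SPM base threshold, the cost-of-living factor and the sample household are hypothetical placeholders rather than values from the Census report.

    # Illustrative sketch only. The official thresholds are the figures cited in
    # the article; the SPM base threshold, cost-of-living factor, and the sample
    # household below are hypothetical placeholders (actual SPM thresholds vary
    # by housing tenure and metro area).

    OFFICIAL_THRESHOLDS = {
        ("single adult", 0): 11_888,   # adult living alone
        ("single adult", 2): 18_769,   # one adult, two children
    }

    def official_poor(cash_income, household):
        """Official measure: pre-tax cash income against a fixed national threshold."""
        return cash_income < OFFICIAL_THRESHOLDS[household]

    def spm_poor(cash_income, noncash_transfers, necessary_expenses,
                 base_threshold=13_000, cost_of_living=1.0):
        """SPM-style measure (simplified): count non-cash transfers as resources,
        subtract non-discretionary expenses, and scale the threshold locally."""
        resources = cash_income + noncash_transfers - necessary_expenses
        return resources < base_threshold * cost_of_living

    # A single parent of two in a high-cost city: not poor by the official line,
    # but poor once local costs and unavoidable expenses are taken into account.
    household = ("single adult", 2)
    print(official_poor(20_000, household))                          # False
    print(spm_poor(20_000, noncash_transfers=1_500,
                   necessary_expenses=5_000, cost_of_living=1.3))    # True

This is why the supplemental measure tends to show many more households clustered just above and just below the poverty line than the official figure does.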

Since the SPM takes into account regional differences in the cost of living, it better reflects the true prevalence of poverty in high cost-of-living states, such as California, and cities, such as New York City.

“The supplemental poverty measure reflects the fact that the cost of living is much higher in many major metropolitan areas,” Dr. Wimer said. “Those areas also tend to have higher population densities, so that ends up affecting a lot of people.”

Based on the latest Census numbers, nearly one in four people in California lives below the poverty line. Using the supplemental measure, California has a poverty rate of 23.4 percent, compared with the state’s official poverty figures of about 16 percent.

Dr. Wimer said he and fellow researchers at Columbia University have followed a methodology similar to that used by the Census Bureau’s Supplemental Poverty Measure to study economic hardship among a representative sample of New York City residents.

They found that nearly a quarter of the city’s residents were in poverty—23 percent, compared to the official poverty rate of about 21 percent. Fifty-five percent of New York City residents had an income below twice the poverty line.

Thirty-seven percent of New York City residents were affected by what the survey called “severe material hardships,” including “staying at a shelter, moving in with others or having utilities shut off.” The report added, “If we consider the number of New Yorkers who suffer moderate, if not truly severe, material adversity, the number climbs to 6 in 10 New Yorkers.”

Based on these findings, the report concluded, “Nearly two-thirds of New York City residents struggled to make ends meet at some point during 2012.”

The wealth of the super-rich, meanwhile, continues to soar, with the net worth of the Forbes 400 richest people in the United States surging 13 percent last year. Fifty-two members of the Forbes 400 resided in New York City, more than twice the number living in any other city.

Dr. Wimer noted that the Census SPM report shows the role played by government anti-poverty programs in keeping large sections of the population out of destitution. To the extent that there has been a decline in poverty in recent decades, “it is not driven by market income; the reduction has been coming from government policies and programs such as food stamps and unemployment insurance,” he said.

According to the Census SPM report, food stamps kept two percent of the population out of poverty in 2012, while unemployment insurance kept about one percent of the population out of poverty. The census figures reflect cutbacks in both of these programs in 2013.

With the expiration of federal extended unemployment benefits for the long-term unemployed at the end of last year, together with additional cutbacks to food stamps, the number of people affected by cuts to these vital anti-poverty programs will only increase.

Cuts to these programs have been implemented and supported by both the Democrats and Republicans. The Obama administration’s 2015 budget proposal, for example, calls for slashing the budget of the Department of Health and Human Services, which funds the Head Start preschool program, and the Department of Agriculture, which administers the food stamp program, by more than five percent.

 

http://www.wsws.org/en/articles/2014/10/22/pove-o22.html

Who Gives More of Their Money to Charity?

People Who Make More or Less Than $200k a Year?

Philanthropy and income have an inverse relationship.

Billionaire CEO Nicholas Woodman, news reports trumpeted earlier this month, has set aside $450 million worth of his GoPro stock to set up a brand-new charitable foundation.

“We wake up every morning grateful for the opportunities life has given us,” Woodman and his wife Jill noted in a joint statement. “We hope to return the favor as best we can.”

Stories about charitable billionaires have long been a media staple. The defenders of our economic order love them — and regularly trot them out to justify America’s ever more top-heavy concentration of income and wealth.

Our charities depend, the argument goes, on the generosity of the rich. The richer the rich, the better off our charitable enterprises will be.

But this defense of inequality, analysts have understood for quite some time, holds precious little water. Low- and middle-income people, the research shows, give a greater share of their incomes to charity than people of decidedly more ample means.

The Chronicle of Philanthropy, the nation’s top monitor of everything charitable, last week dramatically added to this research.

Between 2006 and 2012, a new Chronicle analysis of IRS tax return data reveals, Americans who make over $200,000 a year decreased the share of their income they devote to charity by 4.6 percent.

Over those same years, a time of recession and limited recovery, these same affluent Americans saw their own incomes increase. For the nation’s top 5 percent of income earners, that increase averaged 9.9 percent.

By contrast, those Americans making less than $100,000 actually increased their giving between 2006 and 2012. The most generous Americans of all? Those making less than $25,000. Amid the hard times of recent years, low-income Americans devoted 16.6 percent more of their meager incomes to charity.

Overall, those making under $100,000 increased their giving by 4.5 percent.

In the half-dozen years this new study covers, the Chronicle of Philanthropy concludes, “poor and middle class Americans dug deeper into their wallets to give to charity, even though they were earning less.”

America’s affluent do still remain, in absolute terms, the nation’s largest givers to charity. In 2012, the Chronicle analysis shows, those earning under $100,000 handed charities $57.3 billion. Americans making over $200,000 gave away $77.5 billion.

But that $77.5 billion pales against how much more the rich could — rather painlessly — be giving. Between 2006 and 2012, the combined wealth of the Forbes 400 alone increased by $1.04 trillion.

What the rich do give to charity often does people truly in need no good at all. Wealthy people do the bulk of their giving to colleges and cultural institutions, notes Chronicle of Philanthropy editor Stacy Palmer. Food banks and other social service charities “depend more on lower income Americans.”

Low- and middle-income people, adds Palmer, “know people who lost their jobs or are homeless.” They’ve been sacrificing “to help their neighbors.”

America’s increasing economic segregation, meanwhile, has left America’s rich less and less exposed to “neighbors” struggling to get by. That’s opening up, says Vox policy analyst Danielle Kurtzleben, an “empathy gap.”

“After all,” she explains, “if I can’t see you, I’m less likely to help you.”

The more wealth concentrates, the more nonprofits chase after these less-than-empathetic rich for donations. The priorities of these rich, notes Kurtzleben, become the priorities for more and more nonprofits.

The end result? Elite universities get mega-million-dollar donations to build mahogany-appointed student dorms. Art museums get new wings. Hospitals get windfalls to tackle the diseases that spook the high-end set.

Some in that set do seem to sense the growing disconnect between real need and real resources. Last week billionaire hedge fund manager David Einhorn announced a $50 million gift to help Cornell University set students up in “real-world experiences” that address the challenges hard-pressed communities face.

“When you go out beyond the classroom and into the community and find problems and have to deal with people in the real world,” says Einhorn, “you develop skills for empathy.”

True enough — but in a society growing ever more unequal and separate, not enough. In that society — our society — the privileged will continue to go “blind to how people outside their own class are living,” as Danielle Kurtzleben puts it.

We need, in short, much more than Empathy 101. We need more equality.

Labor journalist Sam Pizzigati, an Institute for Policy Studies associate fellow, writes widely about inequality. His latest book is “The Rich Don’t Always Win: The Forgotten Triumph over Plutocracy that Created the American Middle Class, 1900-1970.”

 

http://www.alternet.org/economy/guess-who-gives-more-their-money-charity-people-who-make-more-or-less-200k-year?akid=12386.265072.PjWDq0&rd=1&src=newsletter1023920&t=15&paging=off&current_page=1#bookmark

Obama Is a Republican

He’s the heir to Richard Nixon, not Saul Alinsky.

illustration by Michael Hogue

Back in 2008, Boston University professor Andrew Bacevich wrote an article for this magazine making a conservative case for Barack Obama. While much of it was based on disgust with the warmongering and budgetary profligacy of the Republican Party under George W. Bush, which he expected to continue under 2008 Republican nominee Sen. John McCain, Bacevich thought Obama at least represented hope for ending the Iraq War and shrinking the national-security state.

I wrote a piece for the New Republic soon afterward about the Obamacon phenomenon—prominent conservatives and Republicans who were openly supporting Obama. Many saw in him a classic conservative temperament: someone who avoided lofty rhetoric, an ambitious agenda, and a Utopian vision that would conflict with human nature, real-world barriers to radical reform, and the American system of government.

Among the Obamacons were Ken Duberstein, Ronald Reagan’s chief of staff; Charles Fried, Reagan’s solicitor general; Ken Adelman, director of the Arms Control and Disarmament Agency for Reagan; Jeffrey Hart, longtime senior editor of National Review; Colin Powell, Reagan’s national security adviser and secretary of state for George W. Bush; and Scott McClellan, Bush’s press secretary. There were many others as well.

According to exit polls in 2008, Obama ended up with 20 percent of the conservative vote. Even in 2012, after four years of relentless conservative attacks, he still got 17 percent of the conservative vote, with 11 percent of Tea Party supporters saying they cast their ballots for Obama.

They were not wrong. In my opinion, Obama has governed as a moderate conservative—essentially as what used to be called a liberal Republican before all such people disappeared from the GOP. He has been conservative to exactly the same degree that Richard Nixon basically governed as a moderate liberal, something no conservative would deny today. (Ultra-leftist Noam Chomsky recently called Nixon “the last liberal president.”)

Here’s the proof:

Iraq/Afghanistan/ISIS

One of Obama’s first decisions after the election was to keep national-security policy essentially on automatic pilot from the Bush administration. He signaled this by announcing on November 25, 2008, that he planned to keep Robert M. Gates on as secretary of defense. Arguably, Gates had more to do with shaping Republican foreign and defense policy under the two Bush presidents than any other individual, serving successively as deputy national security adviser in the White House, director of Central Intelligence, and secretary of defense.

Another early indication of Obama’s hawkishness was naming his rival for the Democratic nomination, Sen. Hillary Clinton, as secretary of state. During the campaign, Clinton ran well to his right on foreign policy, so much so that she earned the grudging endorsement of prominent neoconservatives such as Bill Kristol and David Brooks.

Obama, Kristol told the Washington Post in August 2007, “is becoming the antiwar candidate, and Hillary Clinton is becoming the responsible Democrat who could become commander in chief in a post-9/11 world.” Writing in the New York Times on February 5, 2008, Brooks praised Clinton for hanging tough on Iraq “through the dark days of 2005.”

Right-wing columnist Ann Coulter found Clinton more acceptable on national-security policy than even the eventual Republican nominee, Senator McCain. Clinton, Coulter told Fox’s Sean Hannity on January 31, 2008, was “more conservative than he [McCain] is. I think she would be stronger in the war on terrorism.” Coulter even said she would campaign for Clinton over McCain in a general election match up.

After Obama named Clinton secretary of state, there was “a deep sigh” of relief among Republicans throughout Washington, according to reporting by The Daily Beast’s John Batchelor. He noted that not a single Republican voiced any public criticism of her appointment.

By 2011, Republicans were so enamored with Clinton’s support for their policies that Dick Cheney even suggested publicly that she run against Obama in 2012. The irony is that as secretary of state, Clinton was generally well to Obama’s left, according to Vali Nasr’s book The Dispensable Nation. This may simply reflect her assumption of state’s historical role as the dovish voice in every administration. Or it could mean that Obama is far more hawkish than conservatives have given him credit for.

Although Obama followed through on George W. Bush’s commitment to pull U.S. troops out of Iraq in 2011, in 2014 he announced a new campaign against ISIS, an Islamic militant group based in Syria and Iraq.

Stimulus/Deficit

With the economy collapsing, the first major issue confronting Obama in 2009 was some sort of economic stimulus. Christina Romer, chair of the Council of Economic Advisers, whose academic work at the University of California, Berkeley, frequently focused on the Great Depression, estimated that the stimulus needed to be in the range of $1.8 trillion, according to Noam Scheiber’s book The Escape Artists.

The American Recovery and Reinvestment Act was enacted in February 2009 with a gross cost of $816 billion. Although this legislation was passed without a single Republican vote, it is foolish to assume that the election of McCain would have resulted in savings of $816 billion. There is no doubt that he would have put forward a stimulus plan of roughly the same order of magnitude, but tilted more toward Republican priorities.

A Republican stimulus would undoubtedly have had more tax cuts and less spending, even though every serious study has shown that tax cuts are the least effective method of economic stimulus in a recession. Even so, tax cuts made up 35 percent of the budgetary cost of the stimulus bill—$291 billion—despite an estimate from Obama’s Council of Economic Advisers that tax cuts barely raised the gross domestic product $1 for every $1 of tax cut. By contrast, $1 of government purchases raised GDP $1.55 for every $1 spent. Obama also extended the Bush tax cuts for two years in 2010.
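As a rough illustration of what those multipliers imply for the package described above, the following sketch applies them to the $816 billion total and the $291 billion in tax cuts. It is back-of-the-envelope arithmetic only, and it assumes for simplicity that the entire non-tax-cut remainder behaves like government purchases, which overstates its impact somewhat, since part of the package was transfer payments.

    # Back-of-the-envelope version of the multiplier comparison cited above.
    # Assumes, purely for illustration, that the non-tax-cut remainder all
    # works like government purchases.

    TOTAL = 816e9            # gross cost of the stimulus package
    TAX_CUTS = 291e9         # tax-cut portion cited in the article
    MULT_TAX = 1.00          # roughly $1 of GDP per $1 of tax cuts
    MULT_PURCHASES = 1.55    # roughly $1.55 of GDP per $1 of purchases

    actual_mix = TAX_CUTS * MULT_TAX + (TOTAL - TAX_CUTS) * MULT_PURCHASES
    purchases_only = TOTAL * MULT_PURCHASES

    print(f"Estimated GDP impact of the actual mix:  ${actual_mix / 1e9:,.0f} billion")
    print(f"Estimated GDP impact if all purchases:   ${purchases_only / 1e9:,.0f} billion")
    print(f"Output forgone by the tax-cut tilt:      ${(purchases_only - actual_mix) / 1e9:,.0f} billion")

On these assumed multipliers, tilting roughly a third of the package toward tax cuts costs on the order of $160 billion in forgone output.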

It’s worth remembering as well that Bush did not exactly bequeath Obama a good fiscal hand. Fiscal year 2009 began on October 1, 2008, and one third of it was baked in the cake the day Obama took the oath of office. On January 7, 2009, the Congressional Budget Office projected significant deficits without considering any Obama initiatives. It estimated a deficit of $1.186 trillion for 2009 with no change in policy. The Office of Management and Budget estimated in November of that year that Bush-era policies, such as Medicare Part D, were responsible for more than half of projected deficits over the next decade.

Republicans give no credit to Obama for the significant deficit reduction that has occurred on his watch—just as they ignore the fact that Bush inherited a projected budget surplus of $5.6 trillion over the following decade, which he turned into an actual deficit of $6.1 trillion, according to a CBO study—but the improvement is real.


Republicans would have us believe that their tight-fisted approach to spending is what brought down the deficit. But in fact, Obama has been very conservative, fiscally, since day one, to the consternation of his own party. According to reporting by the Washington Post and New York Times, Obama actually endorsed much deeper cuts in spending and the deficit than did the Republicans during the 2011 budget negotiations, but Republicans walked away.

Obama’s economic conservatism extends to monetary policy as well. His Federal Reserve appointments have all been moderate to conservative, well within the economic mainstream. He even reappointed Republican Ben Bernanke as chairman in 2009. Many liberals have faulted Obama for not appointing board members willing to be more aggressive in using monetary policy to stimulate the economy and reduce unemployment.

Obama’s other economic appointments, such as Larry Summers at the National Economic Council and Tim Geithner at Treasury, were also moderate to conservative. Summers served on the Council of Economic Advisers staff in Reagan’s White House. Geithner joined the Treasury during the Reagan administration and served throughout the George H.W. Bush administration.

Health Reform

Contrary to rants that Obama’s 2010 health reform, the Patient Protection and Affordable Care Act (ACA), is the most socialistic legislation in American history, the reality is that it is virtually textbook Republican health policy, with a pedigree from the Heritage Foundation and Massachusetts Gov. Mitt Romney, among others.

It’s important to remember that historically the left-Democratic approach to healthcare reform was always based on a fully government-run system such as Medicare or Medicaid. During debate on health reform in 2009, this approach was called “single payer,” with the government being the single payer. One benefit of this approach is cost control: the government could use its monopsony buying power to force down prices just as Walmart does with its suppliers.

Conservatives wanted to avoid too much government control and were adamantly opposed to single-payer. But they recognized that certain problems required more than a pure free-market solution. One problem in particular is covering people with pre-existing conditions, one of the most popular provisions in ACA. The difficulty is that people may wait until they get sick before buying insurance and then expect full coverage for their conditions. Obviously, this free-rider problem would bankrupt the health-insurance system unless there was a fix.

The conservative solution was the individual mandate—forcing people to buy private health insurance, with subsidies for the poor. This approach was first put forward by Heritage Foundation economist Stuart Butler in a 1989 paper, “A Framework for Reform,” published in a Heritage Foundation book, A National Health System for America. In it, Butler said the number one element of a conservative health system was this: “Every resident of the U.S. must, by law, be enrolled in an adequate health care plan to cover major health costs.” He went on to say:

Under this arrangement, all households would be required to protect themselves from major medical costs by purchasing health insurance or enrolling in a prepaid health plan. The degree of financial protection can be debated, but the principle of mandatory family protection is central to a universal health care system in America.

In 1991, prominent conservative health economist Mark V. Pauly also endorsed the individual mandate as central to healthcare reform. In an article in the journal Health Affairs, Pauly said:

All citizens should be required to obtain a basic level of health insurance. Not having health insurance imposes a risk of delaying medical care; it also may impose costs on others, because we as a society provide care to the uninsured. … Permitting individuals to remain uninsured results in inefficient use of medical care, inequity in the incidence of costs of uncompensated care, and tax-related distortions.

In 2004, Senate Majority Leader Bill Frist (R-Tenn.) endorsed an individual mandate in a speech to the National Press Club. “I believe higher-income Americans today do have a societal and personal responsibility to cover in some way themselves and their children,” he said. Even libertarian Ron Bailey, writing in Reason, conceded the necessity of a mandate in a November 2004 article titled, “Mandatory Health Insurance Now!” Said Bailey: “Why shouldn’t we require people who now get health care at the expense of the rest of us pay for their coverage themselves? … Mandatory health insurance would not be unlike the laws that require drivers to purchase auto insurance or pay into state-run risk pools.”

Among those enamored with the emerging conservative health reform based on an individual mandate was Mitt Romney, who was elected governor of Massachusetts in 2002. In 2004, he put forward a state health reform plan to which he later added an individual mandate. As Romney explained in June 2005, “No more ‘free riding,’ if you will, where an individual says: ‘I’m not going to pay, even though I can afford it. I’m not going to get insurance, even though I can afford it. I’m instead going to just show up and make the taxpayers pay for me’.”

The following month, Romney emphasized his point: “We can’t have as a nation 40 million people—or, in my state, half a million—saying, ‘I don’t have insurance, and if I get sick, I want someone else to pay’.”

In 2006, Governor Romney signed the Massachusetts health reform into law, including the individual mandate. Defending his legislation in a Wall Street Journal article, he said:

I proposed that everyone must either purchase a product of their choice or demonstrate that they can pay for their own health care. It’s a personal responsibility principle.

Some of my libertarian friends balk at what looks like an individual mandate. But remember, someone has to pay for the health care that must, by law, be provided: Either the individual pays or the taxpayers pay. A free ride on government is not libertarian.

As late as 2008, Robert Moffitt of the Heritage Foundation was still defending the individual mandate as reasonable, non-ideological and nonpartisan in an article for the Harvard Health Policy Review.

So what changed just a year later, when Obama put forward a health-reform plan that was almost a carbon copy of those previously endorsed by the Heritage Foundation, Mitt Romney, and other Republicans? The only thing is that it was now supported by a Democratic president that Republicans vowed to fight on every single issue, according to Robert Draper’s book Do Not Ask What Good We Do.

Senior Obama adviser David Axelrod later admitted that Romney’s Massachusetts plan was the “template” for Obama’s plan. “That work inspired our own health plan,” he said in 2011. But no one in the White House said so back in 2009. I once asked a senior Obama aide why. His answer was that once Republicans refused to negotiate on health reform and Obama had to win only with Democratic votes, it would have been counterproductive, politically, to point out the Obama plan’s Republican roots.

The left wing of the House Democratic caucus was dubious enough about Obama’s plan as it was, preferring a single-payer plan. Thus it was necessary for Obama to portray his plan as more liberal than it really was to get the Democratic votes needed for passage, which of course played right into the Republicans’ hands. But the reality is that ACA remains a very modest reform based on Republican and conservative ideas.

Other Rightward Policies 

Below are a few other issues on which Obama has consistently tilted rightward:

Drugs: Although it has become blindingly obvious that throwing people in jail for marijuana use is insane policy and a number of states have moved to decriminalize its use, Obama continued the harsh anti-drug policy of previous administrations, and his Department of Justice continues to treat marijuana as a dangerous drug. As Time put it in 2012: “The Obama Administration is cracking down on medical marijuana dispensaries and growers just as harshly as the Administration of George W. Bush did.”

National-security leaks: At least since Nixon, a hallmark of Republican administrations has been an obsession with leaks of unauthorized information, and pushing the envelope on government snooping. By all accounts, Obama’s penchant for secrecy and withholding information from the press is on a par with the worst Republican offenders. Journalist Dan Froomkin charges that Obama has essentially institutionalized George W. Bush’s policies. Nixon operative Roger Stone thinks Obama has actually gone beyond what his old boss tried to do.

Race: I think almost everyone, including me, thought the election of our first black president would lead to new efforts to improve the dismal economic condition of African-Americans. In fact, Obama has seldom touched on the issue of race, and when he has he has emphasized the conservative themes of responsibility and self-help. Even when Republicans have suppressed minority voting, in a grotesque campaign to fight nonexistent voter fraud, Obama has said and done nothing.

Gay marriage: Simply stating public support for gay marriage would seem to have been a no-brainer for Obama, but it took him two long years to speak out on the subject and only after being pressured to do so.

Corporate profits: Despite Republican harping about Obama being anti-business, corporate profits and the stock market have risen to record levels during his administration. Even those progressives who defend Obama against critics on the left concede that he has bent over backward to protect corporate profits. As Theda Skocpol and Lawrence Jacobs put it: “In practice, [Obama] helped Wall Street avert financial catastrophe and furthered measures to support businesses and cater to mainstream public opinion. …  He has always done so through specific policies that protect and further opportunities for businesses to make profits.”

I think Cornel West nailed it when he recently charged that Obama has never been a real progressive in the first place. “He posed as a progressive and turned out to be counterfeit,” West said. “We ended up with a Wall Street presidency, a drone presidency, a national security presidency.”

I don’t expect any conservatives to recognize the truth of Obama’s fundamental conservatism for at least a couple of decades—perhaps only after a real progressive presidency. In any case, today they are too invested in painting him as the devil incarnate in order to frighten grassroots Republicans into voting to keep Obama from confiscating all their guns, throwing them into FEMA re-education camps, and other nonsense that is believed by many Republicans. But just as they eventually came to appreciate Bill Clinton’s core conservatism, Republicans will someday see that Obama was no less conservative.

Bruce Bartlett is the author of The Benefit and the Burden: Tax Reform—Why We Need It and What It Will Take.

http://www.theamericanconservative.com/articles/obama-is-a-republican/

The next financial crisis may be just around the corner


by Jerome Roos on October 20, 2014

As growth stalls, stocks tumble and investors fret, the question is no longer if there will be another crisis, but when and where it will strike first.

The headlines look eerily familiar: global growth is stalling, stocks are tumbling and peripheral bond yields are rising sharply. With the Federal Reserve expected to wind down its asset purchase scheme later this month and the Ebola outbreak and geopolitical instability spooking investors, world markets are returning to a highly volatile state. “The market pathologies we all grew to know during the crisis of 2008 are returning,” the Financial Times wrote last week. The question is no longer if there will be another crisis, but when and where it will strike first.

The bottom line is that the global financial meltdown of 2008-’09 and the European debt crisis of 2010-’12 have never truly been resolved. After governments disbursed record bailouts in the wake of the Wall Street crash, the world’s leading central banks simply papered over the remaining weaknesses by subsidizing essentially defunct financial institutions to the tune of trillions of dollars, buying up swaths of toxic assets and providing loans at negative real interest rates in the hope of reviving the credit system and saving the banks.

But instead of fixing the underlying problems of structural indebtedness, record unemployment, rampant inequality and a seemingly never-ending recession, these measures have only made matters worse. For one, they have fed an enormous credit bubble that dwarfs even the previous one, which nearly sank the world economy back in 2008. The latest Geneva report by the International Center for Monetary and Banking Studies notes that total world debt — excluding that of the financial sector — has shot up 38% since the collapse of Lehman Brothers, reaching new historic highs. Last year, global public and private debt stood at 212% of global output, up from 180% in 2008.

With this tidal wave of cheap credit sloshing through the world financial system, investors went looking for the highest yields. Since the US housing market and European bond markets were still reeling from the last crisis, they turned towards the stock exchange. Between mid-2013 and mid-2014, the average global return on equity rose to a whopping 18 percent. Yale economist Robert Shiller has shown that “the gap between stock prices and corporate earnings is now larger than it was in the previous pre-crisis periods,” and “if markets were to return to their normal earning levels, the average stock market in the world should fall by about 30 per cent.”
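The 30 percent figure is straightforward reversion arithmetic: if prices stand at some multiple of their earnings-justified level, a return to normal valuations with earnings unchanged implies a proportional fall. A minimal sketch, with the overvaluation factor chosen purely to reproduce the quoted figure rather than taken from Shiller's data:

    # Mean-reversion arithmetic behind the ~30 percent figure quoted above.
    # The overvaluation factor is a hypothetical input chosen to reproduce
    # the quoted result; it is not a number taken from Shiller's data.

    overvaluation = 1.43  # assume prices stand at ~1.43x their earnings-justified level

    implied_fall = 1 - 1 / overvaluation
    print(f"Implied decline if valuations revert to normal: {implied_fall:.0%}")  # ~30%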

At the same time, trillions of dollars found their way into the housing markets of emerging economies. In Brazil, for instance, foreign investment in urban transformation projects for the World Cup and the Olympics led to a speculative housing bubble that saw residential property prices rise by more than 80% between 2007 and 2013. In Turkey, too, a massive influx of foreign credit fed a construction boom that has dramatically transformed the urban landscape, with skyscrapers, shopping malls and infrastructural mega-projects mushrooming across the Istanbul skyline. In both countries the resultant social displacements have led to sustained protest and social unrest.

The mother of all credit bubbles, however, has been quietly building up elsewhere, in China, where the government — in a desperate bid to ward off the spillover effects of the Great Recession — has pumped over $13 trillion worth of credit into the economy. This has in turn given rise to a monstrous $4.4 trillion shadow banking system and a housing bubble of truly epic proportions, leaving ghost towns sprawled across the country. It has also turned China into one of the most indebted developing countries in the world, with total public and private debt (excluding financial institutions) skyrocketing to 217% of GDP last year, up from 147% of GDP in 2008.

The shadow banking system is not just a Chinese problem. In the United States and Europe, debt creation is also increasingly the product of off-balance sheet lending by non-bank financial institutions like hedge funds, insurance companies, private equity funds and broker dealers. The shadow banking system remains largely unregulated, allowing lenders to take much greater risks than ordinary banks could. For this reason, the IMF has warned that the world’s $70 trillion shadow banking system poses a major threat to global financial stability. In the US, shadow banking activities already amount to 2.5 times the size of conventional bank activity.

Meanwhile, troubling signs are emerging in Europe, where four years of austerity have trapped the world’s largest economy in a debilitating deflationary spiral. Even Germany, the EU’s economic powerhouse, is falling back into recession, while Greece returned to the eye of the storm after its stock market went into free fall last week. Greek bonds are now trading far above the 7% mark, which back in 2011 was widely considered to be “the point of no return.” Investors appear to be concerned over the health of Greek banks, the rising popularity of the anti-austerity party SYRIZA, and government plans to exit the bailout early in order to stem SYRIZA’s rise in the polls.

Add to this the growing geopolitical instability in Ukraine and the Middle East, and the conditions appear to be ripe for another round of market panic. Sooner or later, one of the bubbles is bound to pop — and the consequences will not be pretty. What is different this time around is that total debt levels are now even more unmanageable than they were back in 2008, while governments — having already used up most of their fiscal and monetary firepower over the past six years — are even less capable of mounting a proper response. We do not yet know when or where the next crisis will strike, but when it does it will be big. This time we better come prepared.

Jerome Roos is a PhD researcher in International Political Economy at the European University Institute, and founding editor of ROAR Magazine. This article was written as part of his regular column for TeleSUR English.

http://roarmag.org/2014/10/market-panic-next-financial-crisis/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+roarmag+%28ROAR+Magazine%29

How technology shrunk America forever

The end of the Old World:

The 19th century saw an explosion of changes in America. The way people saw the world would never be the same

(Credit: AP/Library of Congress)

It has become customary to mark the beginning of the Industrial Revolution in eighteenth-century England. Historians usually identify two or sometimes three phases of the Industrial Revolution, which are associated with different sources of energy and related technologies. In preindustrial Europe, the primary energy sources were human, animal, and natural (wind, water, and fire).

By the middle of the eighteenth century, much of Europe had been deforested to supply wood for domestic and industrial consumption. J.R. McNeill points out that the combination of energy sources, machines, and ways of organizing production came together to form “clusters” that determined the course of industrialization and, by extension, shaped economic and social developments. A later cluster did not immediately replace its predecessor; rather, different regimes overlapped, though often they were not integrated. With each new cluster, however, the speed of production increased, leading to differential rates of production. The first phase of the Industrial Revolution began around 1750 with the shift from human and animal labor to machine-based production. This change was brought about by the use of water power and later steam engines in the textile mills of Great Britain.

The second phase dates from the 1820s, when there was a shift to fossil fuels—primarily coal. By the middle of the nineteenth century, another cluster emerged from the integration of coal, iron, steel, and railroads. The fossil fuel regime was not, of course, limited to coal. Edwin L. Drake drilled the first commercially successful oil well in Titusville, Pennsylvania, in 1859, and the big gushers erupted first in the 1870s in Baku on the Caspian Sea and later in Spindletop, Texas (1901). Oil, however, did not replace coal as the main source of fuel in transportation until the 1930s. Coal, of course, is still widely used in manufacturing today because it remains one of the cheapest sources of energy. Though global consumption of coal has leveled off since 2000, its use continues to increase in China. Indeed, China currently uses almost as much coal as the rest of the world and reliable sources predict that by 2017, India will be importing as much coal as China.



The third phase of the Industrial Revolution began in the closing decades of the nineteenth century. The development of technologies for producing and distributing electricity cheaply and efficiently further transformed industrial processes and created the possibility for new systems of communication as well as the unprecedented capability for the production and dissemination of new forms of entertainment, media, and information. The impact of electrification can be seen in four primary areas.

First, the availability of electricity made the assembly line and mass production possible. When Henry Ford adapted technology used in Chicago’s meatpacking houses to produce cars (1913), he set in motion changes whose effects are still being felt. Second, the introduction of the incandescent light bulb (1881) transformed private and public space. As early as the late 1880s, electrical lighting was used in homes, factories, and on streets. Assembly lines and lights inevitably led to the acceleration of urbanization. Third, the invention of the telegraph (ca. 1840) and telephone (1876) enabled the communication and transmission of information across greater distances at faster rates of speed than ever before. Finally, electric tabulating machines, invented by Herman Hollerith in 1889, made it possible to collect and manage data in new ways. Though his contributions have not been widely acknowledged, Hollerith actually forms a bridge between the Industrial Revolution and the so-called post-industrial information age. The son of German immigrants, Hollerith graduated from Columbia University’s School of Mines and went on to found the Tabulating Machine Company (1896). He created the first automatic card-feed mechanism and key-punch system with which an operator using a keyboard could process as many as three hundred cards an hour. Hollerith’s company merged with three others in 1911 to form the Computing-Tabulating-Recording Company, which, under the direction of Thomas J. Watson, was renamed International Business Machines Corporation (IBM) in 1924.

There is much to be learned from such periodizations, but they have serious limitations. The developments I have identified overlap and interact in ways that subvert any simple linear narrative. Instead of thinking merely in terms of resources, products, and periods, it is also important to think in terms of networks and flows. The foundation for today’s wired world was laid more than two centuries ago. Beginning in the early nineteenth century, local communities, then states and nations, and finally the entire globe became increasingly connected. Though varying from time to time and place to place, there were two primary forms of networks: those that directed material flows (fuels, commodities, products, people), and those that channeled immaterial flows (communications, information, data, images, and currencies). From the earliest stages of development, these networks were inextricably interconnected. There would have been no telegraph network without railroads and no railroad system without the telegraph network, and neither could have existed without coal and iron. Networks, in other words, are never separate but form networks of networks in which material and immaterial flows circulate. As these networks continued to expand, and became more and more complex, there was a steady increase in the importance of immaterial flows, even for material processes. The combination of expanding connectivity and the growing importance of information technologies led to the acceleration of both material and immaterial flows. This emerging network of networks created positive feedback loops in which the rate of acceleration increased.

While developments in transportation, communications, information, and management were all important, industrialization as we know it is inseparable from the transportation revolution that trains created. In his foreword to Wolfgang Schivelbusch’s informative study “The Railway Journey: The Industrialization of Time and Space in the 19th Century,” Alan Trachtenberg writes, “Nothing else in the nineteenth century seemed as vivid and dramatic a sign of modernity as the railroad. Scientists and statesmen joined capitalists in promoting the locomotive as the engine of ‘progress,’ a promise of imminent Utopia.”

In England, railway technology developed as an extension of coal mining. The shift from human and natural sources of energy to fossil fuels created a growing demand for coal. While steam engines had been used since the second half of the eighteenth century in British mines to run fans and pumps like those my great-grandfather had operated in the Pennsylvania coalfields, it was not until 1801, when Oliver Evans invented a high-pressure, mobile steam engine, that locomotives were produced. By the beginning of the nineteenth century, the coal mined in the area around Newcastle was being transported throughout England on rail lines. It did not take long for this new rapid transit system to develop—by the 1820s, railroads had expanded to carry passengers, and half a century later rail networks spanned all of Europe.

What most impressed people about this new transportation network was its speed. The average speed of early railways in England was twenty to thirty miles per hour, which was approximately three times faster than stagecoaches. The increase in speed transformed the experience of time and space. Countless writers from this era use the same words to describe train travel as Karl Marx had used to describe emerging global financial markets. Trains, like capital, “annihilate space with time.”

Traveling on the recently opened Paris-Rouen-Orléans railway line in 1843, the German poet, journalist, and literary critic Heinrich Heine wrote: “What changes must now occur, in our way of looking at things, in our notions! Even the elementary concepts of time and space have begun to vacillate. Space is killed by the railways, and we are left with time alone. . . . Now you can travel to Orléans in four and a half hours, and it takes no longer to get to Rouen. Just imagine what will happen when the lines to Belgium and Germany are completed and connected up with their railways! I feel as if the mountains and forests of all countries were advancing on Paris. Even now, I can smell the German linden trees; the North Sea’s breakers are rolling against my door.” This new experience of space and time that speed brought about had profound psychological effects that I will consider later.

Throughout the nineteenth century, the United States lagged behind Great Britain in terms of industrial capacity: in 1869, England was the source of 20 percent of the world’s industrial production, while the United States contributed just 7 percent. By the start of World War I, however, America’s industrial capacity surpassed that of England: that is, by 1913, the scales had tipped—32 percent came from the United States and only 14 percent from England. While England had a long history before the Industrial Revolution, the history of the United States effectively begins with the Industrial Revolution. There are other important differences as well. Whereas in Great Britain the transportation revolution grew out of the industrialization of manufacturing primarily, but not exclusively, in textile factories, in the United States mechanization began in agriculture and spread to transportation before it transformed manufacturing. In other words, in Great Britain, the Industrial Revolution in manufacturing came first and the transportation revolution second, while in the United States, this order was reversed.

When the Industrial Revolution began in the United States, most of the country beyond the Eastern Seaboard was largely undeveloped. Settling this uncharted territory required the development of an extensive transportation network. Throughout the early decades of the nineteenth century, the transportation system consisted of a network of rudimentary roads connecting towns and villages with the countryside. New England, Boston, New York, Philadelphia, Baltimore, and Washington were joined by highways suitable for stagecoach travel. Inland travel was largely confined to rivers and waterways. The completion of the Erie Canal (1817–25) marked the first stage in the development of an extensive network linking rivers, lakes, canals, and waterways along which produce and people flowed. Like so much else in America, the railroad system began in Boston. By 1840, only 18,181 miles of track had been laid. During the following decade, however, there was an explosive expansion of the nation’s rail system financed by securities and bonds traded on stock markets in America and London. By the 1860s, the railroad network east of the Mississippi River was using routes roughly similar to those employed today.

Where some saw loss, others saw gain. In 1844, inveterate New Englander Ralph Waldo Emerson associated the textile loom with the railroad when he reflected, “Not only is distance annihilated, but when, as now, the locomotive and the steamboat, like enormous shuttles, shoot every day across the thousand various threads of national descent and employment, and bind them fast in one web, an hourly assimilation goes forward, and there is no danger that local peculiarities and hostilities should be preserved.” Gazing at tracks vanishing in the distance, Emerson saw a new world opening that, he believed, would overcome the parochialisms of the past. For many people in the nineteenth century, this new world promising endless resources and endless opportunity was the American West. A transcontinental railroad had been proposed as early as 1820 but was not completed until 1869.

On May 10, 1869, Leland Stanford, a former governor of California and, in 1891, the founder of Stanford University, drove the final spike in the railroad that joined east and west. Nothing would ever be the same again. This event was not merely local, but also, as Emerson had surmised, global. Like the California gold and Nevada silver spike that Leland had driven to join the rails, the material transportation network and immaterial communication network intersected at that moment to create what Rebecca Solnit correctly identifies as “the first live national media event.” The spike “had been wired to connect to the telegraph lines that ran east and west along the railroad tracks. The instant Stanford struck the spike, a signal would go around the nation. . . . The signal set off cannons in San Francisco and New York. In the nation’s capital the telegraph signal caused a ball to drop, one of the balls that visibly signaled the exact time in observatories in many places then (of which the ball dropped in New York’s Times Square at the stroke of the New Year is a last relic). The joining of the rails would be heard in every city equipped with fire-alarm telegraphs, in Philadelphia, Omaha, Buffalo, Chicago, and Sacramento. Celebrations would be held all over the nation.” This carefully orchestrated spectacle, which was made possible by the convergence of multiple national networks, was worthy of the future Hollywood and the technological wizards of Silicon Valley whose relentless innovation Stanford’s university would later nourish. What most impressed people at the time was the speed of global communication, which now is taken for granted.

Flickering Images—Changing Minds

Industrialization not only changes systems of production and distribution of commodities and products but also imposes new disciplinary practices that transform bodies and change minds. During the early years of train travel, bodily acceleration had an enormous psychological effect that some people found disorienting and others found exhilarating. The mechanization of movement created what Anne Friedberg describes as the “mobile gaze,” which transforms one’s surroundings and alters both the content and, more important, the structure of perception. This mobile gaze takes two forms: the person can move while the surroundings remain immobile (train, bicycle, automobile, airplane, elevator), or the person can remain immobile while the surroundings move (panorama, kinetoscope, film).

When considering the impact of trains on the mobilization of the gaze, it is important to note that different designs for railway passenger cars had different perceptual and psychological effects. Early European passenger cars were modeled on stagecoaches, in which individuals had seats in separate compartments; early American passenger cars, by contrast, were modeled on steamboats, in which people shared a common space and were free to move around. The European design tended to reinforce social and economic hierarchies that the American design tried to break down. Eventually, American railroads adopted the European model of fixed individual seating, but with rows of seats facing in the same direction rather than separate compartments. As we will see, the resulting compartmentalization of perception anticipates the cellularization of attention that accompanies today’s distributed high-speed digital networks.

During the early years, there were numerous accounts of the experience of railway travel by ordinary people, distinguished writers, and even physicians, in which certain themes recur. The most common complaint is the sense of disorientation brought about by the experience of unprecedented speed. There are frequent reports of the dispersion and fragmentation of attention that are remarkably similar to contemporary personal and clinical descriptions of attention-deficit hyperactivity disorder (ADHD). With the landscape incessantly rushing by faster than it could be apprehended, people suffered overstimulation, which created a sense of psychological exhaustion and physical distress. Some physicians went so far as to maintain that the experience of speed caused “neurasthenia, neuralgia, nervous dyspepsia, early tooth decay, and even premature baldness.”

In 1892, Sir James Crichton-Browne attributed the significant increase in the mortality rate between 1859 and 1888 to “the tension, excitement, and incessant mobility of modern life.” Commenting on these statistics, Max Nordau might well be describing the harried pace of life today. “Every line we read or write, every human face we see, every conversation we carry on, every scene we perceive through the window of the flying express, sets in activity our sensory nerves and our brain centers. Even the little shocks of railway travelling, not perceived by consciousness, the perpetual noises and the various sights in the streets of a large town, our suspense pending the sequel of progressing events, the constant expectation of the newspaper, of the postman, of visitors, cost our brains wear and tear.” During the years around the turn of the last century, a sense of what Stephen Kern aptly describes as “cultural hypochondria” pervaded society. Like today’s parents concerned about the psychological and physical effects of their kids playing video games, nineteenth-century physicians worried about the effect of people sitting in railway cars for hours watching the world rush by in a stream of images that seemed to be detached from real people and actual things.

In addition to the experience of disorientation, dispersion, fragmentation, and fatigue, rapid train travel created a sense of anxiety. People feared that with the increase in speed, machinery would spin out of control, resulting in serious accidents. An 1829 description of a train ride expresses the anxiety that speed created: “It is really flying, and it is impossible to divest yourself of the notion of instant death to all upon the least accident happening.” A decade and a half later, an anonymous German explained that the reason for such anxiety is the always “close possibility of an accident, and the inability to exercise any influence on the running of the cars.” When several serious accidents actually occurred, anxiety spread like a virus. Anxiety, however, is always a strange experience—it not only repels, it also attracts; danger and the anxiety it brings are always part of speed’s draw.

Perhaps this was a reason that not everyone found trains so distressing. For some people, the experience of speed was “dreamlike” and bordered on ecstasy. In 1843, Emerson wrote in his Journals, “Dreamlike travelling on the railroad. The towns which I pass between Philadelphia and New York make no distinct impression. They are like pictures on a wall.” The movement of the train creates a loss of focus that blurs the mobile gaze. A few years earlier, Victor Hugo’s description of train travel sounds like an acid trip as much as a train trip. In either case, the issue is speed. “The flowers by the side of the road are no longer flowers but flecks, or rather streaks, of red or white; there are no longer any points, everything becomes a streak; grain fields are great shocks of yellow hair; fields of alfalfa, long green tresses; the towns, the steeples, and the trees perform a crazy mingling dance on the horizon; from time to time, a shadow, a shape, a specter appears and disappears with lightning speed behind the window; it’s a railway guard.” The flickering images fleeting past train windows are like a film running too fast to comprehend.

Transportation was not the only thing accelerating in the nineteenth century—the pace of life itself was speeding up as never before. Listening from his cabin beside Walden Pond to the whistle of the train headed to Boston, Thoreau mused, “The startings and arrivals of the cars are now the epochs in the village day. They go and come with such regularity and precision, and their whistle can be heard so far, that the farmers set their clocks by them, and thus one well conducted institution regulates a whole country. Have not men improved somewhat in punctuality since the railroad was invented? Do they not talk and think faster in the depot than they did in the stage office? There is something electrifying in the atmosphere of the former place. I have been astonished by some of the miracles it has wrought.” And yet Thoreau, more than others, knew that these changes also had a dark side.

The transition from agricultural to industrial capitalism brought with it a massive migration from the country, where life was slow and governed by natural rhythms, to the city, where life was fast and governed by mechanical, standardized time. The convergence of industrialization, transportation, and electrification made urbanization inevitable. The faster that cities expanded, the more some writers and poets idealized rustic life in the country. Nowhere is such idealization more evident than in the writings of British romantics. The rapid swirl of people, machines, and commodities created a sense of vertigo as disorienting as train travel. Wordsworth writes in The Prelude,

Oh, blank confusion! True epitome
Of what the mighty City is herself
To thousands upon thousands of her sons,
Living among the same perpetual whirl
Of trivial objects, melted and reduced
To one identity, by differences
That have no law, no meaning, no end—

By 1850, fifteen cities in the United States had a population exceeding 50,000. New York was the largest (1,080,330), followed by Philadelphia (565,529), Baltimore (212,418), and Boston (177,840). Increasing domestic trade that resulted from the railroad and growing foreign trade that accompanied improved ocean travel contributed significantly to this growth. While commerce was prevalent in early cities, manufacturing expanded rapidly during the latter half of the nineteenth century. The most important factor contributing to nineteenth-century urbanization was the rapid development of the money economy. Once again, it is a matter of circulating flows, not merely of human bodies but of mobile commodities. Money and cities formed a positive feedback loop—as the money supply grew, cities expanded, and as cities expanded, the money supply grew.

The fast pace of urban life was as disorienting for many people as the speed of the train. In his seminal essay “The Metropolis and Mental Life,” Georg Simmel observes, “The psychological foundation upon which the metropolitan individuality is erected, is the intensification of emotional life due to the swift and continuous shift of external and internal stimuli. Man is a creature whose existence is dependent on differences, i.e., his mind is stimulated by the difference between present impressions and those which have preceded. . . . To the extent that the metropolis creates these psychological conditions—with every crossing of the street, with the tempo and multiplicity of economic, occupational and social life—it creates the sensory foundations of mental life, and in the degree of awareness necessitated by our organization as creatures dependent on differences, a deep contrast with the slower, more habitual, more smooth flowing rhythm of the sensory-mental phase of small town and rural existence.” The expansion of the money economy created a fundamental contradiction at the heart of metropolitan life. On the one hand, cities brought together different people from all backgrounds and walks of life; on the other hand, emerging industrial capitalism leveled these differences by disciplining bodies and programming minds. “Money,” Simmel continues, “is concerned only with what is common to all, i.e., with the exchange value which reduces all quality and individuality to a purely quantitative level.” The migration from country to city that came with the transition from agricultural to industrial capitalism involved a shift from homogeneous communities to heterogeneous assemblages of different people, from qualitative to quantitative methods of assessment and evaluation, from concrete to abstract networks for the exchange of goods and services, and from a slow to a fast pace of life. I will consider further aspects of these disciplinary practices in Chapter 3; for now, it is important to understand the implications of the mechanization or industrialization of perception.

I have already noted similarities between the experience of looking through a window on a speeding train and the experience of watching a film that is running too fast. During the latter half of the nineteenth century, a remarkable series of inventions transformed not only what people experienced in the world but how they experienced it: photography (Louis-Jacques-Mandé Daguerre, ca. 1837), the telegraph (Samuel F. B. Morse, ca. 1840), the stock ticker (Thomas Alva Edison, 1869), the telephone (Alexander Graham Bell, 1876), the chronophotographic gun (Étienne-Jules Marey, 1882), the kinetoscope (Edison, 1894), the zoopraxiscope (Eadweard Muybridge, 1893), the phantoscope (Charles Jenkins, 1894), and cinematography (Auguste and Louis Lumière, 1895). The way in which human beings perceive and conceive the world is not hardwired in the brain but changes with new technologies of production and reproduction.

Just as the screens of today’s TVs, computers, video games, and mobile devices are restructuring how we process experience, so too did new technologies at the end of the nineteenth century change the world by transforming how people apprehended it. While each innovation had a distinctive effect, there is a discernible overall trajectory to these developments. Industrial technologies of production and reproduction extended processes of dematerialization that eventually led first to consumer capitalism and then to today’s financial capitalism. The crucial variable in these developments is the way in which material and immaterial networks intersect to produce a progressive detachment of images, representations, information, and data from concrete objects and actual events. Marveling at what he regarded as the novelty of photographs, Oliver Wendell Holmes commented, “Form is henceforth divorced from matter. In fact, matter as a visible object is of no great use any longer, except as the mould on which form is shaped. Give us a few negatives of a thing worth seeing, taken from different points of view, and that is all we want of it. Pull it down or burn it up, if you please. . . . Matter in large masses must always be fixed and dear, form is cheap and transportable. We have got the fruit of creation now, and need not trouble ourselves about the core.”

Technologies for the reproduction and transmission of images and information expand the process of abstraction initiated by the money economy to create a play of freely floating signs without anything to ground, certify, or secure them. With new networks made possible by the combination of electrification and the invention of the telegraph, telephone, and stock ticker, communication was liberated from the strictures imposed by physical means of conveyance. In previous energy regimes, messages could be sent no faster than people, horses, carriages, trains, ships, or automobiles could move. Dematerialized words, sounds, information, and eventually images, by contrast, could be transmitted across great distances at high speed. With this dematerialization and acceleration, Marx’s prediction—that “all that is solid melts into air”—was realized. But this was just the beginning. It would take more than a century for electrical currents to become virtual currencies whose transmission would approach the speed limit.

Excerpted from “Speed Limits: Where Time Went and Why We Have So Little Left,” by Mark C. Taylor, published October 2014 by Yale University Press. Copyright ©2014 by Mark C. Taylor. Reprinted by permission of Yale University Press.

http://www.salon.com/2014/10/19/the_end_of_the_old_world_how_technology_shrunk_america_forever/?source=newsletter
