An American slaughter: The young victims of America’s gun culture

A new book explores the violence of America’s gun culture and the young casualties it leaves behind

This piece originally appeared on TomDispatch.

Every day, on average, seven kids and teens are shot dead in America. Election 2016 will undoubtedly prove consequential in many ways, but lowering that death count won’t be one of them. To grapple with fatalities on that scale — 2,500 dead children annually — a candidate would need a thoroughgoing plan for dealing with America’s gun culture that goes well beyond background checks. In addition, he or she would need to engage with the inequality, segregation, poverty, and lack of mental health resources that add up to the environment in which this level of violence becomes possible.  Think of it as the huge pile of dry tinder for which the easy availability of firearms is the combustible spark. In America in 2016, to advocate for anything like the kind of policies that might engage with such issues would instantly render a candidacy implausible, if not inconceivable — not least with the wealthy folks who now fund elections.

So the kids keep dying and, in the absence of any serious political or legislative attempt to tackle the causes of their deaths, the media and the political class move on to excuses. From claims of bad parenting to lack of personal responsibility, they regularly shift the blame from the societal to the individual level. Only one organized group at present takes the blame for such deaths: gangs. The problem, it is suggested, isn’t American culture, but gang culture.

As I researched my new book, Another Day in the Death of America, about all the children and teens shot dead on a single random Saturday in 2013, it became clear how often the presence of gangs in neighborhoods where so many of these kids die is used as a way to dismiss serious thinking about why this is happening. If a shooting can be described as “gang related,” then it can also be discounted as part of the “pathology” of urban life, particularly for people of color. In reality, the main cause, pathologically speaking, is a legislative system that refuses to control the distribution of firearms, making America the only country in the world in which such a book would have been possible.

“Gang Related”

The obsession with whether a shooting is “gang related” and the ignorance the term exposes bring to mind an interview I did 10 years ago with septuagenarian Buford Posey in rural Mississippi. He had lived in Philadelphia, Mississippi, around the time that three civil rights activists — James Chaney, Andrew Goodman, and Michael Schwerner — were murdered. As I spoke to him about that era and the people living in that town (some of whom, like him, were still alive), I would bring up a name and he would instantly interject, “Well, he was in the Klan,” or “Well, his Daddy was in the Klan,” or sometimes he would just say “Klan” and leave it at that.

After a while I had to stop him and ask for confirmation. “Hang on,” I said, “I can’t just let you say that about these people without some proof or corroboration. How do you know they were in the Klan?”

“Hell,” he responded matter-of-factly, “I was in the Klan. Near everybody around here was in the Klan around that time. Being in the Klan was no big deal.”

Our allegiances and affiliations are, of course, our choice. Neither Posey nor any of the other white men in Philadelphia had to join the Klan, and clearly some were more enthusiastic participants than others. (Posey himself would go on to support the civil rights movement.)

It’s no less true that context shapes such choices. If Posey had grown up in Vermont, it’s unlikely that he’d ever have joined the Klan. If a white Vermonter had been born and raised in Mississippi in those years, the likelihood is that he’d have had a pressed white sheet in the closet for special occasions.

At the time, for white men in Philadelphia the Klan was the social mixing place du jour. It was what you did if you had any hope of advancing locally, did not want to be left out of things, or simply preferred to swim with the tide. Since pretty much everyone you knew was involved in one way or another, to be white and live in Philadelphia then was to be, in some way, “Klan related.” That doesn’t mean being in the Klan should give anyone a pass, but it does mean that if you wanted to understand how it operated, why it had the reach it did, and ultimately how to defeat it rather than just condemn it, you first had to understand its appeal in that moment.

The same is true of gangs today in urban America. On the random day I picked for my book, 10 children and teens died by gun. Not all of their assailants have been caught and probably they never will be. Depending on how you define the term, however, it would be possible to argue that eight of those killings were gang related.  Either the assailant or the victim was (or was likely to have been) part of a group that could be called a gang.  Only two were clearly not gang related — either the victim and the shooter were not in a gang or membership in a gang had nothing to do with the shooting. But all 10 deaths did have one clear thing in common: they were all gun-related.

The emphasis on gang membership has always seemed to me like a way of filtering child deaths into two categories: deserving and undeserving. If a shooting was gang related then it’s assumed that the kid had it coming and was, in some way, responsible for his or her own death. Only those not gang related were innocents and so they alone were worthy of our sympathy.

Making a “Blacklist”

The more I spoke to families and people on the ground, the more it became clear how unhelpful the term “gang related” is in understanding who is getting shot and why.  As a term, it’s most often used not to describe but to dismiss.

Take Edwin Rajo, 16, who was shot dead in Houston, Texas, at about 8 p.m. on that November 23rd. He lived in Bellaire Gardens, a low-rise apartment complex on a busy road of commercial and residential properties in an area called Gulfton in southwest Houston. It sat between a store selling bridal wear and highly flammable-looking dresses for quinceañera — the celebration of a girl’s 15th birthday — and the back of a Fiesta supermarket, part of a Texas-based, Hispanic-oriented chain with garish neon lighting that makes you feel as though you’re shopping for groceries in Las Vegas. Opposite it was a pawnshop, a beauty salon, a Mexican taqueria, and a Salvadorean restaurant.

The Southwest Cholos ran this neighborhood, complex by complex. There was no avoiding them. “They start them really, really young,” one of Edwin’s teachers told me. “In elementary. Third grade, fourth grade. And that’s just how it is for kids… You join for protection. Even if you’re not cliqued in, so long as you’re associated with them, you’re good. You have to claim a clique to be safe. If you’re not, if you’re by yourself, you’re gonna get jumped.”

In other words, if you grow up in Bellaire Gardens you are a gang member in the same way that Soviet citizens were members of the Communist Party and Iraqis under Saddam Hussein, the Baath Party.  There is precious little choice, which means that, in and of itself, gang affiliation doesn’t tell you much.

Edwin, a playful and slightly immature teenager, was not, in fact, an active member of the Cholos, though he identified with them.  Indeed, you get the impression that they considered him something of a liability. “They accepted him,” said his teacher.  “He hung with them. But he wasn’t in yet.” His best friend in the complex, Camilla (not her real name), was in the gang, as allegedly was her mother. She sported the Cholo-style dress and had a gang name. After several altercations with someone from a rival gang, who threatened them and took a shot at Camilla’s brother, she decided to get a gun.

“We were thinking like little kids,” Camilla told me. “I didn’t really know anything about guns. I just know you shoot with it and that’s it.”

Sure enough, Edwin was at Camilla’s apartment that night and suggested they play with the gun. In the process, she shot him, not realizing that, even though the clip was out, one bullet was still in the chamber. So was that shooting gang related? After all, the shooter was in a gang. She had been threatened by someone from a rival gang and Edwin may indeed have had aspirations to be in her gang.

Or was it an accidental shooting in which two kids who knew nothing about guns acquired one and one of them got killed while they were messing around?

In an environment in which gangs run everything, most things most people do are in some way going to be “gang related.” But defining all affiliation as a kind of complicity in violence not only means writing off children in entire communities for being born in the wrong place at the wrong time, but criminalizing them in the process.

For one thing, the criteria for gang membership couldn’t be more subjective and loose. Gang leaders don’t exactly hand out membership cards. Sometimes it’s just a matter of young people hanging out. Take Stanley Taylor, who was shot dead in the early hours of that November morning in Charlotte, North Carolina. He spent a lot of his time on Beatties Ford Road with his friends. “I ain’t gonna say it was a gang,” says his buddy Trey. “But it was a neighborhood thing. Beatties Ford. We got our own little clique. We on the West Side. North Side is a whole different neighborhood you don’t even fool with. Everybody was together. This my brother, this my brother. We all in the same clique. We got each other’s back. I’m not going to let nobody else touch you. If you hit him, I’m gonna hit you. Cos I’m his brother.”

Stanley was shot at a gas station in the wake of an altercation with Demontre Rice, who was from the North Side, after Rice allegedly almost ran him over as he pulled in. It’s not obvious that either man knew where the other was from and yet if Rice were in a gang (something I can’t even confirm), that would, of course, make his killing gang related.

Sometimes gangs do have actual rites of initiation. Since, however, gang affiliation can be a guide to criminal activity, authorities are constantly trying to come up with more definite ways of identifying gang members. Almost inevitably, such attempts quickly fall back on stereotypes. A 1999 article in Colorlines, for instance, typically pointed out that in “at least five states, wearing baggy FUBU jeans and being related to a gang suspect is enough to meet the ‘gang member’ definition. In Arizona, a tattoo and blue Adidas sneakers are sufficient.” In suburban Aurora, Colorado, local police decided that any two of the following constituted gang membership: “slang,” “clothing of a particular color,” “pagers,” “hairstyles,” “jewelry.”

Black people made up 11% of Aurora’s population and 80% of its gang database. The local head of the ACLU was heard to say, “They might as well call it a blacklist.”

Under the Gun

Gangs are neither new nor racially specific. From the Irish, Polish, Jewish, and Puerto Rican gangs of New York to the Mafia, various types of informal gatherings of mostly, but not exclusively, young men have long been part of Western life. They often connect the social, violent, entrepreneurial, and criminal.

None of this should in any way diminish the damaging, often lethal effects organized gangs have on the young. One of the boys who died that day, 18-year-old Tyshon Anderson from Chicago, was by all accounts a gang member. His godmother, Regina, had long expected his life to come to an early end. “He did burglary, sold drugs, he killed people. He had power in the street. He really did. Especially for such a young kid. He had power. A lot of people were intimidated by him and they were scared of him. I know he had bodies under his belt. I seen him grow up and I loved him and I know he could be a good kid. But there ain’t no point in sugarcoating it. He was a bad kid, too.” If I’d chosen another day that year, I could well have been reporting on one of Tyshon’s victims.

And although gangs involve a relatively small minority of young people, they still add up to significant numbers. According to the National Youth Gang Survey, in 2012 in the United States there were around 30,000 gangs and more than 800,000 gang members — roughly the population of Amsterdam.

What’s new in all this isn’t the gangs themselves, but how much deadlier they’ve become in recent years. According to the National Youth Gang Survey, between 2007 and 2012, gang membership rose by 8%, but gang-related homicides leapt by 20%. It seems that the principal reason why gang activity has become so much more deadly is the increasingly easy availability of guns — and of ever deadlier versions of such weaponry as well. Studies of Los Angeles County between 1979 and 1994 revealed that the proportion of gang-related homicides involving guns leapt from 71% to 95%. “The contrast with the present is striking,” argues sociologist Malcolm Klein, after reaching a similar conclusion in Philadelphia and East Los Angeles. “Firearms are now standard. They are easily purchased or borrowed and are more readily available than in the past.”

This raises the stakes immeasurably when it comes to parents and caregivers trying to protect their adolescent children from bad company or poor choices (as parents of all classes and races tend to do). Identifying with a gang and doing something as seemingly harmless as wearing clothing of a certain color or befriending the wrong person can result in an early death.  As a result, Gustin Hinnant’s father in Goldsboro, North Carolina, used to burn his red clothes if he saw him wearing them too often.  Gustin died anyway, hit in the head by a stray bullet meant for another boy who was in a gang. Pedro Cortez’s grandmother in San Jose, California, used to similarly hide his red shirts — the color identified with the local Nortenos gang — just in case. Yet on that same November 23rd, Pedro, who was legally blind, was shot dead while walking in a park. He was dressed in black, but a friend who was with him was indeed wearing red.

Gangs are hardly unique to America, nor do Americans make worse parents than those elsewhere in the world, nor are their kids worse. There is, however, an unavoidable difference between the United States and all other Western nations, without which the book I wrote would have been inconceivable. This is the only place where, in addition to the tinder of poverty, inequality, and segregation, among other challenges, you have to include the combustible presence of guns — guns everywhere, guns so available that they are essentially unavoidable.

As long as Americans refuse to engage with that straightforward fact of their social landscape, the kinds of deaths I recorded in my book will keep happening with gruesome predictability.  In fact, I could have chosen almost any Saturday from at least the past two decades and produced the same work.

Dismissing such fatalities as “gang related” — as, that is, victims to be dumped in some morally inferior category — is a way of not facing an American reality. It sets the white noise of daily death sufficiently low to allow the country to go about its business undisturbed.  It ensures a confluence of culture, politics, and economics guaranteeing that an average of seven children will wake up but not go to bed every day of the year, while much of the rest of the country sleeps soundly.

Michael Moore in TrumpLand grovels in praise of Hillary Clinton

By Fred Mazelis
27 October 2016

Michael Moore in TrumpLand is a bare-bones documentary, essentially the recording of a one-man show presented by the American filmmaker in Wilmington, Ohio earlier this month and released just days later, three weeks before the presidential election.

Moore, who previously backed Vermont Senator Bernie Sanders and then became a reluctant supporter of Hillary Clinton after she won the Democratic presidential nomination, has now gone all-out to portray the former First Lady and Secretary of State as “our Pope Francis,” a positive standard bearer for the “left.” The man who occasionally used satire and a comic flair to scandalize the corporate and political establishment (Roger and Me, Bowling for Columbine) has come forward as the defender of the favored candidate of that establishment.

With the message that Hillary Clinton will be the second coming of Franklin D. Roosevelt, Moore has made a movie whose laugh lines fall flat and whose peroration in praise of the voice of Wall Street and the Pentagon is both politically appalling and pathetic.

The premise of TrumpLand is that Mr. Moore, the fearless stand-up comic, has ventured into the lion’s den. Wilmington, a town of some 12,500 in southwestern Ohio, is typical of cities and towns throughout the US where the fascistic candidate of the Republican Party has won support by appealing to the anger and frustration of working class voters who have seen their jobs and living standards decimated in the years since the 2008 financial crash and the decades of deindustrialization leading up to it.

Showing somewhat more flexibility than Clinton exhibited with her notorious comment about Trump voters as a “basket of deplorables,” Moore welcomes both Trump and Clinton supporters, as well as those planning to vote for third-party candidates, to Wilmington’s Murphy Theater. After some lame and reactionary gibes at Trump partisans—referring to “angry white guys” whose “days are numbered”—Moore declares his sympathy with the “legitimate concerns” of the Trump backers.

He warns, however, that while a vote for Trump will be a “human Molotov cocktail,” “the biggest ‘Fuck you!’ ever recorded in human history,” it will “only feel good for possibly a month.” Comparing the US election to the Brexit vote in Britain, he warns that “using the ballot as an anger management tool” will leave working people even worse off than before.

“Can’t we start saying something nice about her?” says Moore. He proceeds to poke fun at right-wing critics on such issues as the 2012 Benghazi attack in Libya, but says nothing about Clinton’s actual record as US Senator and Secretary of State: her notorious gloating about the murder of Muammar Gaddafi, the WikiLeaks revelations of her Wall Street speeches, her appeals for the prosecution of Edward Snowden, and her calls for aggressive military preparations or actual escalation of US intervention in Iran, Syria, China and Russia.

Advocating a vote for Clinton, Moore goes much further in TrumpLand than the bankrupt lesser-evil argument advanced in some quarters. He rhapsodizes about a first 100 days of a Hillary Clinton administration, filled with executive orders that will usher in a new era of social reform. Clinton will stop the deportation of immigrants, rescue the residents of lead-poisoned Flint, release all non-violent offenders from prison and prosecute all police who shoot unarmed black men. Clinton will supposedly “kick ass in Congress”—never mind her constant appeals for Republican support and promises to seek “compromise.”

Qualifying his praise slightly, Moore goes on to explain that his dream of Clinton as a reformer isn’t going to happen “without a revolution behind her.” Repeating the argument of Sanders, who shifted quickly from denouncing Clinton as the candidate of Wall Street to boosting her as a progressive champion, Moore calls for mobilizing support to “get behind” Clinton and “hold her” to the promises of the Democratic Party platform.

“If for some reason” Clinton does not deliver, Moore promises, tongue planted firmly in cheek, to run for president himself in 2020.

Moore goes beyond attempting, à la Sanders, to sell Clinton as a progressive alternative. The climax of the filmmaker’s plea on behalf of Clinton in TrumpLand is entirely within the deplorable framework of identity politics.

Running through Moore’s 70-minute show is the theme of Clinton as the first woman US president, and the supposedly earthshaking significance of gender. “Hillary is genuinely the first feminist of the modern era,” he proclaims, after screening a clip of her graduation speech from Wellesley College. Like the current Pope, Moore says, Clinton has “bided her time.” She endured all the attacks as First Lady, the failure of her attempts, supposedly, to secure universal healthcare. Now, however, “the majority gender has the chance to run this world.”

There are millions of young women and men, of course, firmly committed to equal rights, but unimpressed with Clinton or the claims that a woman president will reverse inequality or change the nature of the capitalist system.

Moore does not mention Margaret Thatcher, one of the most significant figures in the social counterrevolution that has been waged by global capitalism for the past 40 years. Nor does he allude to the current or recent female prime ministers or heads of state in Britain, Germany, Finland, Norway, Brazil, Chile, Australia, Argentina and elsewhere.

It is not an accident that the prominence of female leaders coincides with this period of reaction. The politics of identity, based on gender, race and sexual orientation, has been used to cultivate an upper middle class constituency that directly benefits from austerity and inequality, while the vast majority of the population, of all races and genders, suffers the consequences.

Moore is now the proud spokesperson for this brand of politics. His right-wing trajectory is one that has been followed by many others. There is some continuity, however, between his current love affair with Clinton and his earlier middle class radical posture. Even at his best, Moore depicted the working class as victims. Today his assigned task is to convince angry millennial voters who are justifiably disgusted with the two-party system to give Clinton a mandate, not for social reform, but for austerity and war.


Government announces huge Obamacare premium rises for 2017


By Kate Randall
26 October 2016

Open enrollment for the Affordable Care Act (ACA), commonly known as Obamacare, begins November 1, just a week before Election Day. US officials announced Monday that in 2017 insurers will hike the premiums for many health plans sold on ACA exchanges by an average of 25 percent.

The projected premium increases are of concern not only to those shopping for insurance coverage under Obamacare. They are part of a sea change in the US health care system, in which corporations and the government are increasingly burdening working families with rising health care costs while simultaneously working to ration care for the vast majority of Americans.

In a call with reporters on Monday, the Department of Health and Human Services (HHS) confirmed the 25 percent average price hike for the second cheapest (“silver”) plans, which are used as the benchmark to determine government subsidies. The dramatic increase compares to an average 7.5 percent premium hike in 2016 and a 2 percent rise in 2015. Average monthly increases are estimated at anywhere from $50 to $300.

In addition to the ACA premium hikes, HHS announced that more than one in five consumers using the HealthCare.gov site would have only one insurer to choose from in 2017. This is mainly the result of the pullout of insurance giants UnitedHealthcare, Humana and Aetna from the ACA marketplace over the past year. The average number of insurance carriers available per US county in 2017 is projected at 2.9, down from 5.7 in 2016.

The premium hikes and dwindling plan choices are a direct function of Obamacare’s subordination to the multibillion-dollar private insurance industry. Under the ACA’s so-called individual mandate, individuals and families without insurance through their employer or a government-run program such as Medicare or Medicaid must purchase insurance or pay a tax penalty. Those who go without insurance next year could face tax penalties of $700 a person or more.

Rising premiums and huge deductibles are only part of the Obamacare story. Earlier this month, the New York Times ran a front-page lead article with the headline, “Next President Likely to Shape Health Law Fate: Changes Seen as Needed.” The article was a semi-official announcement that major changes would be imposed after the November 8 election, regardless of the outcome of the presidential race, to bolster the profits of insurance companies participating in the program.

Among the changes under consideration, according to the Times, are increasing taxpayer subsidies to insurance firms for “high-cost enrollees,” increasing tax penalties on individuals and families for not buying insurance, and curbing “abuse” of special enrollment periods by people who sign up for coverage after becoming sick.

The failure of Obamacare to attract a sufficient number of younger, healthier customers has resulted in a pool of less healthy enrollees who are more costly to insure. The private insurers, unwilling to accept any encroachment on their profits, have responded by requesting and receiving premium increases of 25 to 50 percent or more from state insurance commissions, or by pulling out of the ACA marketplace altogether.

HHS officials argue that consumers shopping on HealthCare.gov for 2017 should be able to find plans comparable in price to last year’s. But in general these are the least expensive “bronze” plans that come with deductibles in excess of $5,000 for an individual and other high out-of-pocket costs. In a further effort to cut costs, insurers are also offering an increasing number of plans with narrow networks of doctors and hospitals, as well as limited prescription drug coverage.

The Obama administration says that about 8 in 10 of the expected 11.4 million Obamacare enrollees in 2017 will qualify for government subsidies. The ACA exchanges have enrolled more than 80 percent of those with incomes below 150 percent of the (absurdly low) federal poverty level who are potentially eligible for subsidies. But another 5 to 7 million people who buy insurance on their own do not receive federal subsidies.

According to Avalere, a health policy consulting company, only about 17 percent of potential ACA customers with incomes from three to four times the poverty level ($35,640 to $47,520 for an individual) have enrolled. For many people in this income bracket and above—who are between jobs, self-employed, or retired but not yet eligible for Medicare—ACA insurance is unaffordable, with or without subsidies.

Using the estimator on HealthCare.gov for 2017 plans in Maricopa County, Arizona, a couple in their early 40s with two children under age 19 and a household income of $60,000 would receive a $1,451 monthly subsidy for the least expensive silver plan, bringing their estimated premium down to $313 a month. However, with a $10,500 annual family deductible and other out-of-pocket costs, estimated yearly costs would be $14,305, or nearly one-quarter of their household income.
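As a rough check on that estimate, here is a minimal back-of-the-envelope sketch (not the HealthCare.gov estimator itself; it simply assumes the family pays the subsidized premium all year and then meets its full deductible, which is why it lands a little below the $14,305 figure that also includes other out-of-pocket costs):

```python
# Illustrative arithmetic only; the premium, subsidy, and deductible figures
# come from the example above, and the assumption that the family meets its
# full deductible is ours.
household_income = 60_000        # annual household income ($)
net_premium_per_month = 313      # premium after the $1,451/month subsidy ($)
family_deductible = 10_500       # annual family deductible ($)

annual_premiums = net_premium_per_month * 12                    # $3,756
premiums_plus_deductible = annual_premiums + family_deductible  # $14,256

print(f"Premiums per year:        ${annual_premiums:,}")
print(f"Premiums plus deductible: ${premiums_plus_deductible:,}")
print(f"Share of household income: {premiums_plus_deductible / household_income:.0%}")
# -> about 24%, i.e. nearly one-quarter of the family's income
```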

Obamacare—with its soaring premiums, high out-of-pocket costs and dwindling networks and services—is serving as the model for employers across the country as they seek to shift more health care costs onto their workers.

Attacks on health care benefits have featured prominently in a series of recent contract disputes, including strikes by 4,800 nurses at Allina Health in Minnesota, a strike by 5,500 faculty and coaches at Pennsylvania’s 14 state-run universities, a strike at Harvard University by 700 dining service workers, and a walkout of Libbey Glass workers in Toledo, Ohio. In each case, employers have sought to drastically reduce health benefits and shift workers to inferior plans with burdensome out-of-pocket costs.

Obamacare is also the spearhead of a gathering attack on Medicare, the government health insurance program for 53 million American seniors and disabled people. Last year, President Obama signed into law a bipartisan bill revising the payment system for Medicare providers to reward doctors for cutting costs and penalize them if the volume and frequency of the health services they provide are deemed too high. Doctors will have a financial incentive to withhold more extensive tests and services from Medicare recipients.

President Obama spoke Thursday at Miami Dade College in Florida to tout the achievements of the ACA. While conceding the “growing pains” facing his signature domestic policy, he pointed to the ACA’s extension of health insurance to 20 million people, its prohibition on insurers denying coverage to people with preexisting conditions, and its guarantee of coverage for certain “essential” medical services.

He did not acknowledge that the ACA imposes no serious restraints on the insurance companies, pharmaceutical firms or hospital chains, and uses financial coercion to drive people to buy bare-bones plans with high out-of-pocket costs. Nor did he take note of the intensified assault on health benefits by employers, both private and public, across the US.

Obama boasted, “All told, about another 10 percent of the country now have coverage.” He was silent on the national scandal of 29 million Americans remaining uninsured.

With Election Day less than two weeks away, news of the premium hikes has forced a response from the presidential campaigns of both big-business parties. Republican Donald Trump proclaimed at a rally Monday night in Tampa, Florida, “It’s over for Obamacare.” He has called for the law’s repeal, not to replace it with a more progressive alternative, but to leave even more people without insurance. His stated health care agenda includes turning Medicaid, the government health insurance program for the poor, into a voucher program.

While acknowledging that “premiums have gotten too high,” Democrat Hillary Clinton, a staunch defender of Obamacare, has called for providing a new tax credit of up to $5,000 to help people pay for premiums and out-of-pocket costs. Such a measure, as she well knows, stands virtually no chance of passage by Congress.

Neither the Democrats nor the Republicans have any intention of challenging the for-profit health care industry. The deepening attack on health care, exemplified by the projected 25 percent hike in Obamacare premiums, serves as a warning of the austerity agenda of the next administration, whichever party occupies the White House in January.


No Matter Who Wins the Election, Military Spending Is Here to Stay

The military-industrial complex is alive and well, and it’s gobbling up your tax dollars.

Through good times and bad, regardless of what’s actually happening in the world, one thing is certain: In the long run, the Pentagon budget won’t go down.

It’s not that that budget has never been reduced. At pivotal moments, like the end of World War II as well as war’s end in Korea and Vietnam, there were indeed temporary downturns, as there was after the Cold War ended. More recently, the Budget Control Act of 2011 threw a monkey wrench into the Pentagon’s plans for funding that would go ever onward and upward by putting a cap on the money Congress could pony up for it. The remarkable thing, though, is not that such moments have occurred, but how modest and short-lived they’ve proved to be.

Take the current budget. It’s down slightly from its peak in 2011, when it reached the highest level since World War II, but this year’s budget for the Pentagon and related agencies is nothing to sneeze at. It comes in at roughly $600 billion, more than in the peak year of the massive arms build-up initiated by President Ronald Reagan back in the 1980s. To put this figure in perspective: Despite troop levels in Iraq and Afghanistan dropping sharply over the past eight years, the Obama administration has still managed to spend more on the Pentagon than the Bush administration did during its two terms in office.

What accounts for the Department of Defense’s ability to keep a stranglehold on your tax dollars year after endless year?

Pillar one supporting that edifice: ideology. As long as most Americans accept the notion that it is the God-given mission and right of the United States to go anywhere on the planet and do more or less anything it cares to do with its military, you won’t see Pentagon spending brought under real control. Think of this as the military corollary to American exceptionalism—or just call it the doctrine of armed exceptionalism, if you will.

The second pillar supporting lavish military budgets (and this will hardly surprise you): the entrenched power of the arms lobby and its allies in the Pentagon and on Capitol Hill. The strategic placement of arms production facilities and military bases in key states and Congressional districts has created an economic dependency that has saved many a flawed weapons system from being unceremoniously dumped in the trash bin of history.

Lockheed Martin, for instance, has put together a handy map of how its troubled F-35 fighter jet has created 125,000 jobs in 46 states. The actual figures are, in fact, considerably lower, but the principle holds: Having subcontractors in dozens of states makes it harder for members of Congress to consider cutting or slowing down even a failed or failing program. Take as an example the M-1 tank, which the Army actually wanted to stop buying. Its plans were thwarted by the Ohio congressional delegation, which led a fight to add more M-1s to the budget in order to keep the General Dynamics production line in Lima, Ohio, up and running. In a similar fashion, prodded by the Missouri delegation, Congress added two different versions of Boeing’s F-18 aircraft to the budget to keep funds flowing to that company’s St. Louis area plant.

The one-two punch of an environment in which the military can do no wrong, while being outfitted for every global task imaginable, and what former Pentagon analyst Franklin “Chuck” Spinney has called “political engineering,” has been a tough combination to beat.


The overwhelming consensus in favor of a “cover the globe” military strategy has been broken from time to time by popular resistance to the idea of using war as a central tool of foreign policy. In such periods, getting Americans behind a program of feeding the military machine massive sums of money has generally required a heavy dose of fear.

For example, the last thing most Americans wanted after the devastation and hardship unleashed by World War II was to immediately put the country back on a war footing. The demobilization of millions of soldiers and a sharp cutback in weapons spending in the immediate postwar years rocked what President Dwight Eisenhower would later dub the “military-industrial complex.”

As Wayne Biddle has noted in his seminal book Barons of the Sky, the US aerospace industry produced an astonishing 300,000-plus military aircraft during World War II. Not surprisingly, major weapons producers struggled to survive in a peacetime environment in which government demand for their products threatened to be a tiny fraction of wartime levels.

Lockheed President Robert Gross was terrified by the potential impact of war’s end on his company’s business, as were many of his industry cohorts. “As long as I live,” he said, “I will never forget those short, appalling weeks” of the immediate postwar period. To be clear, Gross was appalled not by the war itself, but by the drop off in orders occasioned by its end. He elaborated in a 1947 letter to a friend: “We had one underlying element of comfort and reassurance during the war. We knew we’d get paid for anything we built. Now we are almost entirely on our own.”

The postwar doldrums in military spending that worried him so were reversed only after the American public had been fed a steady, fear-filled diet of anti-communism. NSC-68, a secret memorandum the National Security Council prepared for President Harry Truman in April 1950, created the template for a policy based on the global “containment” of communism and grounded in a plan to encircle the Soviet Union with US military forces, bases, and alliances. This would, of course, prove to be a strikingly expensive proposition. The concluding paragraphs of that memorandum underscored exactly that point, calling for a “sustained buildup of US political, economic, and military strength… [to] frustrate the Kremlin design of a world dominated by its will.”

Senator Arthur Vandenberg put the thrust of this new Cold War policy in far simpler terms when he bluntly advised President Truman to “scare the hell out of the American people” to win support for a $400 million aid plan for Greece and Turkey. His suggestion would be put into effect not just for those two countries but to generate support for what President Eisenhower would later describe as “a permanent arms establishment of vast proportions.”

Industry leaders like Lockheed’s Gross were poised to take advantage of such planning. In a draft of a 1950 speech, he noted, giddily enough, that “for the first time in recorded history, one country has assumed global responsibility.” Meeting that responsibility would naturally mean using air transport to deliver “huge quantities of men, food, ammunition, tanks, gasoline, oil and thousands of other articles of war to a number of widely separated places on the face of the earth.” Lockheed, of course, stood ready to heed the call.

The next major challenge to armed Cold War policy and to the further militarization of foreign policy came after the disastrous Vietnam War, which drove many Americans to question the wisdom of a policy of permanent global interventionism. That phenomenon would be dubbed the “Vietnam syndrome” by interventionists, as if opposition to such a military policy were a disease, not a position. Still, that “syndrome” carried considerable, if ever-decreasing, weight for a decade and a half, despite the Pentagon’s Reagan-inspired arms build-up of the 1980s.

With the 1991 Persian Gulf War, Washington decisively renewed its practice of responding to perceived foreign threats with large-scale military interventions. That quick victory over Iraqi autocrat Saddam Hussein’s forces in Kuwait was celebrated by many hawks as the end of the Vietnam-induced malaise. Amid victory parades and celebrations, President George H.W. Bush would enthusiastically exclaim: “And, by God, we’ve kicked the Vietnam syndrome once and for all.”

However, perhaps the biggest threat since World War II to an “arms establishment of vast proportions” came with the dissolution of the Soviet Union and the end of the Cold War, also in 1991. How to mainline fear into the American public and justify Cold War levels of spending when that other superpower, the Soviet Union, the primary threat of the previous nearly half-a-century, had just evaporated and there was next to nothing threatening on the horizon? General Colin Powell, then chairman of the Joint Chiefs of Staff, summed up the fears of that moment within the military and the arms complex when he said, “I’m running out of demons. I’m running out of villains. I’m down to Castro and Kim Il-sung.”

In reality, he underestimated the Pentagon’s ability to conjure up new threats. Military spending did indeed drop at the end of the Cold War, but the Pentagon helped staunch the bleeding relatively quickly before a “peace dividend” could be delivered to the American people. Instead, it put a firm floor under the fall by announcing what came to be known as the “rogue state” doctrine. Resources formerly aimed at the Soviet Union would now be focused on “regional hegemons” like Iraq and North Korea.


After the 9/11 attacks, the rogue state doctrine morphed into the “Global War on Terror” (GWOT), which neoconservative pundits soon labeled “World War IV.” The heightened fear campaign that went with it, in turn, helped sow the seeds for the 2003 invasion of Iraq, which was promoted by visions of mushroom clouds rising over American cities and a drumbeat of Bush administration claims (all false) that Saddam Hussein had weapons of mass destruction and ties to al-Qaeda. Some administration officials, including Secretary of Defense Donald Rumsfeld, even suggested that Saddam was like Hitler, as if a modest-sized Middle Eastern state could somehow muster the resources to conquer the globe.

The administration’s propaganda campaign would be supplemented by the work of right-wing corporate-funded think tanks like the Heritage Foundation and the American Enterprise Institute. And no one should be surprised to learn that the military-industrial complex and its money, its lobbyists, and its interests were in the middle of it all. Take Lockheed Martin Vice President Bruce Jackson, for example. In 1997, he became a director of the Project for the New American Century (PNAC) and so part of a gaggle of hawks including future Deputy Secretary of Defense Paul Wolfowitz, future Secretary of Defense Donald Rumsfeld, and future Vice President Dick Cheney. In those years, PNAC would advocate the overthrow of Saddam Hussein as part of its project to turn the planet into an American military protectorate. Many of its members would, of course, enter the Bush administration in crucial roles and become architects of the GWOT and the invasion of Iraq.

The Afghan and Iraq wars would prove an absolute bonanza for contractors as the Pentagon budget soared. Traditional weapons suppliers like Lockheed Martin and Boeing prospered, as did private contractors like Dick Cheney’s former employer, Halliburton, which made billions providing logistical support to US troops in the field. Other major beneficiaries included firms like Blackwater and DynCorp, whose employees guarded US facilities and oil pipelines while training Afghan and Iraqi security forces. As much as $60 billion of the funds funneled to such contractors in Iraq and Afghanistan would be “wasted,” but not from the point of view of companies for which waste could generate as much profit as a job well done. So Halliburton and its cohorts weren’t complaining.

On entering the Oval Office, President Obama would ditch the term GWOT in favor of “countering violent extremism”—and then essentially settle for a no-name global war. He would shift gears from a strategy focused on large numbers of “boots on the ground” to an emphasis on drone strikes, the use of Special Operations forces, and massive transfers of arms to US allies like Saudi Arabia. In the context of an increasingly militarized foreign policy, one might call Obama’s approach “politically sustainable warfare,” since it involved fewer (American) casualties and lower costs than Bush-style warfare, which peaked in Iraq at more than 160,000 troops and a comparable number of private contractors.

Recent terror attacks against Western targets from Brussels, Paris, and Nice to San Bernardino and Orlando have offered the national security state and the Obama administration the necessary fear factor that makes the case for higher Pentagon spending so palatable. This has been true despite the fact that more tanks, bombers, aircraft carriers, and nuclear weapons will be useless in preventing such attacks.

The majority of what the Pentagon spends, of course, has nothing to do with fighting terrorism. But whatever it has or hasn’t been called, the war against terror has proven to be a cash cow for the Pentagon and contractors like Lockheed Martin, Boeing, Northrop Grumman, and Raytheon.

The “war budget”—money meant for the Pentagon but not included in its regular budget—has been used to add on tens of billions of dollars more. It has proven to be an effective “slush fund” for weapons and activities that have nothing to do with immediate war fighting and has been the Pentagon’s preferred method for evading the caps on its budget imposed by the Budget Control Act. A Pentagon spokesman admitted as much recently by acknowledging that more than half of the $58.8 billion war budget is being used to pay for non-war costs.

The abuse of the war budget leaves ample room in the Pentagon’s main budget for items like the overpriced, underperforming F-35 combat aircraft, a plane that, at a price tag of $1.4 trillion over its lifetime, is on track to be the most expensive weapons program ever undertaken. That slush fund is also enabling the Pentagon to spend billions of dollars in seed money as a down payment on the department’s proposed $1 trillion plan to buy a new generation of nuclear-armed bombers, missiles, and submarines. Shutting it down could force the Pentagon to do what it likes least: live within an actual budget rather than continuing to push its top line ever upward.

Although rarely discussed due to the focus on Donald Trump’s abominable behavior and racist rhetoric, both candidates for president are in favor of increasing Pentagon spending. Trump’s “plan” (if one can call it that) hews closely to a blueprint developed by the Heritage Foundation that, if implemented, could increase Pentagon spending by a cumulative $900 billion over the next decade. The size of a Clinton buildup is less clear, but she has also pledged to work toward lifting the caps on the Pentagon’s regular budget. If that were done and the war fund continued to be stuffed with non-war-related items, one thing is certain: The Pentagon and its contractors will be sitting pretty.

As long as fear, greed, and hubris are the dominant factors driving Pentagon spending, no matter who is in the White House, substantial and enduring budget reductions are essentially inconceivable. A wasteful practice may be eliminated here or an unnecessary weapons system cut there, but more fundamental change would require taking on the fear factor, the doctrine of armed exceptionalism, and the way the military-industrial complex is embedded in Washington.

Only such a culture shift would allow for a clear-eyed assessment of what constitutes “defense” and how much money would be needed to provide it. Unfortunately, the military-industrial complex that Eisenhower warned Americans about more than 50 years ago is alive and well, and gobbling up your tax dollars at an alarming rate.

How can you tell if someone is kind? Ask how rich they are.

Studies show that the wealthy are less empathetic than the poor, whether they’re driving a car or serving in Congress

October 21

Karen Weese is a freelance writer living in Cincinnati.

I was polishing off some pancakes at Denny’s with a friend when our waitress dropped off the check. We paid the $11 bill, and my friend tossed a $5 tip on the table.

I tried not to look surprised. My friend worked as a caregiver and was raising two kids on less than $19,000 a year.

She read my face. “Look at her,” she said, cocking her head at our waitress, who was visibly pregnant and speed-walking from table to table with laden platters in the busy restaurant. “She’s been on her feet for probably six hours already and has three more to go, she’s got a baby on the way, you know she’s exhausted, and somehow she still took great care of us like she’s supposed to. She needs it more than I do.”

I felt my face turn red. I could afford an extra $5. Why hadn’t I thought of that? “You are something else,” I said finally.

“Nah,” she demurred. “But I used to be her, you know? So I know how it is. Besides, karma’s a b—- and you can never be too careful.” She winked and reached for her keys. “Ready to go?”

There’s little question that people find it easier to give when they see something of themselves in the recipient. It’s what motivates families of cancer survivors to participate so eagerly in fundraising walks and why my friend at Denny’s gave so readily to our waitress. It’s also why hedge fund manager John Paulson gave $400 million last year to Harvard University, his alma mater, and not to, say, Habitat for Humanity.

Proximity plays a role, too. We give more easily to the people and causes we see, often regardless of the magnitude of the need. Americans gave nearly $1 billion more to the approximately 3,000 victims of the Sept. 11, 2001, terrorist attacks than they gave to victims of the South Asian tsunami three years later, even though the latter tragedy killed more than a quarter of a million people. A study by the Chronicle of Philanthropy showed that affluent people in homogeneously wealthy Zip codes are less generous than equally affluent people in mixed-income communities. If you never see a homeless person or a trailer park, it’s easier to forget they exist.

But a lot of it comes down to the sheer capacity for empathy — and it turns out that some people have more of it than others.

When shown photos of human faces with different expressions, lower-income subjects are better than their more affluent counterparts at identifying the emotions correctly, according to a study by Yale professor Michael Kraus. (This makes some intuitive sense — if keeping your job depends on reading your customers’ emotions, you’ll probably get good at it.) When University of California psychology professors Paul Piff and Dacher Keltner recorded behavior at four-way stop signs, they found that the drivers of Toyotas and other inexpensive cars were four times less likely to cut off other drivers than the people steering BMWs and other high-end cars. In a related experiment, drivers of more modest cars were more likely to respect the right-of-way of pedestrians in a crosswalk, while half the drivers of high-end cars motored right past them. In other experiments, lower-income subjects were less likely than higher-income individuals to cheat, lie and help themselves to a jar of candy meant for kids.

Strangely, even just thinking about money can make people act more selfishly. When University of Minnesota professor Kathleen Vohs primed study participants with images of money (showing them screensavers depicting floating cash, or asking them to unscramble lists of words that included terms like “cash” and “bill”), they were less likely to give money to a hypothetical charity. And when a research assistant dropped a box of pencils on the floor right beside them (pretending it was an accident), the money-primed subjects were less willing to help pick them up.

Does this mean wealthier people are inherently more selfish and self-absorbed, and lower-income people inherently more generous and empathetic? Or did being rich or poor make them that way?

There is “an obvious chicken-and-egg question to ask here,” Michael Lewis wrote in the New Republic in 2014. “But it is beginning to seem that the problem isn’t that the kind of people who wind up on the pleasant side of inequality suffer from some moral disability that gives them a market edge. The problem is caused by the inequality itself: It triggers a chemical reaction in the privileged few. It tilts their brains.”

Indeed, when University of North Carolina researcher Keely Muscatell showed high- and low-income subjects photos of human faces with accompanying personal stories, the brains of the low-income subjects demonstrated much more activity in the areas associated with empathy than the rich subjects’ brains.

Similarly, when University of Toronto researcher Jennifer Stellar showed videos of children at St. Jude’s hospital bravely undergoing medical procedures, lower-income viewers exhibited more heart-rate deceleration — which scientists use as a measure of compassion — than their higher-income counterparts.

This is, of course, not good news for a society with an inequality problem. If being richer makes people less empathetic toward the struggles of others, the people with the most power and resources will be the least inclined to help. And this seems to actually be the case: A 2014 study of Congress members found that while Republican lawmakers favored the same economic policies regardless of their personal wealth, Democratic legislators’ support for certain policies rose or fell in line with their bank accounts. Richer Democrats were more likely to favor lower taxes on the wealthy and decreased business regulation, while relatively poorer Democrats were more likely to support legislation to make college more affordable or increase the minimum wage.

But there are some positive findings. Even though rich subjects’ physiological changes suggest that they feel less empathy for others’ suffering, researchers in another experiment found that rich subjects began to act more empathetically toward others when shown a vivid, emotional video about kids in poverty.

What’s more, everyone, rich and poor, responds better to the plight of a single case than that of a whole group. (Social scientists even have a name for this: the “identifiable victim bias.”) Many Americans only vaguely aware of the Syrian refugee crisis were moved to help when they saw a photo of a dark-haired toddler in tiny sneakers whose body had washed up on the beach after a failed sea crossing. Thousands of strangers sent birthday cards to an autistic 12-year-old boy named Logan Pearson when his mother posted his photo and a plea on social media. A Detroit man named James Robertson received more than $300,000 in donations from strangers after the local newspaper reported that he walked 21 miles every day just to get to work. When a ponytailed 19-year-old in Ohio named Lauren Hill told a reporter that she dreamed of playing college basketball despite her diagnosis of terminal brain cancer, 10,000 people packed the arena to cheer her on.

These blooms of generosity are not replacements for policy-level action that can permanently change the lives of people on the darker side of the inequality spectrum, just as a big tip or a one-time holiday gift to a food pantry doesn’t fundamentally change the long-term arithmetic for a waitress earning $8 an hour.

But what they show is that almost everyone, including the well-off, can be moved to care about the less fortunate and less powerful, in spite of whatever effects wealth may have on them. Individual stories help. Exposure helps. Just paying attention — to the waitress, the person in the crosswalk, the cleaning staff in the corridor of the conference center — helps. Imagination helps, too.

I know a man who runs a large, urban affiliate of Habitat for Humanity, a nonprofit program in which low-income families build their own homes alongside community volunteers and then buy the houses at a reduced rate. On the first day of construction, he tells me, retired guys from the suburbs itching to break out their power tools show up to work with the future homeowner, often a working single mom with young kids who’s never been on a construction site in her life. “They have nothing in common and no idea what to do with each other,” he says.

But the weeks go by, and one guy shows her how to use a circular saw. Another man helps her perfect her swing with a hammer. They suffer together stapling up itchy pink insulation on a 100-degree day and freeze on the frosty afternoon they put up the siding. There is lunch, and laughter, and eventually a house. “And on the last day, when I stand on the front porch and look out over that same group that didn’t know what to do with each other only a few months before, it’s a completely different vibe,” he tells me. “It’s just — ” He pauses, like he knows this is going to sound corny. “It’s just love.” She’s better off, and so are her kids. But so are they.

It’s an uphill battle for the well-off to fight the effects of wealth on their minds, to consciously step out of their circles and pay attention to the places where dinner is not certain, where keeping the lights on is a struggle, where a trailer park is a place real people live, not a punch line. Perhaps all of us who do not worry about where our next meal is coming from could stand to widen our lens.

At the very least, it bears remembering that the givers and the takers may not be who we thought they were.

A Progressive’s Answer to Trumpism

This piece originally appeared in The Washington Post.

As election 2016 winds to an end, it’s hard not to begin looking beyond Nov. 8. With Donald Trump behind in the polls and lashing out at the media, there is rampant speculation that Trump is laying the groundwork to launch his own media empire in the wake of his likely defeat. Yet, if he loses, Trump’s next move may well be less important than what’s in store for his supporters, whose long-simmering pain and rage have exploded into plain view.

It would be easy to dismiss Trump’s supporters as “deplorables” and simply move on. But while Trump has undeniably incited racism, misogyny and ugly behavior among his base, it’s critical to understand the context in which their fury has come to the fore.

The U.S. and global economies are in the midst of a tectonic shift. This election — along with Brexit and the spread of nationalism across Europe — has made it impossible to deny that millions of people are desperate for solutions and demanding to be heard. They are tired of being ignored by the elites who have failed them. For Hillary Clinton and the Democratic Party, the lesson of 2016 should not be that Trump voters are irredeemable. It should be that by paying more attention to the plight of blue-collar workers, and offering inclusive solutions to the great challenges roiling our country and the world, they have a real opportunity to expand the Obama coalition of minorities and young people who make up the Democratic base today.


Trump supporters disproportionately live in places where economic mobility is low and opportunities for young people in particular are scarce. Over the past two decades, the incomes of white men without a college degree — the one group Trump is winning by strong margins — have fallen dramatically in comparison to the incomes of their more educated counterparts. Meanwhile, as new trade deals increase foreign competition and technology continues to advance, the good-paying jobs that traditionally sustained the middle class in many parts of the country are disappearing forever. Disruption may be sexy in Silicon Valley, but it doesn’t look nearly as attractive from the factory floor.

As surreal as it may seem to some, Trump has convinced many working-class voters that he feels their pain. He has offered simple, albeit hollow, solutions (“we’ll build a wall!”) to their problems despite his own history of employing undocumented workers, manufacturing products overseas and importing Chinese steel. And of course, he has shamefully stoked racial fears and resentment in the process.

A serious progressive agenda should grapple with the grave challenges that many Trump supporters face. To that end, Sen. Bernie Sanders (I-Vt.) — who won strong support from working-class whites in the primaries — offered a useful blueprint. To start, we need a more progressive trade policy that gives priority to working people over corporate lobbyists and profits. As Roosevelt Institute fellow Mike Konczal argues, a truly progressive vision for trade would not embrace Trump’s retrograde protectionism but would strengthen workers’ rights and preserve the ability of countries to regulate multinational corporations. We also need debt-free college to increase opportunities for the next generation. And we need “Medicare for all” to create more security and flexibility as the traditional nature of work evolves.

While these ideas are represented in the Democratic platform, progressives should fight to ensure that Clinton and the party act on these ideas moving forward. Moreover, they will have to speak directly to communities that have been ravaged, with a message that truly recognizes and respects their anger and pain.

As University of California at Berkeley law professor Ian Haney-López recently wrote in the Nation, “Remaking our politics and economy depends on a broad coalition that must include substantial numbers of racially anxious whites. Ignoring their fears, or worse, pandering to them, further impoverishes all of us. Instead, we must have a unified message for whites as well as people of color: Fearful of one another, we too easily hand over power to moneyed interests, but working together, we can rebuild the American Dream.”

Whatever happens when the votes are counted in two weeks, it will be a political and moral imperative for Democrats to start paying attention to many of Trump’s supporters and working to advance an inclusive populism that gives them hope for their future. If they fail, it’s only a matter of time before a more polished, less toxic Trump emerges and threatens to drag us all back into the past.

Katrina vanden Heuvel is an American editor and publisher. She is the editor, publisher, and part-owner of the magazine The Nation, and has served as its editor since 1995.



AT&T-Time Warner merger to expand corporate, state control of media


By Barry Grey
24 October 2016

AT&T, the telecommunications and cable TV colossus, announced Saturday that it has struck a deal to acquire the pay TV and entertainment giant Time Warner. The merger, if approved by the Justice Department and US regulatory agencies under the next administration, will create a corporate entity with unprecedented control over both the distribution and content of news and entertainment. It will also mark an even more direct integration of the media and the telecom industry with the state.

AT&T, the largest US telecom group by market value, already controls huge segments of the telephone, pay-TV and wireless markets. Its $48.5 billion purchase of the satellite provider DirecTV last year made it the biggest pay-TV provider in the country, ahead of Comcast. It is the second-largest wireless provider, behind Verizon.

Time Warner is the parent company of such cable TV staples as HBO, Cinemax, CNN and the other Turner Broadcasting System channels: TBS, TNT and Turner Sports. It also owns the Warner Brothers film and TV studio.

The Washington Post on Sunday characterized the deal as a “seismic shift” in the “media and technology world,” one that “could turn the legacy carrier [AT&T] into a media titan the likes of which the United States has never seen.” The newspaper cited Craig Moffett, an industry analyst at Moffett-Nathanson, as saying there was no precedent for a telecom company the size of AT&T seeking to acquire a content company such as Time Warner.

“A [telecom company] owning content is something that was expressly prohibited for a century” by the government, Moffett told the Post.

Republican presidential candidate Donald Trump, in keeping with his anti-establishment pose, said Saturday that the merger would lead to “too much concentration of power in the hands of too few,” and that, if elected, he would block it.

The Clinton campaign declined to comment on Saturday. Democratic vice-presidential candidate Tim Kaine, speaking on the NBC News program “Meet the Press” on Sunday, said he had “concerns” about the merger, but he declined to take a clear position, saying he had not seen the details.

AT&T, like the other major telecom and Internet companies, has collaborated with the National Security Agency (NSA) in its blanket, illegal surveillance of telephone and electronic communications. NSA documents released last year by Edward Snowden show that AT&T has played a particularly reactionary role.

As the New York Times put it in an August 15, 2015 article reporting the Snowden leaks: “The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T.”

The article went on to cite an NSA document describing the relationship between AT&T and the spy agency as “highly collaborative,” and quoted other documents praising the company’s “extreme willingness to help” and calling their mutual dealings “a partnership, not a contractual relationship.”

The Times noted that AT&T installed surveillance equipment in at least 17 of its Internet hubs based in the US, provided technical assistance enabling the NSA to wiretap all Internet communications at the United Nations headquarters, a client of AT&T, and gave the NSA access to billions of emails.

If the merger goes through, this quasi-state entity will be in a position to directly control the content of much of the news and entertainment accessed by the public via television, the movies and smart phones. The announcement of the merger agreement is itself an intensification of a process of telecom and media convergence and consolidation that has been underway for years, and has accelerated under the Obama administration.

In 2009, the cable provider Comcast announced its $30 billion acquisition of the entertainment conglomerate NBCUniversal, which owns both the National Broadcasting Company network and Universal Studios. The Obama Justice Department and Federal Communications Commission ultimately approved the merger.

Other recent mergers involving telecoms and content producers include, in addition to AT&T’s 2015 purchase of DirecTV: Verizon Communications’ acquisition of the Huffington Post, Yahoo and AOL; Lionsgate’s deal to buy the pay-TV channel Starz; Verizon’s agreement announced in the spring to buy DreamWorks Animation; and Charter Communications’ acquisition of the cable provider Time Warner Cable, approved this year.

The AT&T-Time Warner announcement will itself trigger a further restructuring and consolidation of the industry, as rival corporate giants scramble to compete within a changing environment that has seen the growth of digital and streaming companies such as Netflix and Hulu at the expense of the traditional cable and satellite providers.

The Financial Times wrote on Saturday that “the mooted deal could fire the starting gun on a round of media and technology consolidation.” Referring to a new series of mergers and acquisitions, the Wall Street Journal on Sunday quoted a “top media executive” as saying that an AT&T-Time Warner deal would “certainly kick off the dance.”

The scale of the buyout, agreed to unanimously by the boards of both companies, is massive. AT&T is to pay Time Warner shareholders a reported $85.4 billion in cash and stock, at a price of $107.50 per Time Warner share. This is significantly higher than the current market price of Time Warner shares, which rose 8 percent on Friday to more than $89 on rumors of the merger deal.

In addition, AT&T is to take on Time Warner’s debt, pushing the actual cost of the deal to more than $107 billion. The merged company would have a total debt of $150 billion, making inevitable a campaign of cost-cutting and job reduction.

The unprecedented degree of monopolization of the telecom and media industries is the outcome of the policy of deregulation, launched in the late 1970s by the Democratic Carter administration and intensified by every administration, Republican or Democratic, since then. Under a 1982 consent decree, the original AT&T, colloquially known as “Ma Bell,” was broken up in 1984 into seven separate and competing regional “Baby Bell” companies.

This was sold to the public as a means of ending the tightly regulated AT&T monopoly over telephone service and unleashing the “competitive forces” of the market, with increased competition supposedly lowering consumer prices and improving service. What ensued instead was a protracted process of mergers and disinvestments that destroyed hundreds of thousands of jobs and drove up stock prices at the expense of both employees and the consuming public.

Dallas-based Southwestern Bell was among the most aggressive of the “Baby Bells” in expanding by means of acquisitions and ruthless cost-cutting, eventually evolving into the new AT&T. Now, the outcome of deregulation has revealed itself to be a degree of monopolization and concentrated economic power beyond anything previously seen.