Paris attacks: it’s time for a more radical reaction

By Claire Veale On November 19, 2015

In the wake of the Paris attacks, fingers were pointed in all directions, but few were directed at France itself. What has radicalized the French youth?

Photo: A young man is arrested at a student protest in Paris, by Philippe Leroyer, via Flickr.

The deadly attacks in Paris on the night of Friday, November 13, were quickly met by a global rush of solidarity with France and the French people. From world leaders expressing their sympathies, to raising the French flag on buildings across the globe, and more visibly, on Facebook profiles, everyone stood unequivocally united with France.

The sentiment of solidarity behind this mass concern is heart-warming; however, it must come hand in hand with a demand for a serious debate on matters of terrorism, violence and war. Rage and sadness should not hinder our ability to think.

Why Paris? Who were the attackers, and how could they do such things? How can we counter these kinds of attacks? Before bowing to the often narrow interpretations provided by the media and our political leaders, we must look for well-informed answers to these important questions. The current response, including more French bombings in Syria and extreme security measures on French territory, may fuel further violence rather than bring viable solutions.

“Us versus Them”

As a French national, the sudden inundation of the tricolored flag on my Facebook wall was a little unsettling. I do feel grateful for the surge of solidarity and wonderful messages calling for love and unity from all over the world. However, I find myself wondering if the French flag is truly the appropriate symbol to demonstrate this call for peace and inclusiveness, and to bring people together in unity against terror.

To me, the French flag represents first and foremost the French state, the respective governments that have ruled my country, and their foreign policies. Domestically, it is mostly a nationalist symbol, too often used by the likes of Marine Le Pen to create enemies out of foreigners. It represents certain values defined as “French”, as opposed to foreign values France should not welcome, and as such it can be a dangerous vector of racism.

In parallel to this bleu-blanc-rouge frenzy, many artists and humorists have responded to the attacks by defending the stereotypes of French culture: drinking wine, enjoying life, smoking on terrasses. They state that any attack on French values is an attack on the enjoyment of life itself. Although flattering in a way, as it praises what may seem the essence of being French, this response unjustly encourages us to see the attacks through the lens of a “clash of civilisations” in which enemy and foreign ideals threaten our way of life and our moral values.

Let us be clear about two things. First, in this “us” versus “them” discourse, I am not sure who the “us” is supposed to be. Am I, a French citizen who has long opposed aggressive foreign policy in the Middle East, all of a sudden on the same side as my government?

To many of us, the political elites of the country, who have insisted on involving France in wars that we did not want, are part of the problem. Successive French governments have indirectly contributed to the rise of extremist groups and to the radicalization of the young men who join them. Waving the French flag could diminish their role and the responsibility they hold in this crisis. Worse, it could legitimize further undesirable military action abroad.

And second, who is “them”? The “War on Terror”, as it has been clearly framed by world leaders, is not a war in the traditional sense, with a clear, visible enemy. The attackers of the Paris killings weren’t foreigners; most of them were French or European citizens, born and raised on European soil. We are not talking about a mysterious, faraway enemy, but about young French men and women who are as much a part of French society as anyone else.

A show of force

And yet the French president promptly declared “war” and intensified the direct and aggressive bombing of IS targets in Syria. Given that the terrorists were mostly European citizens, would it not be wiser to ask ourselves what is wrong in our own societies instead of taking such rash military action abroad?

Worryingly, there has been little resistance within the media, or even within French left-wing circles, to Hollande’s policies. Have the emotion and anger stirred by the Paris attacks impeded our ability to recognize that dropping bombs on the Middle East will not resolve security threats that emanate from within?

Terrorism is an invisible enemy emanating from complex socio-political circumstances, which needs to be tackled in a more subtle and thought-through way. History has shown us that 14 years of the “War on Terror” in the Middle East have only contributed to more violence, more terrorism and, sadly, more deaths. Isn’t it time we started thinking about different tactics?

Since the attacks, François Hollande has proposed changes to the constitution to make it easier for the state to resort to force when facing terrorism. These changes include an increase in presidential powers, allowing Mr. Hollande to enforce security measures without the usual parliamentary scrutiny. The president wants to extend the duration of the state of emergency, limiting freedom of movement and freedom of association, including mass demonstrations, in the name of national security.

The suggested changes could also widen the definition of targeted citizens to anyone who is “seriously suspected” of being a threat to public order, opening the door to a worrying reality of aggressive police tactics directed at poor, disillusioned youth. Furthermore, Hollande wants to strip French nationality from any binational citizen suspected of terrorist acts.

The president’s reaction is deeply disturbing, and it reinforces the skewed vision of a “foreign” enemy, which will inevitably result in discriminatory and racist policies and reactions towards foreigners, or anyone perceived as foreign, in France. More worrying still is a recent poll in Le Parisien, which shows that 84% of respondents supported the decision to increase the manoeuvring power of the police and the army, while 91% agreed with the idea of stripping French nationality from suspected terrorists.

Where are the French values of openness and multiculturalism that we so ardently defend now? We must not let fear and an inaccurate “us” versus “them” discourse justify aggressive policies against our own citizens, or against anyone else for that matter, including refugees fleeing the very terror we claim to fight.

Why did French citizens decide to kill?

The reason the media has focused on this angle, opposing the French values of liberté, égalité, fraternité to the fearful and hateful values preached by IS, is that it gives easy answers to complex questions. Why was Paris attacked? Because, we are told, it represents the heart of freedom, multiculturalism, secularism and joie de vivre. But does it really? France doesn’t always seem to live up to the values it professes.

The real question should be: why did young French (and Belgian) men and boys decide to sacrifice their lives to kill members of their own society?

Two answers seem to have emerged. The first, mainly employed by the political elite and the media, is that the killers were “insane”, “brainwashed” and “barbaric”, and could not have acted rationally. This approach refuses any proper analysis of the killers’ motives, brushing them aside in favour of irrational and extremist religious ideology, and thus justifying a purely violent and heavy-handed response.

The second answer, coming from many left-wing, anti-racist circles, claims that such acts of terrorism are a direct result of France’s foreign and domestic policy. Although the two seem radically opposed, they have one thing in common: they undermine the agency and accountability of the attackers. This second approach, which points to undeniable political considerations, remains flawed in the same way as the first: it forgets that the killers are people who think and act, not simply passive products of racist and imperialist foreign policy.

It is important to recognize the attackers as human beings capable of acting and thinking rationally, as this is a first step towards understanding the reasoning behind their actions. Religious fanaticism is simply a vector of violence, as many other ideologies, such as nationalism, fascism or communism, have been in the past. These ideologies are not the root causes of violence. Although this may seem obvious, it needs stressing: religious extremism is not the reason why a young man would take up a gun and shoot into a crowd; it is simply an instrument to channel his anger.

We must try to look at the very roots of these young men’s discontent. Debates should be opened about the school system, about the ghettoization of urban areas across France, about police violence and domestic anti-terror security measures, about the prison system, about structural racism, about our skewed justice system, about oppressive and strict secularism; and the list goes on.

These questions are complex, and not easy to address. Thus we prefer to paint the picture in black and white, our values versus their values, rather than face the internal problems of our broken societies.

The little research that has been conducted on IS fighters, abroad and within Europe, shows that young men don’t necessarily join the extremist group for religious reasons. The Kouachi brothers, who carried out the Charlie Hebdo shootings, had suffered a difficult childhood in poverty after the suicide of their mother, with little support from social services, surrounded by extreme violence as children.

Anger at injustices they face, alienation, and years of increasing humiliation from the very societies they are meant to be a part of can push young men to express their frustrations through the vehicle of religious extremism. IS just happens to be an organized group, which seriously threatens European societies, and which offers these humiliated and enraged young men a way to defend their dignity and their pride.

As Anne Aly explains: “Religion and ideology serve as vehicles for an ‘us versus them’ mentality and as the justification for violence against those who represent ‘the enemy’, but they are not the drivers of radicalization.”

Radical solutions to radical problems

Radical solutions mean, first and foremost, tackling the problem at its roots. Julien Salingue expressed this idea very eloquently after the Charlie Hebdo shootings: “Deep change, and therefore the questioning of a system that generates structural inequalities and exploitation of violence is necessary”.

Every injustice and every act of humiliation towards a member of society can only cause anger and hatred, which might someday transform into violence. James Gilligan has written extensively about the way the prison system in America serves to intensify the feeling of shame and humiliation that push individuals to violence in the first place. This analysis is useful when looking at European societies, and the processes of discrimination and humiliation that push young men to react violently.

We must condemn all policies, discourses and actions that legitimize and reinforce the politics of hatred. Police violence towards young men of Arab origin, for instance, is frequent in France. Amedy Coulibaly, another actor in the Paris shootings in January 2015, suffered the death of a friend in a police “slip-up” when he was 18. This kind of direct aggression, perpetrated on a daily basis, adds to the structural violence and discrimination that young men from underprivileged backgrounds experience in European societies. War for them is not a distant, disconnected reality, but something closer to their everyday life.

Every racist insult, act of police brutality, unfair trial or discriminatory treatment brings them one step closer to carrying out tragedies like the massacre in Paris. We must therefore question the very system we live in and the way of life we so defiantly defend after the attacks, for the problem may be closer to us than we imagine.

Claire Veale is a graduate of SOAS, University of London, in Violence, Conflict & Development. Having lived and worked on several continents, she is particularly interested in writing about social movements, Latin American politics, gender rights and international development issues.

Breaking the chains: precarity in the Age of Anxiety

By Joseph Todd On October 15, 2015

In our Age of Anxiety, society assaults us from every possible angle with an avalanche of uncertainty. How do we fight back under conditions of precarity?

An Age of Anxiety is upon us, one where society assaults us from every possible angle with an avalanche of uncertainty, fear and alienation. We live with neither liberty nor security but instead precariousness. Our housing, our income and our play are temporary and contingent, forever at the whim of the landlord, policeman, bureaucrat or market. The only constant is that of insecurity itself. We are gifted the guarantee of perpetual flux, the knowledge that we will forever be flailing from one abyss to another, that true relaxation is a bourgeois luxury beyond our means.

Our very beings come to absorb this anxiety. We internalize society’s cruelty and contradiction and transform them into a problem of brain chemistry, one that is diagnosed and medicated away instead of being obliterated at root. All hope is blotted out. Authentic experience, unmediated conversation, distraction-free affection and truly relaxed association feel like relics of a bygone era, a sepia dream that perhaps never existed.

Instead we have the frenetic social arenas of late capitalism: the commodified hedonism of clubs and festivals, express lunches, binge culture and the escapist, dislocating experience of online video games, all underpinned either by our desperate need to numb our anxieties or by the drive to create effective, time-efficient units of fun so we remain available for work and worry.

This is assuming we have work, of course. Many of us are unemployed, or are instead held in constant precarity. Stuck on zero-hour contracts or wading through as jobbing freelancers in industries that used to employ but don’t anymore, we are unable to plan our lives any further than next week’s rota, unable to ever switch off as the search for work is sprawling and continuous.

And if we do have traditional employment, what then? We are imprisoned and surveilled in the office, coffee shop or back room, subject to constant assessment, re-assessment and self-assessment, tracked, monitored and looped in a perpetual performance review, one which even our managers think is worthless, but has to be done anyway because, hey, company policy.

The effective probationary period is continuous, and we are forever teetering on the edge of unemployment. We internalize the implications of our constant assessment, the knowledge that we’re always potentially being surveilled. We censor ourselves. We second-guess ourselves. We quash ourselves.

And thanks to the effective abolition of the traditional working day, work becomes unbearable and endless. The security of having delineated time — at work and then at play — has been eradicated. Often this is because individuals have to supplement their atrocious wages with work on the side. But it is also because traditional 9-to-5 jobs have suffered a continuous extension of working hours into out-of-office time, enabled and mediated by our laptops and smartphones. These gadgets demand immediacy and, when coupled with the knowledge that you are always reachable and thus available, they instill in us a frantic need to forever reply in the now.

And with this expectation comes obligation. Hyper-networked technologies gift our bosses the ability to demand action from us at any moment. Things that had to wait before become doable — and thus are done — in the now. If you are unwilling, then someone is ready to take your place. You must always be at their beck and call. From this, our only refuge is sleep, perhaps the last bastion of delineated time against frenetic capitalism, and one that is being gradually eroded and replaced.

For those who are out of work the situation is no better. They face the cruel bureaucracy of the Job Centre or the Atos assessment, institutions that have no interest in linking up job seekers with fulfilling employment, but instead attempt only to lower the benefits bill through punitive, arbitrary sanctions and by forcing the sick back to work. Insider accounts of these programs betray the mix of anxiety-inducing micro-assessment and surveillance they employ.

Disabled claimants — always claimants, never patients, insists Atos — are assessed from the moment they enter the waiting room, noted as to whether they arrive alone, whether they can stand unassisted and whether they can hear their name when called. Compounding this is the hegemonic demonization of those that society has failed: if you are out of work, you are a scrounger, a benefit cheat and a liar. Utterly guilty of your failure, a situation individualized in its totality and attributable to no system, institution or individual but yourself.

We are surveilled, monitored and assessed from cradle to grave, fashioned by the demand that we must be empirical, computable and trackable, our souls transformed into a series of ones and zeros. This happens in the workplace, on the street and in various government institutions. But its ideological groundwork is laid in the nursery and the school.

These institutions bracket our imaginations while still in formation, normalizing a regime of continuous surveillance and assessment that is to last for the rest of our lives. Staff are increasingly taken away from educating and nurturing and instead are made to roam nurseries taking pictures and recording quotes, all to be computed and amalgamated so authorities can track, assess and predict a child’s trajectory.

It is true that this does not trouble the child in the same way traditional high-intensity rote examination does. But what it achieves instead is the internalization of the surveillance/assessment nexus in our minds, laying the groundwork for an acquiescence to panoptical monitoring, a resignation to a life without privacy and a buckling to regimes of continuous assessment.

Britain is particularly bad in this respect. Not only does our government have a fetish for closed-circuit television like no other, but GCHQ was also at the heart of the Snowden revelations. “Revelation”, however, is slightly misleading, as what was most telling about the leaks wasn’t the brazen overstep by government institutions, but that few people were surprised. Although we didn’t know the details, we suspected such activity was going on. We acted as if we were being watched, tracked and monitored anyhow.

In this we see the paranoid fugitive of countless films, books and television dramas extrapolated to society writ large. We are all, to some extent, that person. Our growing distrust of governments, the knowledge that our technologically-integrated lives leave a heavy trace and the collection of “big” data for both commercial and authoritarian purposes contributes to our destabilized, anxious existence. An existence that impels us towards self-policing and control. One where we do the authority’s job for them.

Many individuals offer the amount of choice we have, or the amount of knowledge we can access at the click of a button, as the glorious consequence of late capitalist society. But our rampant choice society, one where we have to make an overwhelming number of choices — about the cereal we eat, the beer we drink, or the clothes we wear — is entirely one-sided. While we have an incredible amount of choice over issues of little importance, we are utterly excluded from any choice about the things that matter: what we do with the majority of our time, how we relate to others, or how society functions as a whole. Nearly always these choices are constricted by the market, the necessity of work, cultures of overwork and neoliberal ideology.

Again we find this ideology laid down in primary education. Over the years more and more “continuous” learning has been introduced, whereby children, over a two-week period or so, have to complete a set of tasks in an order of their choosing. This is an almost perfect example of how choice functions in our society: ubiquitous when insignificant but absent when important. The children can choose when they do an activity, which matters little as they will have to do it at some point anyway, but they cannot choose not to do it, or to substitute one kind of activity for another.

Why does this matter? Because meaningful choices about our lives give us a sense of certainty and control. Avalanches of bullshit choices that still have to be made, as study after study has shown, make us incredibly anxious. Each of them takes mental effort. Each contains, implicitly, the multitude of choices that we didn’t make; all those denied experiences for every actual experience. This is fine if there are only one or two. But if there are hundreds, every act is riddled with disappointment, every decision shot with anxiety.

Compounding this orgy of choice, and in itself another root cause of anxiety, is the staggering amount of information that assaults us every day. Social media, 24-hour news, the encroachment of advertising into every crack — both spatially and temporally — and our cultures of efficiency that advocate consuming or working at every possible moment all combine to cause intense sensory overload. This world, for many, is just too much.

Although we’ve talked mostly about work, surveillance, assessment and choice, there are a multitude of factors one could add. The desolation of community due to the geographical dislocation of work, the increased transience of populations and the growing privatization of previously public acts — drinking, eating and consuming entertainment are increasingly consigned to the home — all shrink our world to just our immediate families.

Camaraderie, extended community and solidarity are eroded in favor of mistrust, suspicion and competition. Outside of work our lives become little more than a series of privatized moments: tending to our property and ourselves rather than to each other, flitting between television shows, video games, home DIY and an incredible fetish for gardening, with no hint of the thought that perhaps these experiences would be better if they were held in common, if they appealed to the social and looked outward rather than in.

In the same way we could mention the ubiquity of debt — be it the mortgage, the credit card or the student loan — and the implicit moral judgment suffered by the debtor, coupled with the anxiety-inducing knowledge that they could lose everything at any moment. Or we could consider the near-existential crises humanity faces, be it climate change, ISIS or the death throes of capitalism; all too abstract and total to comprehend, all contributing to a sense that there is no future, only a grainy, distant image of lawless brutality, flickering resolutely in our heads.

But the crux, and the reason anxiety could become a revolutionary battleground, is that neoliberal ideology has individualized our suffering, attributing it to imbalances in our brain chemistry, constructing it as a problem of the self, rather than an understandable human reaction to a myriad of cruel systemic causes. Instead of changing society the problem is medicalized and we change ourselves, popping pills to mold our subjectivities to late-capitalist structures, accepting the primacy of capitalism over humanity.

This is why “We Are All Very Anxious”, a pamphlet released by the Institute of Precarious Consciousness, is so explosively brilliant. Not only does it narrate the systemic causes of anxiety, but it situates the struggle within a revolutionary strategy, constructing a theory that is at once broad and personal, incorporating one’s own subjective experience into an explanatory framework, positing anxiety as a novel, contemporary revolutionary battleground, ripe for occupation.

Anxiety, they claim, is the latest of three dominant societal affects spanning the last two hundred years. Until the postwar settlement we suffered from misery. The dominant narrative was that capitalism benefited everybody, while in reality overcrowding, malnourishment and slum dwelling were rife. In response, appropriate tactics such as strikes, mutual aid, cooperatives and formal political organization were adopted.

After the postwar settlement, until around the 1980s, a period of Fordist boredom ensued. Compared to the previous era, most people had stable jobs, guaranteed welfare and access to mass consumerism and culture. But much of the work was boring, simple and repetitive. Life in the suburbs was beige and predictable. Capitalism, as they put it, “gave everything needed for survival, but no opportunities for life.” Again movements arose in opposition, positioned specifically against the boredom of the age: the Situationists and radical feminism come to mind, but so do the counter-culture surrounding the anti-war movement in America and the flourishing DIY punk scene in the UK.

This period is now finished. Capitalism has co-opted the demand for excitement and stimulation both by appropriating formerly subversive avenues of entertainment — the festival, club and rave — while dramatically increasing both the amount and intensity of distractions and amusements.

In one sense we live in an age of sprawling consumerism that avoids superficial conformity by allowing you to ornament and construct your identity via hyper-customized, but still mass-produced, products. But technological development also means that entertainment is now more total, immersive and interactive, be it the video game or the full-color film watched on a widescreen, high-definition television.

Key to this linear conception is the idea of the public secret: the notion that the anxiety, misery or boredom of these periods is ubiquitous but also hidden, excluded from public discourse, individualized and transformed into something unmentionable, a condition believed to be rare and isolated because nobody really talks about it. Thus even to broach the subject in a public, systematic manner becomes not just an individual revelation but also a collective revolutionary act.

I’ve seen this first-hand when running workshops on the topic. Sessions that were often argumentative and confrontational became, when the subject was capitalism and anxiety, genuinely inquisitive and exploratory. Groups endeavored to broaden their knowledge of the subject, make theoretical links and root out its kernel, rather than manning their usual academic ramparts and launching argument after rebuttal back and forth across the battlefield.

But more than this, there was a distinct edge of excitement, the feeling that we were onto something, a theory ripe with explosive newness, one that managed to combine our subjective experiences and situate them in a coherent theoretical framework.

However, we must be critical. To posit anxiety as a specifically modern affect, unique to our age, is contentious. What about the 1950s housewife, someone mentioned in one of the sessions, with her subjectivity rigidly dictated by the misogyny and overbearing cultural norms of the time? Didn’t this make her feel anxious?

Well, perhaps. But if we take anxiety to mean a general feeling of nervousness or unease about an uncertain outcome — with chronic anxiety being an actively debilitating form — then we can draw distinct differences. Although the housewife was oppressed, her oppression was codified and linear, her life depressingly mapped out with little room for choice or maneuver. Similarly with the slave — surely the universal symbol of oppression — hierarchies aren’t nebulous but explicit, domination is ensured by the whip and the gun, the master individualized and present.

This is in stark contrast to the current moment. While it is obvious that oppressions are distinct and incomparable, we can nevertheless see that the fug of the 21st century youth is of a different nature. Our only certainty is that of uncertainty. Our oppressor is not an individual but a diffuse and multiplicitous network of bureaucrats, institutions and global capital, hidden in its omnipotence and impossible to grasp.

We aren’t depressed by the inevitability of our oppression, but instead are baffled by its apparent (but unreal) absence, forever teetering on the brink, not knowing why, nor knowing who we should blame.

Similarly it is bold to claim that anxiety is the dominant affect of Western capitalism, tantamount to pitching it as the revolutionary issue of our age. Yet if we analyze the popular struggles of our time — housing, wages, work/life balance and welfare — they are often geared, in one way or another, towards promoting security over anxiety.

Housing for many is not about having a roof over their heads, but about security of tenure, be it via longer fixed-term tenancies or the guarantee that they won’t be priced out by rent rises that their precarious employment can’t possibly cover. In the same way struggles over welfare are often about material conditions, but what particularly strikes a chord is the cruel insecurity of a life on benefits, forever at the whim of sanction-wielding bureaucrats who are mandated to use any possible excuse to remove your only means of support.

Anxiety is also a struggle that unites diverse social strata, emanating from institutions such as the job center, loan shark, university, job market, landlord and mortgage lender, affecting the unemployed, precariously employed, office worker, indebted student and even the comparatively well-off. Again we find this unification in the near-universal adoption of the smartphone and other hyper-networked technologies. All of us, and especially our children, are beholden to a myriad of glowing screens, flitting between one identity and another, alienated and disconnected from our surroundings and each other.

This is not to say a movement against anxiety itself will ever arise. Such a rallying cry would be too abstract and fail to inspire. Instead, anxiety must be conceptualized both as an affect which underlies various different struggles, and a schema within which they can be assembled into a revolutionary strategy.

So, what is our tangible aim here? In part it must be to reduce the level of general anxiety so as to increase quality of life. Yet if we are to take a revolutionary rather than a merely humanitarian approach, this drop in anxiety must in some way translate into a rise in revolutionary disposition. In certain ways it obviously will. If there is a public realization that large swathes of the mentally ill are so not because of their unfortunate brain chemistry but because of a misconfiguration of society, people are already thinking on an inherently challenging, systemic level.

Similarly, conflict with the state or capital — be it on the street, in the workplace or inside one’s own head — tends to be high-impact and anxiety-inducing. A drop in general anxiety will make it more likely that individuals will engage in such moments of conflict and, crucially, experience the intense radicalization and realization of hegemonic power that can only be achieved through such visceral moments. But a second part to this, hinted at already and integral to giving the struggle a revolutionary edge, is to emphasize that there is a public secret to be aired. As well as combating the sources of anxiety, we must say we are doing so; we must situate these struggles within larger frameworks and provide education on its systemic nature.

Thus, any strategy would need to be both abstract and practical. On one hand we must explode the public secret by raising consciousness. This would require a general onslaught of education, including, but not limited to, consciousness-raising sessions, participatory workshops, articles, books, pamphlets, leaflets, posters, YouTube videos and “subvertised” adverts. The emphasis would be to educate but also to listen, to intermingle theoretical understanding with subjective experience.

The second part would be to strategically support campaigns and make demands of politicians that specifically combat anxiety in its various different guises. When it comes to work, the abolition of zero-hour contracts, the raising of the minimum wage in line with the actual cost of living, and the tightening of laws on overwork as part of a broader campaign to assert the primacy of life over work, of love over pay, would be a good start.

For those out of work, underpaid or precarious, the introduction of a basic citizen’s income would represent a revolutionizing of the job market. In one move it would alleviate the cultural and practical anxieties of worklessness — ending the bureaucratic cruelty of the job center while removing the anxiety-inducing stigma associated with claiming benefits — while simultaneously allowing individuals to pursue culturally important and revolutionary activities such as art, music, writing or (dare I say it?) activism, without the crushing impossibility of trying to make them pay. When we look to housing, obvious solutions include mandatory, secured five-year tenancies, capped rent increases and a guarantee of stable, suitable social housing for those who need it.

There are many more reforms I could list. You will notice, however, that these are indeed reforms; bread-and-butter social democracy. Does that mean such a program is counter-revolutionary? A mere placatory settlement between capital and the working class? No, it does not. Revolution does not emerge from the systematic subjection of individuals to increased misery, anxiety and hardship as accelerationist logic demands. Instead it flourishes when populations become aware of their chains, are given radical visions for the future and the means to achieve them. It is when leftists critique but also offer hope. It is when the population writ large are included in and are masters of their own liberation; not when they are viewed as a lumpen, otherly mass, of only instrumental importance in achieving the glorious revolution.

Look at the practicalities and this becomes obvious. How can we expect individuals to launch themselves into high-tension, anxiety-inducing conflicts if the mere thought of such a situation causes them to have a panic attack? How can individuals, in the face of near panoptical surveillance and monitoring, combat the overwhelming desire to conform if they aren’t afforded some freedom from the practical anxieties of life? How are we to think and act in a revolutionary, and often abstract, manner if the very real and immediate anxieties of work, home and play fog our minds so totally?

This is not to say freedom will be given to us. It must always be taken, and we must not rely on electoral politics to hand us the revolution down from above. Nor will true struggle ever be an anxiety-free leisure pursuit. Genuine conflict with the state and capital will always entail danger, stress and the possibility of intensified precariousness.

Nevertheless, the dismissal of electoral politics in its totality represents abysmal revolutionary theory. The pursuit of reforms by progressive governments being bitten at the heels by sharp, vibrant social movements can produce real, tangible change.

It was what should have happened with Syriza, and it is what will hopefully happen with the new Labour leadership in the UK. And if, as individuals and communities, we are to puncture the distress, precariousness and general sense of cruel unknowing so particular to the moment in which we live, if we are to overcome the avalanche of bullshit and reclaim our confidence, if we are to construct and disseminate a distinctly communal, hopeful revolutionary fervor, such changes are urgently needed.

Joseph Todd is a writer and an activist. Find more of his writings here or follow him on Twitter.

The Baltimore upheaval: On race and class in America


12 May 2015

In the aftermath of the eruption of anger in Baltimore, Maryland over the police killing of Freddie Gray, the media and political establishment are seeking to conceal the real social and political issues at stake.

The killing of 25-year-old Gray last month—only the latest in a wave of police murders around the country—triggered clashes with police, demonstrations that spread to other cities and a police-military occupation of the city that was only lifted last week. While Gray’s murder was the catalyst, the scope and magnitude of the social discontent was fueled by the destitute conditions confronting working-class youth in the city’s poorest, largely minority, neighborhoods.

Much of the political elite that runs Baltimore is African American, including the current mayor, police chief and the majority of the city council. Although this fact has seriously undermined the arguments of the proponents of identity politics, it has not stopped them from insisting once again that the essential division in American society is race, not class.

On Sunday, the New York Times published a lead editorial, “How Racism Doomed Baltimore.” The newspaper, which sets the tone for what is described as “liberal public opinion” in America, declared that conditions in the city could only be understood within the context of the city’s legacy of racism and segregation.

“Americans might think of Maryland as a Northern state, but it was distinctly Southern in its attitudes toward race,” the Times editorialists write before giving a potted history of the state, from efforts to disenfranchise black voters in 1905 to more contemporary examples of racial segregation in public housing.

The desperate condition of young low-income men, the newspaper says, cannot be understood outside of the context of the “century-long assault that Baltimore’s blacks have endured at the hands of local, state and federal policy makers, all of whom worked to quarantine black residents in ghettos, making it difficult even for people of means to move into integrated areas that offered better jobs, schools and lives for their children.”

The “tensions associated with segregation and concentrated poverty place many cities at risk of unrest. But the acute nature of segregation in Baltimore—and the tools that were developed to enforce it over such a long period of time—have left an indelible mark and given that city a singular place in the country’s racial history.”

That Baltimore, like many cities in the north and the south, had a history of racial segregation is of course true. However, if a reader of this column were not familiar with the politics of Baltimore, they might be excused for believing the city is run by the Ku Klux Klan and that its police force is made up of Night Riders covered in white sheets.

The Times does not mention that the political establishment in the city is predominantly African American, or that half of the Baltimore Police Department is black. Indeed, three of the six cops indicted for Gray’s killing, including the driver of the police van charged with murder, are African American.

The relentless police violence in Baltimore stems not from racism but from class oppression, which the black politicians defend no less than their white counterparts. Unable to contain her hatred and fear of the city’s youth after sporadic rioting erupted the day of Gray’s funeral, Baltimore Mayor Stephanie Rawlings-Blake declared, “Too many people have spent generations building up the city for it to be destroyed by thugs who are trying to tear down what so many fought for. They are tearing down businesses, destroying property.”

Rawlings-Blake speaks for a whole layer of wealthy African Americans who have a stake in defending their property and wealth and overseeing a system that produces ever-greater poverty for black and white workers alike. This corrupt social layer includes countless academics, politicians, preachers, millionaire “civil rights” leaders and black entrepreneurs who have benefited from government funding for minority-owned businesses and African American university programs.

Alongside the Times are various pseudo-left organizations that have long promoted identity politics in order to subordinate the interests of workers and youth to the Democratic Party. They represent the strivings of a segment of the upper middle class that uses the politics of race, gender and sexual identity as part of efforts to gain more of a share of the wealth exploited from the working class.

With angry youth in the streets of Baltimore denouncing the mayor and other black officials, the International Socialist Organization (ISO)—which hailed Obama’s 2008 election as a “transformative event in US politics, as an African American takes the highest office in a country built on slavery”—has suddenly discovered a “black elite” whose interests are at odds with the majority of minority workers and youth.

The problem, however, is that these “black elected officials” defend the “racist system”!

The ISO’s Keeanga-Yamahtta Taylor—an assistant professor in Princeton’s African American studies department—tells us, “Black elected officials have largely governed in the same way as their white counterparts, reflecting all of the racism, corruption and policies favoring the wealthy seen throughout mainstream politics.” This “powerful Black political class,” she continues, “helps to deflect a serious interrogation of structural inequality and institutional racism.”

In other words, the problem is, according to Taylor, that the black politicians are simply not aggressive enough in their promotion of identity politics. Never does she suggest that there is a fundamental unity of interests between black and white workers.

The New York Times, the ISO—which is essentially an auxiliary agent of the Democratic Party—and the political establishment as a whole are determined to prevent any real examination of the social and economic structure of America because they all defend the capitalist system, which is the source of poverty and police brutality.

It has been 50 years since the Watts Rebellion in Los Angeles, one of the first of a wave of urban uprisings across the United States in the 1960s. The call made in the 1968 report of the Kerner Commission on Civil Disorders for massive government spending to stop the country’s drift towards racial and economic polarization was never realized. Instead, President Lyndon B. Johnson’s “Great Society” programs gave way to massive outlays for the Vietnam War, with politicians declaring that it was impossible to provide “guns and butter.”

The five decades that have elapsed have seen the deindustrialization of major manufacturing centers like Baltimore, combined with an unrelenting destruction of social programs. At the same time, sections of the African American upper-middle-class have been elevated into positions of privilege and power.

By the time of Bill Clinton’s election in 1992, the Democratic Party had completely repudiated its association with the reforms of the New Deal and Great Society periods. Clinton gutted welfare programs to provide an ample supply of cheap labor for the rich, including a growing layer of black capitalists, and passed the 1994 Federal Crime Bill, with its notorious “three strikes” provision that has helped create the largest prison population in the world.

Since taking office, Obama has only escalated these reactionary policies. Today the American ruling class will not even provide “guns and water,” as tens of thousands of low-income residents in Baltimore and Detroit are seeing their water service shut off for unpaid bills. The only “urban policy” Obama and the ruling class have is to try to contain the explosive social tensions with police-military repression.

Whatever role racism might play in any particular act of police violence, the events in Baltimore expose the fact that above all class is the determining factor. With nothing to offer masses of people, the political and media representatives of the ruling class, along with the upper-middle-class boosters of identity politics, are determined to block the development of a politically conscious and united movement of black, white and immigrant workers and youth against the profit system.

Jerry White

Disaster capitalism is a permanent state of life for too many Americans

According to the Department of Homeless Services, the number of homeless people in New York City has risen by more than 20,000 over the past five years. Photograph: Spencer Platt/Getty Images

In the United States, disaster has become our most common mode of life. Awareness that our daily existence was a simmering, smoldering disaster has historically been held somewhat at bay by the myth that hard work equals some kind of subsistence living. For the more deluded amongst us, this ‘American dream’ even got us to believe we could be something called ‘middle class’. We were deceived.

For those not yet woke, I don’t see how y’all can stay asleep when story after story proves how screwed we are.

The New York Post, no bastion of bleeding heart liberalism, reported on Monday that “Hundreds of full-time city workers are homeless”. These are people who clean our trash and make our city, the heart of American capitalism, safe and livable, including for those who plunder the globe from Wall Street. These are men and women, living in shelters and out of their cars, who have government jobs – the kind of workers conservatives love to paint as greedy, gluttonous pigs.

When a full time government worker can’t “find four walls and a roof to call his own” in the city he serves, we are living in a perpetual state of disaster capitalism.

Across the country, the San Francisco Chronicle told the tale of the “Tech bus drivers forced to live in cars to make ends meet”. It’s arguable whether living in your car can really be considered “making ends meet”, but what can you expect of a newspaper serving a city where tech is supposed to answer all of our needs, and where housing is even more stupidly expensive than in New York City?

This, too, is perpetual disaster capitalism, creating havoc and inflicting disaster upon individual souls for corporate greed without even needing the pretense of a crisis for an excuse.

In her 2007 book The Shock Doctrine: The Rise of Disaster Capitalism, Naomi Klein defined “disaster capitalism” as “orchestrated raids on the public sphere in the wake of catastrophic events, combined with the treatment of disasters as exciting marketing opportunities”. She was riffing on neoconservatives using Hurricane Katrina as an excuse for a New Orleans land grab. She witnessed the same phenomenon in the 2004 Asian Tsunami and in the aftermath of the US invasion of Iraq.

The concept of public plunder after disaster has been embraced in similar linguistic terms by Democrats and Republicans alike. Condoleezza Rice famously called 9/11 an “enormous opportunity”, and indeed it was a profitable one, for war contractors anyway. Similarly, White House Chief of Staff Rahm Emanuel once said: “You never want a serious crisis to go to waste. And what I mean by that is an opportunity to do things you think you could not do before”. Emanuel was true to his word. While American workers lost their jobs, lost their homes and even took their own lives as a result of the 2008 financial meltdown, the Obama White House instituted financial “reforms” that arrested no Wall Street executives, and left even Forbes predicting “ten reasons why there will be another systematic financial crisis”.

When our daily life is one of a state of chaos – and with hundreds slaughtered by police annually, and folks who work full time unable to stave off homelessness, and white anchors shot on live TV, and black worshippers shot up in church, and incarcerated victims behind bars “taking their own lives” daily, it’s hard to say that it’s not – the continuous state of disaster justifies disaster capitalism continuously, and we’re barely able to notice it, and powerless to stop it.

We live in such an interminable state of disaster, we barely see the locusts for the plague. Take the other major sad story this week: that pharmaceutical executive Martin Shkreli has bought the rights to the drug Daraprim, raising its price 5,000%. No crisis necessitated this increase. The drug is 62 years old, and its initial costs had long ago been absorbed.

It’s easy to be angry at Shkreli, his smug smile and his greedy choices that may well equal the deaths of those priced out from the malaria, Aids and cancer medicine they need. But Shkreli is just a tool. He lives in a world where disaster capitalism will reward him. He now says he will make the drug “more affordable,” but the richest nation on earth can’t stop him from deciding what “affordable” will mean. He may repulse us, but he represents our American way of disastrous living. Disaster capitalism no longer just reacts to chaos for profit, or even creates chaos for profit. It creates the conditions by which the spectre of social, spiritual and biological death hangs over our heads on a daily basis so oppressively, the crises become seamless.

And it asks us to accept that when you work full time driving workers to the richest corporation in the history of the human race and must live in your car, you should be grateful that you’re “making ends meet”, keep calm and carry on.


The UAW and the Democratic Party


By Tom Mackaman
22 September 2015

The United Auto Workers was created in the 1930s in the heat of a massive revolt of industrial workers. But when it emerged, the US labor movement, virtually alone in the world, had never built a political party of its own.

This is not because there were no social classes and no class struggle in American history, as is often claimed. The enormous growth of capitalism between the end of the Civil War in 1865 and the start of World War I in 1914 created the largest and most international working class in the world. The cities, towns, and coal patches were the scenes of ferocious strikes, riots, massacres and occasional armed uprisings.

Yet for all of the US labor movement’s militancy, self-sacrifice and social power, its Achilles heel was its failure to free itself from the political domination of capitalist parties and politicians. The workers fought the bosses’ police and thugs in the streets, but at the ballot box they voted for politicians selected from the bosses’ two parties.

Within this two-party system, the Democratic Party was assigned a particular function. Its task was to defend the basic interests of capital by posing as a party of the “common man” against the Republicans, who unapologetically championed big business.

Every mass social movement—from the Populist movement of farmers in the 1880s and 1890s, to the anti-monopoly Progressive movement of the early 1900s, to the revolt of industrial workers of the 1930s out of which the UAW was born, to the Civil Rights movement of the 1950s and 1960s, to the anti-Vietnam War movement of the 1960s—was channeled behind the Democratic Party to be smothered, declawed and defeated.

There is historical irony in the Democratic Party playing this role. In the 19th century, it was first the party of the southern slavocracy, and, after the Civil War, the party of Jim Crow white supremacy. It was its lesser northern wing, controlled by sections of capital and operating big city “machines” such as New York’s Tammany Hall, that prefigured the party’s 20th century incarnation.

Pro-slavery ideologues and propagandists linked to the Democratic Party attacked the brutality of emerging industrial capitalism in the North and posed as critics of wage slavery, while portraying Southern chattel slavery as a natural and beneficent system. They sought to inspire fear among northern workers that the liberation of the blacks in the South would undermine their own wages and living standards.

The Democratic city machines solicited the support of northern workers, including immigrant populations such as the Irish, and doled out patronage, while engaging in demagogic attacks against “privilege.”

The labor movement that emerged after the Civil War—the Knights of Labor in the 1870s and 1880s followed by the American Federation of Labor (AFL) in the 1880s and 1890s—did not build a mass party of its own, but neither did it formally support the Democratic Party. This changed in World War I during the administration of Woodrow Wilson, when the AFL sought to prevent workers from striking, and worked to stamp out the growing influence of socialism and sympathy for the Russian Revolution in exchange for federal mediation of labor disputes.

The AFL’s rapid growth during World War I was wiped out in its immediate aftermath by a ferocious corporate counteroffensive linked to the first “Red Scare.” Under Wilson, the American ruling class and state, backed by the corporate-controlled media, responded to the 1917 Russian Revolution and the eruption of mass labor struggles within the US (the 1913 Paterson, New Jersey silk strike, the 1919 steel strike) by conjuring up an atmosphere of hysteria against anarchists and “Bolsheviks.” Thousands of left-wing workers and intellectuals, mostly immigrants, were jailed in mass roundups and deported.

But the integration of the labor bureaucrats with “progressive” elements in and around the Democratic Party—figures such as labor reformists Frank Walsh, Frances Perkins and Felix Frankfurter—had taken a step forward. Their role exemplified the connection between the fight for the political independence of the working class and the struggle to free the working class from the influence of middle class reformism and anti-Marxist radicalism.

The UAW arose out of fierce and quasi-insurrectionary class struggles, in which workers defied and faced off against not only the corporations, but also their police, troops, courts and politicians. The union was not a gift handed down to workers by Franklin D. Roosevelt, as subsequently portrayed by the union leadership.

At the height of the Depression in 1933, cities such as Detroit, Toledo and Chicago had unemployment rates ranging from 50 percent to 90 percent. The following year saw the eruption of general strikes in Minneapolis, Toledo and San Francisco, each of them led by socialist workers, and, in the case of Minneapolis, by Trotskyists.

These were followed by a strike movement, in which socialist workers figured prominently, that culminated in the 1936-37 Flint sit-down strike. That 44-day struggle, in which the workers seized control of key plants in Flint, shutting down much of General Motors production nationally, and refused to budge even after National Guard troops set up machine gun nests outside the occupied factories, humbled GM and compelled it to recognize the UAW as the exclusive bargaining agent for its hourly employees.

Flint GM workers occupy their factory

The revolt of the autoworkers inspired industrial struggles across the US. The strikes were waged against the corporations and politicians of both parties. They were preceded by a break with the right-wing, craft union-dominated AFL—which treated industrial workers as social pariahs—and the founding of the Congress of Industrial Organizations (CIO), formed to establish mass industrial unions.

Fearing insurrection—the strike movement came less than 20 years after the Russian Revolution—the administration of Franklin Roosevelt offered its New Deal reforms and intervened to bring corporations to the bargaining table. On March 2, 1937, just weeks after the end of the Flint sit-down strike, US Steel, the notorious bastion of anti-unionism, avoided a strike by agreeing to recognize the Steel Workers Organizing Committee (later the United Steelworkers).

In exchange, the CIO sought to rein in the strike wave. Its leading figures, John L. Lewis of the United Mine Workers (UMW) and Sidney Hillman of the Amalgamated Clothing Workers, believed that in Roosevelt and the Democratic Party they had found allies who would bring into line anti-union holdouts such as the Ford Motor Company and “Little Steel,” the name given US Steel’s major competitors. This policy quickly proved bankrupt.

Chicago’s Memorial Day Massacre

On May 30, 1937, ten striking steelworkers were gunned down by Chicago police in the Memorial Day Massacre, in response to which Roosevelt issued his infamous “plague on both your houses” remark, all but blaming the workers for the violence.

Roosevelt’s statement was taken as a green light. On June 19, in Youngstown, Ohio, police murdered two striking steelworkers, and on July 11, in Massillon, Ohio, they killed three more. The Little Steel Strike was crushed and the Ford organizing drive stalled. In spite of this, the CIO refused to break with the Democratic Party.

There existed substantial support among workers for a break with the Democrats. At the UAW founding convention in 1935, a majority of delegates voted for the formation of a labor party. A second vote refusing to endorse Roosevelt was reversed after Lewis’s lieutenant, Adolph Germer, threatened to cut off funding for the nascent organization.

Lewis, Hillman, and the other union heads who had been catapulted into national prominence by the emergence of the CIO, now fought to preserve the subordination of American workers to the Democratic Party and combat the widespread influence of socialism. In a September 3, 1937 national radio address, Lewis unequivocally demanded that the CIO defend the capitalist system. He declared:

Unionization, as opposed to communism, presupposes the relationship of employment; it is based on the wage system and it recognizes fully and unreservedly the institution of private property and the right to investment profit. It is upon the fuller development of collective bargaining, the wider expansion of the labor movement, the increased influence of labor in our national councils that the perpetuity of our democratic institutions must largely depend. The organized workers of America, free in their industrial life, conscious partners in production, secure in their homes, enjoying a decent standard of living, will prove the finest bulwark against the intrusion of alien doctrines of government.

Yet, under conditions of a new economic crisis, the so-called Roosevelt Recession of 1937-1939, and the counteroffensive by capital announced by the Little Steel violence, the rapid growth of the CIO ground to a halt. The influx of new members into the UAW stalled. The Steel Workers Organizing Committee “was deeply demoralized and withering away by late 1937,” in the words of historian Steve Fraser (The Rise and Fall of the New Deal Order, 1989). By 1939, just two years after the sit-down strikes, the CIO’s Philip Murray would declare, “We are living in a wave and an age and an era of reaction.”

John L. Lewis

Analyzing these developments from his final exile in Mexico City, Leon Trotsky insisted that the only way forward for American workers was along the path of political struggle. “In the United States the situation is that the working class needs a party—its own party,” Trotsky wrote. “It is the first step in political education … It is an objective fact in the sense that the new trade unions created by the workers came to an impasse—a blind alley.”

Taking its lead from Trotsky, the Trotskyist movement in the US, the Socialist Workers Party, called for the unions to establish a labor party based on a socialist program, in order to arm the insurgent movement of workers with a revolutionary perspective in opposition to the trade union bureaucracy and the Stalinists of the Communist Party USA. The latter, in accordance with Moscow’s “Popular Front” line, supported Roosevelt and the Democrats. The first aim of the labor party demand was thus to break the working class from the Democratic Party.

Trotsky insisted that the decisive issue in considering whether to advance this demand was not the prevailing consciousness among American workers, many of whom still held illusions in Roosevelt, but the requirements of the objective situation, in which the international question was decisive. The revolt of the American industrial workers came in the context of working class defeats, in which Stalinism had played the critical role: the betrayal of the British General Strike in 1926, the decimation of the Chinese working class in 1927, the coming to power of the Nazis in Germany in 1933, and the defeats of the Spanish and French working classes between 1936 and 1938.


Under these challenging conditions, the labor party demand was seen as a means of fighting for the program of world socialist revolution in the US, where the mass industrial unions had exploded onto the scene virtually overnight in the late 1930s before just as quickly faltering.

“The rise of the CIO is incontrovertible evidence of the revolutionary tendencies within the working masses,” Trotsky wrote in 1940. “Indicative and noteworthy in the highest degree, however, is the fact that the new ‘leftist’ trade union organization was no sooner founded than it fell into the steel embrace of the imperialist state. The struggle among the tops between the old federation [the AFL] and the new is reducible in large measure to the struggle for the sympathy and support of Roosevelt and his cabinet.”

World War II handed the CIO a temporary reprieve. It joined the AFL in attempting to enforce no-strike pledges on workers, while American imperialism settled accounts with its German and Japanese rivals. In return, federal mediators and courts ruled in favor of the union shop, including at Ford in 1941 and “Little Steel” in 1942 and 1943. The Stalinists of the CP, following orders from the Soviet Union, which was then in a wartime alliance with the US, aided and abetted the no-strike pledge and cheered the imprisonment of the American Trotskyists in 1941, including the leader of the SWP, James P. Cannon.

The no-strike pledge was only partially effective during the war. There were nearly as many strikes in 1944 as there had been in 1937. Then, in the war’s aftermath, 1945 and 1946, the American working class erupted in the largest strike wave in its history. Many of these were wildcat strikes, carried out not only against the corporations, but also against the AFL and CIO and their Stalinist allies.

This was the domestic context of the post-World War II Red Scare, which, contrary to myth, began not with Republican Senator Joe McCarthy, but in the trade unions. In 1947, Democratic Party politicians combined with Republicans to impose the anti-union Taft-Hartley Act, which gave the president the right to outlaw strikes he declared a threat to “national security,” and which included an anti-communist loyalty oath.

That same year, President Harry S. Truman, a Democrat, gave an address to Congress asking for $400 million to prop up the royalist government in Greece against “the terrorist activities of … Communists.” During his administration, Truman founded the NSA and the CIA, announced that the American military would defend “free people” anywhere in the world, and invoked Taft-Hartley against American workers a dozen times. The Cold War was on, at home and abroad.

Anti-communist bureaucrats such as Walter Reuther of the UAW used the situation to muscle out union officials and workers who adhered to ideas of changing the social order. Between 1948 and 1950, the CIO purged from its ranks eleven unions representing 1 million workers. Thousands more officials and workers were driven out of individual unions like the UAW. The purge of the radical and militant workers, many of whom had led the great struggles of the 1930s, was inseparable from the unions’ full embrace of American imperialism.

Walter Reuther

This was epitomized by Reuther’s “Treaty of Detroit,” the 1950 contract agreement with General Motors that hitched the fate of the UAW, and the workers it represented, to the global domination of the Big Three. GM promised increased wages and benefits and a “seat at the table” for the union bureaucrats. In exchange, the UAW accepted corporate domination of the workplace. The heady demands of the 1930s—“workers control” and “industrial democracy”—were repudiated.

For a time, Reuther’s treaty seemed to work. The Big Three’s enormous market share after World War II allowed higher wages and benefits. The same workers who had occupied the factories in the “Hungry 30s” could now own the cars they made, buy homes, and make plans to send their children to college.

Rising living standards, along with the anti-communist purges, greatly weakened the influence of socialism among the autoworkers, who nonetheless carried out a number of major strikes from the 1950s through the 1970s to force the automakers to uphold their end of the bargain.

Under these conditions, many autoworkers would have found plausible Reuther’s claim, made in 1948, that there were no social classes in America, and therefore no need for a workers’ party:

In Europe, where you have society developed along very classical economic lines, where you have rigid class groupings, there labor parties are a natural political expression because there you have a highly fixed and class society. [W]e have a society that is not rigid in character along class lines, and that is the great hope of America.

But there were no new organizational breakthroughs for the UAW and the CIO after World War II. With anti-communism and a pro-corporate orientation having eliminated all meaningful differences between the two federations, in 1955 the CIO, under Reuther’s leadership, merged with the AFL on terms dictated by AFL President George Meany, who became the president of the AFL-CIO.

This entailed the abandonment of any further effort to organize the majority of workers still outside of the union ranks. Organized labor had already embarked on its course of inexorable decline.

George Meany, left, and Walter Reuther, right, at AFL and CIO merger convention, 1955

Reuther’s treaty had, in fact, been based on historical circumstances that soon eroded. The Japanese and German automakers reemerged in the 1950s, and by the 1960s were cutting into the Big Three’s global market share. Afterwards, they began to conquer increasing shares of the US market. Profit rates declined and capital in the US flowed out of productive investment and into financial speculation.

Now, with the decline of American industry and the global position of US capitalism, the implications of the failure to build a socialist party came to the fore. The UAW, with its pro-capitalist perspective, had no answer to layoffs resulting from the automation of production, which developed rapidly in the 1960s, or the movement of factories and industrial jobs from heavily unionized states such as Michigan, Ohio and Pennsylvania to the anti-union South.

The CIO’s abortive effort to organize the South after World War II, Operation Dixie, had fallen victim to the anti-communist purge and Reuther’s fears over disturbing the UAW’s alliance with the Democratic Party, which exercised a political monopoly and enforced Jim Crow segregation throughout the region. Some 25 years later, the disastrous implications of this betrayal of the working class began to hit home in the unionized Northeast and Midwest, which lost 2 million manufacturing jobs in the 1970s, while the Sun Belt gained 1 million. By the 1980s, once rural North Carolina had the highest percentage of manufacturing jobs of any state—and the lowest wages and rate of unionization.

The 1970s found the unions’ nationalist, pro-capitalist road rapidly failing. The Workers League, forerunner of the Socialist Equality Party, continued to fight for the formation of an independent party of the working class. The pages of its newspaper, the Bulletin, provide the most complete chronicle of the last great strike wave in American history, which lasted from 1969 through the late 1970s, and the struggle of the Trotskyist movement for socialism in the US working class.

This was carried out in the midst of another great crisis of capitalist rule, caused by the end of the post-World War II economic boom, which came to a head with Nixon’s scrapping of gold backing for the dollar in 1971, along with the collapse of the war in Vietnam, which saw the destruction of both the Johnson (1963-1969) and Nixon (1969-1974) presidencies, and the ghetto rebellions that swept the cities in the late 1960s.

The crisis was such that the Workers League’s call for a labor party and workers’ government found a growing response in the working class. This was acknowledged in a backhanded way by Meany in a 1972 interview with US News and World Report:

[I]f we set up our own political party, we’d be telling this country that we’re ready to run the Government, and I don’t think that we’re ready—I don’t think we’re qualified to run the Government. I don’t think any special interest group is qualified to run the Government. I don’t think General Motors should run the Government, and I don’t think the AFL-CIO should run the Government.

The working class, even without a mass party, continued to demonstrate its industrial strength. In defiance of Nixon’s “wage freeze” policy, workers carried out scores of strikes to keep wages in line with spiraling inflation. In 1978, coal miners defied a Taft-Hartley back-to-work order by Democratic President Jimmy Carter. The miners’ response: “Carter invoked Taft-Hartley. Now let him come down here and enforce it.”

George Meany

The union bureaucrats could not permit a break with the Democratic Party, as had been posed by the coal miners’ defiance of Carter. The bankruptcy and bailout of Chrysler in 1979 offered a new course. Rather than mobilizing workers for a showdown, the UAW cooperated with the Carter administration, Chrysler and Wall Street financiers in imposing wage concessions, layoffs and plant closures.

Behind this historic capitulation by the unions were profound objective changes in the structure of world capitalism. The rise of truly transnational corporations—producing in factories around the world directly for the world market, rather than merely the national one—was the hallmark of an unprecedented globalization of production and finance. This development undercut all labor organizations based on national programs, including the UAW and the AFL-CIO.

They had no progressive answer to the emergence of a global labor market, which enabled corporations to shift production rapidly from higher- to lower-wage regions. Their response, based on the defense of the profit system and the national interests of the American ruling class, was to join with the companies in slashing the wages, jobs, benefits and conditions of their own members, in order to induce the companies to keep production at home and keep the union bureaucrats’ revenue stream from dues-paying members flowing.

That same year, 1979, Carter appointed Paul Volcker to head the Federal Reserve Board. Volcker raised interest rates past 20 percent in order to cause mass unemployment and thereby drive down wages and break the strike wave. In 1982 alone, 2,700 mass layoffs resulted in 1.25 million industrial jobs lost. Detroit, Flint, Pontiac, Toledo, St. Louis, Cleveland, Pittsburgh and the rest of the industrial heartland were shattered.

Volcker’s “shock therapy” and Carter’s attack on the Chrysler workers proved the opening salvos in a ruling class counteroffensive. They were followed by Ronald Reagan’s crushing of the strike of the PATCO air traffic controllers in 1981, which set a pattern that repeated throughout the 1980s, including among UAW workers in the parts plants and at heavy machinery manufacturer Caterpillar. All the old brutal methods of class rule were revived. Union-busting—the use of scabs, strike-breakers and violence against strikers—virtually unheard of for some 30 years, became commonplace.

The UAW and AFL-CIO attributed the attack solely to Reagan. But the offensive was conducted jointly by Democratic-controlled Congresses, Democratic mayors and Democratic governors, such as Minnesota’s Rudy Perpich and Arizona’s Bruce Babbitt, each of whom called out the National Guard to crush strikes in the 1980s.

In the face of this onslaught, workers showed they were ready to fight. They carried out long and bitter strikes again and again. But just as many times they were isolated and betrayed by the unions that claimed to represent them.

The repeated betrayals in the 1980s coincided with the unions’ open adoption of the ideology and program of corporatism—the renunciation of any conception of class struggle and advocacy of the supposed identity of interests of workers and the corporations. The UAW was the most aggressive of the official unions in adopting a policy of “jointness,” rapidly entering into union-management programs and structures with the Big Three automakers at the national, regional and local level. It boasted of relations with the auto companies that the pioneer militants and socialists who built the UAW knew only too well to be the hallmarks of hated company unions.

This went hand in hand with the promotion of economic nationalism, protectionism and outright chauvinism and racism against autoworkers in Japan, Mexico, Europe and even Canada. In this way, the UAW sought to line up US workers behind “their” bosses and justify layoffs and concessions as necessary sacrifices to ensure the ability of the Big Three to compete with their foreign rivals for market share and profits. This fratricidal and divisive policy, which, of course, played right into the hands of the companies, was implemented in the name of “defending jobs.” Its result was the destruction of tens of thousands more auto jobs and a massive contraction in UAW membership.

By the end of the 1980s, the official American labor movement had been shattered. The unions could no longer be called defensive organizations of the working class. To be sure, they entered the 1980s as corrupt, anti-socialist and nationalist organizations, but they still generally sought, in the 1970s, to gain concessions for workers. Now the unions demanded concessions from workers.

Shut-down auto plant

Yet the factories continued to close, and the ranks of the unions thinned. It became necessary to find new sources of revenue and a new social basis for the union officials’ existence. This the UAW has found in abundance.

The UAW played the instrumental role in imposing the Obama administration’s “rescue” of the auto industry in 2009. This entailed the elimination of 35,000 jobs, the banning of strikes for six years, the gutting of benefits for retired autoworkers, and the driving down of wages by expanding the category of newly hired workers called “tier two,” in which the workers’ pay, when adjusted for inflation, falls below what Henry Ford offered in his famous Five Dollar Day way back in 1914.

Obama and the investment bankers who headed up his Auto Task Force saw to it that the UAW was given billions in corporate stock, and many billions more in VEBA trust funds. According to Wikipedia, the “UAW Retiree Medical Benefits Trust, with more than $45 billion in assets as of June 2010, and $58.8 billion as of March 2014, is the world’s largest VEBA.”

The UAW and the rest of the AFL-CIO are preparing to hand over hundreds of millions of dollars in union dues to Democratic Party politicians once again. There is a tactical difference between the two big business parties. The Republican Party seeks the unmediated exploitation of the working class. The Democratic Party seeks to use the services of its union allies for the same end.

The current glad-handing “negotiations” between the UAW and the Big Three, carried out with workers left totally in the dark and with the promise of only more concessions, bring the unions to a new milepost: their transformation into anti-working class organizations.

It is as if a great historical experiment that began in the late 1930s has drawn to a close. Would it be possible to build a union movement on an explicitly pro-capitalist, anti-socialist and nationalist basis? History has delivered its verdict.

All the objective conditions exist for a break with the Democratic Party. It is now five decades since the last significant social reform in US history. Yet the political struggle against the union officials and their middle class acolytes continues.

History also shows that workers will be driven into struggle. As they did in the 1930s, they will wage strikes and create new forms of industrial organization. But the decisive battle will be fought in the arena of politics. Workers must build a socialist political movement that expresses their own interests, which are irreconcilably hostile to those of the capitalists and their parties.

US poverty rate and income growth stagnated in 2014


By Niles Williamson
19 September 2015

The US Census Bureau released its annual income and poverty report this week which showed that median household income and the national poverty rate held steady between 2013 and 2014.

The report found that 14.8 percent of the country’s population lived in poverty in 2014, statistically unchanged from a year prior. Blacks had the highest poverty rate in 2014 at 26.2 percent, which was a one percentage point increase over 2013. Among children and teenagers under the age of 18, approximately 15.5 million, or 21.1 percent, lived in poverty.

The vast majority of the population in the United States has seen little or no benefit from the supposedly ongoing economic recovery and a booming stock market. Wages and income remain stagnant while the poverty rate remains unconscionably high, especially in urban areas.

The steady decline of the official unemployment rate from its peak of 10 percent in October 2009 to 5.1 percent in August has had little impact as those returning to work find jobs at lower wages and many others have simply given up looking for work and are no longer counted as unemployed.

The labor force participation rate, the percentage of the working-age population either employed or actively seeking work, has fallen from a peak of 67.3 percent in 2000 to 62.6 percent in August, its lowest level in 38 years.

While the official poverty rate in 2014 was below the recent peak of 15.1 percent in 2010, the total number of Americans living in poverty in 2014, 46.7 million, marked an all-time high. The Census Bureau noted that 6.6 percent of the population lived in what is termed deep poverty, less than 50 percent of the poverty line or less than $12,115 annual income for a family of four.

The poverty line in 2014 was set at a meager $24,230 in annual income for a family of four. An individual who made less than $12,071 was officially counted as below the poverty line. An individual or family that makes even a dollar more than these limits is not considered poor.

In a much better measurement of the precariousness of life for most workers in the United States, the Census Bureau reported that fully one third of the American population, more than 105 million people, live on less than twice the official poverty line.

The report notes that individuals and families often move in and out of poverty, with a smaller but significant number experiencing long-term poverty. The loss of a job or even a single paycheck can thrust a family into poverty.

Millions of Americans lack any savings they can rely on in the case of a financial emergency. Between 2009 and 2012, 34.5 percent of the population experienced a spell of poverty for two or more months while 2.7 percent lived in poverty for the entire four-year period.

Breaking down the poverty rate by geographical region, the South continued to be the most impoverished and was little changed from 2013 with 19.5 million, or 16.5 percent of the population, living in poverty. The West ranked second with a poverty rate of 15.2 percent (11.4 million), the Midwest third at 13 percent (8.7 million), and the Northeast fourth at 12.6 percent (7 million).

The poverty rate is particularly acute in urban areas throughout the country, with Detroit, Michigan (39.3 percent), Cleveland, Ohio (39.2 percent), Fresno, California (30.5 percent), Memphis, Tennessee (29.8 percent), and Milwaukee, Wisconsin (29 percent) topping the list for cities with populations over 300,000.

Detroit, once the center of auto manufacturing in North America, has been decimated by the shuttering of countless factories over the last four decades. In 2013 and 2014 the city of Detroit was forced into bankruptcy in an undemocratic process overseen by an unelected emergency manager in which wages and benefits were clawed back from city workers and retirees.

While tens of thousands of households in the city have had their water and other utilities shut off because they cannot afford to pay their bills, a handful of billionaires and multi-millionaires, including Quicken Loans CEO Dan Gilbert, have benefited handsomely from the restructuring and decimation of the city, snapping up land and properties often for as little as one dollar.

Meanwhile 57.1 percent of children and teenagers under the age of 18 in Detroit officially lived below the poverty line in 2014. Among major American cities only Cleveland, Ohio had a higher child poverty rate, at 58.5 percent.

Cleveland, not coincidentally, is another city where the billionaire Gilbert and other wealthy vultures have snapped up property, gentrifying select areas of the downtown entertainment district.

The Census Bureau report also notes that median household income has yet to return to the level that prevailed before the 2008 financial crisis and subsequent recession, contributing to the persistence of high rates of poverty.

The report notes that median household income in 2014 was statistically unchanged from 2013 at $53,657, 6.7 percent lower than it was in 2007 and 7.2 percent below its peak in 1999.

While the ratio between the income of the richest ten percent and the poorest ten percent remained unchanged between 2013 and 2014, income inequality in the United States has risen significantly between 1999 and 2014.

Over the last 15 years, income for the 50th percentile, median income, fell by 7.2 percent while income for those in the bottom 10 percent fell by 16.5 percent. While income has fallen over the last decade and a half for the bottom half of society, those in the 90th percentile saw their annual income increase by 2.8 percent, and those even higher on the scale saw much larger gains.

Why the Rich Are So Much Richer

The Great Divide: Unequal Societies and What We Can Do About Them, by Joseph E. Stiglitz. Norton, 428 pp., $28.95

Rewriting the Rules of the American Economy: An Agenda for Growth and Shared Prosperity, by Joseph E. Stiglitz. The Roosevelt Institute, 114 pp., available at

Creating a Learning Society: A New Approach to Growth, Development, and Social Progress, by Joseph E. Stiglitz and Bruce C. Greenwald. Columbia University Press, 660 pp., $34.95; $24.95 (paper)

Joseph Stiglitz with Christine Lagarde, Paris, September 2009 (Ludovic/REA/Redux)

The fundamental truth about American economic growth today is that while the work is done by many, the real rewards largely go to the few. The numbers are, at this point, woefully familiar: the top one percent of earners take home more than 20 percent of the income, and their share has more than doubled in the last thirty-five years. The gains for people in the top 0.1 percent, meanwhile, have been even greater. Yet over that same period, average wages and household incomes in the US have risen only slightly, and a number of demographic groups (like men with only a high school education) have actually seen their average wages decline.

Income inequality has become such an undeniable problem, in fact, that even Republican politicians have taken to decrying its effects. It’s not surprising that a Democrat like Barack Obama would call dealing with inequality “the defining challenge of our time.” But when Jeb Bush’s first big policy speech of 2015 spoke of the frustration that Americans feel at seeing “only a small portion of the population riding the economy’s up escalator,” it was a sign that inequality had simply become too obvious, and too harmful, to be ignored.

Something similar has happened in economics. Historically, inequality was not something that academic economists, at least in the dominant neoclassical tradition, worried much about. Economics was about production and allocation, and the efficient use of scarce resources. It was about increasing the size of the pie, not figuring out how it should be divided. Indeed, for many economists, discussions of equity were seen as perilous, because there was assumed to be a necessary “tradeoff” between efficiency and equity: tinkering with the way the market divided the pie would end up making the pie smaller. As the University of Chicago economist Robert Lucas put it, in an oft-cited quote: “Of the tendencies that are harmful to sound economics, the most seductive, and…the most poisonous, is to focus on questions of distribution.”

Today, the landscape of economic debate has changed. Inequality was at the heart of the most popular economics book in recent memory, the economist Thomas Piketty’s Capital. The work of Piketty and his colleague Emmanuel Saez has been instrumental in documenting the rise of income inequality, not just in the US but around the world. Major economic institutions, like the IMF and the OECD, have published studies arguing that inequality, far from enhancing economic growth, actually damages it. And it’s now easy to find discussions of the subject in academic journals.

All of which makes this an ideal moment for the Columbia economist Joseph Stiglitz. In the years since the financial crisis, Stiglitz has been among the loudest and most influential public intellectuals decrying the costs of inequality, and making the case for how we can use government policy to deal with it. In his 2012 book, The Price of Inequality, and in a series of articles and Op-Eds for Project Syndicate, Vanity Fair, and The New York Times, which have now been collected in The Great Divide, Stiglitz has made the case that the rise in inequality in the US, far from being the natural outcome of market forces, has been profoundly shaped by “our policies and our politics,” with disastrous effects on society and the economy as a whole. In a recent report for the Roosevelt Institute called Rewriting the Rules, Stiglitz has laid out a detailed list of reforms that he argues will make it possible to create “an economy that works for everyone.”

Stiglitz’s emergence as a prominent critic of the current economic order was no surprise. His original Ph.D. thesis was on inequality. And his entire career in academia has been devoted to showing how markets cannot always be counted on to produce ideal results. In a series of enormously important papers, for which he would eventually win the Nobel Prize, Stiglitz showed how imperfections and asymmetries of information regularly lead markets to results that do not maximize welfare. He also argued that this meant, at least in theory, that well-placed government interventions could help correct these market failures. Stiglitz’s work in this field has continued: he has just written (with Bruce Greenwald) Creating a Learning Society, a dense academic work on how government policy can help drive innovation in the age of the knowledge economy.

Stiglitz served as chairman of the Council of Economic Advisers in the Clinton administration, and then was the chief economist at the World Bank during the Asian financial crisis of the late 1990s. His experience there convinced him of the folly of much of the advice that Western economists had given developing countries, and in books like Globalization and Its Discontents (2002) he offered up a stinging critique of the way the US has tried to manage globalization, a critique that made him a cult hero in much of the developing world. In a similar vein, Stiglitz has been one of the fiercest critics of the way the Eurozone has handled the Greek debt crisis, arguing that the so-called troika’s ideological commitment to austerity and its opposition to serious debt relief have deepened Greece’s economic woes and raised the prospect that that country could face “depression without end.” For Stiglitz, the fight over Greece’s future isn’t just about the right policy. It’s also about “ideology and power.” That perspective has also been crucial to his work on inequality.

The Great Divide presents that work in Stiglitz’s most popular—and most populist—voice. While Piketty’s Capital is written in a cool, dispassionate tone, The Great Divide is clearly intended as a political intervention, and its tone is often impassioned and angry. As a collection of columns, The Great Divide is somewhat fragmented and repetitive, but it has a clear thesis, namely that inequality in the US is not an unfortunate by-product of a well-functioning economy. Instead, the enormous riches at the top of the income ladder are largely the result of the ability of the one percent to manipulate markets and the political process to their own benefit. (Thus, the title of his best-known Vanity Fair piece: “Of the 1 percent, by the 1 percent, for the 1 percent.”) Soaring inequality is a sign that American capitalism itself has gone woefully wrong. Indeed, Stiglitz argues, what we’re stuck with isn’t really capitalism at all, but rather an “ersatz” version of the system.

Inequality obviously has no single definition. As Stiglitz writes:

There are so many different parts to America’s inequality: the extremes of income and wealth at the top, the hollowing out of the middle, the increase of poverty at the bottom. Each has its own causes, and needs its own remedies.

But in The Great Divide, Stiglitz is mostly interested in one dimension of inequality: the gap between the people at the very top and everyone else. And his analysis of that gap concentrates on the question of why incomes at the top have risen so sharply, rather than why the incomes of everyone else have stagnated. While Stiglitz obviously recognizes the importance of the decline in union power, the impact of globalization on American workers, and the shrinking value of the minimum wage, his preoccupation here is primarily with why the rich today are so much richer than they used to be.

To answer that question, you have to start by recognizing that the rise of high-end incomes in the US is still largely about labor income rather than capital income. Piketty’s book is, as the title suggests, largely about capital: about the way the concentration of wealth tends to reproduce itself, leading to greater and greater inequality. And this is an increasing problem in the US, particularly at the highest reaches of the income spectrum. But the main reason people at the top are so much richer these days than they once were (and so much richer than everyone else) is not that they own so much more capital: it’s that they get paid much more for their work than they once did, while everyone else gets paid about the same, or less. Corporate CEOs, for instance, are paid far more today than they were in the 1970s, while assembly line workers aren’t. And while incomes at the top have risen in countries around the world, nowhere have they risen faster than in the US.

One oft-heard justification of this phenomenon is that the rich get paid so much more because they are creating so much more value than they once did. Globalization and technology have increased the size of the markets that successful companies and individuals (like pop singers or athletes) can reach, so that being a superstar is more valuable than ever. And as companies have gotten bigger, the potential value that CEOs can add has increased as well, driving their pay higher.

Stiglitz will have none of this. He sees the boom in the incomes of the one percent as largely the result of what economists call “rent-seeking.” Most of us think of rent as the payment a landlord gets in exchange for the use of his property. But economists use the word in a broader sense: it’s any excess payment a company or an individual receives because something is keeping competitive forces from driving returns down. So the extra profit a monopolist earns because he faces no competition is a rent. The extra profits that big banks earn because they have the implicit backing of the government, which will bail them out if things go wrong, are a rent. And the extra profits that pharmaceutical companies make because their products are protected by patents are rents as well.

Not all rents are terrible for the economy—in some cases they’re necessary evils. We have patents, for instance, because we think that the costs of granting a temporary monopoly are outweighed by the benefits of the increased innovation that patent protection is supposed to encourage. But rents make the economy less efficient, because they move it away from the ideal of perfect competition, and they make consumers worse off. So from the perspective of the economy as a whole, rent-seeking is a waste of time and energy. As Stiglitz puts it, the economy suffers when “more efforts go into ‘rent seeking’—getting a larger slice of the country’s economic pie—than into enlarging the size of the pie.”

Rents are nothing new—if you go back to the 1950s, many big American corporations faced little competition and enjoyed what amounted to oligopolies. But there’s a good case to be made that the sheer amount of rent-seeking in the US economy has expanded over the years. The number of patents is vastly greater than it once was. Copyright terms have gotten longer. Occupational licensing rules (which protect professionals from competition) are far more common. Tepid antitrust enforcement has led to reduced competition in many industries. Most importantly, the financial industry is now a much bigger part of the US economy than it was in the 1970s, and for Stiglitz, finance profits are, in large part, the result of what he calls “predatory rent-seeking activities,” including the exploitation of uninformed borrowers and investors, the gaming of regulatory schemes, and the taking of risks for which financial institutions don’t bear the full cost (because the government will bail them out if things go wrong).


All this rent-seeking, Stiglitz argues, leaves certain industries, like finance and pharmaceuticals, and certain companies within those industries, with an outsized share of the rewards. And within those companies, the rewards tend to be concentrated as well, thanks to what Stiglitz calls “abuses of corporate governance that lead CEOs to take a disproportionate share of corporate profits” (another form of rent-seeking). In Stiglitz’s view of the economy, then, the people at the top are making so much because they’re in effect collecting a huge stack of rents.

This isn’t just bad in some abstract sense, Stiglitz suggests. It also hurts society and the economy. It erodes America’s “sense of identity, in which fair play, equality of opportunity, and a sense of community are so important.” It alienates people from the system. And it makes the rich, who are obviously politically influential, less likely to support government investment in public goods (like education and infrastructure) because those goods have little impact on their lives. (The one percent are, in fact, more likely than the general public to support cutting spending on things like schools and highways.)

More interestingly (and more contentiously), Stiglitz argues that inequality does serious damage to economic growth: the more unequal a country becomes, the slower it’s likely to grow. He argues that inequality hurts demand, because rich people consume less of their incomes. It leads to excessive debt, because people feel the need to borrow to make up for their stagnant incomes and keep up with the Joneses. And it promotes financial instability, as central banks try to make up for stagnant incomes by inflating bubbles, which eventually burst. (Consider, for instance, the toleration, and even promotion, of the housing bubble by Alan Greenspan when he was chairman of the Fed.) So an unequal economy is less robust, productive, and stable than it otherwise would be. More equality, then, can actually lead to more efficiency, not less. As Stiglitz writes, “Looking out for the other guy isn’t just good for the soul—it’s good for business.”

This explanation of both the rise in inequality and its consequences is quite neat, if also bleak. But it’s also, it has to be said, oversimplified. Take the question, for instance, of whether inequality really is bad for economic growth. It certainly seems plausible that it would be, and there are a number of studies that suggest it is. Yet exactly why inequality is bad for growth turns out to be hard to pin down—different studies often point to different culprits. And when you look at cross-country comparisons, it turns out to be difficult to prove that there’s a direct connection between inequality and the particular negative factors that Stiglitz cites. Among developed countries, more unequal ones don’t, as a rule, have lower levels of consumption or higher levels of debt, and financial crises seem to afflict both unequal countries, like the US, and more egalitarian ones, like Sweden.

This doesn’t mean that, as conservative economists once insisted, inequality is good for economic growth. In fact, it’s clear that US-style inequality does not help economies grow faster, and that moving toward more equality will not do any damage. We just can’t yet say for certain that it will give the economy a big boost.

Similarly, Stiglitz’s relentless focus on rent-seeking as an explanation of just why the rich have gotten so much richer makes a messy, complicated problem simpler than it is. To some degree, he acknowledges this: in The Price of Inequality, he writes, “Of course, not all the inequality in our society is the result of rent seeking…. Markets matter, as do social forces….” Yet he doesn’t really say much about either of those in The Great Divide. It’s unquestionably true that rent-seeking is an important part of the rise of the one percent. But it’s really only part of the story.

When we talk about the one percent, we’re talking about two groups of people above all: corporate executives and what are called “financial professionals” (these include people who work for banks and the like, but also money managers, financial advisers, and so on). These are the people that Piketty terms “supermanagers,” and he estimates that together they account for over half of the people in the one percent.

The emblematic figures here are corporate CEOs, whose pay rose 876 percent between 1978 and 2012, and hedge fund managers, some of whom now routinely earn billions of dollars a year. As one famous statistic has it, last year the top twenty-five hedge fund managers together earned more than all the kindergarten teachers in America did.

Stiglitz wants to attribute this extraordinary rise in CEO pay, and the absurd amounts of money that asset managers make, to the lack of good regulation. CEOs, in his account, are taking advantage of deficiencies in corporate governance—supine boards and powerless shareholders—to exploit shareholders and “appropriate for themselves firm revenues.” Money managers, meanwhile, are exploiting the ignorance of investors, reaping the benefits of what Stiglitz calls “uncompetitive and often undisclosed fees” to ensure that they get paid well even when they underperform.

The idea that high CEO pay is ultimately due to poor corporate governance is a commonplace, and certainly there are many companies where the relationship between the CEO and the board of directors (which in theory is supposed to be supervising him) is too cozy. Yet as an explanation for why CEOs get paid so much more today than they once did, Stiglitz’s argument is unsatisfying. After all, back in the 1960s and 1970s, when CEOs were paid much less, corporate governance was, by any measure, considerably worse than it is today, not better. As one recent study put it:

Corporate boards were predominately made up of insiders…or friends of the CEO from the “old boys’ network.” These directors had a largely advisory role, and would rarely overturn or even mount major challenges to CEO decisions.

Shareholders, meanwhile, had fewer rights and were less active. Since then, we’ve seen a host of reforms that have given shareholders more power and made boards more diverse and independent. If CEO compensation were primarily the result of bad corporate governance, these changes should have had at least some effect. They haven’t. In fact, CEO pay has continued to rise at a brisk rate.

It’s possible, of course, that further reform of corporate governance (like giving shareholders the ability to cast a binding vote on CEO pay packages) will change this dynamic, but it seems unlikely. After all, companies with private owners—who have total control over how much to pay their executives—pay their CEOs absurd salaries, too. And CEOs who come into a company from outside—meaning that they have no sway at all over the board—actually get paid more than inside candidates, not less. Since 2010, shareholders have been able to show their approval or disapproval of CEO pay packages by casting nonbinding “say on pay” votes. Almost all of those packages have been approved by large margins. (This year, for instance, these packages were supported, on average, by 95 percent of the votes cast.)

Similarly, while money managers do reap the benefits of opaque and overpriced fees for their advice and management of portfolios, particularly when dealing with ordinary investors (who sometimes don’t understand what they’re paying for), it’s hard to make the case that this is why they’re so much richer than they used to be. In the first place, opaque as they are, fees are actually easier to understand than they once were, and money managers face considerably more competition than before, particularly from low-cost index funds. And when it comes to hedge fund managers, their fee structure hasn’t changed much over the years, and their clients are typically reasonably sophisticated investors. It seems improbable that hedge fund managers have somehow gotten better at fooling their clients with “uncompetitive and often undisclosed fees.”

So what’s really going on? Something much simpler: asset managers are just managing much more money than they used to, because there’s much more capital in the markets than there once was. As recently as 1990, hedge funds managed a total of $38.9 billion. Today, it’s closer to $3 trillion. Mutual funds in the US had $1.6 trillion in assets in 1992. Today, it’s more than $16 trillion. And that means that an asset manager today can get paid far better than an asset manager was twenty years ago, even without doing a better job.
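The arithmetic behind this point can be sketched directly. In a minimal back-of-the-envelope illustration (the 2 percent management-fee rate below is a hypothetical assumption for the sketch, not a figure from the review), holding the fee rate fixed while the asset pool grows means fee income grows by exactly the same multiple, with no improvement in performance required:

```python
def fee_income(aum_dollars: float, fee_rate: float = 0.02) -> float:
    """Annual management-fee income on a pool of assets.

    fee_rate is a hypothetical flat 2% assumed for illustration only.
    """
    return aum_dollars * fee_rate

# Industry-wide hedge fund assets as cited in the review:
aum_1990 = 38.9e9    # $38.9 billion in 1990
aum_today = 3.0e12   # roughly $3 trillion today

growth_multiple = fee_income(aum_today) / fee_income(aum_1990)
print(f"With an unchanged fee rate, total fee income grows "
      f"roughly {growth_multiple:.0f}-fold")
```

Since fee income is simply rate times assets, the roughly 77-fold growth in hedge fund assets translates into roughly 77-fold growth in aggregate fees, even if fee structures and manager skill stayed exactly where they were.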

This doesn’t mean that asset managers or corporate executives “deserve” what they earn. In fact, there’s no convincing evidence that CEOs are any better, in relative terms, than they once were, and plenty of evidence that they are paid more than they need to be, in view of their performance. Similarly, asset managers haven’t gotten better at beating the market. The point, though, is that attributing the rise in their pay to corruption, or bad rules, doesn’t get us that far. More important, probably, has been the rise of ideological assumptions about the indispensability of CEOs, and changes in social norms that made it seem like executives should take whatever they could get. (Stiglitz alludes to these in The Price of Inequality, writing, “Norms of what was ‘fair’ changed, too.”) Discussions of shifts in norms often become what the economist Robert Solow once called a “blaze of amateur sociology.” But that doesn’t mean we can afford to ignore those shifts, either, since the rise of the one percent has been propelled by ideological changes as much as by economic or regulatory ones.

Complicating Stiglitz’s account of the rise of the one percent is not just an intellectual exercise. It actually has important consequences for thinking about how we can best deal with inequality. Strategies for reducing inequality can generally be put into two categories: those that try to improve the pretax distribution of income (this is sometimes called, clunkily, predistribution) and those that use taxes and transfers to change the post-tax distribution of income (this is what we usually think of as redistribution). Increasing the minimum wage is an example of predistribution. Medicaid is redistribution.

Stiglitz’s agenda for policy—which is sketched in The Great Divide, and laid out in comprehensive detail in Rewriting the Rules—relies on both kinds of strategies, but he has high hopes that better rules, designed to curb rent-seeking, will have a meaningful impact on the pretax distribution of income. Among other things, he wants much tighter regulation of the financial sector. He wants to loosen intellectual property restrictions (which will reduce the value of patents), and have the government aggressively enforce antitrust laws. He wants to reform corporate governance so CEOs have less influence over corporate boards and shareholders have more say over CEO pay. He wants to limit tax breaks that encourage the use of stock options. And he wants asset managers to “publicly disclose holdings, returns, and fee structures.” In addition to bringing down the income of the wealthiest Americans, he advocates measures like a higher minimum wage and laws encouraging stronger unions, to raise the income of ordinary Americans (though this is not the main focus of The Great Divide).

These are almost all excellent suggestions. And were they enacted, some—including above all tighter regulation of the financial industry—would have an impact on corporate rents and inequality. But it would be surprising if these rules did all that much to shrink the income of much of the one percent, precisely because improvements in corporate governance and asset managers’ transparency are likely to have a limited effect on CEO salaries and money managers’ compensation.

This is not a counsel of despair, though. In the first place, these rules would be good things for the economy as a whole, making it more efficient and competitive. More important, the second half of Stiglitz’s agenda—redistribution via taxes and transfers—remains a tremendously powerful tool for dealing with inequality. After all, while pretax inequality is a problem in its own right, what’s most destructive is soaring posttax inequality. And it’s posttax inequality that most distinguishes the US from other developed countries. As Stiglitz writes:

Some other countries have as much, or almost as much, before-tax and transfer inequality; but those countries that have allowed market forces to play out in this way then trim back the inequality through taxes and transfer and the provision of public services.

The redistributive policies Stiglitz advocates look pretty much like what you’d expect. On the tax front, he wants to raise taxes on the highest earners and on capital gains, institute a carbon tax and a financial transactions tax, and cut corporate subsidies. But dealing with inequality isn’t just about taxation. It’s also about investing. As he puts it, “If we spent more on education, health, and infrastructure, we would strengthen our economy, now and in the future.” So he wants more investment in schools, infrastructure, and basic research.

If you’re a free-market fundamentalist, this sounds disastrous—a recipe for taking money away from the job creators and giving it to government, which will just waste it on bridges to nowhere. But here is where Stiglitz’s academic work and his political perspective intersect most clearly. The core insight of Stiglitz’s research has been that, left on their own, markets are not perfect, and that smart policy can nudge them in better directions.

Indeed, Creating a Learning Society is dedicated to showing how developing countries can use government policy to become high-growth, knowledge-intensive economies, rather than remaining low-cost producers of commodities. The book is only suggestive about what this means for the future of the US, but Stiglitz argues that the government should play a major role in the ongoing “structural transformation” of the economy.

Of course, the political challenge in doing any of this (let alone all of it) is immense, in part because inequality makes it harder to fix inequality. And even for progressives, the very familiarity of the tax-and-transfer agenda may make it seem less appealing. After all, the policies that Stiglitz is calling for are, in their essence, not much different from the policies that shaped the US in the postwar era: high marginal tax rates on the rich and meaningful investment in public infrastructure, education, and technology. Yet there’s a reason people have never stopped pushing for those policies: they worked. And as Stiglitz writes, “Just because you’ve heard it before doesn’t mean we shouldn’t try it again.”