Urban mural by street artist Estoy
In San Francisco, the tech culture wars continue to rage. On April 15, Google opened up purchases of its Google Glass headgear to the general public for 24 hours. The sale was marked by mockery, theft and the continuing fallout from an incident a few days earlier, when a Business Insider reporter covering an anti-eviction protest had his Glass snatched and smashed.
That same day, protesters organized by San Francisco’s most powerful union marched to Twitter’s headquarters — a major San Francisco gentrification battleground — and presented the company with a symbolic tax bill, designed to recoup the “millions” that some San Franciscans believe the city squandered by bribing Twitter with a huge tax break to stay in the city.
We learned two things on April 15. First, Google isn’t about to give up on its plans to make Glass the second coming of the iPhone, even if it’s clear that a significant number of people consider Google Glass to be a despicable symbol of the surveillance society and a pricey calling card of the techno-elite. Second, judging by the march on Twitter, the tide of anti-tech protest sentiment has yet to crest in the San Francisco Bay Area. The two points turn out to be inseparable. Scratch an anti-tech protester and you are unlikely to find a fan of Google Glass.
What’s it all mean? Earlier this week, after I promoted an article on Twitter that attempted to explore reasons for anti-Glass hatred, I received a one-word tweet in response: “Neoluddism.”
The Luddites of the early 19th century are famous for smashing weaving machinery in a fruitless attempt to resist the reshaping of society and the economy by the Industrial Revolution. They took their name from Ned Ludd, a possibly apocryphal character who is said to have smashed two stocking frames in a fit of rage — thus inspiring a rebellion. While I can’t be certain, I suspect that my correspondent was deploying the term in the sense most familiar to pop culture — the Luddite as barbarian goon, futilely standing against the relentless march of progress.
But the story isn’t quite that simple. Yes, the Luddite movement may have been smashed by the forces of the state and the newly ascendant industrialist bourgeoisie. Yes, the Luddites may never have had the remotest chance of maintaining their pre-industrial way of life in the face of the steam engine. But there is a version of history in which the Luddites were far from unthinking goons. Instead, they were acute critics of their changing times, grasping the first glimpse of the increasingly potent ways in which capital was learning to exploit labor. In this view, the Luddites were actually the avant-garde for the formation of working-class consciousness, and paved the way for the rise of organized labor and trade unions. It’s no accident that Ned Ludd hailed from Nottingham, right up against Sherwood Forest.
Economic inequality and technologically induced dislocation? Ned Ludd, that infamous wrecker of weaving machinery, would recognize a clear echo of his own time in present-day San Francisco. But there’s more to see here than just the challenge of a new technological revolution. Just as the Luddites, despite their failure, spurred the creation of working-class consciousness, the current Bay Area tech protests have had a pronounced political effect. While the tactics range from savvy, well-organized protest marches to juvenile acts of violence, the impact is clear. The attention of political leaders and the media has been engaged. Everyone is paying attention.
* * *
If you live in San Francisco, you may have seen them around town: Decals on bar windows that state “Google Glass is barred on these premises.” They are the work of an outfit called StopTheCyborgs.org, a group of scientists and engineers who have articulated a critique of Google Glass that steers cagily away from the half-baked nonsense of Counterforce.
I contacted StopTheCyborgs by email and asked them how they responded to being called “neoluddites.”
“If ‘neoluddism’ means blindly being anti-technology then we refute the charge,” said Jack Winters, who described himself as a Scala and Java developer. “If ‘neoluddism’ means not being blindly pro-technology then guilty as charged.”
“We are technologically sophisticated enough to realize that technology is politics and code is law,” continued Winters. “Technology isn’t some external force of nature. It is created and used by people. It has an effect on power relations. It can be good, it can be bad. We can choose what kind of society we want rather than passively accepting that ‘the future’ is whatever data-mining corporations want.”
“Basically anyone who views critics of particular technologies as ‘luddites’ fundamentally misunderstands what technology is. There is no such thing as ‘technology.’ Rather there are specific technologies, produced by specific economic and political actors, and deployed in specific economic and social contexts. You can be anti-nukes without being anti-antibiotics. You can be pro-surveillance of powerful institutions without being pro-surveillance of individual people. You can work on machine vision for medical applications while campaigning against the use of the same technology for automatically identifying and tracking people. How? Because you take a moral view of the likely consequences of a technology in a particular context.” [Emphasis added.]
The argument made by StopTheCyborgs resonates with one of the core observations that revisionist historians have made about the original Luddites: They were not indiscriminate in their assaults on technology. (At least not at first.) They chose to destroy machines that were owned by employers who were acting in ways they believed were particularly economically harmful, while leaving other machines undamaged. To translate that to a present-day stance: It is not hypocritical for protesters to argue that Glass embodies surveillance in a way that iPhones don’t, or to critique technology’s impact on inequality via Twitter or Facebook. Every mode of technology needs to be evaluated on its own merits. Some start-up entrepreneurs might legitimately be using technology to achieve a social good. Some tech tycoons might be genuinely committed to a higher standard of life for all San Franciscans. Some might just be tools. So Jack Winters of StopTheCyborgs is correct: The deployments of different technologies have different consequences. These consequences require a social and political response.
This is not to say that ripping Google Glass from the face of a young reporter, or otherwise demonizing individuals just because they happen to be employed by a particular company, is comparable to Ned Ludd’s destruction of two stocking frames. But Glass is just as embedded in the larger transformations we are going through as the spinning jenny was to the Industrial Revolution. By taking it seriously, we are giving “the second machine age” the respect it deserves.
The question is: Is Google?
* * *
I tried to find out from Google how many units of Google Glass had been sold during the one-day special promotion. I received a statement that read, “We were getting through our stock faster than we expected, so we decided to shut the store down. While you can still access the site, Glass will be marked as sold out.”
I followed up by asking how Google was coping with the fact that its signature device had become a symbol of tech-economy driven gentrification.
“It’s early days and we are thinking very carefully about how we design Glass because new technology always raises new issues,” said a Google spokesperson. “Our Glass Explorers come from all walks of life. They are firefighters, gardeners, athletes, moms, dads and doctors. No one should be targeted simply because of the technology they choose. We find that when people actually try Glass firsthand, they understand the philosophy that underpins it: Glass lets you look up and engage with the world around you, rather than looking down and being distracted by your technology.”
You can hear an echo here of Ned Ludd in the statement that “new technology raises new issues.” But the rest is just marketing zombie chatter, about as useless in its own way as some of the more overheated and unhinged rhetoric from the more extreme dissident wings of Bay Area protest. When a group styling itself “Counterforce” shows up at the home of a Google executive, demands $3 billion to build “anarchist colonies” and declares, as Adrianne Jeffries documented in The Verge, that their goal is to “to destroy the capitalist system … [and] … create a new world without an economy,” well, good luck with that. We are a long way from “the precipice of a complete anarcho-primitivist rebellion against the technocracy.”
One thing seems reasonably clear: Moms and firefighters might be wearing Google Glass, but if Ned Ludd were around today, he’d probably be looking for different accessories.
Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.
* * *
There’s nothing “normal” about having a middle class. Having a middle class is a choice that a society has to make, and it’s a choice we need to make again in this generation, if we want to stop the destruction of the remnants of the last generation’s middle class.
Despite what you might read in the Wall Street Journal or see on Fox News, capitalism is not an economic system that produces a middle class. In fact, if left to its own devices, capitalism tends towards vast levels of inequality and monopoly. The natural and most stable state of capitalism actually looks a lot like the Victorian England depicted in Charles Dickens’ novels.
At the top there is a very small class of superrich. Below them, there is a slightly larger, but still very small, “middle” class of professionals and mercantilists – doctors, lawyers, shop-owners – who help keep things running for the superrich and supply the working poor with their needs. And at the very bottom there is the great mass of people – typically over 90 percent of the population – who make up the working poor. They have no wealth – in fact they’re typically in debt most of their lives – and can barely survive on what little money they make.
So, for average working people, there is no such thing as a middle class in “normal” capitalism. Wealth accumulates at the very top among the elites, not among everyday working people. Inequality is the default option.
You can see this trend today in America. When we had heavily regulated and taxed capitalism in the post-war era, the largest employer in America was General Motors, and they paid working people what would be, in today’s dollars, about $50 an hour with benefits. Reagan began deregulating and cutting taxes on capitalism in 1981, and today, with more classical “raw capitalism” – what we call “Reaganomics,” or “supply side economics” – our nation’s largest employer is Walmart, and they pay around $10 an hour.
This is how quickly capitalism reorients itself when the brakes of regulation and taxes are removed – this huge change was done in less than 35 years.
The only ways a working-class “middle class” can come about in a capitalist society are by massive social upheaval – a middle class emerged after the Black Plague in Europe in the 14th century – or by heavily taxing the rich.
French economist Thomas Piketty has talked about this at great length in his groundbreaking new book, Capital in the Twenty-First Century. He argues that the middle class that came about in Western Europe and the United States during the mid-twentieth century was the direct result of a peculiar set of historical events.
According to Piketty, the post-World War II middle class was created by two major things: the destruction of European inherited wealth during the war and higher taxes on the rich, most of which were rationalized by the war. This brought wealth and income at the top down, and raised working people up into a middle class.
Piketty is right, especially about the importance of high marginal tax rates and inheritance taxes being necessary for the creation of a middle class that includes working-class people. Progressive taxation, when done correctly, pushes wages down to working people and reduces the incentives for the very rich to pillage their companies or rip off their workers. After all, why take another billion when 91 percent of it is just going to be paid in taxes?
This is the main reason why, when GM was our largest employer and our working class were also in the middle class, CEOs only took home 30 times what working people did. Throughout the entire period in which America’s middle class was created, the top tax rate was between 74 and 91 percent. Until, of course, Reagan dropped it to 28 percent, and working people moved from the middle class to becoming the working poor.
Other policies, like protective tariffs and strong labor laws also help build a middle class, but progressive taxation is the most important because it is the most direct way to transfer money from the rich to the working poor, and to create a disincentive to theft or monopoly by those at the top.
History shows how important high taxes on the rich are for creating a strong middle class.
If you compare a chart showing the historical top income tax rate over the course of the twentieth century with a chart of income inequality in the United States over roughly the same time period, you’ll see that the period with the highest taxes on the rich – the period between the Roosevelt and Reagan administrations – was also the period with the lowest levels of economic inequality.
Even more striking, in the 33 years since Reagan took office and started cutting taxes on the rich, income levels for the top 1 percent have ballooned while income levels for everyone else have stayed pretty much flat.
Coincidence? I think not.
Creating a middle class is always a choice, and by embracing Reaganomics and cutting taxes on the rich, we decided back in 1980 that, within a generation or two, we would no longer have a middle class. George H.W. Bush saw this, and correctly called it “Voodoo Economics.” And we’re still in the era of Reaganomics – as President Obama recently pointed out, Reagan was a successful revolutionary.
This, of course, is exactly what conservatives always push for. When wealth is spread more equally among all parts of society, people start to expect more from society and start demanding more rights. That leads to social instability, which is feared and hated by conservatives, even though revolutionaries and liberals like Thomas Jefferson welcomed it.
And, as Russell Kirk and William F. Buckley predicted back in the 1950s, this is exactly what happened in the 1960s and ’70s when taxes on the rich were at their highest. The Civil Rights movement, the women’s movement, the consumer movement, the anti-war movement, and the environmental movement – social movements that grew out of the wealth and rising expectations of the post-World War II era’s middle class – these all terrified conservatives. Which is why, ever since they took power in 1980, they’ve made forcing working people out of the middle class their number one goal.
We now have a choice in this country. We can either continue going down the road to oligarchy, the road we’ve been on since the Reagan years, or we can choose to go on the road to a more pluralistic society with working-class people able to make it into the middle class. We can’t have both.
And if we want to go down the road to letting working people back into the middle class, it all starts with taxing the rich.
The time is long past due for us to roll back the Reagan tax cuts.
* * *
On February 7, President Obama signed legislation cutting $8.7 billion from the Supplemental Nutrition Assistance Program (SNAP), also known as food stamps, over the next ten years. This latest cut comes on top of an across-the-board 5 percent reduction of benefits to all food stamp recipients last November.
Jennifer Noonan (24) with her children Wenona and Taima at her home in the Portland area. Jennifer saw a $49 drop in her monthly SNAP benefits after the November 2013 cuts.
In 2012, there were 49 million people in the US who were “food insecure” at some point throughout the year, according to the US Department of Agriculture, meaning that nearly 50 million individuals (including 16 million children in nearly 18 million households) “did not have access, at least part of the year, to enough food for an active, healthy life.” That is, one out of five children in the United States lives in a household that cannot afford enough food and does not get enough to eat.
Food insecure households in the United States, according to Joel Berg of the NY Coalition Against Hunger, are families that are forced into a position of having to ration food, or “choosing between food and rent, choosing between food and health care—parents going without meals so that they can feed their children, or children having to sometimes go through the dumpsters in the back of their school to get a meal.”
“Food insecurity is basically hunger in the American context, it’s not necessarily people starving in the streets like North Korea or Somalia,” Berg stated in a recent interview with NPR. “We’re the only major industrialized Western society on the planet that has this high a level of hunger and this high a level of poverty. And this is a country that has so many billionaires, merely only having a billion dollars doesn’t get you on the Forbes 400 list any more…so it’s incredible that even though we don’t have Somalia-type starvation, that we do have mass deprivation—and the only reason we don’t have mass third-world style starvation is because of the very nutrition assistance programs [that are being cut].” Berg continued, “The SNAP program is what keeps us [the United States] from having mass famine and starvation.”
Last November’s SNAP cuts meant that every one of the nearly 50 million people who depend on these benefits to feed themselves and their families was hit with reductions, averaging around $30 per month, with many families facing even larger cuts. The latest round of cuts, enacted in February as part of the Farm Bill, will slash another $8.7 billion in spending on top of these, with devastating consequences for millions of Americans, hitting particularly hard the country’s most vulnerable people: children, the elderly and the disabled.
The WSWS spoke to Portland, Oregon resident Jennifer Noonan and her two young children Taima and Wenona. Jennifer, who is now 24 years old, grew up in poverty and was placed in foster care along with her sister as a child after her mother, a schizophrenic, and her father, a drug addict, were no longer able to properly care for them. She spent her teen years running away from group homes and bouncing back and forth between foster families in the Northwest and then had her first child at the age of 18. Before being placed in foster care, and even after, she has many memories of going hungry in her childhood. Before the cuts in SNAP funding at the end of 2013 she was receiving $524 per month and she is now getting $475.
* * *
GoPro cameras are branded as recording devices for extreme sports, but a San Francisco-based entrepreneur had a different idea of what to do with the camera: Strap it to a homeless man and capture “extreme living.”
The project is called Homeless GoPro, and it involves learning the first-person perspective of homeless people on the streets of San Francisco. The website explains:
“With a donated HERO3+ Silver Edition from GoPro and a small team of committed volunteers in San Francisco, Homeless GoPro explores how a camera normally associated with extreme sports and other ’hardcore’ activities can showcase courage, challenge, and humanity of a different sort - extreme living.”
The intentions of the founder, Kevin Adler, seem altruistic. His uncle was homeless for 30 years, and after visiting his gravesite he decided to start the organization and help others who are homeless.
The first volunteer to film his life is a man named Adam, who has been homeless for 30 years, six of those in San Francisco. There are several edited videos of him on the organization’s site.
In one of the videos, titled “Needs,” Adam says, “I notice every day that people are losing their compassion and empathy — not just for homeless people — but for society in general. I feel like technology has changed so much — where people are emailing and don’t talk face to face anymore.”
Without knowing it, Adam has critiqued the entire project, which is attempting to use technology (a GoPro) to garner empathy and compassion. It is a sad reminder that humanity can ignore the homeless population in person on a day-to-day basis, and needs a video to build empathy. Viewers may feel a twinge of guilt as they sit removed from the situation, watching a screen.
According to San Francisco’s Department of Human Services’ biennial count, there were 6,436 homeless people living in San Francisco (county and city). “Of the 6,436 homeless counted,” a press release stated, “more than half (3,401) were on the streets without shelter, the remaining 3,035 were residing in shelters, transitional housing, resource centers, residential treatment, jail or hospitals.” The homeless population is subject to hunger, illness, violence, extreme weather conditions, fear and other physical and emotional ailments.
Empathy — and the experience of “walking a mile in somebody’s shoes” — are important elements of social change, and these documentary-style videos do give Adam a medium and platform to be a voice for the homeless population. (One hopes that the organization also helped Adam in other ways — shelter, food, a place to stay on his birthday — and isn’t just using him as a human tool in its project.) But something about the project still seems off.
It is in part because of the product placement. GoPro donated a $300 camera for the cause, which sounds great until you remember that it is a billion-dollar company owned by billionaire Nick Woodman. If GoPro wants to do something to help the Bay Area homeless population, there are better ways to go about it than donating a camera.
As ValleyWag‘s Sam Biddle put it, “Stop thinking we can innovate our way out of one of civilization’s oldest ailments. Poverty, homelessness, and inequality are bigger than any app …”
* * *
Boston Globe technology writer Hiawatha Bray recalls the moment that inspired him to write his new book, You Are Here: From the Compass to GPS, the History and Future of How We Find Ourselves. “I got a phone around 2003 or so,” he says. “And when you turned the phone on—it was a Verizon dumb phone, it wasn’t anything fancy—it said, ‘GPS’. And I said, ‘GPS? There’s GPS in my phone?’” He asked around and discovered that yes, there was GPS in his phone, due to a 1994 FCC ruling. At the time, cellphone usage was increasing rapidly, but 911 and other emergency responders could only accurately track the location of land line callers. So the FCC decided that cellphone providers like Verizon must be able to give emergency responders a more accurate location of cellphone users calling 911. After discovering this, “It hit me,” Bray says. “We were about to enter a world in which…everybody had a cellphone, and that would also mean that we would know where everybody was. Somebody ought to write about that!”
So he began researching transformative events that led to our new ability to navigate (almost) anywhere. In addition, he discovered the military-led GPS and government-led mapping technologies that helped create new digital industries. The result of his curiosity is You Are Here, an entertaining, detailed history of how we evolved from primitive navigation tools to our current state of instant digital mapping—and, of course, governments’ subsequent ability to track us. The book was finished prior to the recent disappearance of Malaysia Airlines flight 370, but Bray says gaps in navigation and communication like that are now “few and far between.”
Here are 13 pivotal moments in the history of GPS tracking and digital mapping that Bray points out in You Are Here:
1st century: The Chinese begin writing about mysterious ladles made of lodestone. The ladle handles always point south when used during fortune-telling rituals. In the following centuries, lodestone’s magnetic abilities lead to the development of the first compasses.
2nd century: Ptolemy’s Geography is published and sets the standard for maps that use latitude and longitude.
1473: Abraham Zacuto begins working on solar declination tables. They take him five years, but once finished, the tables allow sailors to determine their latitude on any ocean.
1887: German physicist Heinrich Hertz creates electromagnetic waves, proof that electricity, magnetism, and light are related. His discovery inspires other inventors to experiment with radio and wireless transmissions.
1895: Italian inventor Guglielmo Marconi, one of those inventors inspired by Hertz’s experiment, attaches his radio transmitter antennae to the earth and sends telegraph messages miles away. Bray notes that there were many people before Marconi who had developed means of wireless communication. “Saying that Marconi invented the radio is like saying that Columbus discovered America,” he writes. But sending messages over long distances was Marconi’s great breakthrough.
1958: Approximately six months after the Soviets launched Sputnik, Frank McClure, the research director at Johns Hopkins Applied Physics Laboratory, calls physicists William Guier and George Weiffenbach into his office. Guier and Weiffenbach used radio receivers to listen to Sputnik’s consistent electronic beeping and calculate the Soviet satellite’s location; McClure wants to know if the process could work in reverse, allowing a listener to use a satellite’s signal to pinpoint his own location on earth. The foundation for GPS tracking is born.
1969: A pair of Bell Labs scientists named William Boyle and George Smith create a silicon chip that records light and converts it into digital data. It is called a charge-coupled device, or CCD, and serves as the basis for digital photography used in spy and mapping satellites.
1976: The top-secret, school-bus-size KH-11 satellite is launched. It uses Boyle and Smith’s CCD technology to take the first digital spy photographs. Prior to this digital technology, actual film was used for making spy photographs. It was a risky and dangerous venture for pilots like Francis Gary Powers, who was shot down while flying a U-2 spy plane and taking film photographs over the Soviet Union in 1960.
1983: Korean Air Lines flight 007 is shot down after leaving Anchorage, Alaska, and veering into Soviet airspace. All 269 passengers are killed, including Georgia Democratic Rep. Larry McDonald. Two weeks after the attack, President Ronald Reagan directs the military’s GPS technology to be made available for civilian use so that similar tragedies would not be repeated. Bray notes, however, that GPS technology had always been intended to be made public eventually.
1989: The US Census Bureau releases TIGER (Topologically Integrated Geographic Encoding and Referencing) into the public domain. The digital map data allows any individual or company to create virtual maps.
1994: The FCC declares that wireless carriers must find ways for emergency services to locate mobile 911 callers. Cellphone companies choose to use their cellphone towers to comply. However, entrepreneurs begin to see the potential for GPS-integrated phones, as well. Bray highlights SnapTrack, a company that figures out early on how to squeeze GPS systems into phones—and is purchased by Qualcomm in 2000 for $1 billion.
1996: GeoSystems launches an internet-based mapping service called MapQuest, which uses the Census Bureau’s public-domain mapping data. It attracts hundreds of thousands of users and is purchased by AOL four years later for $1.1 billion.
2004: Google buys Australian mapping startup Where 2 Technologies and American satellite photography company Keyhole for undisclosed amounts. The next year, they launch Google Maps, which is now the most-used mobile app in the world.
2012: The Supreme Court ruling in United States v. Jones restricts police usage of GPS to track suspected criminals. Bray tells the story of Antoine Jones, who was convicted of dealing cocaine after police placed a GPS device on his wife’s Jeep to track his movements. The court’s decision in his case is unanimous: The GPS device had been placed without a valid search warrant. Despite the unanimous decision, just five justices signed off on the majority opinion. Others wanted further privacy protections in such cases—a mixed decision that leaves future battles for privacy open to interpretation.
* * *
“No, no. You’ve got something the test and machines will never be able to measure: you’re artistic. That’s one of the tragedies of our times, that no machine has ever been built that can recognize that quality, appreciate it, foster it, sympathize with it.” —Paul Proteus to his wife Anita in Kurt Vonnegut’s Player Piano
“So much depends upon a red wheel barrow glazed with rain water beside the white chickens” is, essentially, a grammatical sentence in the English language. While the syntax is somewhat out of the norm, the diction is accessible to small children—the hardest word likely being “depends.” But “The Red Wheelbarrow” by William Carlos Williams is much more than a sentence; it is a poem:
so much depends
upon

a red wheel
barrow

glazed with rain
water

beside the white
chickens
A relatively simple sentence shaped into purposeful lines and stanzas becomes poetry. And like Langston Hughes’ “Harlem” and Gwendolyn Brooks’ “We Real Cool,” it sparks in me a profoundly important response each time I read these poems: I wish I had written that. It is the same awe and wonder I felt as a shy, self-conscious teenager when I bought, collected and read comic books, marveling at the artwork I wished I had drawn.
Will we wake one morning soon to find the carcasses of poems washed up on the beach by the tsunami of the Common Core?
That question, especially during National Poetry Month, haunts me more every day, notably because of the double-impending doom augured by the Common Core: the rise of nonfiction (and the concurrent erasing of poetry and fiction) from the ELA curriculum, and the mantra-of-the-moment, “close reading” (the sheep’s clothing for that familiar old wolf, New Criticism).
We have come to a moment in the history of the U.S. when we no longer even pretend to care about art. And poetry is the most human of the arts—the very human effort to make order out of chaos, meaning out of the meaningless: “Daddy, daddy, you bastard, I’m through” (Sylvia Plath, “Daddy”).
The course was speech, taught by Mr. Brannon. I was a freshman at a junior college just 15-20 miles from my home. Despite the college’s close proximity to my home, my father insisted I live on campus. But that class and those first two years of college were more than living on campus; they were the essential beginning of my life.
In one of the earliest classes, Mr. Brannon read aloud and gave us a copy of “[in Just-]“ by e. e. cummings. I imagine that moment was, for me, what many people describe as a religious experience. That was more than 30 years ago, but I own two precious books that followed from that day in class: cummings’ Complete Poems and Selected Poems. Several years later, Emily Dickinson‘s Complete Poems would join my commitment to reading every poem by those poets who made me respond over and over, I wish I had written that.
But my introduction to cummings was more than just finding the poetry I wanted to read; it was when I realized I was a poet. Now, when the words “j was young&happy” come to me, I know there is work to do—I recognize the gift of poetry.
As a high school English teacher, I divided my academic year into quarters by genre/form: nonfiction, poetry, short fiction, and novels/plays. The poetry quarter, when announced to students, initially received moans and even direct complaints: “I hate poetry.” That always broke my heart. Life and school had already taken something very precious from these young people:
children guessed (but only a few
and down they forgot as up they grew…

(“[anyone lived in a pretty how town],” e.e. cummings)
I began to teach poetry in conjunction with popular songs. Although my students in rural South Carolina were overwhelmingly country music fans, I focused my nine weeks of poetry on the songs of alternative group R.E.M. At first, that too elicited moans from students in those early days of exploring poetry (see that unit on the blog “There’s time to teach”).
Concurrently, throughout my high school teaching career, students would gather in my room during our long mid-morning break and lunch (much to the chagrin of administration). And almost always, we played music, even closing the door so two of my students could dance and sing and laugh along with the Violent Femmes.
Many of those students are in their 30s and 40s, but it is common for them to contact me—often on Facebook—and recall fondly R.E.M. and our poetry unit. Those days meant something to them that lingers, that matters in ways that cannot be measured. It was an oasis of happiness in their lives at school.
e.e. cummings begins “since feeling is first,” and then adds:
my blood approves,
and kisses are a better fate
than wisdom
lady i swear by all flowers. Don’t cry
—the best gesture of my brain is less than
your eyelids’ flutter….
Each year when my students and I examined this poem, we would discuss that cummings—in Andrew Marvell fashion—offers an argument that is profoundly unlike what parents, teachers, preachers, and politicians claim.
I often paired this poem with Coldplay’s “The Scientist,” focusing on:
I was just guessing at numbers and figures
Pulling your puzzles apart
Questions of science, science and progress
Do not speak as loud as my heart
Especially for teenagers, this question, this tension between heart and mind, mattered, just as it has recurred in the words of poets and musicians across decades and centuries. Poetry, as with all art, is the expressed heart—that quest to rise above our corporeal humanness:
I have loved a few people intensely—so deeply that my love, I believe, resides permanently in my bones. One such love is my daughter, and she now carries the next human who will add to that ache of being fully human—loving another beyond words.
And that is poetry.
Poetry is not identifying iambic pentameter on a poetry test or discussing the nuances of enjambment in an analysis of a Dickinson poem.
Poems are not fodder for close reading.
Poetry is the ineluctable “Oh my heart” that comes from living fully in the moment, the moment that draws us to words as well as inspires us toward words.
We read a poem, we listen to a song, and our hearts rise out of our eyes as tears.
That is poetry.
Like the picture books of our childhood, poetry must be a part of our learning, essential to our school days—each poem an oasis of happiness that “machines will never be able to measure.”
Will we wake one morning to find the carcasses of poems washed up on the beach by the tsunami of the Common Core?
Maybe the doomsayers are wrong. Maybe poetry will not be erased from our classrooms. School with less poetry is school with less heart. School with no poetry is school with no heart.
Both are tragic mistakes, because if school needs anything, it is more heart. And poetry? Oh my heart.
This piece originally appeared on the Becoming Radical blog.
Political scientists show that the average American has “near-zero” influence on policy outcomes, but their groundbreaking study is not without problems.
It’s not every day that an academic article in the arcane world of American political science makes headlines around the world, but then again, these aren’t normal days either. On Wednesday, various mainstream media outlets — including even the conservative British daily The Telegraph — ran a series of articles with essentially the same title: “Study finds that US is an oligarchy.” Or, as the Washington Post summed up: “Rich people rule!” The paper, according to the review in the Post, “should reshape how we think about American democracy.”
The conclusion sounds like it could have come straight out of a general assembly or drum circle at Zuccotti Park, but the authors of the paper in question — two professors of politics at Princeton and Northwestern University — aren’t quite of the radical dreadlocked variety. No, like Piketty’s book, this article is real “science”. It’s even got numbers in it! Martin Gilens of Princeton and Benjamin Page of Northwestern took a dataset of 1,779 policy issues, ran a bunch of regressions, and basically found that the United States is not a democracy after all:
Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence. The results provide substantial support for theories of Economic Elite Domination and for theories of Biased Pluralism, but not for theories of Majoritarian Electoral Democracy or Majoritarian Pluralism.
The findings, of course, are both very interesting and very obvious. What Gilens and Page claim to have empirically demonstrated is that policy outcomes by and large favor the interests of business and the wealthiest segment of the population, while the preferences of the vast majority of Americans are of little to no consequence for policy outcomes. As the authors show, this new data backs up the conclusions of a number of long-forgotten studies from the 1950s and 1960s — not least the landmark contributions by C. Wright Mills and Ralph Miliband — that tried to debunk the assertion of mainstream pluralist scholars that no single interest group dominates US policymaking.
But while Gilens and Page’s study will undoubtedly be considered a milestone in the study of business power, there’s also a risk in focusing too narrowly on the elites and their interest groups themselves; namely the risk of losing sight of the broader set of social relations and institutional arrangements in which they are embedded. What I am referring to, of course, is the dreaded C-word: capitalism — a term that appears only once in the main body of Gilens and Page’s text, in a superficial reference to The Communist Manifesto, whose claims are quickly dismissed as empirically untestable. How can you talk about oligarchy and economic elites without talking about capitalism?
What’s missing from the analysis is therefore precisely what was missing from C.W. Mills’ and Miliband’s studies: an account of the nature of the capitalist state as such. By branding the US political system an “oligarchy”, the authors conveniently sidestep an even thornier question: what if oligarchy, as opposed to democracy, is actually the natural political form in capitalist society? What if the capitalist state is by its very definition an oligarchic form of domination? If that’s the case, the authors have merely proved the obvious: that the United States is a thoroughly capitalist society. Congratulations for figuring that one out! They should have just called a spade a spade.
That, of course, wouldn’t have raised many eyebrows. But it’s worth noting that this was precisely the critique that Nicos Poulantzas leveled at Ralph Miliband in the New Left Review in the early 1970s — and it doesn’t take an Althusserian structuralist to see that he had a point. Miliband’s study of capitalist elites, Poulantzas showed, was very useful for debunking pluralist illusions about the democratic nature of US politics, but by focusing narrowly on elite preferences and the “instrumental” use of political and economic resources to influence policy, Miliband’s empiricism ceded way too much methodological ground to “bourgeois” political science. By trying to painstakingly prove the existence of a causal relationship between instrumental elite behavior and policy outcomes, Miliband ended up missing the bigger picture: the class bias inherent in the capitalist state itself, irrespective of who occupies it.
These methodological and theoretical limitations have consequences that extend far beyond the academic debate: at the end of the day, these are political questions. The way we perceive business power and define the capitalist state will inevitably have serious implications for our political strategies. The danger with empirical studies that narrowly emphasize the role of elites at the expense of the deeper structural sources of capitalist power is that they will end up reinforcing the illusion that simply replacing the elites and “taking money out of politics” would be sufficient to restore democracy to its past glory. That, of course, would be profoundly misleading. If we are serious about unseating the oligarchs from power, let’s make sure not to get carried away by the numbers and not to lose sight of the bigger picture.
Jerome Roos is a PhD candidate in International Political Economy at the European University Institute, and founding editor of ROAR Magazine.
Street art in Ecuador by street artist Vez Rayas. Photo by Vez Rayas.