Why the Birth of Shakespeare Is the Birth of Modern Art

April 22, 2014, 9:50 PM

April 23, 2014, marks the 450th birthday of William Shakespeare, one of the greatest writers of all time and an inescapable influence not just on literature, but on every form of culture since the 19th century. Although the canon of plays was more or less established with the publication of The First Folio in 1623, Shakespeare had to wait for wider acclaim until the Romantic era of the 1800s, when critics such as Samuel Taylor Coleridge and August Wilhelm Schlegel first spread the Gospel of Will, which would soon blossom into full bardolatry. In many ways, the Romantic era never ended, and we are the “last” Romantics, full of ideas of individuality, imagination, and even love that would be totally foreign to the classical world. Even those who accept that the Romantic era is over see it as a Post-Romantic era, a time defined by what it can no longer be. This Romantic or Post-Romantic world gave birth to Modern art. So, by an almost Biblical series of begats, you can say that the birth of Shakespeare is the birth of Modern art, the birth of how we see the world within and the world without today.

During Shakespeare’s own lifetime he was known best as the “honey-tongued” poet of such works as Venus and Adonis and The Rape of Lucrece, in which he turned classical and ancient characters to his own artistic purposes as well as to the practical purpose of making money during the plague-forced theater closures of 1593-1594. Readers literally read published copies of these works to pieces, making surviving copies extremely rare today. People went to see the plays, of course, but the theaters emphasized making money as much as making art. Publishing plays never became a priority because it never seemed profitable enough. It was Shakespeare’s friend and rival Ben Jonson who believed that publishing one’s works in a collected edition could serve both practical and artistic purposes. Jonson published his own collected works in 1616 and pushed for the posthumous collection of Shakespeare’s works in 1623, both of which served as templates for the collected works of contemporaries such as Beaumont and Fletcher, collections that essentially established the study of “modern” (that is, 16th-century) literature as an art form as worthy of attention as the already well-studied classics. Yes, Jonson deserves credit for making the initial push, but it was the inspiration of Shakespeare, as well as the lasting success of Shakespeare’s works in print, that set in motion what we know as literature today.

Once the Romantics got hold of Shakespeare, however, they turned the 16th century author into a 19th century “modern” contemporary. T.S. Eliot later complained about this trend in his 1920 essay “Hamlet”:

These minds often find in Hamlet a vicarious existence for their own artistic realization. Such a mind had Goethe, who made of Hamlet a Werther; and such had Coleridge, who made of Hamlet a Coleridge; and probably neither of these men in writing about Hamlet remembered that his first business was to study a work of art.

While Eliot felt that the “first business was to study a work of art,” Goethe, Coleridge, and others felt that the reason behind that business was to make those works relevant to living, breathing people, even if that “made of Hamlet” the critic himself. Some argue that Shakespeare’s critical lull period during the 17th and 18th centuries owes something to the neo-classical tastes of the time in which individuality took a back seat to more communal ideals.

Once the modern taste for the individual took hold, however, Shakespeare found a home beyond England’s shores. American colonists staged plays by Shakespeare as early as 1750. “There is hardly a pioneer’s hut that does not contain a few odd volumes of Shakespeare,” Alexis de Tocqueville wrote in 1835 in Democracy in America. From the very beginning of the American experiment in democracy, Shakespeare and his individualized characters inspired a government of, by, and for the people, to paraphrase the Gettysburg Address of that notorious Shakespeare lover Abraham Lincoln. As kings fell and democracies rose throughout Europe in the 19th and 20th centuries, Shakespeare (often in vernacular translation) showed the way, sometimes in the form of music, as in Giuseppe Verdi’s operas Otello and Falstaff, which provided the popular soundtrack to the political movement by which modern Italy was born.

Modern, democratic societies longed for art that reflected their ideals and anxieties. So much modern art comes from the psychoanalytic ideas of Sigmund Freud, who mined ancient characters such as Oedipus for the infamous “complex,” but also plumbed the human psyche in the fictional person of Hamlet. The “-isms” of the 20th century also soon found new artistic uses for Shakespeare. German Expressionism, Russian Futurism, and European Marxism all explored new ways of staging the Bard to make the people understand their goals. More recently, art steeped philosophically in feminism, anti-colonialism, and sexualism views Shakespeare as friend or foe, but either way cannot escape the cultural gravitational pull of his massive influence.

Although the pedantic women of T.S. Eliot’s “The Love Song of J. Alfred Prufrock” “come and go/ talking of Michelangelo” as a badge of cultural knowing, Eliot alludes in that poem to no fewer than three Shakespeare plays (Henry IV Part II, Twelfth Night, and that old Coleridgean favorite, Hamlet). Even Eliot couldn’t avoid Shakespeare in the making of modern poetic art. So, as we wish the Bard a happy 450th (the last round-number anniversary some of us, including me, will likely see), we can wish him many, many more with the knowledge that we can join Ben Jonson’s tribute in that First Folio that Shakespeare “was not of an age, but for all time!”, including ours.

[Image: The “Chandos” Portrait of William Shakespeare (detail).]

“Cubed”: How the American office worker wound up in a box

From the Organization Man to open-plan design, a new history of the way the office has shaped the modern world

Over the past week, as I’ve been carrying around a copy of Nikil Saval’s “Cubed: A Secret History of the Workplace,” I’ve gotten some quizzical looks. “It’s a history of the office,” I’d explain, whereupon a good number of people would respond, “Well, that sounds boring.”

It isn’t. In fact, “Cubed” is anything but, despite an occasional tendency to read like a grad school thesis. The fact that anyone would expect it to be uninteresting is striking, though, and marks an unexpected limit to the narcissistic curiosity of the average literate American. The office, after all, is where most contemporary labor takes place and is a crucible of our culture. We’ve almost all worked in one. Of course it’s a subject that merits a thoroughly researched and analytical history, because the history of the office is also the history of us.

Saval’s book glides smoothly between his two primary subjects: the physical structure of offices and the social institution of white-collar work over the past 150 years or so. “Cubed” encompasses everything from the rise of the skyscraper to the entrance of women into the professional workplace to the mid-20th-century angst over grey-flannel-suit conformity to the dorm-like “fun” workplaces of Silicon Valley. His stance is skeptical, a welcome approach given that most writings on the contemporary workplace are rife with dubious claims to revolutionary innovation — office design or management gimmicks that bestselling authors indiscriminately pounce on like magpies seizing glittering bits of trash.

Some of the most fascinating parts of “Cubed” come in the book’s early chapters, in which Saval traces the roots of the modern office to the humble 19th-century countinghouse. Such firms — typically involved in the importation or transport of goods — were often housed in a single room, with one or more of the partners working a few feet away from a couple of clerks, who copied and filed documents, as well as a bookkeeper. A novice clerk might earn less than skilled manual laborers, but he had the potential to make significantly more, and he could boast the intangible but nevertheless significant status of working with his mind rather than his hands.



Even more formative to the identity of white-collar workers (so named for the detachable collars, changed daily in lieu of more regular washings of the actual shirt, that served as a badge of the class) was the proximity to the boss himself and the very real possibility of advancement to the role of partner. Who better to marry the boss’ daughter and take over when he retired? Well, one of the other clerks is who, and in this foundational rivalry much of the character of white-collar work was set. “Unlike their brothers in the factory, who had begun to see organizing on the shop floors as a way to counter the foul moods and arbitrary whims of their bosses,” Saval writes, “clerks saw themselves as potential bosses.” Blue-collar workers, by contrast, knew they weren’t going to graduate to running the company one day.

The current American ethic of self-help and individualism owes at least as much to the countinghouse as it does to the misty ideal of the Jeffersonian yeoman farmer. An ambitious clerk identified and sought to ingratiate himself with the partners, not his peers, who were, after all, his competitors. He was on his own, and had only himself to blame if he failed. A passion for self-education and self-improvement, via books and night schools and lecture series, took root and flourished. So did a culture of what Saval calls “unctuous male bonding,” the ancestor of the 20th century’s golf outings and three-martini lunches, customs that would make it that much harder for outsiders like women and ethnic minorities to rise in the ranks — once they managed to get into the ranks in the first place.

The meritocratic dream became a lot more elusive in the 1920s, when the number of Americans employed in white-collar work surpassed the number of blue-collar workers for the first time. Saval sees this stage in the evolution of the office as epitomized in the related boom in skyscrapers. The towering buildings, enabled by new technologies like elevators and steel-frame construction, were regarded as the concrete manifestation of American boldness and ambition; certainly modern architects, reaching for new heights of self-aggrandizement, encouraged that view. They rarely acknowledged that a skyscraper was, at heart, a hive of standardized cells in which human beings were slotted like interchangeable parts. Most of the workers inhabiting those cells, increasingly female, could not hope to climb to the executive suites.

Office workers had always made observers uncomfortable; the clerk, with his “minute leg, chalky face and hollow chest,” was one of the few members of the American multitude that Walt Whitman scorned to contain. After World War II, that unease grew into a nationwide obsession, bumping several works of not-so-pop sociology — “The Lonely Crowd,” “The Man in the Gray Flannel Suit,” “The Organization Man,” etc. — onto the bestseller lists. In turn, the challenge of managing this new breed of “knowledge workers” became the subject of intensive rumination and theorizing. Saval lists Douglas McGregor’s 1960 guide, “The Human Side of Enterprise,” as the seminal work in a field that would spawn such millionaire gurus as Tom Peters and Peter Drucker.

Office design — typically regimented rows of identical desks — was deemed in need of an overhaul as well, and perhaps the most rueful story Saval recounts is that of Robert Propst, hired to head the research wing of the Herman Miller company in 1958. Propst made an intensive study of white-collar work, and in 1964 Herman Miller unveiled his Action Office, a collection of pieces designed to support work on “thinking projects.” Saval praises this incarnation of the Action Office as the first instance in which “the aesthetics of design and progressive ideas about human needs were truly united,” but it didn’t entirely catch on until the unveiling of Action Office II, which incorporated wooden frames covered with fabric that served as short, modular, temporary walls.

Oh, what a difference 30 degrees makes! Propst’s original Action Office II arranged these walls at 120-degree angles, a configuration that looked and felt open and dynamic. One of Propst’s colleagues told Saval of the dismal day that “someone figured out you don’t need the 120-degree [angle] and it went click.” Ninety degrees used up the available space much more efficiently, enabling employers to cram in more workstations. The American office worker had been cubed.

The later chapters of “Cubed” speak to more recent trends, like the all-inclusive, company-town-like facilities of tech firms like Google and wacky experiments in which no one is allowed to have a permanent desk at all. Saval visits the Los Angeles office of the ad agency Chiat/Day, which is organized around an artificial main street, features a conference table made of surfboards and resembles nothing so much as a theme park. Using architecture and amenities to persuade employees that their work is also play is a gambit to keep them on the premises and producing for as much of the day (and night) as possible. It all seems a bit desperate. In my own experience, if people 1) are paid sufficiently; 2) like the other people they work with; and 3) find the work they do a meaningful use of their energy and skills, then they don’t really care if they work in cubicles or on picnic benches. Item No. 3 is a bitch, though, the toughest criterion for most contemporary corporations to meet. Maybe that’s what they ought to be worrying about instead.

 

Laura Miller is a senior writer for Salon. She is the author of “The Magician’s Book: A Skeptic’s Adventures in Narnia” and has a Web site, magiciansbook.com.

 

http://www.salon.com/2014/04/20/cubed_how_the_american_office_worker_wound_up_in_a_box/?source=newsletter

Welcome to the permanent dusk: Sunlight in cities is an endangered species

As cities grow taller, light has become a precious commodity. Is it time for it to be regulated like one?

What would you pay for more natural light in your apartment? $10,000 per sunlit window, in TriBeCa? A 15 percent surcharge for an apartment that faced south, in London? An annual levy of 60 pounds for 20 windows, as the English monarchy demanded during a 150-year period beginning in 1696, under the so-called Window Tax?

Would you support a municipal effort to install a giant mirror to reflect winter sunshine into the town square? The Norwegian mountain town of Rjukan spent $800,000 to do just that. In Islamic Cairo, researchers have developed a sheet of corrugated plastic that can double the amount of light that trickles into the narrow alleyways.

The importance of light to great architecture is no secret. But in cities, where natural light is instrumental to urban design and property values, sunlight is a fickle friend. It helps account for the prices of apartments and the popularity of parks, and can even influence commercial rents on big avenues. Its holistic properties are obvious, but its economic benefits are no less important, including the effect of solar radiation on heating costs and the burgeoning potential for urban solar panel use. But sunlight can be taken away in an instant, from a backyard, a kitchen window or a treasured park, with neither notice nor consequence.

As American cities grow taller and denser — and most everyone agrees that they must — natural light becomes a more precious commodity. Does that mean it should be regulated like one? Or would preserving current sun patterns — so-called “solar rights” — grind real estate development to a halt? Put simply: Should Americans, in their homes and in their cities, have a right to light?

Planners, lawyers and homeowners have been arguing about this for two millennia. The Greeks incorporated the sun in their city planning; the Roman emperor Justinian ensured that no neighbor could block light “previously enjoyed for heat, light or sundial operation.” In desert climes, the same consideration was incorporated into city planning with even greater verve, for opposite results. In the Mozabite enclave of Ghardaia, Algeria, streets wind and curve so that the Saharan sun cannot penetrate.



In England, as the first throes of the Industrial Revolution wrought their transformations, Parliament attempted to legislate this concept with more objectivity. The so-called “Ancient Lights” law, passed in 1832, prevents new construction from blocking light that has continuously reached the interior of a building for 20 years. The amount of light protected is determined by “the grumble line,” the point at which one might begin to complain about the lack of light.

It’s a law that British homeowners can still invoke today, though with only partial success. On the BBC reality show “The Planners,” an 87-year-old homeowner failed to prevent the construction of a neighbor’s light-hogging extension… and courts are even less likely to order the demolition of a larger, more expensive construction.

In Japan, where tall buildings are more common, a similar law, called “nissho-ken,” is more frequently cited. As skyscrapers proliferated in Japanese cities alongside small homes during the 1960s, sunshine suits exploded, from six in 1968 to 83 in 1972. More than 300 cities adopted “sunshine hour codes,” specifying penalties that developers must pay for casting shadows. Tokyo adopted a stricter zoning code for residential areas. In 1976, the Tokyo District Court awarded $7,000 in sunshine damages to residents at the foot of a new office tower. “Sunshine is essential to a comfortable life,” the court opined, “and therefore a citizen’s right to enjoy sunshine at his home should be duly protected by law.” Such awards are not common, though in theory a developer can be forced to pay as much as $10,000 to each shaded homeowner.

But in America, the concept of property has never been so expansive. In the second half of the 19th century, the subject was a hot legal issue, but never overcame opposition from pro-growth circles. As the New York Times wrote in 1878, “Courts have rendered decisions that the law of ancient lights is inappropriate and inapplicable in America… Our sparsely settled country, they say, has not required such a law; encouragement of building is more needed than restrictions upon it.” That same logic still fuels opposition to zoning measures today.

Quality-of-life concerns struck back. In a series of tenement laws, New York City required habitable domiciles to include features like external windows in every room. It was the seven-acre shadow of the Equitable Building, completed in 1915, that inspired the nation’s first comprehensive zoning resolution. New York’s setback laws required buildings to taper as they rose, and shaped the city’s skyline and its streets for the next half-century. Many cities followed suit.

But there are few direct protections on the books, and the issue has again come to the forefront as a rash of super-tall buildings rises in Midtown Manhattan, casting half-mile-long shadows on Central Park. A quarter-century ago, activists led by Jackie Kennedy Onassis and the Municipal Art Society successfully obtained architectural concessions from developer Mort Zuckerman when his plans threatened to devour sunlit stretches of Central Park. Today the issue has spawned scattered complaints but no results.

If New York had a law like San Francisco’s, that would be different. Voters in the famously sun-starved city passed a ballot ordinance in 1984 that prohibited new buildings from casting significant shadows on public parks. It has since required hundreds of real estate projects to be altered, and is regularly targeted by developers for repeal.

It’s a microcosm of a much larger debate about the wisdom of zoning, and the balance between regulation and development. High-rent cities like New York and San Francisco desperately need new units of housing. Which quality-of-life requirements represent basic human rights, and which are not-in-my-backyard claims to stymie new construction in a crowded city? Some proponents of maximizing the potential for new housing in American cities have proposed repealing some of the Progressive-era stipulations for proper apartments.

The rise of solar power further complicated the debate, even as it neatly quantifies the pecuniary value of sunlight. Even decades ago, American legal ambivalence to the sunbeams was called “the single most important legal issue concerning solar energy.” These days, many U.S. states and a handful of U.S. cities have introduced “solar permits,” through which an owner can ensure that his or her solar access cannot be disrupted. In Portland, Ore., existing vegetation (i.e., a tree that grows taller) is exempted. In Ashland, solar collectors are protected from encroaching vegetation but not from new construction. Boulder, Colo., has some of the most extensive solar rights in the U.S.

Sometimes this leads to odd conflicts. In Sunnyvale, Calif., one neighbor sued another over a crop of redwood trees that were casting shadows on his solar panels. Under the state’s 1978 solar rights law, he won — the neighbors had to trim their trees to let more sun through to his panels.

Developers contend that such regulations can amount to extortion: solar panels could be used to extract limitless concessions from nearby properties. Then again, without the assurance of continuing sunlight, what homeowner could invest in solar power?

 

http://www.salon.com/2014/04/20/welcome_to_the_permanent_dusk_sunlight_in_cities_is_an_endangered_species/?source=newsletter

The Rise of the Digital Proletariat

'In open systems, discrimination and barriers can become invisibilized,' says author and activist Astra Taylor. (Deborah DeGraffenreid.)

Astra Taylor reminds us that the Internet cannot magically produce revolution.

BY Sarah Jaffe

It really challenges the notion that we’re all on these social media platforms purely by choice, because there’s a real obligatory dimension to so much of this.

The conversation about the impact of technology tends to be binary: Either it will save us, or it will destroy us. The Internet is an opportunity for revolution; our old society is being “disrupted”; tech-savvy college dropouts are rendering the staid elite obsolete. Or else our jobs are being lost to automation and computers; drones wipe out families on their wedding day; newly minted millionaires flush with tech dollars are gentrifying San Francisco at lightning speed.

Neither story is completely true, of course. In her new book, The People’s Platform: Taking Back Power and Culture in the Digital Age, out now from Metropolitan Books, Astra Taylor takes on both the techno-utopians and the techno-skeptics, reminding us that the Internet was created by the society we live in and thus is more likely to reflect its problems than transcend them. She delves into questions of labor, culture and, especially, money, reminding us who profits from our supposedly free products. She builds a strong case that in order to understand the problems and potentials of technology, we have to look critically at the market-based society that produced it.

Old power dynamics don’t just fade away, she points out—they have to be destroyed. That will require political action, struggle, and a vision of how we want the Internet (and the rest of our society) to be. I spoke with Taylor about culture, creativity, the possibility of nationalizing Facebook and more.

Many people know you as a filmmaker or as an activist with Occupy and Strike Debt. How do you see this book fitting in with the other work you’ve done?

Initially I saw it as a real departure, and now that it’s done, I recognize the continuity. I felt that the voices of culture makers were left out of the debate about the consequences of Internet technology. There are lots of grandiose statements being made about social change and organizing and about how social media tools are going to make it even easier for us to aggregate and transform the world. I felt there was a role I could play rooted in my experiences of being a culture maker and an activist. It was important for somebody grounded in those areas to make a sustained effort to be part of the conversation. I was really troubled that people on all sides of the political spectrum were using Silicon Valley rhetoric to describe our new media landscape. Using terms like “open” and “transparent” and saying things were “democratizing” without really analyzing those terms. A big part of the book was just trying to think through the language we’re using and to look at the ideology underpinning the terminology that’s now so commonplace.

You make the point in the book that the Internet and the offline world aren’t two separate worlds. Can you talk about that a bit more?

It’s amazing that these arguments even need to be made. That you need to point out that these technologies cannot just magically overcome the structures and material conditions that shape regular life.

It harkens back to previous waves of technological optimism. People have always invested a lot of hope in their tools. I talk about the way that we often imbue our machines with the power to liberate us. There was lots of hope that machines would be doing all of our labor and that we would have, as a society, much more free time, and that we would have this economy of abundance because machines would be so dramatically improved over time. The reasons that those predictions never came to pass is because machines are embedded in a social context and the rewards are siphoned off by the elite.

The rise of the Internet really fits that pattern. We can see that there is this massive shifting of wealth [to corporations]. These gigantic digital companies are emerging that can track and profit from not just our online interactions, but increasingly things that we’re doing away from the keyboard. As we move towards the “Internet-of-things,” more and more of the devices around us are going to have IP addresses and be leaking data. These are avenues for these companies that are garnering enormous power to increase their wealth.

The rhetoric a few years ago was that these companies are going to vanquish the old media dinosaurs. If you read the tech books from a few years ago, it’s just like “Disney and these companies are so horrible. Google is going to overthrow them and create a participatory culture.” But Google is going to be way more invasive than Mickey Mouse ever was.

Google’s buying drone companies.

Google’s in your car, Google’s in your thermostat, it’s in your email box. But then there’s the psychological element. There was this hope that you could be anyone you wanted to be online. That you could pick an avatar and be totally liberated from your offline self. That was a real animating fantasy. That, too, was really misleading. Minority groups and women are often forced back into their real bodies, so to speak. They’re not given equal access to the supposedly open space of the Internet.

This is one of the conversations that I think your book is incredibly relevant to right now. Even supposedly progressive spaces are still dominated by white people, mostly men, and there’s a real pushback against women and people of color who are using social media.

It’s been amazing how much outrage can get heaped on one person who’s making critical observations about an institution with such disproportionate power and reach.

The new media elites end up looking a whole lot like the old ones. The other conversation about race and gender and the Internet recently has been about these new media websites that are launched with a lot of fanfare, that have been funded in many cases by Silicon Valley venture capital, that are selling themselves as new and rebellious and exciting and a challenge to the old media—and the faces of them are still white men.

The economic rewards flow through the usual suspects. Larry Lessig has done a lot of interesting work around copyright. But he wrote basically that we need to cheer on the Facebooks of the world because they’re new and not the old media dinosaurs. He has this line about “Stanford is vanquishing Harvard.” We need something so much more profound than that.

This is why I really take on the concept of “openness.” Because open is not equal. In open systems, discrimination and barriers can become invisibilized. It’s harder to get your mind around how inequitable things actually are. I myself follow a diverse group of people and feel like Twitter is full of people of color or radicals. But that’s because I’m getting a very distorted view of the overall picture.

I think it’s helpful to look at the handful of examples of these supposedly open systems in action. Like Wikipedia, which everyone can contribute to. Nonetheless, only like 15 percent of the editors are women. Even the organizations that are held up as exemplars of digital democracy, there’s still such structural inequality. By the time you get to the level of these new media ventures that you’re talking about, it’s completely predictable.

We really need to think through these issues on a social level. I tried to steer the debate away from our addiction to our devices or to crappy content on the Internet, and really take a structural view. It’s challenging because ultimately it comes down to money and power and who has it and how do you wrest it away and how do you funnel some of it to build structures that will support other types of voices. That’s far more difficult than waiting around for some new technology to come around and do it for you.

You write about the tension between professionals and the amateurs who work for free, and about the way the idea of doing work for the love of it has crept in everywhere. Except people are working longer hours than ever and making less money than ever, and who has time to come home at the end of two minimum-wage jobs and make art?

It would be nice to come out and say follow your heart, do everything for the love of it, and things’ll work out. Artists are told not to think about money. They’re actively encouraged to deny the economic sphere. What that does though is it obscures the way privilege operates—the way that having a trust fund can sure be handy if you want to be a full time sculptor or digital video maker.

I think it’s important that we tackle these issues. That’s where I look at these beautiful predictions about the way these labor-saving devices would free us all and the idea that the fruits of technological advancement would be evenly shared. It’s really interesting how today’s leading tech pundits don’t pretend that [the sharing is] going to be even at all. Our social imagination is so diminished.

There’s something really off about celebrating amateurism in an economy where people are un- and under-employed, and where young people are graduating with an average of $30,000 of student debt. It doesn’t acknowledge the way that this figure of the artist—[as] the person who loves their work so much that they’ll do it for nothing—is increasingly central to this precarious labor force.

I quote this example of people at an Apple store asking for a raise and the response was “When you’re working for Apple, money shouldn’t be a consideration.” You’re supposed to just love your work so much you’ll exploit yourself. That’s what interning is. That’s what writing for free is when you’re hoping to get a foot in the door as a journalist. There are major social implications if that’s the road we go down. It exacerbates inequality, because who can afford to do this kind of work?

Of course, unpaid internships are really prevalent in creative fields.

Ultimately, it’s a corporate subsidy. People are sometimes not just working for free but then also going into debt for college credit to do it. In a way, all of the unpaid labor online is also a corporate subsidy. I agree that calling our participation online “labor” is problematic because it’s not clear exactly how we’re being exploited, but the point is the value being extracted. We need to talk about that value extraction and the way that people’s free participation feeds into it.

Of course we enjoy so much of what we do online. People enjoy creating art and culture and doing journalism too. The idea that work should only be well-compensated and secure if it makes you miserable ultimately leads to a world where the people who feel like they should make a lot of money are the guys on Wall Street working 80 hours a week. It’s a bleak, bleak view.

In many ways the problem with social media is it does break down this barrier between home and work. You point this out in the book–it’s everywhere, you can’t avoid it, especially if you are an independent creative person where you have to constantly promote your own work, or it is part of your job. There’s now the Wages for Facebook conversation—people are starting to talk about the way we are creating value for these companies.

It really challenges the notion that we’re all on these social media platforms purely by choice, because there’s a real obligatory dimension to so much of this. Look also at the way we talk to young people. “Do you want a college recruiter to see that on your Facebook profile?” What we’re really demanding is that they create a Facebook profile that appeals to college recruiters, that they manage a self that will help them get ahead.

I was at a recent talk about automation and the “end of jobs,” and one researcher said that the jobs that would be hardest to automate away would be ones that required creativity or social intelligence—skills that have been incredibly devalued in today’s economy, only in part because of technology.

Those skills are being pushed out of the economy because they’re supposed to be things you just choose to do because they’re pleasurable. There is a paradox there. Certain types of jobs will be automated away, that can be not just deskilled but done better by machines, and meanwhile all the creative jobs that can’t be automated away are actually considered almost superfluous to the economy.

The thing about the jobs conversation is that it’s a political question and a policy question as well as a technological question. There can be lots of different types of jobs in the world if we invest in them. So much of the question of what kind of jobs we’re going to have in the future actually comes down to the social decisions that we’re making. The technological aspect has always been overhyped.

You do bring up ideas like a basic income and shorter working hours as ways to allow people to have time and money for culture creation.

The question is, how do you get there? You’d have to have a political movement, you’d have to challenge power. They’re not just going to throw the poor people who’ve had their jobs automated away a bone and suddenly provide a basic income. People would really have to organize and fight for it. It’s that fight, that element of antagonism and struggle that isn’t faced when we just think tools are evolving rapidly and we’ll catch up with them.

The more romantic predictions about rising prosperity and the inevitable increase in free time were made against the backdrop of the post-war consensus of the 1940s, ‘50s and ‘60s. There was a social safety net, there were structures in place that redistributed wealth, and so people made predictions colored by that social fabric, that if there were advancements in our tools that they would be shared by people. It just shows the way that the political reality shapes what we can collectively imagine.

Finally, you make the case for state-subsidized media as well as regulations—for ensuring that people have the ability to make culture as well as consume it. You note that major web companies like Google and Facebook operate like public utilities, and that nationalizing them would be a really hard sell, and yet if these things are being founded with government subsidies and our work, they are in a sense already ours.

The invisible subsidy is the thing that we really have to keep in mind. People say, “Where’s the money going to come from?” We’re already spending it. So much innovation is the consequence of state investment. Touchscreens, the microchip, the Internet itself, GPS, all of these things would not exist if the government had not invested in them, and the good thing about state investment is it takes a much longer view than short-term private-market investment. It can have tremendous, socially valuable breakthroughs. But all the credit for these innovations and the financial rewards is going to private companies, not back to us, the people, whose tax dollars actually paid for them.

You raise a moral question: If we’re paying for these things already, then shouldn’t they in some sense be ours? I think the answer is yes. There are some leverage points in the sense that these companies like to talk about themselves as though they actually are public utilities. There’s this public-spiritedness in their rhetoric but it doesn’t go deep enough—it doesn’t go into the way they’re actually run. That’s the gap we need to bridge. Despite Silicon Valley’s hostility to the government and the state, and the idea that the Internet is sort of this magic place where regulation should not touch, the government’s already there. We just need it to be benefiting people, not private corporations.

Sarah Jaffe is a staff writer at In These Times and the co-host of Dissent magazine’s Belabored podcast. Her writings on labor, social movements, gender, media, and student debt have been published in The Atlantic, The Nation, The American Prospect, AlterNet, and many other publications, and she is a regular commentator for radio and television. You can follow her on Twitter @sarahljaffe.

The 420 Myth: How Did ‘420’ Become Synonymous with Pot?

 
There are still a bong-load of misconceptions revolving around the term’s supposed derivation.

APRIL 20: A marijuana activist dances to live music during the annual marijuana 420 event at Yonge & Dundas Square on April 20, 2012, in Toronto, Canada.

The following article first appeared in The 420 Times

The term “420” is unquestionably the most familiar turn of phrase used among pot-partakers all around the world. Most of us involved in the marijuana-consuming community are aware of the most popular version of how the term came to be, but there are still a bong-load of misconceptions revolving around the term’s supposed derivation.

The most widely agreed-upon version involves a group of ganja-tokin’ teenagers who attended San Rafael High School in California back in the 1970s.

The group of pubescent pot smokers referred to themselves as the Waldos due to their chosen hangout spot located outside their high school, a wall.

And just what time did the Waldos meet after being dismissed from school for the day? Yup, you guessed it! 4:20pm. And so, in a nutshell, the term “420” was by all accounts born.

There’s no documented evidence of the Waldos kickin’ it at 4:20pm around the Louis Pasteur statue on the school grounds contemplating their mission, but it does seem to be the most widespread point of reference.

But what about the other believed beginnings regarding the beloved expression? Is there any truth to those tales?

Common myth 1: The day 4/20 commemorates the death and/or birth of reggae musician Bob Marley.

Although Bob was, and still is, greatly admired and celebrated throughout the marijuana society, April 20 is not the day of his birth or his unfortunate passing. The godfather of reggae was in reality born on February 6 back in 1945.

It’s possible that Bob smoked over 420 spliffs in his lifetime, but he left this plane of existence on May 11, 1981, shortly after the day that has become known as the holiest of high holidaze among frequent tokers.

Common myth 2: The number 420 is police radio-code for “marijuana smoking in progress” and that’s how the term originated.

“Attention all units, attention all units, drop your doughnuts! We have a code 420 taking place all across the nation on a daily basis! We need any available units to please respond immediately! Oh the humanity! Over.”

Well? No. There may be police radio-codes in existence bearing the number 420, but they’re not associated with the notorious time of day, time of year and much-used number that we diehard tokers have come to know and worship on the regular.

Common myth 3: The date 4/20 is an observation of Adolf Hitler’s birth.

Who in their right mind would celebrate Hitler’s birthday? Yes, the villainous genocidal murderer was apparently dropped on his head repeatedly on April 20, 1889 (how else would one explain his insane ways), but that has absolutely zero connection with the great day of merriment marijuana tokers enjoy each year.

Maybe if he’d passed away on 4/20 there would be some cause for celebration, but I doubt his own mother celebrated his birthday. And even though he was a drug addict, he most likely steered clear of marijuana in fear of it curbing his hatred.

Common myth 4: The term “420” derived from a Bob Dylan song.

As historic folklore or some toker hyped-up on a Sativa strain would have you believe, the term “420” originated via Dylan’s tune “Rainy Day Women #12 & 35,” where he repeatedly chants, “Everybody must get stoned!” And in addition, when one multiplies the numbers 12 and 35 they arrive at a product of 420! (Let’s see here, carry the one….?)

This has to be true, right? Well, no. Although it would be the most believable and quite possibly the coolest out of all the misconceptions, it just ain’t so, yo!

***

If you conduct an internet search for information regarding the origin of the term “420,” the list of mythical explanations you can accumulate bears a resemblance to the alien lore surrounding Area 51 and Roswell! And I’m quite certain that if you were to search hard enough you could even find a fable linking the term’s origin to the famous alien warehouse! But I digress.

Regardless of how the phrase truly came about, it is here to stay. And for the majority of marijuana smokers it is 4:20 all day every day or it’s always 4:20 somewhere and the term will irrefutably be a part of the marijuana subculture’s vocabulary for at least another 420 years, give or take.

 

http://www.alternet.org/drugs/420-myth-how-did-420-become-synonymous-pot?akid=11733.265072.XziZQx&rd=1&src=newsletter983505&t=11&paging=off&current_page=1#bookmark

Artists’ brains are ‘structurally different’, claims new study

 

Limited study found more grey and white matter in artists’ brains connected to visual imagination and fine motor control

It’s a truism to say that artists see the world differently from the rest of us, but new research suggests that their brains are structurally different as well.

The small study, published in the journal NeuroImage, looked at the brain scans of 21 art students and 23 non-artists using a scanning method known as voxel-based morphometry.

Comparisons between the two groups showed that the artists had more neural matter in the parts of the brain relating to visual imagery and fine motor control.

Although this is certainly a physical difference, it does not mean that artists’ talents are innate. The balance between the influence of nature and nurture is never easy to divine, and the authors say that training and upbringing also play a large role in ability.

The brain scans were accompanied by various drawing tasks, with the researchers finding that those who performed best at these tests routinely had more grey and white matter in the motor areas of the brain.

“The people who are better at drawing really seem to have more developed structures in regions of the brain that control for fine motor performance and what we call procedural memory,” lead author Rebecca Chamberlain, of KU Leuven in Belgium, told the BBC.

The artists also showed significantly more grey matter in the part of the brain called the parietal lobe, a region involved with a range of activities that include the capacity to imagine, deconstruct and combine visual imagery.

The scientists also suggest that the study helps put to rest the idea that artists predominantly use the right side of their brain, as the increased grey and white matter was distributed equally across both hemispheres.

Despite this, previous research has suggested that there are some hard-wired structural differences between individuals’ brains, with some of the divides falling across gender lines.

A ‘pioneering study’ published in December last year found that male brains had more neural connections running front to back, while female brains had more connections between the right and left hemispheres. Scientists suggested that this could explain why men are ‘better at reading maps’ and women are ‘better at remembering a conversation’.

 

http://www.independent.co.uk/news/science/artists-brains-are-structurally-different-claims-new-study-9267513.html

The 2,000-Year History of GPS Tracking

Tue Apr. 15, 2014 3:00 AM PDT
Egyptian geographer Claudius Ptolemy and Hiawatha Bray’s “You Are Here”

Boston Globe technology writer Hiawatha Bray recalls the moment that inspired him to write his new book, You Are Here: From the Compass to GPS, the History and Future of How We Find Ourselves. “I got a phone around 2003 or so,” he says. “And when you turned the phone on—it was a Verizon dumb phone, it wasn’t anything fancy—it said, ‘GPS’. And I said, ‘GPS? There’s GPS in my phone?’” He asked around and discovered that yes, there was GPS in his phone, due to a 1994 FCC ruling. At the time, cellphone usage was increasing rapidly, but 911 and other emergency responders could only accurately track the location of land line callers. So the FCC decided that cellphone providers like Verizon must be able to give emergency responders a more accurate location of cellphone users calling 911. After discovering this, “It hit me,” Bray says. “We were about to enter a world in which…everybody had a cellphone, and that would also mean that we would know where everybody was. Somebody ought to write about that!”

So he began researching the transformative events that led to our new ability to navigate (almost) anywhere. Along the way, he discovered how the military-led GPS and government-led mapping technologies helped create new digital industries. The result of his curiosity is You Are Here, an entertaining, detailed history of how we evolved from primitive navigation tools to our current state of instant digital mapping—and, of course, of governments’ subsequent ability to track us. The book was finished prior to the recent disappearance of Malaysia Airlines flight 370, but Bray says gaps in navigation and communication like that are now “few and far between.”

Here are 13 pivotal moments in the history of GPS tracking and digital mapping that Bray points out in You Are Here:

1st century: The Chinese begin writing about mysterious ladles made of lodestone. The ladle handles always point south when used during future-telling rituals. In the following centuries, lodestone’s magnetic abilities lead to the development of the first compasses.

Image: ladle

Model of a Han Dynasty south-indicating ladle Wikimedia Commons

2nd century: Ptolemy’s Geography is published and sets the standard for maps that use latitude and longitude.

Image: Ptolemy map

Ptolemy’s 2nd-century world map (redrawn in the 15th century) Wikimedia Commons

1473: Abraham Zacuto begins working on solar declination tables. They take him five years, but once finished, the tables allow sailors to determine their latitude on any ocean.

Image: declination tables

The Great Composition by Abraham Zacuto. (A 17th-century copy of the manuscript originally written by Zacuto in 1491.) Courtesy of The Library of The Jewish Theological Seminary

1887: German physicist Heinrich Hertz creates electromagnetic waves, proof that electricity, magnetism, and light are related. His discovery inspires other inventors to experiment with radio and wireless transmissions.

Image: Hertz

The Hertz resonator (John Jenkins, Sparkmuseum.com)

1895: Italian inventor Guglielmo Marconi, one of those inventors inspired by Hertz’s experiment, attaches his radio transmitter antenna to the earth and sends telegraph messages miles away. Bray notes that there were many people before Marconi who had developed means of wireless communication. “Saying that Marconi invented the radio is like saying that Columbus discovered America,” he writes. But sending messages over long distances was Marconi’s great breakthrough.

Image: Marconi

Inventor Guglielmo Marconi in 1901, operating an apparatus similar to the one he used to transmit the first wireless signal across the Atlantic Wikimedia Commons

1958: Approximately six months after the Soviets launched Sputnik, Frank McClure, the research director at the Johns Hopkins Applied Physics Laboratory, calls physicists William Guier and George Weiffenbach into his office. Guier and Weiffenbach had used radio receivers to listen to Sputnik’s consistent electronic beeping and calculate the Soviet satellite’s location; McClure wants to know if the process could work in reverse, allowing a listener on the ground to use a satellite’s signal to determine his own position on Earth. The foundation for GPS tracking is born.

1969: A pair of Bell Labs scientists named William Boyle and George Smith create a silicon chip that records light and converts it into digital data. It is called a charge-coupled device, or CCD, and serves as the basis for digital photography used in spy and mapping satellites.

1976: The top-secret, school-bus-size KH-11 satellite is launched. It uses Boyle and Smith’s CCD technology to take the first digital spy photographs. Prior to this digital technology, actual film was used for making spy photographs. It was a risky and dangerous venture for pilots like Francis Gary Powers, who was shot down while flying a U-2 spy plane and taking film photographs over the Soviet Union in 1960.

Image: KH-11 image

KH-11 satellite photo showing construction of a Kiev-class aircraft carrier Wikimedia Commons

1983: Korean Air Lines flight 007 is shot down after leaving Anchorage, Alaska, and veering into Soviet airspace. All 269 people aboard are killed, including Georgia Democratic Rep. Larry McDonald. Two weeks after the attack, President Ronald Reagan directs the military’s GPS technology to be made available for civilian use so that similar tragedies would not be repeated. Bray notes, however, that GPS technology had always been intended to be made public eventually.

1989: The US Census Bureau releases (PDF) TIGER (Topologically Integrated Geographic Encoding and Referencing) into the public domain. The digital map data allows any individual or company to create virtual maps.

1994: The FCC declares that wireless carriers must find ways for emergency services to locate mobile 911 callers. Cellphone companies choose to use their cellphone towers to comply. However, entrepreneurs begin to see the potential for GPS-integrated phones, as well. Bray highlights SnapTrack, a company that figures out early on how to squeeze GPS systems into phones—and is purchased by Qualcomm in 2000 for $1 billion.

1996: GeoSystems launches an internet-based mapping service called MapQuest, which uses the Census Bureau’s public-domain mapping data. It attracts hundreds of thousands of users and is purchased by AOL four years later for $1.1 billion.

2004: Google buys Australian mapping startup Where 2 Technologies and American satellite photography company Keyhole for undisclosed amounts. The next year, they launch Google Maps, which is now the most-used mobile app in the world.

2012: The Supreme Court ruling in United States v. Jones (PDF) restricts police usage of GPS to track suspected criminals. Bray tells the story of Antoine Jones, who was convicted of dealing cocaine after police placed a GPS device on his wife’s Jeep to track his movements. The court’s decision in his case is unanimous: The GPS device had been placed without a valid search warrant. Despite the unanimous decision, just five justices signed off on the majority opinion. Others wanted further privacy protections in such cases—a mixed decision that leaves future battles for privacy open to interpretation.

 

http://www.motherjones.com/mixed-media/2014/04/you-are-here-book-hiawatha-bray-gps-navigation

What We Lose When We Rip the Heart Out of Arts Education



It’s National Poetry Month, but if the Common Core has its way, our children will hardly know what poetry is.

Photo Credit: Aaron Amat via Shutterstock.com

“No, no. You’ve got something the test and machines will never be able to measure: you’re artistic. That’s one of the tragedies of our times, that no machine has ever been built that can recognize that quality, appreciate it, foster it, sympathize with it.” —Paul Proteus to his wife Anita in Kurt Vonnegut’s Player Piano

“So much depends upon a red wheel barrow glazed with rain water beside the white chickens” is, essentially, a grammatical sentence in the English language. While the syntax is somewhat out of the norm, the diction is accessible to small children—the hardest word likely being “depends.” But “The Red Wheelbarrow” by William Carlos Williams is much more than a sentence; it is a poem:

so much depends
upon

a red wheel
barrow

glazed with rain
water

beside the white
chickens.

A relatively simple sentence shaped into purposeful lines and stanzas becomes poetry. And like Langston Hughes’ “Harlem” and Gwendolyn Brooks’ “We Real Cool,” it sparks in me a profoundly important response each time I read these poems: I wish I had written that. It is the same awe and wonder I felt as a shy, self-conscious teenager when I bought, collected and read comic books, marveling at the artwork I wished I had drawn.

Will we wake one morning soon to find the carcasses of poems washed up on the beach by the tsunami of the Common Core?

That question, especially during National Poetry Month, haunts me more every day, notably because of the double-impending doom augured by the Common Core: the rise of nonfiction in the ELA curriculum (and the concurrent erasing of poetry and fiction from it), and the mantra-of-the-moment, “close reading” (the sheep’s clothing for that familiar old wolf, New Criticism).

We have come to a moment in the history of the U.S. when we no longer even pretend to care about art. And poetry is the most human of the arts—the very human effort to make order out of chaos, meaning out of the meaningless: “Daddy, daddy, you bastard, I’m through” (Sylvia Plath, “Daddy”).

***

The course was speech, taught by Mr. Brannon. I was a freshman at a junior college just 15 to 20 miles from my home. Despite that proximity, my father insisted I live on campus. But that class and those first two years of college were more than living on campus; they were the essential beginning of my life.

In one of the earliest classes, Mr. Brannon read aloud and gave us a copy of “[in Just-]” by e. e. cummings. I imagine that moment was, for me, what many people describe as a religious experience. That was more than 30 years ago, but I own two precious books that followed from that day in class: cummings’ Complete Poems and Selected Poems. Several years later, Emily Dickinson’s Complete Poems would join my commitment to reading every poem by those poets who made me respond over and over, I wish I had written that.

But my introduction to cummings was more than just finding the poetry I wanted to read; it was when I realized I was a poet. Now, when the words “j was young&happy” come to me, I know there is work to do—I recognize the gift of poetry.

***

As a high school English teacher, I divided my academic year into quarters by genre/form: nonfiction, poetry, short fiction, and novels/plays. The poetry quarter, when announced to students, initially received moans and even direct complaints: “I hate poetry.” That always broke my heart. Life and school had already taken something very precious from these young people:

children guessed (but only a few
and down they forgot as up they grew…
                              (“[anyone lived in a pretty how town],” e.e. cummings)

I began to teach poetry in conjunction with popular songs. Although my students in rural South Carolina were overwhelmingly country music fans, I focused my nine weeks of poetry on the songs of alternative group R.E.M. At first, that too elicited moans from students in those early days of exploring poetry (see that unit on the blog “There’s time to teach”).

Concurrently, throughout my high school teaching career, students would gather in my room during our long mid-morning break and lunch (much to the chagrin of administration). And almost always, we played music, even closing the door so two of my students could dance and sing and laugh along with the Violent Femmes.

Many of those students are now in their 30s and 40s, and it is common for them to contact me—often on Facebook—to recall fondly R.E.M. and our poetry unit. Those days meant something to them that lingers, that matters in ways that cannot be measured. It was an oasis of happiness in their lives at school.

***

e.e. cummings begins “since feeling is first,” and then adds:

my blood approves,
and kisses are a better fate
than wisdom
lady i swear by all flowers. Don’t cry
—the best gesture of my brain is less than
your eyelids’ flutter….

Each year when my students and I examined this poem, we would discuss how cummings—in Andrew Marvell fashion—offers an argument that is profoundly unlike what parents, teachers, preachers, and politicians claim.

I often paired this poem with Coldplay’s “The Scientist,” focusing on:

I was just guessing at numbers and figures
Pulling your puzzles apart
Questions of science, science and progress
Do not speak as loud as my heart

Especially for teenagers, this question, this tension between heart and mind, mattered, just as it has recurred in the words of poets and musicians across decades and centuries. Poetry, like all art, is the expressed heart—that quest to rise above our corporeal humanness:

               Bold Lover, never, never canst thou kiss,
Though winning near the goal yet, do not grieve;
       She cannot fade, though thou hast not thy bliss,
               For ever wilt thou love, and she be fair!
                                            (“Ode on a Grecian Urn,” John Keats)

***

I have loved a few people intensely—so deeply that my love, I believe, resides permanently in my bones. One such love is my daughter, and she now carries the next human who will add to that ache of being fully human—loving another beyond words.

And that is poetry.

Poetry is not identifying iambic pentameter on a poetry test or discussing the nuances of enjambment in an analysis of a Dickinson poem.

Poems are not fodder for close reading.

Poetry is the ineluctable “Oh my heart” that comes from living fully in the moment, the moment that draws us to words as well as inspires us toward words.

We read a poem, we listen to a song, and our hearts rise out of our eyes as tears.

That is poetry.

Like the picture books of our childhood, poetry must be a part of our learning, essential to our school days—each poem an oasis of happiness that “machines will never be able to measure.”

***

Will we wake one morning to find the carcasses of poems washed up on the beach by the tsunami of the Common Core?

Maybe the doomsayers are wrong. Maybe poetry will not be erased from our classrooms. But school with less poetry is school with less heart, and school with no poetry is school with no heart.

Both are tragic mistakes, because if school needs anything, it is more heart. And poetry? Oh my heart.

This piece originally appeared on the Becoming Radical blog.

New study finds US to be ruled by oligarchic elite

by Jerome Roos on April 17, 2014

Political scientists show that the average American has “near-zero” influence on policy outcomes, but their groundbreaking study is not without problems.

It’s not every day that an academic article in the arcane world of American political science makes headlines around the world, but then again, these aren’t normal days either. On Wednesday, various mainstream media outlets — including even the conservative British daily The Telegraph — ran a series of articles with essentially the same title: “Study finds that US is an oligarchy.” Or, as the Washington Post summed up: “Rich people rule!” The paper, according to the review in the Post, “should reshape how we think about American democracy.”

The conclusion sounds like it could have come straight out of a general assembly or drum circle at Zuccotti Park, but the authors of the paper in question — two professors of political science at Princeton and Northwestern University — aren’t quite of the radical dreadlocked variety. No, like Piketty’s book, this article is real “science”. It’s even got numbers in it! Martin Gilens of Princeton and Benjamin Page of Northwestern University took a dataset of 1,779 policy issues, ran a bunch of regressions, and basically found that the United States is not a democracy after all:

Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence. The results provide substantial support for theories of Economic Elite Domination and for theories of Biased Pluralism, but not for theories of Majoritarian Electoral Democracy or Majoritarian Pluralism.

The findings, of course, are both very interesting and very obvious. What Gilens and Page claim to have empirically demonstrated is that policy outcomes by and large favor the interests of business and the wealthiest segment of the population, while the preferences of the vast majority of Americans are of little to no consequence for policy outcomes. As the authors show, this new data backs up the conclusions of a number of long-forgotten studies from the 1950s and 1960s — not least the landmark contributions by C.W. Mills and Ralph Miliband — that tried to debunk the assertion of mainstream pluralist scholars that no single interest group dominates US policymaking.

But while Gilens and Page’s study will undoubtedly be considered a milestone in the study of business power, there’s also a risk in focusing too narrowly on the elites and their interest groups themselves; namely the risk of losing sight of the broader set of social relations and institutional arrangements in which they are embedded. What I am referring to, of course, is the dreaded C-word: capitalism — a term that appears only once in the main body of Gilens and Page’s text, in a superficial reference to The Communist Manifesto, whose claims are quickly dismissed as empirically untestable. How can you talk about oligarchy and economic elites without talking about capitalism?

What’s missing from the analysis is therefore precisely what was missing from C.W. Mills’ and Miliband’s studies: an account of the nature of the capitalist state as such. By branding the US political system an “oligarchy”, the authors conveniently sidestep an even thornier question: what if oligarchy, as opposed to democracy, is actually the natural political form in capitalist society? What if the capitalist state is by its very definition an oligarchic form of domination? If that’s the case, the authors have merely proved the obvious: that the United States is a thoroughly capitalist society. Congratulations for figuring that one out! They should have just called a spade a spade.

That, of course, wouldn’t have raised many eyebrows. But it’s worth noting that this was precisely the critique that Nicos Poulantzas leveled at Ralph Miliband in the New Left Review in the early 1970s — and it doesn’t take an Althusserian structuralist to see that he had a point. Miliband’s study of capitalist elites, Poulantzas showed, was very useful for debunking pluralist illusions about the democratic nature of US politics, but by focusing narrowly on elite preferences and the “instrumental” use of political and economic resources to influence policy, Miliband’s empiricism ceded way too much methodological ground to “bourgeois” political science. By trying to painstakingly prove the existence of a causal relationship between instrumental elite behavior and policy outcomes, Miliband ended up missing the bigger picture: the class-bias inherent in the capitalist state itself, irrespective of who occupies it.

These methodological and theoretical limitations have consequences that extend far beyond the academic debate: at the end of the day, these are political questions. The way we perceive business power and define the capitalist state will inevitably have serious implications for our political strategies. The danger with empirical studies that narrowly emphasize the role of elites at the expense of the deeper structural sources of capitalist power is that they will end up reinforcing the illusion that simply replacing the elites and “taking money out of politics” would be sufficient to restore democracy to its past glory. That, of course, would be profoundly misleading. If we are serious about unseating the oligarchs from power, let’s make sure not to get carried away by the numbers and not to lose sight of the bigger picture.

Jerome Roos is a PhD candidate in International Political Economy at the European University Institute, and founding editor of ROAR Magazine.

The New Economic Events Giving Lie to the Fiction That We Are All Selfish, Rational Materialists

The commons lies at the heart of a major cultural and social shift now underway.

Jeremy Rifkin’s new book, “The Zero Marginal Cost Society,” brings welcome new attention to the commons just as it begins to explode in countless new directions. His book focuses on one of the most significant vectors of commons-based innovation — the Internet and digital technologies — and documents how the incremental costs of nearly everything are rapidly diminishing, often to zero. Rifkin explores the sweeping implications of this trend in an excerpt from his book and points to the “eclipse of capitalism” in the decades ahead.

But it’s worth noting that the commons is not just an Internet phenomenon or a matter of economics. The commons lies at the heart of a major cultural and social shift now underway. People’s attitudes about corporate property rights and neoliberal capitalism are changing as cooperative endeavors — on digital networks and elsewhere — become more feasible and attractive. This can be seen in the proliferation of hackerspaces and Fablabs, in the growth of alternative currencies, in many land trusts and cooperatives and in seed-sharing collectives and countless natural resource commons.

Beneath the radar screen of mainstream politics, which remains largely clueless about such cultural trends on the edge, a new breed of commoners is building the vision of a very different kind of society, project by project. This new universe of social activity is being built on the foundation of a very different ethics and social logic than that of homo economicus — the economist’s fiction that we are all selfish, utility-maximizing, rational materialists.

Durable projects based on social cooperation are producing enormous amounts of wealth; it’s just that this wealth is generally not monetized or traded. It’s socially or ecologically embedded wealth that is managed by self-styled commoners themselves. Typically, such commoners act more as stewards of their common wealth than as owners who treat it as private capital. Commoners realize that a life defined by impersonal transactions is not as rich or satisfying as one defined by abiding relationships. The larger trends toward zero-marginal-cost production make it perfectly logical for people to seek out commons-based alternatives.

You can find these alternatives popping up all over: in the 10,000-plus open access scientific journals whose research is freely shareable to anyone; in community gardens that produce both fresh vegetables and neighborliness; in hundreds of “timebanks” that let people meet basic needs through time-barters; and in highly productive, ecologically minded commons-based agriculture.

Economists tend to ignore such wealth because it generally doesn’t involve market activity. No cash is exchanged, no legal contracts are signed and no measurable Gross Domestic Product is generated. But the wealth of the commons is not accumulated like capital; its vitality comes from being circulated. As I describe in my new book, “Think Like a Commoner,” the story of our time is the rise of the commons as a new way to emancipate oneself from predatory markets and to collaborate with peers to protect and expand one’s shared wealth. This is a story that is being played out in countless digital arenas, as Rifkin documents, but also in such diverse contexts as cities, farming, museums, theaters and indigenous communities.

One reason that so many commons arise and flourish is because they help their participants meet important basic needs in fair, responsive and socially satisfying ways. That’s quite attractive to those who are otherwise held captive by conventional, predatory markets. Big agriculture is more concerned with efficiency and profit than ecological stewardship. Large transnationals are more interested in rip-and-run resource extraction (mining, fracking, timber) than in the protection of sacred lands and time-honored ways of life. “Copyright industries” like Hollywood and record labels want to treat all of culture as tightly controlled “product,” not as something that is freely shared and built upon.

Nowadays the commons has a special appeal for people of the global South who are often victimized by the “enclosures” inflicted by neoliberal investment and trade policies. Enclosure is the act of privatizing and commodifying previously shared resources. For example, millions of acres of land in Africa, Asia and Latin America are currently being seized by investors in a massive international land grab. Hedge funds and even the governments of South Korea, Saudi Arabia and China are enacting an eerie replay of the English enclosure movement. Commoners who have worked the land for generations as a customary right are being forced to migrate to cities in search of work, where they often end up as paupers and sweatshop employees: a modern-day replay of Charles Dickens’ novels.

By the lights of modern economic theory, it’s all for the best because it promotes “development” (i.e., consumerism and other market dependencies). But many commoners are now fighting the dispossession and dependencies that enclosures entail by struggling to retain some measure of dignity and self-determination through their commons. The International Land Alliance estimates that 2 billion people around the world depend upon subsistence commons of forests, fisheries, arable land, water and wild game to meet their everyday needs.

Strangely, the leading introductory economics textbooks in the U.S. virtually ignore the commons except for the obligatory warning about the “tragedy of the commons.” They prefer not to recognize that the commons represents an entirely viable but different paradigm of “development” – one that can transcend the unsustainable consumerism, cultural disintegration and economic growth of our time. As the late Nobel Prize winner Elinor Ostrom showed, commons are an entirely sustainable, ecologically friendly model of resource management, contrary to the “tragedy” parable.

Commoners are not all alike. They have many profound differences in their governance systems, management practices and cultural values. And commons are not without their conflicts, struggles and failures. That said, most commoners tend to share fundamental commitments to participation, openness, inclusiveness, social equity, ecological respect and human rights.

The politics of the commons movement can be confounding to conventional observers because political goals are not the paramount priority; protection of the commons is. Commoners tend to be more focused on “prepolitical” social activity and relationships, which is why commons are embraced by such a wide variety of people. As German commons advocate Silke Helfrich notes in The Wealth of the Commons, “Commons draw from the best of all political ideologies.” Conservatives like the tendency of commons to promote responsibility. Liberals are pleased with the focus on equality and basic social entitlement. Libertarians like the emphasis on individual initiative. And leftists like the idea of limiting the scope of the Market.

It is important to realize that the commons is not a discussion about objects, but a discussion about who we are and how we treat each other. What decisions are being made about our resources? Does economic activity satisfy basic human needs and honor human rights and dignity? These kinds of discussions are not often heard in conventional business and policy circles, alas.

To conventional minds, the idea of the commons as a paradigm of social governance appears either utopian or communistic, or at the very least, impractical. But a diverse, eclectic universe of commons around the world demonstrates otherwise. It is the neoliberal project of ever-expanding consumption on a global scale that is the utopian, totalistic dream. It manifestly cannot fulfill its mythological vision of human progress through ubiquitous market activity and greater heaps of private consumption, if only because it demands more from Nature than it can possibly deliver – while inflicting too much social inequity and disruption as well.

Fortunately, the Internet and indigenous peoples, the re-localization movement and hackers, community foresters and fishing cooperatives and many, many others, are showing that the commons can be an effective vehicle for social and political emancipation. Jeremy Rifkin’s astute analysis of this powerful trend will help open up a much-needed discussion in the stodgy precincts of conventional economics.

David A. Bollier is an author, activist, blogger and independent scholar with a primary focus on “the commons” as a new paradigm for economics, politics, and culture. He is the founding editor of Onthecommons.org (2002-2010), co-founder and principal of the international consulting project Commons Strategy Group, and co-director of the Commons Law Project. Bollier is the author of numerous books, including “Think Like a Commoner: A Short Introduction to the Life of the Commons.”

 http://www.alternet.org/economy/were-about-enter-whole-new-era-economics-and-its-going-make-everyone-feel-lot-more-wealthy?akid=11716.265072.WdcnEx&rd=1&src=newsletter981596&t=7&paging=off&current_page=1#bookmark