Why the Birth of Shakespeare Is the Birth of Modern Art

April 22, 2014, 9:50 PM

April 23, 2014, marks the 450th birthday of William Shakespeare, one of the greatest writers of all time and an inescapable influence not just on literature, but also on every form of culture since the 19th century. Although the canon of plays was more or less established with the publication of the First Folio in 1623, Shakespeare had to wait for wider acclaim until the Romantic era of the 1800s, when critics such as Samuel Taylor Coleridge and August Wilhelm Schlegel first spread the Gospel of Will, which would soon blossom into full bardolatry. In many ways, the Romantic era never ended and we are the “last” Romantics, full of ideas of individuality, imagination, and even love that would be totally foreign to the classical world. Even those who accept that the Romantic era is over see it as a Post-Romantic era, a time defined by what it can no longer be. This Romantic or Post-Romantic world gave birth to Modern art. So, by an almost Biblical series of begats, you can say that the birth of Shakespeare is the birth of Modern art, the birth of how we see the world within and the world without today.

During Shakespeare’s own lifetime he was known best as the “honey-tongued” poet of such works as Venus and Adonis and The Rape of Lucrece, in which he turned classical and ancient characters to his own artistic purposes as well as to the practical purpose of making money during the plague-forced theater closures of 1593-1594. Readers literally read published copies of these works to pieces, making surviving copies extremely rare today. People went to see the plays, of course, but the emphasis of the theaters was on making money as much as making art. Publishing plays never became a priority because it never seemed profitable enough. It was Shakespeare’s friend and rival Ben Jonson who believed that publishing one’s works in a collected edition could serve both practical and artistic purposes. Jonson published his own collected works in 1616 and pushed for the posthumous collection of Shakespeare’s works in 1623, both of which served as templates for collected works of contemporaries such as Beaumont and Fletcher and others that essentially established the study of “modern” (that is, 16th-century) literature as an art form as worthy as that of the already well-studied classics. Yes, Jonson deserves credit for making the initial push, but it was the inspiration of Shakespeare, as well as the lasting success of Shakespeare’s works in print, that set in motion what we know as literature today.

Once the Romantics got hold of Shakespeare, however, they turned the 16th century author into a 19th century “modern” contemporary. T.S. Eliot later complained about this trend in his 1920 essay “Hamlet”:

These minds often find in Hamlet a vicarious existence for their own artistic realization. Such a mind had Goethe, who made of Hamlet a Werther; and such had Coleridge, who made of Hamlet a Coleridge; and probably neither of these men in writing about Hamlet remembered that his first business was to study a work of art.

While Eliot felt that the “first business was to study a work of art,” Goethe, Coleridge, and others felt that the reason behind that business was to make those works relevant to living, breathing people, even if that “made of Hamlet” the critic himself. Some argue that Shakespeare’s critical lull period during the 17th and 18th centuries owes something to the neo-classical tastes of the time in which individuality took a back seat to more communal ideals.

Once the modern taste for the individual took hold, however, Shakespeare found a home beyond England’s shores. American colonists staged plays by Shakespeare as early as 1750. “There is hardly a pioneer’s hut that does not contain a few odd volumes of Shakespeare,” Alexis de Tocqueville wrote in 1835 in Democracy in America. From the very beginning of the American experiment in democracy, Shakespeare and his individualized characters inspired a government of, by, and for the people, to paraphrase the Gettysburg Address of that notorious Shakespeare lover Abraham Lincoln. As kings fell and democracies rose throughout Europe in the 19th and 20th centuries, Shakespeare (often in vernacular translation) showed the way, sometimes in the form of music, as in Giuseppe Verdi’s operas Otello and Falstaff, which provided the popular soundtrack to the political movement by which modern Italy was born.

Modern, democratic societies longed for art that reflected their ideals and anxieties. So much modern art comes from the psychoanalytic ideas of Sigmund Freud, who mined ancient characters such as Oedipus for the infamous “complex,” but also plumbed the human psyche in the fictional person of Hamlet. The “-isms” of the 20th century also soon found new artistic uses for Shakespeare. German Expressionism, Russian Futurism, and European Marxism all explored new ways of staging the Bard to make the people understand their goals. More recently, art steeped philosophically in feminism, anti-colonialism, and sexualism views Shakespeare as friend or foe, but either way cannot escape the cultural gravitational pull of his massive influence.

Although the pedantic women of T.S. Eliot’s “The Love Song of J. Alfred Prufrock” “come and go/ talking of Michelangelo” as a badge of cultural knowing, Eliot alludes in that poem to no fewer than three Shakespeare plays (Henry IV Part II, Twelfth Night, and that old Coleridgean favorite, Hamlet). Even Eliot couldn’t avoid Shakespeare in the making of modern poetic art. So, as we wish the Bard a happy 450th (the last round-number anniversary some of us, including me, will likely see), we can wish him many, many more with the knowledge that we can join Ben Jonson’s tribute in that First Folio that Shakespeare “was not of an age, but for all time!”, including ours.

[Image: The “Chandos” Portrait of William Shakespeare (detail).]

Young, rich and politically ignorant: Sean Parker and the next generation of libertarian billionaires

A young billionaire adorably thinks he can solve Washington’s problems through centrism! Why our democracy’s a mess

Sean Parker (Credit: Reuters/Gonzalo Fuentes)

“Those who own the country ought to govern it.” – Founder John Jay

That quote may be apocryphal, but the sentiment has been with us since the beginning of the Republic. I think we all know he wasn’t talking about that amorphous mass known as “the people,” don’t you? Much better for the real stakeholders of democracy to be in charge. You know, the people with money and property.

And it appears as though they got what they wanted. According to this blockbuster study from Martin Gilens and Benjamin Page, the fact that the people sometimes have their policy preferences enacted is purely a matter of coincidence: it only happens when those preferences happen to coincide with the preferences of the wealthy. If not, we’re just out of luck. The practical result is that while the wealthy might view certain issues along progressive lines, such as gay rights or maybe even gun control, it’s highly unlikely they will ever allow us to get too far out on the egalitarian limb. They tend to draw the line at anything that affects their own interest. And that interest is to keep most of the money and all of the power within their own control, regardless of whether they see themselves as liberals or conservatives.

Take, for instance, what Thomas Frank in this piece astutely described as liberal Mugwumpery, the “reformist” strain among aristocrats in which they take it upon themselves to reform the state and better the characters of the lower orders. Yes, they may want to clean up government corruption and coerce the “unhealthy” to change their ways, whether through temperance or prohibition, but they cannot be counted upon to fully engage on issues that infringe on their own interests. Frank quotes historian Richard Hofstadter describing the “Mugwump types” of the 19th century:



[T]he most serious abuses of the unfolding economic order of the Gilded Age he either resolutely ignored or accepted complacently as an inevitable result of the struggle for existence or the improvidence and laziness of the masses. As a rule, he was dogmatically committed to the prevailing theoretical economics of laissez faire. . . . He imagined that most of the economic ills that were remediable at all could be remedied by free trade, just as he believed that the essence of government lay in honest dealing by honest and competent men.

Frank applied that term specifically to Michael Bloomberg, who has pledged to spend $50 million to defeat the NRA, a worthy cause if there ever was one. As he points out, however, as much as this particular pledge might benefit the population at large, Bloomberg will also use his power to defeat anything that looks to directly benefit working people economically. Just as the Gilens-Page paper suggests, as long as the billionaires’ interests align with those of the people, there is a chance something might get done. Where they diverge, it might as well be impossible. That is a very odd definition of democracy.

Not all of our wealthy warriors for liberal causes are as openly hostile to the economic reforms at the center of the progressive agenda as a Wall Street billionaire like Bloomberg. In fact, many of them are probably just unaware of it, as they are the scions of great wealth who are flush with the idealism of youth and simply wish to make a difference. These Baby Mugwumps have good intentions, but somehow their focus also tends to be directed at worthy causes that don’t upset the economic apple cart.

Take, for instance, these nice young would-be philanthropists who were invited to the White House last week to share their thoughts on how to fix the problems of the world:

“Moon shots!” one administration official said, kicking off the day on an inspirational note to embrace the White House as a partner and catalyst for putting their personal idealism into practice.

The well-heeled group seemed receptive. “I think it’s fantastic,” said Patrick Gage, a 19-year-old heir to the multibillion-dollar Carlson hotel and hospitality fortune. “I’ve never seen anything like this before.” Mr. Gage, physically boyish with naturally swooping Bieber bangs, wore a conservative pinstripe suit and a white oxford shirt. His family’s Carlson company, which owns Radisson hotels, Country Inns and Suites, T.G.I. Friday’s and other brands, is an industry leader in enforcing measures to combat trafficking and involuntary prostitution.

A freshman at Georgetown University, Mr. Gage was among the presenters at a breakout session, titled “Combating Human Trafficking,” that attracted a notable group of his peers. “The person two seats away from me was a Marriott,” he said. “And when I told her about trafficking, right away she was like, ‘Uh, yeah, I want to do that.’ ”

Of course. Who wouldn’t be against human trafficking? Or in favor of limiting the proliferation of guns? But I think one can see with those two examples just how limited the scope of our patrician betters’ interest in the public good really is. Whether undertaken through the prism of their own self-interest or a youthful idealism born of privilege, it represents causes, not any real challenge to the status quo.

But what should we make of the latest audacious entry into the political arena? Sean Parker, Napster co-founder and Facebook billionaire, announced that he’s jumping into politics with both feet. He’s not signing on to any specific cause or even a vague political philosophy. In fact, it’s almost impossible to figure out what it’s about.

One of the nice things about being a billionaire is that even if you have no idea what you believe or any sense of how the political system works in theory or in practice, you can meet with the actual players and have them explain it to you. That’s what Parker has been doing, meeting with politicians of such disparate ideologies as Rand Paul, Bill de Blasio and Charlie Crist. I’m sure they all told him to put them on speed dial and to call day or night if he had any questions.

His plan, if one can call it that, makes the naive young heirs to the great fortunes look like grizzled old political veterans by comparison:

Unlike other politically-inclined billionaires, such as the conservative Koch brothers and liberal environmentalist Tom Steyer, Parker hopes to avoid a purely partisan role as he ventures more deeply into politics.

Having donated almost exclusively to Democrats up to this point, Parker made a trip to Washington in December for the purpose of meeting quietly with Republican officeholders and strategists around town. He plans to donate to both sides starting this year, associates say, for the first time committing big sums to aid Republicans he views as credible deal-makers in a bitterly divided Congress.

He’s not even a Mugwump. He’s just a mess. Apparently, he thinks he can “make Washington work” by financing a group of deal makers in both parties who will knock some heads together and get the job done. What, exactly, it is they are supposed to get done remains a mystery. Indeed, from the sound of it, it doesn’t really matter.

I have an idea for Parker. There’s a group of “activists” out there who are right up his alley. He could just buy their outfit outright and rebrand it to his liking. It’s called “No Labels” and they’ve been flogging this idea of bipartisan nothingness for a very long time. For some unknown reason it just hasn’t taken hold with the public. But if anyone can market this dog, it’s the guy who made hundreds of millions for the singular advice to take “the” out of “The Facebook.”

According to the article, a battalion of opportunistic political consultants from across the ideological spectrum is already on the payroll and is going to make a whole lot of money from this quixotic venture, however it goes. So no matter what, I suppose he’s at least trickling some of his wealth down to the lower orders. In the current political environment run by radical right-wing billionaires, Mugwumps and fools, that may be the best we can hope for.

Heather Digby Parton is a writer also well-known as “Digby.” Her political and cultural observations can be found at www.digbysblog.blogspot.com.

 

http://www.salon.com/2014/04/22/young_rich_and_politically_ignorant_the_next_generation_of_libertarian_billionaires/?source=newsletter

“Cubed”: How the American office worker wound up in a box

From the Organization Man to open-plan design, a new history of the way the office has shaped the modern world

 

"Cubed": How the American office worker wound up in a box

Over the past week, as I’ve been carrying around a copy of Nikil Saval’s “Cubed: A Secret History of the Workplace,” I’ve gotten some quizzical looks. “It’s a history of the office,” I’d explain, whereupon a good number of people would respond, “Well, that sounds boring.”

It isn’t. In fact, “Cubed” is anything but, despite an occasional tendency to read like a grad school thesis. The fact that anyone would expect it to be uninteresting is striking, though, and marks an unexpected limit to the narcissistic curiosity of the average literate American. The office, after all, is where most contemporary labor takes place and is a crucible of our culture. We’ve almost all worked in one. Of course it’s a subject that merits a thoroughly researched and analytical history, because the history of the office is also the history of us.

Saval’s book glides smoothly between his two primary subjects: the physical structure of offices and the social institution of white-collar work over the past 150 years or so. “Cubed” encompasses everything from the rise of the skyscraper to the entrance of women into the professional workplace to the mid-20th-century angst over grey-flannel-suit conformity to the dorm-like “fun” workplaces of Silicon Valley. His stance is skeptical, a welcome approach given that most writings on the contemporary workplace are rife with dubious claims to revolutionary innovation — office design or management gimmicks that bestselling authors indiscriminately pounce on like magpies seizing glittering bits of trash.

Some of the most fascinating parts of “Cubed” come in the book’s early chapters, in which Saval traces the roots of the modern office to the humble 19th-century countinghouse. Such firms — typically involved in the importation or transport of goods — were often housed in a single room, with one or more of the partners working a few feet away from a couple of clerks, who copied and filed documents, as well as a bookkeeper. A novice clerk might earn less than skilled manual laborers, but he had the potential to make significantly more, and he could boast the intangible but nevertheless significant status of working with his mind rather than his hands.



Even more formative to the identity of white-collar workers (so named for the detachable collars, changed daily in lieu of more regular washings of the actual shirt, that served as a badge of the class) was the proximity to the boss himself and the very real possibility of advancement to the role of partner. Who better to marry the boss’ daughter and take over when he retired? Well, one of the other clerks is who, and in this foundational rivalry much of the character of white-collar work was set. “Unlike their brothers in the factory, who had begun to see organizing on the shop floors as a way to counter the foul moods and arbitrary whims of their bosses,” Saval writes, “clerks saw themselves as potential bosses.” Blue-collar workers, by contrast, knew they weren’t going to graduate to running the company one day.

The current American ethic of self-help and individualism owes at least as much to the countinghouse as it does to the misty ideal of the Jeffersonian yeoman farmer. An ambitious clerk identified and sought to ingratiate himself with the partners, not his peers, who were, after all, his competitors. He was on his own, and had only himself to blame if he failed. A passion for self-education and self-improvement, via books and night schools and lecture series, took root and flourished. So did a culture of what Saval calls “unctuous male bonding,” the ancestor of the 20th century’s golf outings and three-martini lunches, customs that would make it that much harder for outsiders like women and ethnic minorities to rise in the ranks — once they managed to get into the ranks in the first place.

The meritocratic dream became a lot more elusive in the 1920s, when the number of Americans employed in white-collar work surpassed the number of blue-collar workers for the first time. Saval sees this stage in the evolution of the office as epitomized in the related boom in skyscrapers. The towering buildings, enabled by new technologies like elevators and steel-frame construction, were regarded as the concrete manifestation of American boldness and ambition; certainly modern architects, reaching for new heights of self-aggrandizement, encouraged that view. They rarely acknowledged that a skyscraper was, at heart, a hive of standardized cells in which human beings were slotted like interchangeable parts. Most of the workers inhabiting those cells, increasingly female, could not hope to climb to the executive suites.

Office workers had always made observers uncomfortable; the clerk, with his “minute leg, chalky face and hollow chest,” was one of the few members of the American multitude that Walt Whitman scorned to contain. After World War II, that unease grew into a nationwide obsession, bumping several works of not-so-pop sociology — “The Lonely Crowd,” “The Man in the Gray Flannel Suit,” “The Organization Man,” etc. — onto the bestseller lists. In turn, the challenge of managing this new breed of “knowledge workers” became the subject of intensive rumination and theorizing. Saval lists Douglas McGregor’s 1960 guide, “The Human Side of Enterprise,” as the seminal work in a field that would spawn such millionaire gurus as Tom Peters and Peter Drucker.

Office design — typically regimented rows of identical desks — was deemed in need of an overhaul as well, and perhaps the most rueful story Saval recounts is that of Robert Propst, hired to head the research wing of the Herman Miller company in 1958. Propst made an intensive study of white-collar work and, in 1964, Herman Miller unveiled his Action Office, a collection of pieces designed to support work on “thinking projects.” Saval praises this incarnation of the Action Office as the first instance in which “the aesthetics of design and progressive ideas about human needs were truly united,” but it didn’t entirely catch on until the unveiling of Action Office II, which incorporated wooden frames covered with fabric that served as short, modular, temporary walls.

Oh, what a difference 30 degrees makes! Propst’s original Action Office II arranged these walls at 120-degree angles, a configuration that looked and felt open and dynamic. One of Propst’s colleagues told Saval of the dismal day that “someone figured out you don’t need the 120-degree [angle] and it went click.” Ninety degrees used up the available space much more efficiently, enabling employers to cram in more workstations. The American office worker had been cubed.

The later chapters of “Cubed” speak to more recent trends, like the all-inclusive, company-town-like facilities of tech firms like Google and wacky experiments in which no one is allowed to have a permanent desk at all. Saval visits the Los Angeles office of the ad agency Chiat/Day, which is organized around an artificial main street, features a conference table made of surfboards and resembles nothing so much as a theme park. Using architecture and amenities to persuade employees that their work is also play is a gambit to keep them on the premises and producing for as much of the day (and night) as possible. It all seems a bit desperate. In my own experience, if people 1) are paid sufficiently; 2) like the other people they work with; and 3) find the work they do a meaningful use of their energy and skills, then they don’t really care if they work in cubicles or on picnic benches. Item No. 3 is a bitch, though, the toughest criterion for most contemporary corporations to meet. Maybe that’s what they ought to be worrying about instead.

 

Laura Miller is a senior writer for Salon. She is the author of “The Magician’s Book: A Skeptic’s Adventures in Narnia” and has a Web site, magiciansbook.com.

 

http://www.salon.com/2014/04/20/cubed_how_the_american_office_worker_wound_up_in_a_box/?source=newsletter

The 420 Myth: How Did ‘420’ Become Synonymous with Pot?

 
There are still a bong-load of misconceptions revolving around the term’s supposed derivation.

APRIL 20: A marijuana activist dancing to live music during the annual marijuana 420 event at Yonge & Dundas Square on April 20, 2012, in Toronto, Canada.

The following article first appeared in The 420 Times

The term “420” is unquestionably the most familiar turn of phrase used among pot-partakers all around the world. Most of us involved in the marijuana-consuming community are aware of the most popular version of how the term came to be, but there are still a bong-load of misconceptions revolving around the term’s supposed derivation.

The most widely agreed-upon version involves a group of ganja-tokin’ teenagers who attended San Rafael High School in California back in the 1970s.

The group of pubescent pot smokers referred to themselves as the Waldos due to their chosen hangout spot, a wall outside their high school.

And just what time did the Waldos meet after being dismissed from school for the day? Yup, you guessed it! 4:20pm. And so, in a nutshell, the term “420” was by all accounts born.

There’s no documented evidence of the Waldos kickin’ it at 4:20pm around the Louis Pasteur statue on the school grounds contemplating their mission, but it does seem to be the most widespread point of reference.

But what about the other believed beginnings regarding the beloved expression? Is there any truth to those tales?

Common myth 1: The day 4/20 commemorates the death and/or birth of reggae musician Bob Marley.

Although Bob was, and still is, greatly admired and celebrated throughout the marijuana society, April 20 is not the day of his birth or his unfortunate passing. The godfather of reggae was in reality born on February 6 back in 1945.

It’s possible that Bob smoked over 420 spliffs in his lifetime, but he left this plane of existence on May 11, 1981, shortly after the day that has become known as the holiest of high holidaze among frequent tokers.

Common myth 2: The number 420 is police radio-code for “marijuana smoking in progress” and that’s how the term originated.

“Attention all units, attention all units, drop your doughnuts! We have a code 420 taking place all across the nation on a daily basis! We need any available units to please respond immediately! Oh the humanity! Over.”

Well? No. There may be police radio-codes in existence bearing the number 420, but they’re not associated with the notorious time of day, time of year and exceedingly popular number that we diehard tokers have come to know and worship on the regular.

Common myth 3: The date 4/20 is an observance of Adolf Hitler’s birth.

Who in their right mind would celebrate Hitler’s birthday? Yes, the villainous genocidal murderer was apparently dropped on his head repeatedly on April 20, 1889 (how else would one explain his insane ways?), but that has absolutely zero connection with the great day of merriment marijuana tokers enjoy each year.

Maybe if he’d passed away on 4/20 there would be some cause for celebration, but I doubt his own mother celebrated his birthday. And even though he was a drug addict, he most likely steered clear of marijuana for fear of it curbing his hatred.

Common myth 4: The term “420” derived from a Bob Dylan song.

As historic folklore or some toker hyped-up on a Sativa strain would have you believe, the term “420” originated via Dylan’s tune “Rainy Day Women #12 & 35,” in which he repeatedly chants, “Everybody must get stoned!” And in addition, when one multiplies the numbers 12 and 35, the result is 420! (Let’s see here, carry the one….?)

This has to be true, right? Well, no. Although it would be the most believable and quite possibly the coolest out of all the misconceptions, it just ain’t so, yo!

***

If you conduct an internet search for information regarding the origin of the term “420,” the list of mythical explanations you can accumulate rivals the lore surrounding Area 51 and Roswell! And I’m quite certain that if you were to search hard enough you could even find a fable linking the term’s origin to the famous alien warehouse! But I digress.

Regardless of how the phrase truly came about, it is here to stay. And for the majority of marijuana smokers it is 4:20 all day, every day (or it’s always 4:20 somewhere), and the term will irrefutably be a part of the marijuana subculture’s vocabulary for at least another 420 years, give or take.

 

http://www.alternet.org/drugs/420-myth-how-did-420-become-synonymous-pot?akid=11733.265072.XziZQx&rd=1&src=newsletter983505&t=11&paging=off&current_page=1#bookmark

The Bay is burning! Google Glass, techno-rage and the battle for San Francisco’s soul

The advent of Google Glass has started an incendiary new chapter in tech’s culture wars. Here’s what’s at stake
Sergey Brin (Credit: Reuters/Stephen Lam/Ilya Andriyanov via Shutterstock/Salon)

In San Francisco, the tech culture wars continue to rage. On April 15, Google opened up purchases of its Google Glass headgear to the general public for 24 hours. The sale was marked by mockery, theft and the continuing fallout from an incident a few days earlier, when a Business Insider reporter covering an anti-eviction protest had his Glass snatched and smashed.

That same day, protesters organized by San Francisco’s most powerful union marched to Twitter’s headquarters — a major San Francisco gentrification battleground — and presented the company with a symbolic tax bill, designed to recoup the “millions” that some San Franciscans believe the city squandered by bribing Twitter with a huge tax break to stay in the city.

We learned two things on April 15. First, Google isn’t about to give up on its plans to make Glass the second coming of the iPhone, even if it’s clear that a significant number of people consider Google Glass to be a despicable symbol of the surveillance society and a pricey calling card of the techno-elite. Second, judging by the march on Twitter, the tide of anti-tech protest sentiment has yet to crest in the San Francisco Bay Area. The two points turn out to be inseparable. Scratch an anti-tech protester and you are unlikely to find a fan of Google Glass.

What’s it all mean? Earlier this week, after I promoted an article on Twitter that attempted to explore reasons for anti-Glass hatred, I received a one-word tweet in response: “Neoluddism.”

The Luddites of the early 19th century are famous for smashing weaving machinery in a fruitless attempt to resist the reshaping of society and the economy by the Industrial Revolution. They took their name from Ned Ludd, a possibly apocryphal character who is said to have smashed two stocking frames in a fit of rage — thus inspiring a rebellion. While I can’t be certain, I suspect that my correspondent was deploying the term in the sense most familiar to pop culture — the Luddite as barbarian goon, futilely standing against the relentless march of progress.



But the story isn’t quite that simple. Yes, the Luddite movement may have been smashed by the forces of the state and the newly ascendant industrialist bourgeoisie. Yes, the Luddites may never have had the remotest chance of maintaining their pre-industrial way of life in the face of the steam engine. But there is a version of history in which the Luddites were far from unthinking goons. Instead, they were acute critics of their changing times, grasping the first glimpse of the increasingly potent ways in which capital was learning to exploit labor. In this view, the Luddites were actually the avant-garde for the formation of working-class consciousness, and paved the way for the rise of organized labor and trade unions. It’s no accident that Ned Ludd hailed from Nottingham, right up against Sherwood Forest.

Economic inequality and technologically induced dislocation? Ned Ludd, that infamous wrecker of weaving machinery, would recognize a clear echo of his own time in present-day San Francisco. But there’s more to see here than just the challenge of a new technological revolution. Just as the Luddites, despite their failure, spurred the creation of working-class consciousness, the current Bay Area tech protests have had a pronounced political effect. While the tactics range from savvy, well-organized protest marches to juvenile acts of violence, the impact is clear. The attention of political leaders and the media has been engaged. Everyone is paying attention.

 

 

* * *

If you live in San Francisco, you may have seen them around town: Decals on bar windows that state “Google Glass is barred on these premises.” They are the work of an outfit called StopTheCyborgs.org, a group of scientists and engineers who have articulated a critique of Google Glass that steers cagily away from the half-baked nonsense of Counterforce.

I contacted StopTheCyborgs by email and asked them how they responded to being called “neoluddites.”

“If ‘neoluddism’ means blindly being anti-technology then we refute the charge,” said Jack Winters, who described himself as a Scala and Java developer. “If ‘neoluddism’ means not being blindly pro-technology then guilty as charged.”

“We are technologically sophisticated enough to realize that technology is politics and code is law,” continued Winters. “Technology isn’t some external force of nature. It is created and used by people. It has an effect on power relations. It can be good, it can be bad. We can choose what kind of society we want rather than passively accepting that ‘the future’ is whatever data-mining corporations want.”

“Basically anyone who views critics of particular technologies as ‘luddites’ fundamentally misunderstands what technology is. There is no such thing as ‘technology.’ Rather there are specific technologies, produced by specific economic and political actors, and deployed in specific economic and social contexts. You can be anti-nukes without being anti-antibiotics. You can be pro-surveillance of powerful institutions without being pro-surveillance of individual people. You can work on machine vision for medical applications while campaigning against the use of the same technology for automatically identifying and tracking people. How? Because you take a moral view of the likely consequences of a technology in a particular context.” [Emphasis added.]

The argument made by StopTheCyborgs resonates with one of the core observations that revisionist historians have made about the original Luddites: They were not indiscriminate in their assaults on technology. (At least not at first.) They chose to destroy machines owned by employers who were acting in ways they believed were particularly economically harmful, while leaving other machines undamaged. To translate that to a present-day stance: It is not hypocritical for protesters to argue that Glass embodies surveillance in a way that iPhones don’t, or to critique technology’s impact on inequality via Twitter or Facebook. Every mode of technology needs to be evaluated on its own merits. Some start-up entrepreneurs might legitimately be using technology to achieve a social good. Some tech tycoons might be genuinely committed to a higher standard of life for all San Franciscans. Some might just be tools. So Jack Winters of StopTheCyborgs is correct: The deployment of different technologies has different consequences, and those consequences require a social and political response.

This is not to say that ripping Google Glass from the face of a young reporter, or otherwise demonizing individuals just because they happen to be employed by a particular company, is comparable to Ned Ludd’s destruction of two stocking frames. But Glass is just as embedded in the larger transformations we are going through as the spinning jenny was in the Industrial Revolution. By taking it seriously, we are giving “the second machine age” the respect it deserves.

The question is: Is Google?

* * *

I tried to find out from Google how many units of Google Glass had been sold during the one-day special promotion. I received a statement that read, “We were getting through our stock faster than we expected, so we decided to shut the store down. While you can still access the site, Glass will be marked as sold out.”

I followed up by asking how Google was coping with the fact that its signature device had become a symbol of tech-economy driven gentrification.

“It’s early days and we are thinking very carefully about how we design Glass because new technology always raises new issues,” said a Google spokesperson. “Our Glass Explorers come from all walks of life. They are firefighters, gardeners, athletes, moms, dads and doctors. No one should be targeted simply because of the technology they choose. We find that when people actually try Glass firsthand, they understand the philosophy that underpins it: Glass lets you look up and engage with the world around you, rather than looking down and being distracted by your technology.”

You can hear an echo here of Ned Ludd in the statement that “new technology raises new issues.” But the rest is just marketing zombie chatter, about as useless in its own way as some of the more overheated and unhinged rhetoric from the more extreme dissident wings of Bay Area protest. When a group styling itself “Counterforce” shows up at the home of a Google executive, demands $3 billion to build “anarchist colonies” and declares, as Adrianne Jeffries documented in The Verge, that their goal is “to destroy the capitalist system … [and] … create a new world without an economy,” well, good luck with that. We are a long way from “the precipice of a complete anarcho-primitivist rebellion against the technocracy.”

One thing seems reasonably clear: Moms and firefighters might be wearing Google Glass, but if Ned Ludd were around today, he’d probably be looking for different accessories.

Apparently you can’t be empathetic, or help the homeless, without a GoPro

Today in bad ideas: Strapping video cameras to homeless people to capture “extreme living”

GoPro cameras are branded as recording devices for extreme sports, but a San Francisco-based entrepreneur had a different idea of what to do with the camera: Strap it to a homeless man and capture “extreme living.”

The project is called Homeless GoPro, and it involves learning the first-person perspective of homeless people on the streets of San Francisco. The website explains:

“With a donated HERO3+ Silver Edition from GoPro and a small team of committed volunteers in San Francisco, Homeless GoPro explores how a camera normally associated with extreme sports and other ’hardcore’ activities can showcase courage, challenge, and humanity of a different sort - extreme living.”

The intentions of the founder, Kevin Adler, seem altruistic. His uncle was homeless for 30 years, and after visiting his gravesite he decided to start the organization and help others who are homeless.

The first volunteer to film his life is a man named Adam, who has been homeless for 30 years, six of those in San Francisco. There are several edited videos of him on the organization’s site.

In one of the videos, titled “Needs,” Adam says, “I notice every day that people are losing their compassion and empathy — not just for homeless people — but for society in general. I feel like technology has changed so much — where people are emailing and don’t talk face to face anymore.”

Without knowing it, Adam has critiqued the entire project, which is attempting to use technology (a GoPro) to garner empathy and compassion. It is a sad reminder that humanity can ignore the homeless population in person on a day-to-day basis, and needs a video to build empathy. Viewers may feel a twinge of guilt as they sit removed from the situation, watching a screen.

According to San Francisco’s Department of Human Services’ biennial count, there were 6,436 homeless people living in San Francisco (county and city). “Of the 6,436 homeless counted,” a press release stated, “more than half (3,401) were on the streets without shelter, the remaining 3,035 were residing in shelters, transitional housing, resource centers, residential treatment, jail or hospitals.” The homeless population is subject to hunger, illness, violence, extreme weather conditions, fear and other physical and emotional ailments.



Empathy — and the experience of “walking a mile in somebody’s shoes” — are important elements of social change, and these documentary-style videos do give Adam a medium and platform to be a voice for the homeless population. (One hopes that the organization also helped Adam in other ways — shelter, food, a place to stay on his birthday — and isn’t just using him as a human tool in its project.) But something about the project still seems off.

It is in part because of the product placement. GoPro donated a $300 camera for the cause, which sounds great until you remember that GoPro is a billion-dollar company owned by billionaire Nick Woodman. If GoPro wants to do something to help the Bay Area homeless population, there are better ways to go about it than donating a camera.

As ValleyWag’s Sam Biddle put it, “Stop thinking we can innovate our way out of one of civilization’s oldest ailments. Poverty, homelessness, and inequality are bigger than any app …”

 

http://www.salon.com/2014/04/17/today_in_bad_ideas_strapping_video_cameras_to_homeless_people_to_capture_extreme_living/?source=newsletter

Depriving homeless people of their last shelter in life is Silicon Valley at its worst.


The 1% Wants to Ban Sleeping in Cars Because It Hurts Their ‘Quality of Life’

Photo Credit: meunierd/Shutterstock.com

Across the United States, many local governments are responding to skyrocketing levels of inequality and the now decades-long crisis of homelessness among the very poor … by passing laws making it a crime to sleep in a parked car.

This happened most recently in Palo Alto, in California’s Silicon Valley, where new billionaires are seemingly minted every month – and where 92% of homeless people lack shelter of any kind. Dozens of cities have passed similar anti-homeless laws. The largest of them is Los Angeles, the longtime unofficial “homeless capital of America”, where lawyers are currently defending a similar vehicle-sleeping law before a skeptical federal appellate court. Laws against sleeping on sidewalks or in cars are called “quality of life” laws. But they certainly don’t protect the quality of life of the poor.

To be sure, people living in cars cannot be the best neighbors. Some people are able to acquire old and ugly – but still functioning – recreational vehicles with bathrooms; others do the best they can. These same cities have resisted efforts to provide more public toilet facilities, often on the grounds that this will make their city a “magnet” for homeless people from other cities. As a result, anti-homeless ordinances often spread to adjacent cities, leaving entire regions without public facilities of any kind.

Their hope, of course, is that homeless people will go elsewhere, despite the fact that the great majority of homeless people are trying to survive in the same communities in which they were last housed – and where they still maintain connections. Americans sleeping in their own cars literally have nowhere to go.

Indeed, nearly all homelessness in the US begins with a loss of income and an eviction for nonpayment of rent – a rent set entirely by market forces. The waiting lists are years long for the tiny fraction of housing with government subsidies. And rents have risen dramatically in the past two years, in part because long-time tenants must now compete with the millions of former homeowners who lost their homes in the Great Recession.

The paths from eviction to homelessness follow familiar patterns. For the completely destitute without family or friends able to help, that path leads more or less directly to the streets. For those slightly better off, unemployment and the exhaustion of meager savings – along with the good graces of family and friends – eventually leaves people with only two alternatives: a shelter cot or their old automobile.

However, in places like Los Angeles, the shelters are pretty much always full. Between 2011 and 2013, the number of unsheltered homeless people increased by 67%. In Palo Alto last year, there were 12 shelter beds for 157 homeless individuals. Homeless people in these cities do have choices: they can choose to sleep in a doorway, on a sidewalk, in a park, under a bridge or overpass, or – if they are relatively lucky – in a car. But these cities have ordinances that make all of those choices a criminal offense. The car is the best of bad options, now common enough that local bureaucrats have devised a new, if oxymoronic, term – the “vehicularly housed”.

People sleeping in cars try to find legal, nighttime parking places, where they will be less apparent and arouse the least hostility. But cities like Palo Alto and Los Angeles often forbid parking between 2am and 5am in commercial areas, where police write expensive tickets, arrest repeat offenders and impound their vehicles. That leaves residential areas, where overnight street parking cannot, as a practical matter, be prohibited.

One finds the “vehicularly housed” in virtually every neighborhood, including my own. But the animus that drives anti-homeless laws seems to be greatest in the wealthiest cities, like Palo Alto, which has probably spawned more per-capita fortunes than any city on Earth, and in the more recently gentrified areas like Los Angeles’ Venice. These places are ruled by majorities of “liberals” who decry, with increasing fervor, the rapid rise in economic inequality. Nationally, 90% of Democrats (and 45% of Republicans) believe the government should act to reduce the rich-poor gap.

It is easy to be opposed to inequality in the abstract. So why are Los Angeles and Palo Alto spending virtually none of their budgets on efforts to provide housing for the very poor and homeless? When the most obvious evidence of inequality parks on their street, it appears, even liberals would rather just call the police. The word from the car: if you’re not going to do anything to help, please don’t make things worse.

http://www.alternet.org/news-amp-politics/1-wants-ban-sleeping-cars-because-it-hurts-their-quality-life?akid=11722.265072.4yEWu6&rd=1&src=newsletter982385&t=3&paging=off&current_page=1#bookmark

The commons lies at the heart of a major cultural and social shift now underway.

The New Economic Events Giving Lie to the Fiction That We Are All Selfish, Rational Materialists

Photo Credit: AllanGregg; Screenshot / YouTube.com

Jeremy Rifkin’s new book, “The Zero Marginal Cost Society,” brings welcome new attention to the commons just as it begins to explode in countless new directions. His book focuses on one of the most significant vectors of commons-based innovation — the Internet and digital technologies — and documents how the incremental costs of nearly everything are rapidly diminishing, often to zero. Rifkin explored the sweeping implications of this trend in an excerpt from his book, pointing to the “eclipse of capitalism” in the decades ahead.

But it’s worth noting that the commons is not just an Internet phenomenon or a matter of economics. The commons lies at the heart of a major cultural and social shift now underway. People’s attitudes about corporate property rights and neoliberal capitalism are changing as cooperative endeavors — on digital networks and elsewhere — become more feasible and attractive. This can be seen in the proliferation of hackerspaces and Fablabs, in the growth of alternative currencies, in many land trusts and cooperatives and in seed-sharing collectives and countless natural resource commons.

Beneath the radar screen of mainstream politics, which remains largely clueless about such cultural trends on the edge, a new breed of commoners is building the vision of a very different kind of society, project by project. This new universe of social activity is being built on the foundation of a very different ethics and social logic than that of homo economicus — the economist’s fiction that we are all selfish, utility-maximizing, rational materialists.

Durable projects based on social cooperation are producing enormous amounts of wealth; it’s just that this wealth is generally not monetized or traded. It’s socially or ecologically embedded wealth that is managed by self-styled commoners themselves. Typically, such commoners act more as stewards of their common wealth than as owners who treat it as private capital. Commoners realize that a life defined by impersonal transactions is not as rich or satisfying as one defined by abiding relationships. The larger trends toward zero-marginal-cost production make it perfectly logical for people to seek out commons-based alternatives.

You can find these alternatives popping up all over: in the 10,000-plus open access scientific journals whose research is freely shareable to anyone and in community gardens that produce both fresh vegetables and neighborliness. In hundreds of “timebanks” that let people meet basic needs through time-barters, and in highly productive, ecologically minded commons-based agriculture.

Economists tend to ignore such wealth because it generally doesn’t involve market activity. No cash is exchanged, no legal contracts signed and no measurable Gross Domestic Product is generated. But the wealth of the commons is not accumulated like capital; its vitality comes from being circulated. As I describe in my new book, “Think Like a Commoner,” the story of our time is the rise of the commons as a new way to emancipate oneself from predatory markets and to collaborate with peers to protect and expand one’s shared wealth. This is a story that is being played out in countless digital arenas, as Rifkin documents, but also in such diverse contexts as cities, farming, museums, theaters and indigenous communities.

One reason that so many commons arise and flourish is because they help their participants meet important basic needs in fair, responsive and socially satisfying ways. That’s quite attractive to those who are otherwise held captive by conventional, predatory markets. Big agriculture is more concerned with efficiency and profit than ecological stewardship. Large transnationals are more interested in rip-and-run resource extraction (mining, fracking, timber) than in the protection of sacred lands and time-honored ways of life. “Copyright industries” like Hollywood and record labels want to treat all of culture as tightly controlled “product,” not as something that is freely shared and built upon.

Nowadays the commons has a special appeal for people of the global South who are often victimized by the “enclosures” inflicted by neoliberal investment and trade policies. Enclosures are the act of privatizing and commodifying previously shared resources. For example, millions of acres of land in Africa, Asia and Latin America are currently being seized by investors in a massive international land grab. Hedge funds and even the governments of South Korea, Saudi Arabia and China are enacting an eerie replay of the English enclosure movement. Commoners who have worked the land for generations as a customary right are being forced to migrate to cities in search of work, where they often end up as paupers and sweatshop employees: a modern-day replay of Charles Dickens’ novels.

By the lights of modern economic theory, it’s all for the best because it promotes “development” (i.e., consumerism and other market dependencies). But many commoners are now fighting the dispossession and dependencies that enclosures entail by struggling to retain some measure of dignity and self-determination through their commons. The International Land Alliance estimates that 2 billion people around the world depend upon subsistence commons of forests, fisheries, arable land, water and wild game to meet their everyday needs.

Strangely, the leading introductory economics textbooks in the U.S. virtually ignore the commons except for the obligatory warning about the “tragedy of the commons.” They prefer not to recognize that the commons represents an entirely viable but different paradigm of “development” – one that can transcend the unsustainable consumerism, cultural disintegration and economic growth of our time. As the late Nobel Prize winner Elinor Ostrom showed, commons are an entirely sustainable, ecologically friendly model of resource management, contrary to the “tragedy” parable.

Commoners are not all alike. They have many profound differences in their governance systems, management practices and cultural values. And commons are not without their conflicts, struggles and failures. That said, most commoners tend to share fundamental commitments to participation, openness, inclusiveness, social equity, ecological respect and human rights.

The politics of the commons movement can be confounding to conventional observers because political goals are not the paramount priority; protection of the commons is. Commoners tend to be more focused on “prepolitical” social activity and relationships, which is why commons are embraced by such a wide variety of people. As German commons advocate Silke Helfrich notes in The Wealth of the Commons, “Commons draw from the best of all political ideologies.” Conservatives like the tendency of commons to promote responsibility. Liberals are pleased with the focus on equality and basic social entitlement. Libertarians like the emphasis on individual initiative. And leftists like the idea of limiting the scope of the Market.

It is important to realize that the commons is not a discussion about objects, but a discussion about who we are and how we treat each other. What decisions are being made about our resources? Does economic activity satisfy basic human needs and honor human rights and dignity? These kinds of discussions are not often heard in conventional business and policy circles, alas.

To conventional minds, the idea of the commons as a paradigm of social governance appears either utopian or communistic, or at the very least, impractical. But a diverse, eclectic universe of commons around the world demonstrates otherwise. It is the neoliberal project of ever-expanding consumption on a global scale that is the utopian, totalistic dream. It manifestly cannot fulfill its mythological vision of human progress through ubiquitous market activity and greater heaps of private consumption, if only because it demands more from Nature than it can possibly deliver – while inflicting too much social inequity and disruption as well.

Fortunately, the Internet and indigenous peoples, the re-localization movement and hackers, community foresters and fishing cooperatives and many, many others, are showing that the commons can be an effective vehicle for social and political emancipation. Jeremy Rifkin’s astute analysis of this powerful trend will help open up a much-needed discussion in the stodgy precincts of conventional economics.

David A. Bollier is an author, activist, blogger and independent scholar with a primary focus on “the commons” as a new paradigm for economics, politics, and culture. He is the founding editor of Onthecommons.org (2002-2010), co-founder and principal of the international consulting project Commons Strategy Group, and co-director of the Commons Law Project. Bollier is the author of numerous books, including “Think Like a Commoner: A Short Introduction to the Life of the Commons.”

 http://www.alternet.org/economy/were-about-enter-whole-new-era-economics-and-its-going-make-everyone-feel-lot-more-wealthy?akid=11716.265072.WdcnEx&rd=1&src=newsletter981596&t=7&paging=off&current_page=1#bookmark

The Internet’s destructive gender gap: Why the Web can’t abandon its misogyny

People like Ezra Klein are showered with opportunity, while women face an online world hostile to their ambitions

Ezra Klein (Credit: MSNBC)
This piece originally appeared on TomDispatch.

The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.

An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.

In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.

Powerful and exceedingly familiar hierarchies have come to define the digital realm, whether you’re considering its economics or the social world it reflects and represents.  Not surprisingly, then, well-off white men are wildly overrepresented both in the tech industry and online.

Just take a look at gender, and the Web comes quickly into focus, leaving you with a vivid sense of which direction the Internet is heading in and — small hint — it’s not toward equality or democracy.

Experts, Trolls, and What Your Mom Doesn’t Know

As a start, in the perfectly real world women shoulder a disproportionate share of household and child-rearing responsibilities, leaving them substantially less leisure time to spend online. Though a handful of high-powered celebrity “mommy bloggers” have managed to attract massive audiences and ad revenue by documenting their daily travails, they are the exceptions not the rule. In professional fields like philosophy, law, and science, where blogging has become popular, women are notoriously underrepresented; by one count, for instance, only around 20% of science bloggers are women.

An otherwise optimistic white paper by the British think tank Demos touching on the rise of amateur creativity online reported that white males are far more likely to be “hobbyists with professional standards” than other social groups, while you won’t be shocked to learn that low-income women with dependent children lag far behind. Even among the highly connected college-age set, research reveals a stark divergence in rates of online participation.

Socioeconomic status, race, and gender all play significant roles in a who’s who of the online world, with men considerably more likely to participate than women. “These findings suggest that Internet access may not, in and of itself, level the playing field when it comes to potential pay-offs of being online,” warns Eszter Hargittai, a sociologist at Northwestern University. Put simply, closing the so-called digital divide still leaves a noticeable gap; the more privileged your background, the more likely that you’ll reap the additional benefits of new technologies.

Some of the obstacles to online engagement are psychological, unconscious, and invidious. In a revealing study conducted twice over a span of five years — and yielding the same results both times — Hargittai tested and interviewed 100 Internet users and found that there was no significant variation in their online competency. In terms of sheer ability, the sexes were equal. The difference was in their self-assessments.

It came down to this: The men were certain they did well, while the women were wracked by self-doubt. “Not a single woman among all our female study subjects called herself an ‘expert’ user,” Hargittai noted, “while not a single male ranked himself as a complete novice or ‘not at all skilled.’” As you might imagine, how you think of yourself as an online contributor deeply influences how much you’re likely to contribute online.

The results of Hargittai’s study hardly surprised me. I’ve seen endless female friends be passed over by less talented, more assertive men. I’ve had countless people — older and male, always — assume that someone else must have conducted the interviews for my documentary films, as though a young woman couldn’t have managed such a thing without assistance. Research shows that people routinely underestimate women’s abilities, not least women themselves.

When it comes to specialized technical know-how, women are assumed to be less competent unless they prove otherwise. In tech circles, for example, new gadgets and programs are often introduced as being “so easy your mother or grandmother could use them.” A typical piece in the New York Times was titled “How to Explain Bitcoin to Your Mom.” (Presumably, Dad already gets it.) This kind of sexism leapt directly from the offline world onto the Web and may only have intensified there.

And it gets worse. Racist, sexist, and homophobic harassment or “trolling” has become a depressingly routine aspect of online life.

Many prominent women have spoken up about their experiences being bullied and intimidated online — scenarios that sometimes escalate into the release of private information, including home addresses, e-mail passwords, and social security numbers, or simply devolve into an Internet version of stalking. Esteemed classicist Mary Beard, for example, “received online death threats and menaces of sexual assault” after a television appearance last year, as did British activist Caroline Criado-Perez after she successfully campaigned to get more images of women onto British banknotes.

Young women musicians and writers often find themselves targeted online by men who want to silence them. “The people who were posting comments about me were speculating as to how many abortions I’ve had, and they talked about ‘hate-fucking’ me,” blogger Jill Filipovic told the Guardian after photos of her were uploaded to a vitriolic online forum. Laurie Penny, a young political columnist who has faced similar persecution and recently published an ebook called Cybersexism, touched a nerve by calling a woman’s opinion the “short skirt” of the Internet: “Having one and flaunting it is somehow asking an amorphous mass of almost-entirely male keyboard-bashers to tell you how they’d like to rape, kill, and urinate on you.”

Alas, the trouble doesn’t end there. Women who are increasingly speaking out against harassers are frequently accused of wanting to stifle free speech. Or they are told to “lighten up” and that the harassment, however stressful and upsetting, isn’t real because it’s only happening online, that it’s just “harmless locker-room talk.”

As things currently stand, each woman is left alone to devise a coping mechanism as if her situation were unique. Yet these are never isolated incidents, however venomously personal the insults may be. (One harasser called Beard — and by online standards of hate speech this was mild — “a vile, spiteful excuse for a woman, who eats too much cabbage and has cheese straws for teeth.”)

Indeed, a University of Maryland study strongly suggests just how programmatic such abuse is. Those posting with female usernames, researchers were shocked to discover, received 25 times as many malicious messages as those whose designations were masculine or ambiguous. The findings were so alarming that the authors advised parents to instruct their daughters to use sex-neutral monikers online. “Kids can still exercise plenty of creativity and self-expression without divulging their gender,” a well-meaning professor said, effectively accepting that young girls must hide who they are to participate in digital life.

Over the last few months, a number of black women with substantial social media presences conducted an informal experiment of their own. Fed up with the fire hose of animosity aimed at them, Jamie Nesbitt Golden and others adopted masculine Twitter avatars. Golden replaced her photo with that of a hip, bearded, young white man, though she kept her bio and continued to communicate in her own voice. “The number of snarky, condescending tweets dropped off considerably, and discussions on race and gender were less volatile,” Golden wrote, marveling at how simply changing a photo transformed reactions to her. “Once I went back to Black, it was back to business as usual.”

Old Problems in New Media

Not all discrimination is so overt. A study summarized on the Harvard Business Review website analyzed social patterns on Twitter, where female users actually outnumbered males by 10%. The researchers reported “that an average man is almost twice [as] likely to follow another man [as] a woman” while “an average woman is 25% more likely to follow a man than a woman.” The results could not be explained by varying usage since both genders tweeted at the same rate.

Online as off, men are assumed to be more authoritative and credible, and thus deserving of recognition and support. In this way, long-standing disparities are reflected or even magnified on the Internet.

In his 2008 book The Myth of Digital Democracy, Matthew Hindman, a professor of media and public affairs at George Washington University, reports that of the top 10 blogs, only one belonged to a female writer. A wider census of every political blog with an average of over 2,000 visitors a week, 87 sites in total, found that only five were run by women; nor were there any “identifiable African Americans among the top 30 bloggers,” though there was “one Asian blogger, and one of mixed Latino heritage.” In 2008, Hindman surveyed the blogosphere and found it less diverse than the notoriously whitewashed op-ed pages of print newspapers. Nothing suggests that, in the intervening six years, things have changed for the better.

Welcome to the age of what Julia Carrie Wong has called “old problems in new media,” as the latest well-funded online journalism start-ups continue to be helmed by brand-name bloggers like Ezra Klein and Nate Silver. It is “impossible not to notice that in the Bitcoin rush to revolutionize journalism, the protagonists are almost exclusively — and increasingly — male and white,” Emily Bell lamented in a widely circulated op-ed. It’s not that women and people of color aren’t doing innovative work in reporting and cultural criticism; it’s just that they get passed over by investors and financiers in favor of the familiar.

As Deanna Zandt and others have pointed out, such real-world lack of diversity is also regularly seen on the rosters of technology conferences, even as speakers take the stage to hail a democratic revolution on the Web, while audiences that look just like them cheer. In early 2013, in reaction to the announcement of yet another all-male lineup at a prominent Web gathering, a pledge was posted on the website of the Atlantic asking men to refrain from speaking at events where women are not represented. The list of signatories was almost immediately removed “due to a flood of spam/trolls.” The conference organizer, a successful developer, dismissed the uproar over Twitter. “I don’t feel [the] need to defend this, but am happy with our process,” he stated. Instituting quotas, he insisted, would be a “discriminatory” way of creating diversity.

This sort of rationalization means technology companies look remarkably like the old ones they aspire to replace: male, pale, and privileged. Consider Instagram, the massively popular photo-sharing and social networking service, which was founded in 2010 but only hired its first female engineer last year. While the percentage of computer and information sciences degrees women earned rose from 14% to 37% between 1970 and 1985, that share had depressingly declined to 18% by 2008.

Those women who do fight their way into the industry often end up leaving — their attrition rate is 56%, or double that of men — and sexism is a big part of what pushes them out. “I no longer touch code because I couldn’t deal with the constant dismissing and undermining of even my most basic work by the ‘brogramming’ gulag I worked for,” wrote one woman in a roundup of answers to the question: Why are there so few female engineers?

In Silicon Valley, Facebook’s Sheryl Sandberg and Yahoo’s Marissa Mayer excepted, the notion of the boy genius prevails.  More than 85% of venture capitalists are men generally looking to invest in other men, and women make 49 cents for every dollar their male counterparts rake in — enough to make a woman long for the wage inequities of the non-digital world, where on average they take home a whopping 77 cents on the male dollar. Though 40% of private businesses are women-owned nationwide, only 8% of the venture-backed tech start-ups are.

Established companies are equally segregated. The National Center for Women and Information Technology reports that in the top 100 tech companies, only 6% of chief executives are women. The numbers of Asians who get to the top are comparable, despite the fact that they make up one-third of all Silicon Valley software engineers. In 2010, not even 1% of the founders of Silicon Valley companies were black.

Making Your Way in a Misogynist Culture

What about the online communities that are routinely held up as exemplars of a new, networked, open culture? One might assume from all the “revolutionary” and “disruptive” rhetoric that they, at least, are better than the tech goliaths. Sadly, the data doesn’t reflect the hype. Consider Wikipedia. A survey revealed that women make up less than 15% of the contributors to the site, despite the fact that they use the resource in equal numbers to men.

In a similar vein, collaborative filtering sites like Reddit and Slashdot, heralded by the digerati as the cultural curating mechanisms of the future, cater to users who are up to 87% male and overwhelmingly young, wealthy, and white. Reddit, in particular, has achieved notoriety for its misogynist culture, with threads where rapists have recounted their exploits and photos of underage girls got posted under headings like “Chokeabitch,” “N*****jailbait,” and “Creepshots.”

Though open source is often held up as a paragon of political virtue, evidence suggests that as few as 1.5% of open source programmers are women, a share far lower than in the computing profession as a whole. In response, analysts have blamed everything from chauvinism, assumptions of inferiority, and outrageous examples of impropriety (including sexual harassment at conferences where programmers gather) to a lack of women mentors and role models. Yet the advocates of open-source production continue to insist that their culture exemplifies a new and ethical social order ruled by principles of equality, inclusivity, freedom, and democracy.

Unfortunately, it turns out that openness, when taken as an absolute, actually aggravates the gender gap. The peculiar brand of libertarianism in vogue within technology circles means a minority of members — a couple of outspoken misogynists, for example — can disproportionately affect the behavior and mood of the group under the cover of free speech. As Joseph Reagle, author of Good Faith Collaboration: The Culture of Wikipedia, points out, women are not supposed to complain about their treatment, but if they leave — that is, essentially are driven from — the community, that’s a decision they alone are responsible for.

“Urban” Planning in a Digital Age

The digital is not some realm distinct from “real” life, which means that the marginalization of women and minorities online cannot be separated from the obstacles they confront offline. Comparatively low rates of digital participation and the discrimination faced by women and minorities within the tech industry matter — and not just because they give the lie to the egalitarian claims of techno-utopians. Such facts and figures underscore the relatively limited experiences and assumptions of the people who design the systems we depend on to use the Internet — a medium that has, after all, become central to nearly every facet of our lives.

In a powerful sense, programmers and the corporate officers who employ them are the new urban planners, shaping the virtual frontier into the spaces we occupy, building the boxes into which we fit our lives, and carving out the routes we travel. The choices they make can segregate us further or create new connections; the algorithms they devise can exclude voices or bring more people into the fold; the interfaces they invent can expand our sense of human possibility or limit it to the already familiar.

What vision of a vibrant, thriving city informs their view? Is it a place that fosters chance encounters or does it favor the predictable? Are the communities they create mixed or gated? Are they full of privately owned shopping malls and sponsored billboards or are there truly public squares? Is privacy respected? Is civic engagement encouraged? What kinds of people live in these places and how are they invited to express themselves? (For example, is trolling encouraged, tolerated, or actively discouraged or blocked?)

No doubt, some will find the idea of engineering online platforms to promote diversity unsettling and — a word with some irony embedded in it — paternalistic, but such criticism ignores the ways online spaces are already contrived with specific outcomes in mind.  They are, as a start, designed to serve Silicon Valley venture capitalists, who want a return on investment, as well as advertisers, who want to sell us things. The term “platform,” which implies a smooth surface, misleads us, obscuring the ways technology companies shape our online lives, prioritizing certain purposes over others, certain creators over others, and certain audiences over others.

If equity is something we value, we have to build it into the system, developing structures that encourage fairness, serendipity, deliberation, and diversity through a process of trial and error. The question of how we encourage, or even enforce, diversity in so-called open networks is not easy to answer, and there is no obvious and uncomplicated solution to the problem of online harassment. As a philosophy, openness can easily rationalize its own failure, chalking people’s inability to participate up to choice and, in keeping with the myth of meritocracy, blaming any disparities in audience on a lack of talent or will.

That’s what the techno-optimists would have us believe, dismissing potential solutions as threats to Internet freedom and as forceful interference in a “natural” distribution pattern. The word “natural” is, of course, a mystification, given that technological and social systems are not found growing in a field, nurtured by dirt and sun. They are made by human beings and so can always be changed and improved.

Astra Taylor is a writer, documentary filmmaker (including Zizek! and Examined Life), and activist. Her new book, “The People’s Platform: Taking Back Power and Culture in the Digital Age” (Metropolitan Books), has just been published. This essay is adapted from it. She also helped launch the Occupy offshoot Strike Debt and its Rolling Jubilee campaign.


http://www.salon.com/2014/04/10/the_internets_destructive_gender_gap_why_the_web_cant_abandon_its_misogyny_partner/?source=newsletter

Where is the protest? A reply to Graeber and Lapavitsas

by Jerome Roos on April 9, 2014


Yes, we’re nice people, and yes we have been sapped of our energy. But the main reasons we’re not protesting are deeper and must be targeted directly.


Last week, two commentaries appeared in The Guardian — one by David Graeber and the other by Costas Lapavitsas and Alex Politaki — basically asking the same question: given that we’re under such relentless assault by the rich and powerful, why are people not rioting in the streets? What happened to the indignation? The screws of austerity are only being tightened. So where are the protests? The two pieces provide two very different answers to the question, and while each contains a moment of truth, both ultimately remain unsatisfactory.

Before turning to the articles, however, we should note that things are not as bad as it would seem from a cursory glance at the headlines. Back in 2010-’11, popular protest was a novelty and it was all over the mainstream media. Today, resistance is widespread, but we no longer see it reported in the news. To give just the most obvious example: two weeks ago, Madrid experienced one of its biggest demonstrations since the start of the crisis, with hundreds of thousands taking to the streets. Despite the enormous turnout and the violent clashes that broke out towards the end of the march, the Spanish and international media chose to systematically ignore the event.

Do we care too much?

That said, it’s true that the protests have generally subsided in frequency and intensity since 2011. Why so? In his article, anthropologist David Graeber argues that the working class simply “cares too much.” In his words: “working-class people [are] much less self-obsessed [than the rich]. They care more about their friends, families and communities. In aggregate, at least, they’re just fundamentally nicer.” In a way, Graeber is right to highlight this moral chasm. Recent research has yielded a plethora of scientific evidence that the rich — and their “rational” acolytes in economics departments — are indeed much more selfish than common folk. Here in Athens, communal solidarity and mutual aid are singularly responsible for maintaining the social fabric in the face of this destructive selfishness of bankers and politicians.

But can we really infer from this somewhat moralistic observation that the fundamental niceness of working people — combined with the displacement of their sense of solidarity into abstract concepts like national identity — provides a “partial answer” to the mystery of the empty street? That conclusion strikes me as slightly misplaced. After all, as Graeber himself can attest, there were plenty of ordinary people out there in the streets in 2011, building up protest camps on the basis of solidarity. Why are we no longer out there today? Have we suddenly become so much more caring towards the rich and so much less solidary with one another? What changed? It seems to me that we should be focusing not so much on the moral virtues of workers but rather on the social causes of the ephemeral and ineffective nature of contemporary protest per se.

An economic double whammy?

Here, the article by political economist Costas Lapavitsas and journalist Alex Politaki — which focuses more specifically on European youth protest, although their question is basically the same as Graeber’s — provides a slightly more dynamic explanation. According to Lapavitsas and Politaki, “the answer seems to be that the European youth has been battered by a ‘double whammy’ of problematic access to education and rising unemployment.” This, in turn, has “sapped the rebellious energy of the young, forcing them to seek greater financial help from parents for housing and daily life.” As a result, “the young have been largely absent from politics, social movements and even from the spontaneous social networks that have dealt with the worst of the catastrophe.”

At first sight, this argument seems to have some explanatory merit. Upon closer inspection, however, it clearly contradicts itself. Back in 2010-’11, everyone — including Lapavitsas — cited rising unemployment as a major factor behind the protests. Now the same people are citing rising unemployment as a reason for the lack of protests? That explanation hardly seems to hold water. In 2012, Lapavitsas wrote that “this situation is manifestly untenable. It brings unemployment … and spreads hopelessness across Europe. As the eurozone moves deeper into recession in 2013, social and economic tensions will ratchet up across the continent.” Except, they didn’t. As the eurozone moved deeper into recession, the streets all but emptied out. You cannot retroactively account for that fact with the same economistic reasoning you once deployed to predict the opposite outcome, unless you explicitly posit the existence of some kind of threshold at which economic hardship starts to actively discourage popular protest — but Lapavitsas does not do that.

Precarity, anxiety, futility

So, apart from the most immediate factor inhibiting protest (i.e., violent state repression), why are we no longer out there in the streets? I would suggest that, if we look a bit deeper and move beyond mere surface manifestations, we can identify at least three interrelated factors — all long-term developments coming to the fore today — underlying the relatively ephemeral character of contemporary protest:

  1. The total disaggregation and atomization of the social fabric as a result of the rise of indebtedness and the precarious nature of work under financialized capitalism, along with the emergence of supposedly “revolutionary” social media and communication technologies, which may be very useful tools for coordinating protest but which render us increasingly incapable of holding together broad popular coalitions. The social atomicity of late capitalism inhibits the development of a sense of solidarity and makes it much harder to self-organize in the workplace and build strong and lasting autonomous movements from the grassroots up.
  2. The pervasive sense of anxiety wrought by the neoliberal mantra of permanent productivity and constant connectivity, which keeps people isolated and perpetually preoccupied with the exigencies of the present moment and thereby preempts strategic thinking and long-term grassroots organizing. Closely connected to the rise of indebtedness and precarity, anxiety becomes the dominant affect under financialized capitalism. While anxiety is easily transformed into brief outbursts of anger, its paralyzing effects also form a psychological barrier to investment and involvement in inter-personal relationships and long-term social projects.
  3. The overwhelming sense of futility that people experience in the face of an invisible and seemingly untouchable enemy — finance capital — that we simply cannot directly confront in the streets, nor meaningfully challenge in parliament or government. In the wake of the evident failure of recent mobilizations to produce any immediate change at the level of political outcomes or economic policy, people are understandably disappointed by the perceived pointlessness of street protest. Futility — the conviction that “there is no alternative” to capitalist control — thus becomes the most important weapon in the ideological arsenal of the neoliberal imaginary.

In a future article, I will try to dissect these three factors in greater detail and provide an overview of what I see as the main challenges that the movements face in rekindling the radical imagination. Here, however, I just want to highlight one critical point: neither Graeber’s moralistic narrative (counterposing the fundamental niceness of working people to the selfishness of the capitalists), nor Lapavitsas and Politaki’s economistic reading (explaining the decline in protest through the lack of access to jobs and higher education), provides very much in the way of an analytical-strategic framework to help revamp the resistance in this phase of relative demobilization. What we desperately need right now is a serious debate within the movements on how to break down the neoliberal control mechanisms of precarity, anxiety and futility — and how to adapt our protest tactics and organizing strategies accordingly.

If we are serious about moving beyond the revolutionary moment of 2011 and building a radically democratic anti-capitalist movement that can actually endure and change the material constitution of society, we will first of all need to find ways to disarm the structural and ideological mechanisms of capitalist control. While I do not pretend to have any easy answers on how to do this — David Graeber’s grassroots organizing in Occupy and his direct involvement in the Strike Debt campaign is much more instructive in this respect — it seems to me that recognizing the systemic importance of precarity, anxiety and futility is a crucial first step in the process of revamping the resistance. Only by directly targeting the structural, ideological and psychological mechanisms that sustain the rule of capital can we begin to recover a sense of social solidarity and craft lasting and meaningful alternatives to financial dictatorship.

Jerome Roos is a PhD candidate in International Political Economy at the European University Institute and founding editor of ROAR Magazine.