The Killing of America’s Creative Class


A review of Scott Timberg’s fascinating new book, ‘Culture Crash.’

Some of my friends became artists, writers, and musicians to rebel against their practical parents. I went into a creative field with encouragement from my folks. It’s not too rare for Millennials to have their bohemian dreams blessed by their parents, because, as progeny of the Boomers, we were mentored by aging rebels who idolized rogue poets, iconoclast cartoonists, and scrappy musicians.

The problem, warns Scott Timberg in his new book Culture Crash: The Killing of the Creative Class, is that if parents are basing their advice on how the economy used to support creativity – record deals for musicians, book contracts for writers, staff positions for journalists – then they might be surprised when their YouTube-famous daughter still needs help paying off her student loans. A mix of economic, cultural, and technological changes emanating from a neoliberal agenda, writes Timberg, “have undermined the way that culture has been produced for the past two centuries, crippling the economic prospects of not only artists but also the many people who supported and spread their work, and nothing yet has taken its place.”

 

Tech vs. the Creative Class

Timberg isn’t the first to notice. The supposed economic recovery that followed the recession of 2008 did nothing to repair the damage that had been done to the middle class. Only a wealthy few bounced back, and bounced higher than ever before, many of them the elites of Silicon Valley who found a way to harvest much of the wealth generated by new technologies. In Culture Crash, however, Timberg has framed the struggle of the working artist to make a living on his talents.

Besides the overall stagnation of the economy, Timberg shows how information technology has destabilized the creative class and deprofessionalized their labor, leading to an oligopoly of the mega corporations Apple, Google, and Facebook, where success is measured (and often paid) in webpage hits.

What Timberg glosses over is that if this new system is an oligopoly of tech companies, then what it replaced – or is still in the process of replacing – was a feudal system of newspapers, publishing houses, record labels, operas, and art galleries. The book is full of enough discouraging data and painful portraits of artists, though, to make this point moot. Things are definitely getting worse.

Why should these worldly worries make the Muse stutter when she is expected to sing from outside of history and without health insurance? Timberg proposes that if we are to save the “creative class” – the often young, often middle-class sector of society that generates cultural content – we need to shake this old myth. The Muse can inspire but not sustain. Members of the creative class, argues Timberg, depend not just on that original inspiration, but on an infrastructure that moves creations into the larger culture and somehow provides material support for those who make, distribute, and assess them. Today, that indispensable infrastructure is at risk…

Artists may never entirely disappear, but they are certainly vulnerable to the economic and cultural zeitgeist. Remember the Dark Ages? Timberg does, and drapes this shroud over every chapter. It comes off as alarmist at times. Culture is obviously no longer smothered by an authoritarian Catholic church.

 

Art as the Province of the Young and Independently Wealthy

But Timberg suggests that contemporary artists have signed away their rights in a new contract with the market. Cultural producers, no matter how important their output is to the rest of us, are expected to exhaust themselves without compensation because their work is, by definition, worthless until it’s profitable. Art is an act of passion – why not produce it for free, never mind that Apple, Google, and Facebook have the right to generate revenue from your production? “According to this way of thinking,” wrote Miya Tokumitsu describing the do-what-you-love mantra that rode out of Silicon Valley on the back of TED Talks, “labor is not something one does for compensation, but an act of self-love. If profit doesn’t happen to follow, it is because the worker’s passion and determination were insufficient.”

The fact is, when creativity becomes financially unsustainable, less is created, and that which does emerge is the product of trust-fund kids in their spare time. “If working in culture becomes something only for the wealthy, or those supported by corporate patronage, we lose the independent perspective that artistry is necessarily built on,” writes Timberg.

It would seem to be a position with many proponents except that artists have few loyal advocates on either side of the political spectrum. “A working artist is seen neither as the salt of the earth by the left, nor as a ‘job creator’ by the right – but as a kind of self-indulgent parasite by both sides,” writes Timberg.

That’s with respect to unsuccessful artists – in other words, the creative class’s 99 percent. But, as Timberg laments, “everyone loves a winner.” In their own way, both conservatives and liberals have stumbled into Voltaire’s Candide, accepting that all is for the best in the best of all possible worlds. If artists cannot make money, it’s because they are either untalented or esoteric elitists. It is the giants of pop music who are taking all the spoils, both financially and morally, in this new climate.

Timberg blames this winner-take-all attitude on the postmodernists who, beginning in the 1960s with film critic Pauline Kael, dismantled the idea that creative genius must be rescued from underneath the boots of mass appeal and replaced it with the concept of genius-as-mass-appeal. “Instead of coverage of, say, the lost recordings of pioneering bebop guitarist Charlie Christian,” writes Timberg, “we read pieces ‘in defense’ of blockbuster acts like the Eagles (the bestselling rock band in history), Billy Joel, Rush – groups whose songs…it was once impossible to get away from.”

Timberg doesn’t give enough weight to the fact that the same rebellion at the university liberated an enormous swath of art, literature, and music from the shadow of an exclusive (which is not to say unworthy) canon made up mostly of white men. In fact, many postmodernists have taken it upon themselves to look neither to the pop charts nor the Western canon for genius but, with the help of the Internet, to the broad creative class that Timberg wants to defend.

 

Creating in the Age of Poptimism

This doesn’t mean that today’s discovered geniuses can pay their bills, though, and Timberg is right to be shocked that, for the first time in history, pop culture is untouchable, off limits to critics and laypeople alike, on grounds of either taste or principle. If you can’t stand pop music because of the hackneyed rhythms and indiscernible voices, you’ve failed to appreciate the wonders of crowdsourced culture – the same mystery that propels the market.

Sadly, Timberg puts himself in checkmate early on by repeatedly pitting black mega-stars like Kanye West against white indie-rockers like the Decemberists, whose ascent to the pop charts he characterizes as a rare triumph of mass taste.

But beyond his anti-hip-hop bias is an important argument: With ideological immunity, the pop charts are mimicking the stratification of our society. Under the guise of a popular carnival where a home-made YouTube video can bring a talented nobody the absurd fame of a celebrity, creative industries have nevertheless become more monotonous and inaccessible to new and disparate voices. In 1986, thirty-one chart-toppers came from twenty-nine different artists. Between 2008 and mid-2012, half of the number-one songs were property of only six stars. “Of course, it’s never been easy to land a hit record,” writes Timberg. “But recession-era rock has brought rewards to a smaller fraction of the artists than it did previously. Call it the music industry’s one percent.”

The same thing is happening with the written word. In the first decade of the new millennium, points out Timberg, citing Wired magazine, the market share of page views for the Internet’s top ten websites rose from 31 percent to 75 percent.

Timberg doesn’t mention that none of the six artists dominating the pop charts for those four years was a white man, but maybe that’s beside the point. In Borges’s “Babylon Lottery,” every citizen has the chance to be a sovereign. That doesn’t mean they are living in a democracy. Superstars are coming up from poverty, without the help of white male privilege, like never before, at the same time that poverty – for artists and for everyone else – is getting worse.

Essayists are often guilted into proposing solutions to the problems they perceive, but in many cases they should have left it alone. Timberg wisely avoids laying out a ten-point plan to clean up the mess, but even his initial thrust toward justice – identifying the roots of the crisis – is a pastiche of sometimes contradictory liberal biases that looks to the past for temporary fixes.

Timberg puts the kibosh on corporate patronage of the arts, but pines for the days of newspapers run by wealthy families. When information technology is his target because it forces artists to distribute their work for free, removes the record store and bookstore clerks from the scene, and feeds consumer dollars to only a few Silicon Valley tsars, Timberg’s answer is to retrace our steps twenty years to the days of big record companies and Borders book stores – since that model was slightly more compensatory to the creative class.

When his target is postmodern intellectuals who slander “middle-brow” culture as elitist, only to expend their breath in defense of super-rich pop stars, Timberg retreats fifty years to when intellectuals like Marshall McLuhan and Norman Mailer debated on network television and the word “philharmonic” excited the uncultured with awe rather than tickled them with anti-elitist mockery. Maybe television back then was more tolerable, but Timberg hardly even tries to sound uplifting. “At some point, someone will come up with a conception better than middlebrow,” he writes. “But until then, it beats the alternatives.”

 

The Fallacy of the Good Old Days

Timberg’s biggest mistake is that he tries to find a point in history when things were better for artists and then reroute us back there for fear of continued decline. What this translates to is a program of bipartisan moderation – a little bit more public funding here, a little more philanthropy there. Something everyone can agree on, but no one would ever get excited about.

Why not boldly state that a society is dysfunctional if there is enough food, shelter, and clothing to go around and yet an individual is forced to sacrifice these things in order to produce, out of humanistic virtue, the very thing which society has never demanded more of – culture? And if skeptics ask for a solution, why not suggest something big, a reorganization of society, from top to bottom, not just a vintage flotation device for the middle class? Rather than blame technological innovation for the poverty of artists, why not point the finger at those who own the technology and call for a system whereby efficiency doesn’t put people out of work, but allows them to work fewer hours for the same salary; whereby information is free not because an unpaid intern wrote content in a race for employment, but because we collectively pick up the tab?

This might not satisfy the TED Talk connoisseur’s taste for a clever and apolitical fix, but it definitely trumps championing a middle-ground littered with the casualties of cronyism, colonialism, racism, patriarchy, and all their siblings. And change must come soon because, if Timberg is right, “the price we ultimately pay” for allowing our creative class to remain on its crash course “is in the decline of art itself, diminishing understanding of ourselves, one another, and the eternal human spirit.”

 

http://www.alternet.org/news-amp-politics/killing-americas-creative-class?akid=12719.265072.45wrwl&rd=1&src=newsletter1030855&t=9

“River’s Edge”: The darkest teen film of all time

“River’s Edge” understood ’80s kids — and what they’d do to combat that horrible feeling of emptiness

"River's Edge": The darkest teen film of all time

Keanu Reeves and Crispin Glover in “River’s Edge”

About a year and a half ago, I interviewed Daniel Waters, screenwriter of the enduring and dark teen comedy and media satire “Heathers,” for the book (“Twee”) I was researching at the time. The conversation was genial and funny, and I could tell he was what we used to call at my old employer, Spin magazine, a “quote machine.” Soon, the subject got around to films of the late American auteur John Hughes, particularly his iconic high school trilogy of “Sixteen Candles” (1984), “The Breakfast Club” (1985) and “Pretty in Pink” (the 1986 romantic comedy that he wrote but did not direct). “I felt like Hughes was trying to coddle teenagers and almost suck up to them, idealize them,” Waters said, with almost no fear of reprisal from the many millions who hold these films (and Ferris Bueller … and even, to paraphrase Jeff Daniels in “The Squid and the Whale,” minor Hughes efforts like “Some Kind of Wonderful” and “She’s Having a Baby”) dear. “[With ‘Heathers’] I was more of a terrorist coming after John Hughes. What drove me nuts about the Hughes movies was the third act was always something about how bad adults were. When you grow up your heart dies. Hey, your heart dies when you’re 12!”

One could make an argument for Waters’ “Heathers” (directed as a gauzy, occasionally surrealist morality play by Michael Lehmann) as the darkest teen film of all time. The humor is pitch-black, there’s a body count, a monocle, corn nuts and an utter excoriation of clueless boomers who wonder, as the supremely camp Paul Lynde did a quarter of a century earlier in the film adaptation of “Bye Bye Birdie,” what (in fuck) is the matter with kids today?

But it’s not. Not even close, when compared with a film that preceded it by only three years, the Neal Jimenez-penned, Tim Hunter-directed 1986 drama “River’s Edge,” which is released this month on DVD after years of being difficult to find for home viewing. No other film captures more accurately what it’s like to be dead inside during the end of the Cold War, the height of MTV and the invasion of concerned but impotent parents. “River’s Edge” was the one film that seemed to understand that it wasn’t the rap music, the heavy metal or even the drugs that made ’80s kids what they were; it was … nothing. As in the feeling of searching your soul for what you should feel and finding it empty, and slowly, horrifyingly getting used to it to the point that at least one, maybe more, of us will do anything, even commit murder, in order to combat that horrible void. I didn’t want to kill anyone or even myself, but I wanted to disappear, or at least be frozen and wake up in art school in the early ’90s, when bands like Nirvana gave that feeling a voice, and a few anthems.



There’s a lot of Nirvana in “River’s Edge.” Most “what’s the matter with kids today?” films have their juvenile delinquents in some kind of drag: black leather jackets (“Blackboard Jungle,” “The Wild One”) or spiked hair and safety pins and pet rats (“Suburbia,” “Next Stop, Nowhere,” aka “the Punk Rock Quincy episode”). But the kids in “River’s Edge” dress in ripped jeans and T-shirts and chunky, shapeless sweaters. It’s sexless (the only sex scene takes place under a shitty maroon sleeping bag with bullfrogs croaking in the distance and a dead body being simultaneously disposed of not too far off). “The thing about a shark,” Robert Shaw famously observed during the “USS Indianapolis” speech in 1975’s “Jaws” just before all hell breaks loose, “is he’s got lifeless eyes. Black eyes. Like a doll’s eyes. When he comes at ya, he doesn’t even seem to be livin’… till he bites ya.” The kids who populate “River’s Edge” – Keanu Reeves’ Mike, Ione Skye’s Clarissa, Daniel Roebuck’s Samson, etc. – don’t seem to be living, buzzed on sixers, many of which they must steal from a harried liquor store cashier (the great, recently late Taylor Negron), as they’re underage. Until they bite you. It’s hard to capture boredom on film without boring an audience (Richard Linklater’s “SubUrbia,” for one, tries and fails). What keeps viewers of “River’s Edge” on, well, edge is the sense that these black-eyed, dead creatures in inside-out heavy metal tees (Iron Maiden, Def Leppard; even the band logos are muted) might bite. It’s a sickening feeling and you cannot turn away.

The first thing we see is a preteen kid, Tim (Josh Miller), a juvenile delinquent fast in the making, with an earring, holding an actual doll. We notice, with a little required deduction as he barely reacts, that he is staring across the river at a murderer and his naked, blue-ing victim, while holding the doll he stole from his sister. All four have doll eyes: Tim, the corpse (Danyi Deats’ Jamie), the killer (Roebuck) and the doll, which Miller casually drops into the river despite knowing well it’s his little sister’s security object and probably best friend. We are soon with Jamie and Samson after the crime has been committed. Samson is smoking. Despite the occasional feral howl that he knows nobody will hear (except Tim, which is the same thing), it feels like some kind of test for the audience. How much apathy can we weather? How many dead eyes can we stare back at? This is, of course, a testament to the young cast, all of them brilliant and committed (it can’t be easy to portray the bored and soulless, can it? You want to react, you want to break). Jamie, a stunned look on her face, lies there, in the cold, also a committed actress, and there is simply nothing like this in any other teen film, or even a teen-populated horror film. Horror films, as the “Scream” franchise would soon remind us, have rules. I wanted to enter the screen, like Jeff Daniels’ genial explorer in Woody Allen’s charming comedy from the previous year, “The Purple Rose of Cairo,” and cover her body somehow. But Hunter forces us to look, which could not be easy for him as an artist, and must have been a challenge for him to ask of his young cast. In his review, the late Roger Ebert wrote, “The difference is that the film feels a horror that the teenagers apparently did not.”

“Where’s Jamie?” Samson’s crew asks once he leaves the crime scene (for more beer).

“I killed her,” he says.

Most don’t believe him, but Layne does (Crispin Glover, top billed but unmistakably launching his freak phase, only a year after playing Michael J. Fox’s bumbling dad in the blockbuster “Back to the Future”). Layne sees the event, the tragedy, as both fait accompli (“You’re gonna bring her back? It’s done!” he squeals in a reedy, wired voice) and a life-changing (and -saving) break in the day-in, day-out living hell; a kind of moral test. He believes Samson, he rallies around Samson, and he tries to motivate his crew to do the same. The corpse is a gift to Layne and Layne returns the favor by pledging his loyalty. He can’t help stifling a smile when he is led to the site. “This is unreal! Completely unreal. It’s like some movie, you know?” Layne enters the movie, doing a reverse “Purple Rose …” Even Samson doesn’t want in. He wants out … of the world, and yet he becomes strangely proud when he displays the body to his group of friends, who borrow a red pickup truck to end their suspicion that they are being jerked around. Most of them instantly recoil at the sight of the corpse (still naked!) and cannot get back to the torpor (arcades, sex, beer) quickly enough. Only Reeves’ Mike is conflicted and contemplates going to the cops. Similar terrain was covered in the hit “Stand By Me,” which was released the same year. “You guys wanna see a dead body?” Jerry O’Connell’s Vern asks his pals River Phoenix, Corey Feldman and Wil Wheaton, but they are clearly spooked and remain so into adulthood (as the narrator, Richard Dreyfuss, attests). The kids of this bumfuck town go about their bumfuck business, sleeping through class, hating their non-bio broken-home inhabitants (“Motherfucker, food eater!” Reeves yells at the slob who’s moved in with his mom). They’re not stupid. They’re just … unequipped for a reality that does not repeat on a loop, sun up and sun down. Layne, in his makeup, watch cap, black clothes and muscle car, is the only one among them who wants to feel “like Starsky and Hutch!”

“River’s Edge” is based, loosely, on reality. In late 1981, a 16-year-old student, Anthony Broussard, from Milpitas High School, near San Jose, California, led a group of his friends and his 8-year-old brother into the hills to see the barely clothed body of the 14-year-old Marcy Renee Conrad, whom he’d strangled days before. “Then instead of reporting the body of their dead school chum to the police,” reporter Claire Spiegel wrote in her coverage of the case, “they went back to class or the local pinball arcade. One went home and fell asleep listening to the radio.” She added, “Their surprising apathy toward murder bothered even hardened homicide detectives.”

Jimenez, then a college student in Santa Clara, California, read about the events and was inspired to begin working on a story based on this behavior. In the age of “Serial,” it’s hard not to see “River’s Edge” as prescient, and when I listened to the podcast last year, I thought a lot about the film. But its power comes not from reality, but from its craft: the script, the performances and the cinematography by David Lynch collaborator Frederick Elmes, who shot “Blue Velvet,” another milestone ’86 release. The beauty of the exteriors (the grainy opening, the murky drink, the perfect blue and shadows when Layne half-heartedly disposes of the body in it) makes the ugliness of the behavior all the more disturbing.

Director Tim Hunter knew his way around a “youth gone fucked up” film by ’86. He was the co-writer of “Over the Edge,” known mostly as the film debut of then 14-year-old Matt Dillon, who utters the pull-away line, “A kid who tells on another kid is a dead kid.” Loaded with excellent power pop (Cheap Trick’s “Downed” and “Surrender,” especially), Dillon and his J.D. friends spoil the planned suburban community of “New Granada” on their dirt bikes, shooting off fireworks and BB guns. Dillon starred in Hunter’s directing debut, 1982’s “Tex,” based on a book by go-to wild-but-sensitive-youth writer S.E. Hinton. Who knows why he didn’t appear in “River’s Edge.” Maybe it was too easy to see the heart beating under his flannel. Even Judd Nelson’s John Bender has a heart under his, and at the end of John Singleton’s 1991 film “Boyz n the Hood,” Ice Cube’s scowling gang member Doughboy has a monologue that provides evidence that he’s got a big one. (“Turned on the TV this morning. Had this shit on about living in a violent world. Showed all these foreign places, where foreigners live and all. Started thinking, man. Either they don’t know, don’t show, or don’t care about what’s going on in the hood.”)

The parents of New Granada are pretty pissed that their utopia has been vandalized and rally in protest, but the boomers of “River’s Edge” don’t even have the fight in them. There’s no Ms. Fleming from “Heathers” among them. Nobody will call when the shuttle lands. “Fuck you, man,” one of them rages in vain at his class. “You don’t give a damn! I don’t give a damn! No one in this classroom gives a damn that she’s dead. It gives us a chance to feel superior.” “Are we being tested on this shit?” a student asks. Even the media don’t really care. And if the kids themselves are apathetic (“I don’t give a fuck about you and I don’t give a fuck about your laws,” Samson tells Negron’s clerk before brandishing a gun), the new generation cares even less. Not even teenagers yet, they smoke weed, pack heat and drive big gas guzzlers they can barely see out of, when not speeding through nowheresville on their bikes or shooting trapped crayfish in a barrel, literally. Full disclosure: I was friendly with Josh Miller in Hollywood in the early ’90s. For a time, he was going to star in and produce a pretty decent screenplay I’d co-written, which eventually fell through. In person he was sweet, generous and caring, but I always, always looked at him sideways because he was also … Tim, who utters the following line: “Go get your numchucks and your dad’s car. I know where we can get a gun.”

There’s irony and black humor in “River’s Edge.” I don’t want to portray it as some kind of Fassbinder-ish downer, 90 minutes of misery. Samson promises to read Dr. Seuss to his incapacitated aunt. And there’s, of course, Layne, who doesn’t even seem to realize that nearly every line out of his mouth is absolutely ridiculous (which makes him beyond endearing, sociopath that he likely is). When he is rewarded with his sixer for chucking the corpse in the river, he complains, “You’d think I’d at least rate Michelob.” I wonder why Reeves became a star (this is only his second film, after a small part in the Rob Lowe hockey drama “Youngblood”) and Ione Skye, more briefly, a sought-after actress. Perhaps because his albeit belated actions make him as close to a hero as the film has … discounting, of course, Feck.

You know you are dealing with a dark film when its only true beating heart belongs to a crippled biker, weed dealer and fugitive murderer who is in love with a blow-up doll, having blown the head off his previous paramour. Feck lives alone. Feck, at the behest of Layne, briefly hides Samson. And, realizing he is dealing with a soulless and dangerous generation, Feck does what dozens of teachers and parents cannot, and will not do. He reacts. Perhaps it’s a testament to his skill, but Dennis Hopper the man, in the midst of a glorious comeback (he’d appear in “Blue Velvet” and receive an Oscar nomination for the basketball film “Hoosiers”), looks genuinely heartbroken at what’s happened to the youth he fought so hard to liberate with his “Easy Rider.” It’s Feck that Samson finally opens up to (“She was dead there in front of me and I felt so fucking alive”). We don’t know why Feck shot his ex, but we do know that he maintains that he loved her. He sees none of that emotion, no emotion at all, in Samson. “I’m dead now,” Samson says. “They’re gonna fry me for sure.” Thanks to Feck, they won’t get the chance.

“River’s Edge” doesn’t end in a trial, but rather a quiet, plain, sparse church funeral and a bit of long-absent dignity returned to the victim. It somehow relieves the viewer. Sanity, as it is, has been restored. No one would call it a feel-good ending but somehow, strangely, bloodily, perversely, love wins in the end. “There was no hope for him. There was no hope at all. He didn’t love her. He didn’t feel a thing. I at least loved [mine],” Feck explains. “I cared for her.”

Released in May of ’87 in limited theaters, the movie quickly made a mark with critics, if not audiences, and began to amass a loyal cult of viewers who appreciated its unique and revolutionary qualities. It beat out Jonathan Demme’s “Swimming to Cambodia,” the acclaimed Spalding Gray monologue film, at the Independent Spirit Awards, as well as John Huston’s final film, “The Dead.” And while far from a box office hit, it effortlessly set a precedent for films about teens. They no longer had to be either good or evil or anything at all. They didn’t have to dress or look like James Dean or Droogs or get off in any way on their heroism and their villainy. “River’s Edge” made all that seem quaint. It’s a singular film that foresaw the ’90s and freed the cinema teen to be a loser … baby.

Portrait of the Artist as a Dying Class


Scott Timberg argues that we’ve lost the scaffolding of middle-class jobs—record-store clerk, critic, roadie—that made creative scenes thrive.

Record store clerks—like Barry (Jack Black) in High Fidelity—are going the way of the dodo. (Getty Images)

BY JOANNA SCUTTS


Though Scott Timberg’s impassioned Culture Crash: The Killing of the Creative Class focuses on the struggles of musicians, writers and designers, it’s not just a story about (the impossibility of) making a living making art in modern America. More urgently, it’s another chapter in America’s central economic story today, of plutocracy versus penury and the evisceration of the middle class.

Timberg lost his job as an arts reporter at the Los Angeles Times in 2008 after real-estate mogul Sam Zell purchased the paper and gutted its staff. But newspapers are experiencing a natural die-off, right? Wrong, says Timberg. He cites statistics showing that newspaper profits remained fat into the 21st century—peaking at an average of 22.3 percent in 2002—as the industry began slashing staff. The problem isn’t profitability but shareholder greed, and the fact that we’ve ceded so much authority to the gurus of economic efficiency that we’ve failed to check their math.

The story of print journalism’s demise is hardly new, but Timberg’s LA-based perspective brings architecture, film and music into the conversation, exposing the fallacy of the East Coast conviction that Hollywood is the place where all the money is hiding. Movie studios today are as risk-averse and profit-minded as the big New York publishing houses, throwing their muscle behind one or two stars and proven projects (sequels and remakes) rather than nurturing a deep bench of talent.

For aspiring stars to believe that they may yet become the next Kanye or Kardashian is as unrealistic as treating a casino as a viable path to wealth. Not only that, but when all the money and attention cluster around a handful of stars, there’s less variation, less invention, less risk-taking. Timberg notes that the common understanding of the “creative class,” coined by Richard Florida in 2002, encompasses “anyone who works with their mind at a high level,” including doctors, lawyers and software engineers.

But Timberg looks more narrowly at those whose living, both financially and philosophically, depends on creativity, whether or not they are highly educated or technically “white collar.” He includes a host of support staff: technicians and roadies, promoters and bartenders, critics and publishers, and record-store and bookstore autodidacts (he devotes a whole chapter to these passionate, vanishing “clerks.”) People in this class could once survive, not lavishly but respectably, leading a decent middle-class life, with even some upward mobility.

Timberg describes the career of a record-store clerk whose passion eventually led him to jobs as a radio DJ and a music consultant for TV. His retail job offered a “ladder into the industry” that no longer exists. Today, in almost all creative industries, the rungs of that ladder have been replaced with unpaid internships, typically out of reach to all but the children of the bourgeoisie. We were told the Internet would render physical locations unimportant and destroy hierarchies, flinging open the gates to a wider range of players. To an extent that Timberg doesn’t really acknowledge, that has proven somewhat true: Every scene in the world now has numerous points of access, and any misfit can find her tribe online. But it’s one thing to find fellow fans; it’s another to find paying work. It turns out that working as unfettered freelancers—one-man brands with laptops for offices—doesn’t pay the rent, even if we forgo health insurance.

Timberg points to stats on today’s music business, for instance, which show that even those who are succeeding, with millions of Twitter followers and Spotify plays, can scrape together just a few thousand dollars for months of work. (Timberg is cautiously optimistic about the arrival of Obamacare, which at least might protect people from the kinds of bankrupting medical emergencies that several of his subjects have suffered.)

In addition, Timberg argues that physical institutions help creativity thrive. His opening chapter documents three successful artistic scenes—Boston’s postwar poetry world, LA’s 1960s boom in contemporary art, and Austin’s vibrant 1970s alternative to the Nashville country-music machine. In analyzing what makes them work, he owes much to urban planner Jane Jacobs: It was livable, affordable, close-knit cities, with plenty of universities and plenty of cheap gathering places that allowed art to flourish in 20th-century America. In Austin, the university and the legislature provided day jobs or other support to the freewheeling artists, Timberg notes: “For all its protests of its maverick status, outlaw country was made possible by public funding.”

Today, affordability has gone out the window. As one freelance writer, Chris Ketcham, puts it, “rent is the basis of everything”—and New York and San Francisco, gouging relentlessly away at their middle class, are driving out the very people who built their unique cultures.

Take live music, for example. Without a robust support structure of people working for money, not just love—local writers who chronicle a scene, talented designers and promoters, bars and clubs that can pay the rent—live music is withering. Our minimum wage economy isn’t helping: For the venue and the band to cover their costs, they need curious music-lovers who have the time and money to come out, pay a cover charge, buy a beer or two and maybe an album. That’s a night out that fewer and fewer people can afford. Wealthy gentrifiers, meanwhile, would rather spend their evenings at a hot new restaurant than a grungy rock club. Foodie culture, Timberg suggests, has pushed out what used to nourish us.

Timberg is not a historian but a journalist, and his book is strongest when he allows creative people to speak for themselves. We hear how the struggles of a hip LA architect echo those of music professors and art critics. However, the fact that most of Timberg’s sources are men (and from roughly the same generation as the author) undercuts the book’s claim to universality. Those successful artistic scenes he cites at the beginning, in Boston, LA and Austin, and the mid-century heyday of American culture in general, were hardly welcoming to women and people of color.

It’s much harder to get upset about the decline of an industry that wasn’t going to let you join in the first place. Although Timberg admits this in passing, he doesn’t explore the way that the chipping away of institutional power might in fact have helped to liberate marginalized artists.

But all the liberation in the world counts for little if you can’t get paid, and Timberg’s central claim—that the number of people making a living by making art is rapidly decreasing—is timely and important, as is his argument that unemployed architects deserve our sympathetic attention just as much as unemployed auto workers.

The challenge is to find a way to talk about the essential role of art and creativity that doesn’t fall back on economic value, celebrity endorsement or vague mysticism. It’s far too important for that.

Joanna Scutts is a freelance writer based in Queens, NY, and a board member of the National Book Critics Circle. Her book reviews and essays have appeared in the Washington Post, the New Yorker Online, The Nation, The Wall Street Journal and several other publications. You can follow her on Twitter @life_savour.

http://inthesetimes.com/article/17522/portrait_of_the_dying_creative_class

Repression as big business in pre-Olympic Rio de Janeiro

By Jason O’Hara On January 10, 2015

Post image for Repression as big business in pre-Olympic Rio de Janeiro
Big bucks are to be made in hosting the 2014 World Cup and the 2016 Olympic Games in Rio de Janeiro — not least by repressing all voices of dissent.
Editor’s note: For four years, Canadian documentary filmmaker Jason O’Hara has been working with communities in Brazil to document human rights abuses in advance of the 2014 World Cup and the 2016 Olympic Games, focusing specifically on the illegal forced evictions that have been taking place in Rio. Now, with over 300 hours of unique footage documenting the evictions, protests and police brutality that have come to define the preparations in Rio, O’Hara has launched a crowdfunding campaign to realize a feature-length documentary, State of Exception, telling the essential and inspiring stories of community resistance at this critical moment in Brazil’s history.
The circus has left town in Rio de Janeiro, which this past summer hosted the 2014 FIFA World Cup Finals as one of twelve host cities for the tournament in Brazil. Next year, Rio will also host the 2016 Olympic Games, making it the first city in history to host the two mega-spectacles back-to-back.

As international tourists descended on Rio’s iconic Maracanã Stadium to watch the final World Cup match in July, most Brazilians watched from television screens outside, while others took to the streets to exercise their constitutionally guaranteed right to protest. It was not mere opportunism that was bringing people to the streets, seeking to capitalize on all the attention garnered by the Cup — their grievances were very much tied to the international spectacle and the social legacy it will leave in their country.

When the circus leaves town, it is Brazilians who will be shoveling the shit for years afterward. It is very true that these events bring extraordinary benefits, but to whom do these benefits accrue? The benefits are privatized and profit an international elite — FIFA and the event sponsors — while the costs are socialized.

FIFA paid no taxes in Brazil, thus depriving the Brazilian citizenry of much-needed financial resources in order to fatten the purses of high-ranking officials and the international football mafia. “These projects, massive in their scope and scale, cost many billions of public dollars and leave behind ambiguous legacies. Nearly every global mega-event has resulted in financial losses for the host, temporary cessation of the democratic process, the production of militarized and exclusionary spaces, residential displacement, and environmental degradation,” says event critic Helen Lenskyj.

Where is Amarildo?!

The grievances being expressed in the streets are manifold: thousands of families forcibly evicted from their homes (often brutally, by riot police firing tear gas and rubber bullets); gross overspending on the stadium and other event-related infrastructure, while basic public services such as health care, education and basic sanitation are left unaddressed; and the militarization of the favelas in Rio, through the so-called favela ‘pacification’ program.

Initially, pacification brought much hope to communities that had long been suffering from the violence associated with criminal gangs and militias. However, these hopes were dashed by the egregious abuses of a new gun-toting gang perpetrating summary executions and disappearances — only this time the violence was officially sanctioned by the state.

One of the most notable examples of this new form of brutality is the case of Amarildo Dias de Souza, who disappeared last year in the favela of Rocinha and whose case was only investigated after the international outcry that followed. There was nothing particularly special about Amarildo’s case — there have been thousands of Amarildos we will never hear about — but the timing of the incident coincided with one of the largest civil uprisings in Brazilian history (of June 2013), and the Brazilian people were fed up.

Another international media storm erupted in Rio in July, this time because a gringo documentary filmmaker from Canada (me) was beaten up by a handful of police officers at a protest near Maracanã Stadium during the FIFA World Cup final. While the police have been beating, torturing, and disappearing poor favelados for a long time, on this occasion they had overstepped their duty and swung their batons at one of the international visitors they were tasked to protect.

The fact that this relatively minor incident garnered so much international media attention is emblematic of precisely the inequalities Brazilians were protesting against in the streets — the transformation of the urban landscape in Rio and throughout Brazil to serve people like me, international tourists and international capital, at the expense of the people who actually live there.

While the police repression we have been seeing in the streets of Rio during protests is shocking, it’s a relative picnic when one considers the much worse lethal violence perpetrated by Brazilian police, who kill on average five citizens every day. The vast majority of cases are not so much as investigated, let alone prosecuted; the culture of impunity runs deep among Brazilian policing institutions. The victims are almost always poor black favela dwellers. When a privileged white foreigner receives a mild beating at the hands of Brazilian police, it makes international headlines, whereas the same police carry out summary executions every day in the favelas with complete impunity.

Whatever the rhetoric about equality before the law, the value of a human life in Brazil is not universal.

The dictatorship’s legacy

While it’s easy to dismiss my beating as an unfortunate incident perpetrated by a handful of ‘bad apples’, I think it is wise to take pause and consider the systemic context whereby the police are themselves victims of Brazil’s oppressive political system under global capitalism.

They are pawns in the business of repression: most police are themselves favela dwellers who are poorly paid, poorly trained, and are (in most cases) pursuing a career in policing for lack of other opportunities — much like African-Americans in the United States, who are represented in the military at twice their share of the US population. This is not to dismiss the scandalous violence perpetrated by a handful of the police, but amongst any mass harvest (the ‘thousands of new jobs created by the World Cup’), there are bound to be more than a few bad apples.

For the World Cup Finals, Rio saw one of the largest mobilizations of military and police forces in Brazil since the dictatorship, and it was not Brazilian citizens they were there to protect — rather, they were protecting FIFA and the associated interests of global capital. The police were sent to the streets to brutally repress and censor any dissidents who might spoil the party.

Theoretically, policing should be about citizen security. Unfortunately, in Brazil, policing institutions are a legacy of the dictatorship, at which time police were not tasked to protect the citizenry. On the contrary, their role was to protect the state from its citizens, the so-called ‘internal enemy’ — the political dissident.

And so, in the modern context, in the shadow of the dictatorship, the brutality we are seeing in the streets is a logical manifestation of this philosophy, without contradiction. The fastest-growing category of international arms sales is not weaponry for fighting foreign enemies, but weaponry used to repress and silence political dissent in many modern-day ‘democracies’ like Brazil. It’s difficult to raise one’s voice when you are choking on tear gas.

Repression is big business

One of the biggest players in this industry (Riot Control and Public Order Weaponry) is the Rio-based company Condor, which recently secured itself an exclusive $22 million contract as part of the security budget for the World Cup and has expanded its business by 30% in the past 5 years.

Condor provides Brazilian security forces with 27 different categories of ‘non-lethal’ weapons of repression, including rubber bullets, tear gas, tasers, and light and sound grenades. Condor also supplied many of the weapons deployed in uprisings in Egypt, Turkey and Bahrain, where their products were repeatedly used against protocol and to systematically torture people.

The company has an exclusive deal with the Brazilian Defense and Security Industries Association. “That means all public defense and security public institutions, such as the Brazilian police, may purchase without a government procurement process,” says investigative reporter Bruno Fonseca.

Condor categorizes its products as ‘non-lethal’ despite a growing number of deaths of both protesters and bystanders as reported by the UN. The self-described categorization is important because it allows them to circumvent the Chemical Weapons Convention restricting the uses of toxic gases. Often classified as policing equipment, these weapons fall outside of arms sales restrictions and are thus mostly unregulated with hundreds of thousands of such weapons being funneled directly to Brazilian security forces without oversight. It appears that repression is good business in Brazil.

Exploit the many to comfort the few

As most people watch the World Cup and Olympics from the comfort of their homes, in bars and restaurants in cities and towns across the globe, we should not forget the real cost of creating this spectacle — both the financial and social costs paid by Brazilians, who will be coping with the legacy of these events for years to come. The police sent to the streets for the World Cup were not serving Brazilians; they were serving FIFA, serving you, and protecting the status quo from the inevitable resentment that is going to boil up in host countries when the circus comes to town and doesn’t consult or invite the people hosting the party.

The problem runs much deeper than the actions of a few bad apples. It is systemic and arises from the inherent dynamics of global capitalism. Events like the FIFA World Cup and the Olympics exemplify these dynamics. When FIFA and the IOC come to town, a ‘state of exception‘ is imposed, justified by the accelerated expediency required to prepare for such events. It is an exceptional legal framework that temporarily suspends the rule of law and strangles civil liberties such as the right to free movement and protest, among many others (such as housing rights in the case of the forced community evictions).

It might assuage our collective conscience to tell ourselves that our relative comfort is hard-earned through our own efforts, but such reductionist thinking dismisses the fact that much of our privilege rests on the backs of the global majority who constitute the oppressed classes of the world. From the shirts on our backs to the fantastic sporting events like the FIFA World Cup and Olympic Games, all are served to us through a global economic system that disenfranchises the majority to the benefit of the few. As consumers of these global spectacles, we are all implicated in this story.

Find out more about Jason O’Hara’s upcoming documentary, State of Exception, and support the project by visiting the campaign page here.

Jason O’Hara is the founder of Seven Generations, a Toronto-based documentary production company committed to telling stories that inform and inspire, with a focus on themes of social and environmental justice.

 

http://roarmag.org/2015/01/rio-olympic-games-protest/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+roarmag+%28ROAR+Magazine%29

Tim Burton’s Big Eyes: Kitsch has never helped anyone yet

By Joanne Laurier

3 January 2015

Directed by Tim Burton; screenplay by Scott Alexander and Larry Karaszewski

Director Tim Burton’s new film Big Eyes tells the story of Margaret Keane, the American artist and pop culture icon who created “big-eye art.” Keane’s paintings of waifs, whose doe-like eyes were several times the normal human size, became a mass-marketing sensation in the 1960s and continued into the 1970s.


Burton treats her art, justifiably derided by critics as “kitsch” and the “lowest common denominator,” as legitimate. In fact, his film begins with a quote from Andy Warhol (1928-1987), the Pop artist and sometime painter of soup cans: “It has to be good. If it were bad, so many people wouldn’t like it.”

The tale of how Margaret Keane’s art, purchased by figures as disparate as Madame Chiang Kai-shek and Marilyn Manson, and even by Burton himself, came to prominence is an interesting one.

The movie opens in 1958 in Northern California. Margaret (Amy Adams), with young daughter in tow, is leaving her first husband. Born in Nashville, Tennessee and trained at an art school in Memphis, Margaret eventually ends up in San Francisco and starts to peddle her portraits and paintings at an outdoor art fair. Next to her booth is one belonging to Walter Keane (Christoph Waltz), a painter of Parisian street scenes who mentions he studied in Europe.

Walter earns a living as a realtor (“Any blockhead can arrange a sublet”), but wants to “walk away from the bourgeois scene.” Margaret, now a single mother with a child to support, is susceptible to Walter’s syrupy charms. They marry in paradise (Hawaii). Walter recognizes something in Margaret’s paintings of vulnerable-looking, big-eyed children.

His indefatigable skills as a hustler and con artist eventually pay off. Margaret’s paintings are simply signed “Keane.” So when her work starts to be noticed, and the question is asked, “Who is the artist?” Walter opportunistically answers, “I am.” Margaret objects, but Walter counters that “people don’t buy lady art” and convinces her to go along with the hoax.

Stashed away in a smoke- and turpentine fume-filled attic where she churns out one sentimental picture after another, Margaret carefully guards her secret from everyone, including her daughter and her best friend Dee Ann (Krysten Ritter). “If you tell anyone, this empire collapses,” threatens Walter. As her work becomes wildly popular, Walter starts mass producing it. Local journalist Dick Nolan (Danny Huston) writes puff pieces about Walter’s ever-rising stardom.

But Keane’s oeuvre is not warmly received by reputable art critics. Terence Stamp, in a brief but effective appearance as John Canaday, the real-life New York Times critic, calls the large Keane piece, “Tomorrow Forever,” done for the 1964 New York World’s Fair, “the very definition of tasteless hack work.”

Meanwhile, Walter becomes more tyrannical and abusive toward Margaret, who ultimately ends the marriage and leaves for Hawaii. In 1970, spurred on by an encounter with the Jehovah’s Witnesses religious group, Margaret reveals that she painted all the “Big Eyes.” Besides inventing himself as a painter, Walter has made up the inspiration for “his” paintings: the destructive impact of World War II and its reflection in the sad eyes of the orphaned children. Conversely, his wife, the real artist, offers a more mundane account: that a brief period of deafness as a child made her focus on “the windows of the world.”

When Walter refuses to admit the fraud, the couple face off in 1986 in a Honolulu courtroom. The judge orders each of them to create a work on the spot (a real event). Margaret finishes her painting of a saucer-eyed waif in 53 minutes, while Walter sits before a blank canvas, complaining about a sore shoulder. The “authorship” issue is settled once and for all.

Bathing his movie in gentle, bright colors, Burton is able to impart to Big Eyes his trademark hyper-real quality. In one scene set in a grocery store, all the patrons are curiously endowed with “big eyes,” creating a disturbing tableau. Adams is appealing as the delicate, soft-spoken Margaret, while Waltz turns up the volume in an often irritating manner. Huston, a fine actor, provides unnecessary narration and is not fully integrated into the storyline.

The movie has merit as a depiction of an unusual episode in art history. Unfortunately, the director does not probe the incident in any depth, and what he does make of it is largely wrong.

Burton’s film has a certain feminist coloration—Margaret’s soul is being destroyed by the theft of her art through gross intimidation and violence. However, the director’s primary aim seems to be an attack on so-called “high art” and its proponents. Big Eyes is essentially an implied defense of Keane’s kitsch in the name of egalitarianism and anti-elitism. Opposition to Keane’s art (and the sort of outlook expressed in Warhol’s statement referred to above) is attributed to intellectual snobbery, personified by both the Canaday character (“Mr. Keane is why society needs critics to protect them against such atrocities”) and Ruben (Jason Schwartzman), the gallery owner who caters to the rich with pricey abstract art.

Ironically, undermining the filmmaker’s claim to be a genuine admirer of Keane’s work is the fact that his own movie is artistically (within the limits of his talent) and knowingly made. This was also the case with his far more compelling film, Ed Wood (1994), in which the director took a relatively worked out and psychologically consistent approach to his characters. That is, he made a perceptive and humane film about an eccentric artist who created schlock. So too in Big Eyes, Burton’s artistry is at odds with his supposed favoring of a populist, low-brow trend.

The extreme divide between low- and high-brow art is a social and historical problem, not something fixed and inevitable. When Canaday-Stamp observes that “Art should elevate, not pander,” he is merely repeating an elementary truth, which would have been widely accepted for most of the 20th century by artists and critics alike.

Behind Burton’s aesthetic stance lie decades of postmodernist and other damaging trends taught at the universities and art schools—or imbibed one way or another by up-and-coming artists. (Burton attended the California Institute of the Arts in the late 1970s and from there went directly to working for Disney as an animator and storyboard artist.)

In his 1990 work, Post-modernism: The Twilight of the Real, Neville Wakefield provided a revealing comment. He wrote: “What has emerged over the past few years is not so much a redefinition of the aesthetic, but rather a more general consensus of opinion pointing to a decline of faith in the transformative powers of the arts—in the plausibility of distinctions between art and advertising, traditionally cast in terms of distinctions between high and low, or authorised and popular culture (unauthorised in the sense of being without discernible pedigree or genealogy) … It is a phase marked by a new sort of promiscuity in which the various strands of human activity jostle, intermingle, and exchange amongst one another.”


In November 2009, as part of an exhibition of Burton’s work organized by the Museum of Modern Art in New York, the museum’s assistant curator, Ron Magliozzi, noted approvingly that the director was “an artist and filmmaker who shares much with his contemporaries in the post-modern generation who have taken their inspiration from pop culture. In Burton’s case he was influenced by newspaper and magazine comics, cartoon animation and children’s literature, toys and TV, Japanese monster movies, carnival sideshows and performance art, cinema Expressionism and science fiction films alike.” These influences, treated uncritically, do not necessarily add up to anything positive. Burton and others like him, in fact, have accommodated themselves to cultural confusion and retrogression.

In truth, Margaret Keane’s work emerged in a largely stagnant and reactionary cultural climate. To a certain extent, the almost complete abandonment of figurative work by far more talented and profound artists helped create a space in the 1950s and 1960s for Keane’s terribly limited paintings.

With Big Eyes, Burton has added his seventeenth feature to his extremely uneven body of work, which ranges from the dreadful Sweeney Todd: The Demon Barber of Fleet Street (2007) and Planet of the Apes (2001) at one pole to the laudable Ed Wood at the other. The new film is not cause for great optimism.

 

http://www.wsws.org/en/articles/2015/01/03/pers-j03.html

NORTH KOREA/SONY STORY SHOWS HOW EAGERLY U.S. MEDIA STILL REGURGITATE GOVERNMENT CLAIMS


The identity of the Sony hackers is still unknown. President Obama, in a December 19 press conference, announced: “We can confirm that North Korea engaged in this attack.” He then vowed: “We will respond. . . . We cannot have a society in which some dictator some place can start imposing censorship here in the United States.”

The U.S. Government’s campaign to blame North Korea actually began two days earlier, when The New York Times, as usual, corruptly granted anonymity to “senior administration officials” to disseminate their inflammatory claims with no accountability. These hidden “American officials” used the Paper of Record to announce that they “have concluded that North Korea was ‘centrally involved’ in the hacking of Sony Pictures computers.” With virtually no skepticism about the official accusation, reporters David Sanger and Nicole Perlroth deemed the incident a “cyberterrorism attack” and devoted the bulk of the article to examining the retaliatory actions the government could take against the North Koreans.

The same day, The Washington Post granted anonymity to officials in order to print this:

Other than noting in passing, deep down in the story, that North Korea denied responsibility, not a shred of skepticism was included by Post reporters Drew Harwell and Ellen Nakashima. Like the NYT, the Post devoted most of its discussion to the “retaliation” available to the U.S.

The NYT and Post engaged in this stenography in the face of numerous security experts loudly noting how sparse and unconvincing was the available evidence against North Korea. Kim Zetter in Wired – literally moments before the NYT laundered the accusation via anonymous officials – proclaimed the evidence of North Korea’s involvement “flimsy.” About the U.S. government’s accusation in the NYT, she wisely wrote: “they have provided no evidence to support this and without knowing even what agency the officials belong to, it’s difficult to know what to make of the claim. And we should point out that intelligence agencies and government officials have jumped to hasty conclusions or misled the public in the past because it was politically expedient.”

Numerous cyber experts subsequently echoed the same sentiments. Bruce Schneier wrote: “I am deeply skeptical of the FBI’s announcement on Friday that North Korea was behind last month’s Sony hack. The agency’s evidence is tenuous, and I have a hard time believing it.” The day before Obama’s press conference, long-time expert Marc Rogers detailed his reasons for viewing the North Korea theory as “unlikely”; after Obama’s definitive accusation, he comprehensively reviewed the disclosed evidence and was even more assertive: “there is NOTHING here that directly implicates the North Koreans” (emphasis in original) and “the evidence is flimsy and speculative at best.”

Yet none of this expert skepticism made its way into countless media accounts of the Sony hack. Time and again, many journalists mindlessly regurgitated the U.S. Government’s accusation against North Korea without a shred of doubt, blindly assuming it to be true, and then discussing, often demanding, strong retaliation. Coverage of the episode was largely driven by the long-standing, central tenet of the establishment U.S. media: government assertions are to be treated as Truth.

The day after Obama’s press conference, CNN’s Fredricka Whitfield discussed Sony’s decision not to show The Interview and wondered: “how does this empower or further embolden North Korea that, OK, this hacking thing works. Maybe there’s something else up the sleeves of the North Korean government.” In response, her “expert” guest, the genuinely crazed and discredited Gordon Chang, demanded: “President Obama wisely talks about proportional response, but what we need is an effective response, because what North Korea did in this particular case really goes to the core of American democracy.”

Even worse was an indescribably slavish report on the day of Obama’s press conference from CNN’s Chief National Security Correspondent Jim Sciutto. One has to watch the segment to appreciate the full scope of its mindlessness. He not only assumed the accusations true but purported to detail – complete with technical-looking maps and other graphics – how “the rogue nation” sent “investigators on a worldwide chase,” but “still, the NSA and FBI were able to track the attack back to North Korea and its government.” He explained: “Now that the country behind those damaging keystrokes has been identified, the administration is looking at how to respond.”

MSNBC announced North Korea’s culpability on Al Sharpton’s program, where the host breathlessly touted NBC‘s “breaking news” that the hackers were “acting on orders from North Koreans.” Sharpton convened a panel that included the cable host Touré, who lamented that “Kim Jong-un suddenly has veto power over what goes into American theaters.” He explained that he finds this really bad: “I don’t like that. I don’t like negotiating with terrorists. I don’t like giving into terrorists.”

Bloomberg TV called upon former Obama Director of National Intelligence Dennis Blair, who said without any challenge that “this is not the first time that North Korea has threatened Americans.” Blair demanded that “the type of response we should make I think should be able to deny the North Koreans the ability to use the Western financial system, telecommunications and system to basically steal money, threaten our systems.” The network’s on-air host, Matt Miller, strongly insinuated – based on absolutely nothing – that China was an accomplice: “I simply can’t imagine how the North Koreans pull off something like this by themselves. . . . I feel like maybe some larger, huge neighbor of North Korean may give them help in this kind of thing.”

Unsurprisingly, the most egregious (and darkly amusing) “report” came from Vox‘s supremely error-plagued and government-loyal national security reporter Max Fisher. Writing on the day of Obama’s press conference, he not only announced that “evidence that North Korea was responsible for the massive Sony hack is mounting,” but also smugly lectured everyone that “North Korea’s decision to hack Sony is being widely misconstrued as an expression of either the country’s insanity or of its outrage over The Interview.” The article was accompanied by a typically patronizing video, narrated by Fisher and set to scary music and photos, and the text of the article purported to “explain” to everyone the real reason North Korea did this. As Deadspin‘s Kevin Draper put it yesterday (emphasis in original):

Here is Vox’s foreign policy guy laying out an article titled, “Here’s the real reason North Korea hacked Sony. It has nothing to do with The Interview.” Never mind the tone (and headline) of utter certainty in the face of numerous computer security experts extremely skeptical of the government’s story that North Korea hacked Sony. . . . Vox’s foreign policy guy thinks he can explain the reason the notoriously opaque North Korean regime conducted a hack they may well not have actually conducted!

This government-subservient reporting was not universal; there were some noble exceptions. On the day of Obama’s press conference, MSNBC’s Rachel Maddow hosted Xeni Jardin in a segment which repeatedly questioned the evidence of North Korea’s involvement. The network’s Chris Hayes early on did the same. The Guardian published a video interview with a cyber expert casting doubt on the government’s case. The Daily Beast published an article by Rogers expressly arguing that “all the evidence leads me to believe that the great Sony Pictures hack of 2014 is far more likely to be the work of one disgruntled employee facing a pink slip.” He concluded: “I am no fan of the North Korean regime. However I believe that calling out a foreign nation over a cybercrime of this magnitude should never have been undertaken on such weak evidence.”

Earlier this week, the NYT‘s Public Editor, Margaret Sullivan, chided the paper’s original article on the Sony hack, noting – with understatement – that “there’s little skepticism in this article.” Sullivan added that the paper’s granting of anonymity to administration officials to make the accusation yet again violated the paper’s own supposed policy on anonymity, a policy touted by the paper as a redress for the debacle over its laundering of false claims about Iraqi WMDs from anonymous officials.

But – especially after that first NYT article, and even more so after Obama’s press conference – the overwhelming narrative disseminated by the U.S. media was clear: North Korea was responsible for the hack, because the government said it was.

That kind of reflexive embrace of government claims is journalistically inexcusable in all cases, for reasons that should be self-evident. But in this case, it’s truly dangerous.

It was predictable in the extreme that – even beyond the familiar neocon war-lovers – the accusation against North Korea would be exploited to justify yet more acts of U.S. aggression. In one typical example, the Boston Globe quoted George Mason University School of Law assistant dean Richard Kelsey calling the cyber-attack an “act of war,” one “requiring an aggressive response from the United States.” He added: “This is a new battlefield, and the North Koreans have just fired the first flare.” The paper’s own writer, Hiawatha Bray, explained that “hackers allegedly backed by the impoverished, backward nation of North Korea have terrorized one of the world’s richest corporation” and approvingly cited Newt Gingrich as saying: “With the Sony collapse America has lost its first cyberwar.”

Days after President Obama vowed to retaliate, North Korea’s internet service was repeatedly disrupted. While there is no conclusive evidence of responsibility, North Korea blamed the U.S., while State Department spokesperson Marie Harf smirked as she responded to a question about U.S. responsibility: “We aren’t going to discuss publicly the operational details of possible response options, or comment in any way – except to say that as we implement our responses, some will be seen, some may not be seen.”

North Korean involvement in the Sony hack is possible, but very, very far from established. But most U.S. media discussions treated the accusation as fact, predictably resulting in this polling data from CNN last week (emphasis added):

The U.S. public does think that the incidents which led to that decision were acts of terrorism on the part of North Korea and nearly three-quarters of all Americans say that North Korea is a serious threat to the U.S. That puts North Korea at the very top of the public’s threat list — only Iran comes close. . . . Three-quarters of the public call for increased economic sanctions against North Korea. Roughly as many say that country is a very serious or moderately serious threat to the U.S.

It’s tempting to say that the U.S. media should have learned by now not to uncritically disseminate government claims, particularly when those claims can serve as a pretext for U.S. aggression. But to say that, at this point, almost gives them too little credit. It assumes that they want to improve, but just haven’t yet come to understand what they’re doing wrong.

But that’s deeply implausible. At this point – eleven years after the run-up to the Iraq War and 50 years after the Gulf of Tonkin fraud - any minimally sentient American knows full well that their government lies frequently. Any journalist understands full well that assuming government claims to be true, with no evidence, is the primary means by which U.S. media outlets become tools of government propaganda.

U.S. journalists don’t engage in this behavior because they haven’t yet realized this. To the contrary, they engage in this behavior precisely because they do realize this: because that is what they aspire to be. If you know how journalistically corrupt it is for large media outlets to uncritically disseminate evidence-free official claims, they know it, too. Calling on them to stop doing that wrongly assumes that they seek to comport with their ostensible mission of serving as watchdogs over power. That’s their brand, not their aspiration or function.

Many of them benefit in all sorts of ways by dutifully performing this role. Others are True Believers: hard-core nationalists and tribalists who see their “journalism” as a means of nobly advancing the interests of the state and corporate officials whom they admire and serve. At this point, journalists who mindlessly repeat government claims like this are guilty of many things; ignorance of what they are doing is definitely not one of them.

Cybersecurity investigators raise doubts about North Korean responsibility for Sony hack

By Niles Williamson

31 December 2014

On Monday, researchers from the Norse cybersecurity firm provided the FBI with evidence uncovered in the course of their independent investigation into the hack of Sony Pictures Entertainment, evidence that allegedly points toward a small group of individuals, including a disgruntled former employee, and away from North Korea.

A group known as Guardians of Peace has claimed responsibility for the hacking attack and issued threats against theaters which were to screen “The Interview,” a comedy about the assassination of the North Korean leader, Kim Jong Un. In the face of the threats, Sony initially pulled the film from theaters throughout the US, but has since made the movie available online and in a limited number of theaters.

Pyongyang has officially denied any involvement in the hacking attack, and an offer by the regime to assist in any investigation into the leaks was rebuffed by the United States.

Kurt Stammberger, a senior vice president at Norse, told the Security Ledger that the company’s investigation uncovered six individuals directly involved in the hack, including a former Sony employee who had worked for the company for ten years before being laid off in May. The other suspects identified were two more individuals in the United States, one in Canada, one in Singapore, and a final suspect in Thailand.

Starting with the assumption that the attack was an inside job, the Norse researchers utilized leaked Human Resources data to identify recently laid-off Sony employees with the technical skills necessary to carry out the hack. They identified one possible suspect and followed her activity online, where they noted that she had made disgruntled posts on social media about Sony and the layoffs.
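To make that filtering step concrete, here is a minimal sketch of the kind of search described, assuming a hypothetical roster file with invented column names (name, role, termination_date); it is illustrative only and is not Norse’s actual tooling or data:

# Illustrative sketch only: hypothetical CSV columns, not Norse's actual data or method.
import csv
from datetime import date

TECHNICAL_ROLES = {"systems administrator", "network engineer", "security analyst"}

def recently_laid_off_technical(path, since=date(2014, 5, 1)):
    """Return names of employees in technical roles terminated on or after `since`."""
    suspects = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if not row["termination_date"]:
                continue  # still employed
            terminated = date.fromisoformat(row["termination_date"])
            if terminated >= since and row["role"].lower() in TECHNICAL_ROLES:
                suspects.append(row["name"])
    return suspects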

The Norse investigators also recorded conversations related to the Sony hacking attack on IRC (Internet Relay Chat) forums, where hackers communicate with each other online. The investigators were able to connect an individual involved in the IRC conversations with the former employee and with a server on which one of the earliest known iterations of the malware used in the attack was assembled in July.

Norse’s allegations of an insider attack directly contradict the claims of the US government, which has explicitly blamed North Korea for the hack of Sony’s server network which resulted in the leaking of sensitive employee information and embarrassing emails from top executives.

The FBI released a statement on December 19 explicitly blaming the North Korean government for the hack. The agency claimed that its analysis of the malware used in the Sony attack “revealed links to other malware that the FBI knows North Korean actors previously developed.”

The statement also pointed to an overlap in the internet protocol addresses utilized in the attack and attacks previously connected to the North Korean government. It also claimed to have found similarities in the tools used in the Sony attack and attacks last year on South Korean banks and media firms.
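The IP-overlap claim reduces to comparing two sets of observed addresses. A minimal sketch of that comparison, using invented documentation-range addresses rather than the actual indicators, shows how thin such a signal is on its own, since proxies and compromised hosts are reused by many unrelated actors:

# Illustrative only: invented RFC 5737 documentation addresses, not the real indicators.
sony_attack_ips = {"192.0.2.10", "198.51.100.7", "203.0.113.55"}
prior_attack_ips = {"198.51.100.7", "203.0.113.99"}

shared = sony_attack_ips & prior_attack_ips
print(f"{len(shared)} of {len(sony_attack_ips)} addresses overlap: {sorted(shared)}")
# Overlap shows only that the same infrastructure was reused, not who reused it.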

The same day, President Barack Obama, in his final press conference of the year, blamed North Korea for the attack and promised that the US would carry out a “proportionate response” against the country “at the time and place of our choosing.”

Last Monday, several days after Obama’s warning, North Korea lost its connection to the Internet for several hours, possibly as the result of a US cyber-attack. North Korean internet and mobile 3G network service went down again for several hours on Saturday.

The evidence put forward by the US government has been scrutinized by a number of internet security experts who argue that the government has not yet provided enough evidence to convincingly support its contention of North Korea’s responsibility.

Marc Rogers, principal security researcher for mobile security company CloudFlare, wrote in The Daily Beast that the evidence was “weak” and “flimsy.” He pointed out that the malware’s sharing of source code with previous attacks is not unusual, as hackers sell malware and source code often leaks online.

Rogers noted that all but one of the IP addresses used in the attacks were public proxies utilized in prior malware attacks. Hackers often route their attacks through public proxies to avoid being traced back to their real IP address, meaning that it cannot be known exactly where the Sony attack originated.

According to Rogers, hard-coded paths and passwords in the malware indicated that whoever wrote the code had detailed knowledge of Sony’s servers and access to crucial passwords, things to which it would be much easier for someone on the inside to gain access.
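Analysts typically surface such hard-coded artifacts by pulling printable strings out of a sample and searching them for path- or credential-like patterns. The sketch below illustrates that generic step only; the file name and patterns are hypothetical and have nothing to do with the actual Sony malware or the FBI’s analysis:

# Generic string-extraction sketch; the sample file and patterns are hypothetical.
import re

def extract_strings(data: bytes, min_len: int = 6):
    """Yield runs of printable ASCII, much like the Unix `strings` utility."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

# Patterns that would suggest insider knowledge: internal UNC paths or embedded credentials.
SUSPICIOUS = re.compile(r"(\\\\[\w.-]+\\[\w$.-]+|password\s*=|net use)", re.IGNORECASE)

with open("sample.bin", "rb") as f:  # hypothetical malware sample
    for s in extract_strings(f.read()):
        if SUSPICIOUS.search(s):
            print(s)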

Bruce Schneier, chief technology officer at Co3 Systems, writing in The Atlantic, expressed his deep skepticism about the evidence provided by the US government. According to Schneier, the evidence put forward by the FBI was “easy to fake, and it’s even easier to interpret it incorrectly.” He also pointed out that Korean language in the malware code would indicate Korean origin but would not directly implicate North Korea.

A linguistic analysis of online messages put out by Guardians of Peace, published last week by the cybersecurity consultancy group Taia Global, concluded that the nationality of the authors was most likely Russian and possibly, but not likely, Korean.

 

http://www.wsws.org/en/articles/2014/12/31/sony-d31.html