Every sci-fi movie since Kubrick’s 1968 masterpiece has echoed the original in certain unavoidable ways

Kubrick’s indestructible influence: “Interstellar” joins the long tradition of borrowing from “2001”

“2001: A Space Odyssey” and “Interstellar” (Credit: Warner Bros./Salon)

When I first heard about Christopher Nolan’s new sci-fi adventure, “Interstellar,” my immediate thought was only this: Here comes the latest filmmaker to take on Stanley Kubrick’s “2001: A Space Odyssey.” Though it was released more than 40 years ago, “2001” remains the benchmark for the “serious” science fiction film: technical excellence married to thematic ambition, and a pervading sense of historic self-importance.

More specifically, I imagined that Nolan would join a long line of challengers to aim squarely at “2001’s” famous Star Gate sequence, where astronaut Dave Bowman (Keir Dullea) passes through a dazzling space-time light show and winds up at a waystation en route to his transformation from human being into the quasi-divine Star Child.

The Star Gate scene was developed by special effects pioneer Douglas Trumbull, who modernized an old technique known as slit-scan photography. While we’ve long since warp-drived our way beyond the sequence effects-wise (you can now do slit-scan on your phone), the Star Gate’s eerie and propulsive quality is still powerful, because it functions as much more than just eye candy. It’s a set piece whose theme is the attempt to transcend set pieces — and character, and narrative and, most of all, the technical limitations of cinema itself.

In “2001,” the Star Gate scene is followed by another scene that also turns up frequently in sci-fi flicks. Bowman arrives at a series of strange rooms, designed in the style of Louis XVI (as interpreted by an alien intelligence), and he watches himself age and die before being reborn. Where is he? Another galaxy? Another dimension? Heaven? Hell? What are the mysterious monoliths that have brought him here? Why?

Let’s call this the Odd Room Scene. Pristine and uncanny, the odd room is the place at the end of the journey where truths of all sorts, profound and pretentious, clear and obscure, are at last revealed. In “The Matrix Reloaded,” for instance, Neo’s Odd Room Scene is his meeting with an insufferable talking suit called the Architect, where he learns the truth about the Matrix. Last summer’s “Snowpiercer,” about a train perpetually carrying the sole survivors of a new Ice Age around the world, follows the lower-class occupants of the tail car as they stage a revolution, fighting and hacking their way through first class toward the train’s engine, an Odd Room where our hero learns the truth about the train.



These final scenes in “2001” still linger in the collective creative consciousness as inspiration or as crucible. The Star Gate and the Odd Room, particular manifestations of the journey and the revelation, have become two key architectural building blocks of modern sci-fi films. The lure to imitate and try to top these scenes, either separately or together, is apparently too powerful to resist.

Perhaps the most literal of the Star Gate-Odd Room imitators is Robert Zemeckis’s 1997 “Contact.” It’s a straightforward drama about humanity’s efforts to build a large wormhole machine whose plans have been sent by aliens, and the debate over which human should be the first to journey beyond the solar system. The prize falls to Jodie Foster’s agnostic astronomer Ellie Arroway. During the film’s Star Gate sequence, Foster rides a capsule through a wormhole that winds her around distant planets and through a newly forming star. Zemeckis’s knockoff is a decent roller coaster, but nothing more. Arroway is anxious as she goes through the wormhole, but still in control of herself; a deeply distressed Bowman, by contrast, is losing his mind.

Arroway’s wormhole deposits her in an Odd Room that looks to her (and us) like a beach lit by sunlight and moonlight. She is visited by a projection of her dead father, the aliens’ way of appearing to her in a comfortable guise, and she learns the stunning truth about … well, actually, she doesn’t learn much. Her father gives her a Paternal Alien Pep Talk. Yes, there is a lot of life out in the galaxy. No, you can’t hang out with us. No, we’re not going to answer any of your real questions. Just keep working hard down there on planet Earth; you’ll get up here eventually (as long as you all don’t kill each other first).

Brian De Palma tried his own version of the Odd Room at the end of 2000’s “Mission to Mars,” which culminates in a team of astronauts entering a cool, Kubrick-like room in an alien spaceship on Mars and, yes, learning the stunning truth about the origins of life on Earth. De Palma is a skilled practitioner of the mainstream Hollywood set piece, but you can feel the film working up quite a sweat trying and failing to answer “2001,” and the early-century digital effects depicting red Martians are, to be charitable, somewhat dated.

But here comes “Interstellar.” This film would appear to be the best shot we’ve had in years to challenge the supremacy of the Star Gate, of “2001” itself, as a Serious Sci-Fi Film About Serious Ideas. Christopher Nolan should be the perfect candidate to out-Star Gate the Star Gate. Kubrick machined his visuals to impossibly tight tolerances. Nolan and his screenwriter brother, Jonathan, do much the same to their films’ narratives, manufacturing elaborately conceived contraptions. The film follows a Hail Mary pass to find a planet suitable for the human race as the last crops on Earth begin to die out. Matthew McConaughey plays an astronaut tasked with piloting a starship through a wormhole, into another galaxy and onto a potentially habitable planet. “Interstellar” promises a straight-ahead technological realism as well as a sense of conscious “We’re pushing the envelope” ambition. (Hey, even Neil deGrasse Tyson vouches for the film’s science bona fides.) The possibilities and ambiguities of time, one of Nolan’s consistent concerns as a storyteller, are meant, I think, to be the trump card that takes “Interstellar” past “2001.”

But the film is not about fealty to, or the realistic depiction of, relativity theory. It’s about “2001.” And before it can try to usurp the throne, “Interstellar” must first kiss the ring. (And if you haven’t seen “Interstellar” yet, you might want to stop reading now.) So we get the seemingly rational crewmember who proves to be homicidal. The dangerous attempt to manually enter a spaceship. More brazenly, there’s a set piece of one ship docking with another. In “2001,” the stately docking of a spaceship with a wheel-shaped space station, turning gently above the Earth to the strains of the “Blue Danube,” was, quite literally, a waltz, a graceful celestial courtship. It clued us in early that the machines in “2001” would prove more lively, more human, than the humans. “Interstellar” assays the same moment, only on steroids. It turns that waltz, so rich in subtext, into a violent, vertiginous fandango as a shuttle tries to dock with a mothership that’s pirouetting out of control.

Finally, after a teasing jaunt through a wormhole earlier in the movie, we come to “Interstellar’s” Star Gate moment, as McConaughey’s Cooper plummets into a black hole and ultimately into a library-like Odd Room that M.C. Escher might have fancied. It’s visually impressive for a moment, but its imprint quickly fades.

It’s too bad. “Interstellar” wants the stern grandeur of “2001” and the soft-hearted empathy of Steven Spielberg, but in most respects achieves neither. Visually, only a few images impress themselves in your brain — Nolan, as is often the case in his movies, is more successful at designing and calibrating his story than at creating visuals worthy of his ambition. Yet the film doesn’t manage the emotional dynamics, either. It’s not for lack of trying. The Nolan brothers are rigorous scenarists, and the concept of dual father-daughter bonds being tested and reaffirmed across space-time is strong enough on the drawing board. (Presumably, familial love is sturdier than romantic love, though the film makes a half-hearted stab at the latter.)

For those with a less sentimental bent, the thematic insistence on the primacy of love might seem hokey, but it’s one way the film tries to advance beyond the chilly humanism of Kubrick toward something more warm-blooded. Besides, when measured against the stupefying vastness of the universe, what other human enterprise besides love really matters? The scale of the universe and its utter silence are almost beyond human concern, anyway.

So I don’t fault a film that suggests that it’s love more than space-age alloys and algorithms that can overcome the bounds of space and time. But the big ideas Nolan is playing with are undercut by too much exposition about what they mean. The final scene between Cooper and his elderly daughter — the triumphant, life-affirming emotional home run — is played all wrong, curt and businesslike. It’s a moment Spielberg would have handled with more aplomb; he would have had us teary-eyed, for sure, even those who might feel angry at having their heartstrings yanked so hard. This is more like having a filmmaker give a lecture on how to pull at the heartstrings without actually doing it.

Look, pulling off these Star Gate-like scenes requires an almost impossible balance. The built-in expectations in the structure of the story itself are unwieldy enough, without the association to one of science fiction’s most enduring scenes. You can make the transcendent completely abstract, like poetry, a string of visual and aural sensations, and hope viewers are in the right space to have their minds blown, but you run the risk of copping out with deliberate obfuscation. (We can level this charge at the Star Gate sequence itself.)

But it’s easy to press too far the other way — to personify the higher power or the larger force at the end of these journeys with a too literal explanation that leaves us underwhelmed. I suppose what we yearn for is just a tiny revelation, one that honors our desire for awe, preserves a larger mystery, but is not entirely inaccessible. It’s a tiny taste of the sublime. There’s an imagined pinpoint here where we would dream of transcendence as a paradox, as having God-like perception and yet still remaining human, perhaps only for a moment before crossing into something new. For viewers, though, the Star Gate scenes ultimately play on our side of that crossroads: To be human is to steal a glimpse of the transcendent, to touch it, without transcending.

While Kubrick didn’t have modern digital effects to craft his visuals with, in retrospect he had the easier time of it. It’s increasingly difficult these days to really blow an audience’s minds. We’ve seen too much. We know too much. The legitimate pleasure we can take in knowledge, in our ability to decode an ever-more-complex array of allusions and references, may not be as pleasurable or meaningful as truly seeing something beyond what we think we know.

Maybe the most successful challenger to Kubrick was Darren Aronofsky and his 2006 film “The Fountain.” The film, a meditation on mortality and immortality, plays out in three thematically linked stories: A conquistador (Hugh Jackman) searches the New World for the biblical Tree of Life; a scientist (Jackman again) tries to save his cancer-stricken wife (Rachel Weisz); and a shaven-headed, lotus-sitting traveler (Jackman once more) journeys to a distant nebula. It’s the latter that bears the unique “2001” imprint of journey and revelation: Jackman travels in a bubble containing the Tree of Life, through a milky and golden cosmicscape en route to his death and rebirth. It’s the Star Gate and the Odd Room all in one. Visually, Aronofsky eschewed computer-generated effects for a more organic approach that leans on fluid dynamics. I won’t tell you the film is a masterpiece — its Grand Unifying ending is more than a little inscrutable; again, pulling this stuff off is a real tightrope — but the visuals are wondrous and unsettling, perhaps the closest realization since the original of what the Star Gate sequence is designed to evoke.

Having said that, though, it may be time to turn away from the Star Gate in our quest for the mind-blowing sci-fi cinematic sequence. Filmmakers have thus far tried to imagine something like it, only better, and have mostly failed. It’s harder to imagine something beyond it, something unimaginable. Maybe future films should not be quite so literal in chasing those transcendent moments. This might challenge a new generation of filmmakers while also allowing the Star Gate, and “2001” itself, to lie fallow for a while, so we can return to it one day with fresh eyes.

It is, after all, when we least suspect it that a story may find a way past our jaded eyes and show us a glimpse of something that really does stir a moment of profound connection. There is one achingly brief moment in “Interstellar” that accomplishes this: Nolan composes a magnificent shot of a small starship, seen from a great distance, gliding past Saturn’s awesome rings. The ship glitters in a gentle rhythm as it catches the light of the Sun. It’s a throwaway, a transitional moment between one scene and another, merely meant to establish where we are. But its very simplicity and beauty, the power of its scale, invites us for a moment to take in the vastness of the unknown and to appreciate our efforts to find a place in it, or beyond it.

http://www.salon.com/2014/11/22/kubricks_indestructible_influence_interstellar_joins_the_long_tradition_of_borrowing_from_2001/?source=newsletter

US Defense Department organizing covert operations against “the general public”


By Thomas Gaist
19 November 2014

The US Defense Department (DOD) is developing domestic espionage and covert operations targeting “the general public” in coordination with the intelligence establishment and police agencies, according to a New York Times report.

“The Times analysis showed that the military and its investigative agencies have almost as many undercover agents working inside the United States as does the F.B.I.,” the newspaper wrote.

“While most of them are involved in internal policing of service members and defense contractors, a growing number are focused, in part, on the general public as part of joint federal task forces that combine military, intelligence and law enforcement specialists,” the Times continued.

The report amounts to an acknowledgment by the leading media organ of the US ruling class that the American government is deploying a vast, forward-deployed counter-insurgency machine to target the US population at large.

Coming directly from the horse’s mouth, the Times report makes clear that espionage, deception, and covert operations are now primary instruments of the US government’s domestic policy. In preparation for a massive upsurge in the class struggle, the US ruling class is mobilizing the entire federal bureaucracy to carry out systematic and targeted political repression against the working class in the US and around the world.

These moves are in keeping with the latest US Army “Operating Concept” strategy document, published in October, which calls for “Army forces to extend efforts beyond the physical battleground to other contested spaces such as public perception, political subversion, and criminality.”

In addition to the DOD, at least 39 other federal security and civilian agencies, including the Drug Enforcement Administration (DEA), the Department of Homeland Security (DHS), the Department of Education and the Internal Revenue Service (IRS), have developed increasingly ambitious forms of covert operations involving the use of undercover agents, who now inhabit “virtually every corner of the federal government,” according to unnamed government officials and documents cited by the New York Times.

New training programs to prepare agents to conduct Internet-based undercover sting operations have been developed by the DOD, Homeland Security (DHS) and the FBI, according to the report.

DHS alone spends at least $100 million per year on the development of undercover operations, an unnamed DHS intelligence official told the Times. Total costs for operations involving undercover government agents likely run to at least several hundred million dollars per year, the Times reported.

The US Supreme Court trains its own security force in “undercover tactics,” which officers use to infiltrate and spy on demonstrators outside the high court’s facilities, the Times reported.

IRS agents frequently pose as professionals, including as medical doctors, in order to gain access to privileged information, according to a former agent cited by the report. IRS internal regulations cited in the report state that “an undercover employee or cooperating private individual may pose as an attorney, physician, clergyman or member of the news media.”

Teams of undercover agents deployed by the IRS operate in the US and internationally in a variety of guises, including as drug money launderers and expensive luxury goods buyers.

The Department of Agriculture (USDA) employs at least 100 of its own covert agents, who often pretend to be food stamp users while investigating “suspicious vendors and fraud,” according to the Times.

Covert agents employed by the Department of Education (DOE) have embedded themselves in federally funded education programs, unnamed sources cited by the report say.

Numerous other federal bureaucracies are running their own in-house espionage programs, including the Smithsonian, the Small Business Administration, and the National Aeronautics and Space Administration (NASA), the report stated.

This sprawling apparatus of spying, disruption and manipulation implicates the state in a mind-boggling range of criminal and destructive activities. Covert operations using undercover agents are conducted entirely in secret, and are funded from secret budgets and slush funds that are replenished through the “churning” of funds seized during previous operations back into the agencies’ coffers to fund the further expansion of secret programs.

Secret operations orchestrated by the Bureau of Alcohol, Tobacco and Firearms (ATF) on this basis are increasingly indistinguishable from those of organized crime syndicates, and give a foretaste of what can be expected from the ongoing deployment of counter-revolutionary undercover agents by the military-intelligence apparatus throughout the US.

In 2010, the ATF launched a series of covert operations that used state-run front businesses to seize weapons, drugs, and cash, partly by manipulating mentally disabled and drug-addicted individuals, many of them teenagers, according to investigations by the Milwaukee Journal Sentinel.

While posing as owners of pawnshops and drug paraphernalia retail outlets, ATF agents induced cash-desperate and psychologically vulnerable individuals to carry out illegal activities including the purchase and sale of stolen weapons and banned substances.

A number of the ATF-run fake stores exposed by the Sentinel were located in “drug free” and “safe” zones near churches and schools. ATF agents encouraged youths to smoke marijuana and play video games at these locations. In one instance, a female agent wore revealing attire and flirted with teenage targets while inciting them to acquire weapons and illegal substances to sell to an ATF-run front business, the Sentinel found.

The ATF was notorious for its operations in the 1980s, when it used agents provocateurs to frame up and jail militant workers involved in industrial strikes. In one infamous case in Milburn, West Virginia, an ATF informer was exposed after he tried unsuccessfully to convince striking coal miners to blow up an abandoned processing facility.

The US government has steadily escalated its domestic clandestine operations in the years since the September 11, 2001, attacks. The New York Police Department (NYPD) intelligence section deployed hundreds of covert agents throughout New York City, Massachusetts, Pennsylvania and New Jersey.

As part of operations coordinated with the CIA and spanning more than a decade, the NYPD paid informants to spy on and “bait” Muslim residents into manufactured terror plots. The security and intelligence agencies refer to this method as “create and capture,” according to a former NYPD asset cited by the Associated Press.

It is now obvious these surveillance and infiltration programs, initially focusing on Muslim neighborhoods, were only the first stage in the implementation of a comprehensive espionage and counter-insurgency system targeting the entire population.

Large numbers of informers and FBI agents infiltrated the Occupy Wall Street protests in 2011.

Historically, secret police groups targeted the political and class enemies of the capitalist state using the pretext of defending the nation from dangerous “foreign” elements.

Among the first covert police sections established by the imperialist powers was the British “Special Branch,” founded as the “Special Irish Branch” in 1883 to target groups opposed to British domination of Ireland. “Special Branch” police intelligence forces were subsequently set up throughout the Commonwealth to run cloak-and-dagger missions in service of British imperialism.

Similarly, in an early effort by the US ruling class to develop a secret police force, the New York City police commissioner established the “Italian Squad” in 1906 to carry out undercover activities against socialist-minded workers in the city’s immigrant and working class areas.

http://www.wsws.org/en/articles/2014/11/19/unde-n19.html

Stop calling the Keystone pipeline a job creator! It will create 35 jobs.

Keystone will not create tens of thousands of jobs. The actual number? 35

 

The Keystone myth that refuses to die: Stop calling the pipeline a job creator!


Of all the reasons one might have to support the construction of the Keystone XL pipeline (like, say, a last-minute gambit to save one’s Senate seat), arguing that it’s going to create jobs makes the least sense — because, as the State Department itself determined, it will create only 35 permanent jobs.

Even with the 15 other, temporary jobs the project will create for inspections and maintenance, that’s still not enough to employ the 60 senators Mary Landrieu, D-La., needs to push approval of the pipeline through when it comes to a vote Tuesday evening.

And yet the argument that Keystone will lead to jobs upon jobs upon jobs is perhaps the most pervasive, and fundamentally incorrect, myth surrounding the pipeline controversy.

Only an extremely skewed reading of the job projections could lead Fox News host Anna Kooiman, for example, to claim that “there would be tens of thousands of jobs created” if the president approved the pipeline, a claim that PolitiFact rounded down to “mostly false.” While it’s true that the State Department estimates that 42,100 jobs — many only tangentially related to the pipeline — will be created during its two years of construction, they’re almost all temporary, and include 10,400 seasonal positions that will last only four to eight months. Spread over the course of two years, PolitiFact explains, that comes out to just 3,900 “average annual” jobs. Most of the construction jobs in Montana, South Dakota, Nebraska and Kansas, through which the pipeline will pass, will rely on specialists brought in from out of state.
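For readers who want to see how a five-figure construction headcount shrinks to a four-figure annual average, here is a minimal sketch of the job-years arithmetic PolitiFact describes. The 10,400 seasonal positions and the 35 permanent jobs come from the article; the 4.5-month average season is an assumed, illustrative value (the report cites four to eight months), used only to show the mechanics, not to reproduce the State Department’s model.

# Minimal sketch of the "average annual jobs" conversion: person-years of work
# divided by the number of years the work is spread over.
# 10,400 positions and 35 permanent jobs are from the article; the 4.5-month
# average season is a hypothetical input chosen for illustration.

def average_annual_jobs(positions: int, avg_months_per_position: float,
                        project_years: float = 1.0) -> float:
    """Convert a count of temporary positions into full-year job equivalents."""
    person_years = positions * (avg_months_per_position / 12.0)
    return person_years / project_years

if __name__ == "__main__":
    one_year = average_annual_jobs(10_400, 4.5)                      # about 3,900
    over_two = average_annual_jobs(10_400, 4.5, project_years=2)     # about 1,950 per year
    print(f"Annualized seasonal construction jobs: {one_year:,.0f}")
    print(f"Spread over two years of construction: {over_two:,.0f} per year")
    print("Permanent jobs after construction: 35")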

TransCanada’s CEO, Russ Girling, further stretched the truth into an outright lie on ABC’s “This Week” Sunday morning, claiming that the State Department called those 42,000 jobs “ongoing” and “enduring.” Again, PolitiFact corrects the record, explaining that, for the reasons above, those adjectives only apply if you have an incredibly short-sighted definition of “ongoing and enduring” (read: two years or less).



But if you really want to get an idea of how hard the jobs myth is to squash, look no further than lefty news channel MSNBC, where host Joe Scarborough propagated that same false narrative. Questioning a potential decision to delay the pipeline, he laughed: “Their own State Department says it’s going to create 50,000 new jobs.”

Again: not.

You know what already did create tens of thousands of jobs, in nearly every state? Renewable energy, which according to a report from Environmental Entrepreneurs created almost 80,000 of them in 2013 alone. The main thing holding back future growth, that same report found, is “ongoing regulatory uncertainty,” most notably with wind energy tax credits. It’s worth checking out, especially if you happen to be a politician who’s legitimately looking for a way to grow the economy.

Those other persuasive arguments for approving the pipeline, for the record, don’t hold up much better: The part of the State Department review finding that Keystone would have a negligible impact on the environment, for one, is made extremely suspect by the multiple conflicts of interest surrounding it. The local impacts of leaks and the global impacts of emitting any more greenhouse gases into the atmosphere would suggest otherwise; another study evaluating the State Department’s analysis concluded that the report downplays the pipeline’s environmental significance.

Studies have established that the pipeline isn’t going to reduce the United States’ dependence on foreign oil. And over at the Washington Post, Philip Bump has the ultimate explainer for why it isn’t going to lower gas prices in any straightforward way — in some regions, in fact, it could even raise them. What he boils it all down to: “The most direct beneficiaries of Keystone XL won’t be consumers.”

Appearing on CNN, Sen. Bernie Sanders, I-Vt., struggled to wrap his mind around the idea that approving the pipeline would make any kind of sense whatsoever.

Oh, and one other job that pushing the pipeline won’t be able to save? Sen. Landrieu’s, as voters don’t seem to have been swayed by her pro-Keystone rhetoric. Although, as Salon writers Luke Brinker and Joan Walsh have both pointed out, we can expect to see a brand-new position with the oil lobby created just for her once this is all over.

Lindsay Abrams is a staff writer at Salon, reporting on all things sustainable. Follow her on Twitter @readingirl, email labrams@salon.com.

The Interregnum: Why the Future is so chaotic


“The old is dying, and the new cannot be born; in this interregnum there arises a diversity of morbid symptoms.” - Antonio Gramsci

The morbid symptoms began to appear in the spring of 2003. The Department of Homeland Security was officially formed and, despite the street protests of millions around the world, the United States invaded Iraq on the pretext of capturing Saddam’s “weapons of mass destruction.” By summer it was obvious that there were no such weapons and that we had been tricked into a war from which there was no easy exit. Pollsters began to notice that a majority of Americans felt we were “on the wrong track,” and distrust of our leadership has gotten worse every year since.

So while the citizens exhibit historic levels of anger with the country’s drift, neither the political nor the economic leaders have put forth an alternative vision of our future. We are in an Interregnum: the often painful uprooting of old traditions and the hard-fought emergence of the new. The traditional notion of an interregnum refers to the time after a king died and before a new king had been crowned. But for our purposes, the notion of interregnum refers to those hinges in time when the old order is dead, but the new direction has not been determined. Quite often, the general populace does not understand that the transition is taking place, and so a great deal of tumult arises as the birth pangs of a new social and political order. We are in such a time in America.

For those of us who work in the field of media and communications, the signs of the Interregnum are everywhere. Internet services decimate the traditional businesses of music and journalism. For individual journalists or musicians, the old order is clearly dying, but a new way to make a living cannot seem to be birthed. Those who work in the fields of film and television can only hope a similar fate does not await their careers. In the world of politics a similar dynamic is destroying traditional political parties, and the insurgent bottom-up, networked campaigns pioneered by Barack Obama have now become the standard. And yet we realize that for all its insurgency, the Obama campaign really did not usher in a new era. It is clear that there is an American Establishment that seems to stay in power no matter which party controls the White House. And the recent election only makes this more obvious. This top-down establishment order is clearly dying, but it clings to its privileges, and the networked, bottom-up society is not yet empowered.

Since 1953, when two senior partners of a Wall Street law firm, the brothers John Foster and Allen Dulles, began running American foreign (and often domestic) policy, an establishment view has been the norm through Democratic and Republican presidencies alike. As Stephen Kinzer has written about the Dulles brothers (in his book The Brothers), “Their life’s work was turning American money and power into global money and power. They deeply believed, or made themselves believe, that what benefited them and their clients would benefit everyone.” They created a world in which the Wall Street elites at first set our foreign policy and eventually (under Ronald Reagan) came to dominate domestic and tax policy — all to the benefit of themselves and their clients.

In 1969 the median salary for a male worker was $35,567 (in 2012 dollars). Today it is $33,904. So for 44 years, while wages for the top 10% have continued to climb, most Americans have been caught in a “Great Stagnation,” bringing into question the whole purpose of the American capitalist economy. The notion that what benefited the establishment would benefit everyone has been thoroughly discredited.

Seen through this lens, the savage partisanship of the current moment makes an odd kind of sense. What were the establishment priorities that moved inexorably forward in both Republican and Democratic administrations? The first was a robust and aggressive foreign policy. As Kinzer writes of the Dulles brothers, “Exceptionalism — the view that the United States has a right to impose its will because it knows more, sees farther, and lives on a higher moral plane than other nations — was to them not a platitude, but the organizing principle of daily life and global politics.” From Eisenhower to Obama, this principle has been the guiding light of our foreign policy, bringing with it annual defense expenditures that dwarf those of all the world’s major powers combined and drive us deeper into debt. The second principle of the establishment was: “What is good for Wall Street is good for America.” Despite Democrats’ efforts to paint the GOP as the party of Wall Street, one would only have to look at the efforts of Clinton’s Treasury secretaries Rubin and Summers to kill the Glass-Steagall Act and deregulate the big banks to see that the establishment rules no matter who is in power. Was it any surprise that Obama then appointed the architects of bank deregulation, Summers and Geithner, to clean up the mess their policies had caused?

So when we observe politicians as diverse as Elizabeth Warren and Rand Paul railing against the twin poles of establishment orthodoxy, can we really be surprised? Is there not a new consensus that the era of America as global policeman is over? Is there not agreement from the Tea Party to Occupy Wall Street that the domination of domestic policy by financial elites is over? But here is our Interregnum dilemma. It is one thing to forecast a kind of liberal-libertarian coalition around the issues of defense spending, corporate welfare and even the privacy rights of citizens in a national security state. It is a much more intractable problem to find consensus on the causes and cures of the Great Stagnation. It does seem like we need to understand the nature of the current stagnation by looking back to the late sixties, when the economy was very different than it is today. In 1966, net investment as a percentage of GDP peaked at 14%, and it has been on a steady decline ever since, despite the computer revolution, which was only getting started in the early 1970s.

Economic growth only comes from three sources: consumption, investment, or foreign earnings from trade (the current account). We have been living so long with a negative current account balance and falling investment that economic growth is almost totally dependent on the remaining leg of the stool, consumer spending. But with the average worker unable to get a raise since 1969, consumption can only come from loosened credit standards. As long as the average family could use their home equity as an ATM, the party could continue, driven by the increasing sophistication of advertising and “branded entertainment” to induce mall fever in a strapped consumer. And by the late 1990s, consumer preferences began to drive a winner-take-all digital economy where one to three firms dominated each sector: Apple and Google; Verizon and AT&T; Comcast and Time Warner Cable; Disney, Fox, Viacom and NBC Universal; Facebook and Twitter. All of this was unloosed by the establishment meme of deregulation — a world in which antitrust regulators had little influence and laissez-faire ruled. These oligopolies began making so much money they didn’t have enough places to invest it, so corporate cash as a percentage of assets rose to an all-time high.
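For reference, the “three sources” framing is an informal version of the standard national-accounts expenditure identity. The identity below is textbook macroeconomics rather than anything derived from this essay, and it keeps government purchases as a separate term, which the essay implicitly folds into domestic spending:

\[
Y = C + I + G + (X - M)
\qquad\Longrightarrow\qquad
\Delta Y = \Delta C + \Delta I + \Delta G + \Delta (X - M),
\]

where \(Y\) is GDP, \(C\) is private consumption, \(I\) is investment, \(G\) is government purchases, and \(X - M\) is net exports, the trade component of the current account the essay refers to.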

Here is my fear: that our current version of capitalism is not working. Apple holds on to $158 billion in cash because it can’t find a profitable investment. And because U.S. worker participation rates are only 64%, a huge number of people can never afford an iPhone, so domestic demand is flat (though still very profitable to serve) and the real growth in the digital economy will be in Asia, Africa and South America. There is not much the Fed can do by lowering interest rates to alter this picture. What is needed is not more easy-money loans; it is more decent jobs.

But unlike our left-right consensus on military spending, there is a fierce debate raging among economists about the causes of and solutions to this stagnation. Though both left and right agree the economy has stagnated, there are huge differences in the prospects for emerging from this condition. On the right, the political economist Tyler Cowen’s new book is called Average Is Over: Powering America Beyond the Age of the Great Stagnation. Here is how Cowen sees the next twenty years.

The rise of intelligent machines will spawn new ideologies along with the new economy it is creating. Think of it as a kind of digital social Darwinism, with clear winners and losers: Those with the talent and skills to work seamlessly with technology and compete in the global marketplace are increasingly rewarded, while those whose jobs can just as easily be done by foreigners, robots or a few thousand lines of code suffer accordingly. This split is already evident in the data: The median male salary in the United States was higher in 1969 than it is today. Middle-class manufacturing jobs have been going away due to a mix of automation and trade, and they are not being replaced. The most lucrative college majors are in the technical fields, such as engineering. The winners are doing much better than ever before, but many others are standing still or even seeing wage declines.

On the left, Paul Krugman is not so sure we can emerge from this stagnation.

But what if the world we’ve been living in for the past five years is the new normal? What if depression-like conditions are on track to persist, not for another year or two, but for decades?…In fact, the case for “secular stagnation” — a persistent state in which a depressed economy is the norm, with episodes of full employment few and far between — was made forcefully recently at the most ultrarespectable of venues, the I.M.F.’s big annual research conference. And the person making that case was none other than Larry Summers. Yes, that Larry Summers.

Cowen forecasts a dystopian world where 10% of the population do very well and “the rest of the country will have stagnant or maybe even falling wages in dollar terms, but they will also have a lot more opportunities for cheap fun and cheap education.” That’s real comforting. He predicts the 90% will put up with this inequality for two reasons. First, the country is aging: “remember that riots and protests are typically the endeavors of young hotheads, not sage (or tired) senior citizens.” And second, because of the proliferation of social networks, “envy is local…Right now, the biggest medium for envy in the United States is probably Facebook, not the big yachts or other trophies of the rich and famous.”

Although Cowen cites statistics about the fall in street crime to back up the notion that the majority of citizens are passively accepting gross inequality, I think he completely misunderstands the nature of anti-social pathologies in the Internet Age of Stagnation. Take the example of the website Silk Road.

Silk Road already stands as a tabloid monument to old-fashioned vice and new-fashioned technology. Until the website was shut down last month, it was the place to score, say, a brick of cocaine with a few anonymous strokes on a computer keyboard. According to the authorities, it greased $1.2 billion in drug deals and other crimes, including murder for hire.

From LulzSec to Pirate Bay to Silk Road, the coming anarchy of a “Blade Runner”-like society is far more vicious than a few street thugs in our major cities. The rise of virtual currencies that can’t be traced, like Bitcoin, only makes the possibility of a huge crime wave on the Dark Net more imminent — one which IBM estimates already costs the economy $400 billion annually.

So while both Cowen and Krugman agree that stagnation is causing the labor force participation rate to fall, they disagree as to whether anything can be done to remedy the problem.

In the early 1970s the participation rate began to climb as more and more women entered the workforce. It peaked when George W. Bush entered office and has been on the decline ever since. As the Times’ David Leonhardt has pointed out, this has very little to do with Baby Boomer retirement. The economist Daniel Alpert has argued in his new book, The Age of Oversupply, that “the central challenge facing the global economy is an oversupply of labor, productive capacity and capital relative to the demand for all three.”

Viewed through this lens, neither the policy prescriptions of Republicans nor those of Democrats are capable of changing the dynamic brought about by the entrance of three billion new workers into the global economy in the last 20 years. Republican fears that U.S. deficits will lead to Weimar-like hyperinflation ring hollow in a country where only 63% of the able-bodied are working. Democrats’ hectoring of the Fed and the banks to lend more to business to stimulate the economy is equally nonsensical when American corporations are sitting on $2.4 trillion in cash.

But there is a way out of this deflationary trap we are in. First, the Republicans have got to acknowledge the obvious: America’s corporations are not going to invest in vast amounts of new capacity when there is a glut in almost every sector worldwide. Second, that overcapacity is not going to get absorbed until more people go back to work and start buying the goods from the factories. This was the same problem our country faced in the Great Depression, and the way we got out of it was by putting people to work rebuilding the infrastructure of this country. Did it ever occur to the politicians in Washington that the reason so many bridges, water and electrical systems are failing is that most of them were built 80 years ago, during the Great Depression? For Republicans to insist that more austerity will bring back the “confidence fairy” is exactly the wrong policy prescription for an age of oversupply. But equally destructive, as Paul Krugman points out, are Democratic voices like Erskine Bowles, shouting from any venue that will pay him that the debt apocalypse is upon us.

But the Democrats are also going to have to give up some long-held beliefs that all good solutions come from Washington. If the Healthcare.gov website debacle has taught us anything, it is that devolving power from Washington to the states is the answer to the complexity of modern governance. While California’s healthcare website performed admirably, the notion of trying to create a centralized system to service 50 different state systems was a fool’s errand. So what is needed is a federalist solution for investment in the infrastructure of the next economy. This is the way out of The Interregnum. Investors buying tax-free municipal bonds to rebuild ancient water systems and bridges, as well as solar and wind plants, will finance much of it. But just as President Eisenhower understood that a national interstate highway system built in the 1950s would lead to huge productivity gains in the 1960s and 1970s, Federal tax dollars will have to play a large part in rebuilding America. As we wind down our trillion-dollar commitments to wars in the Middle East, we must engage in an Economic Conversion Strategy from permanent war to peaceful innovation that both liberals and libertarians could embrace.

The way to overcome the partisan gridlock on infrastructure spending would be for Obama to commit to a totally federalist solution to our problems. The Federal Government would put every dollar saved from getting out of Iraq, Afghanistan and all the other defense commitments into block innovation grants to the states. Let’s say the first grant is for $100 billion. It will be given directly to the states on a per capita basis, to be used to foster local economic growth. No strings or federal bureaucracy would be attached to the grants, except that the states would have to publish a yearly accounting of the money in an easily readable form. And then let the press follow the money and see which states come up with the most imaginative solutions. Some states might use the grants to lower the cost of state university tuition. Others might spend the money on high-speed rail lines or municipal fiber broadband and wifi. As we have found in the corporate sector, pushing power to the edges of an organization helps foster innovation. As former IBM CEO Sam Palmisano told his colleagues, “we have to lower the center of gravity of this organization.”
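As a back-of-the-envelope sketch of what a per capita split of such a grant would look like, the snippet below divides a $100 billion pool in proportion to population. The dollar figure comes from the essay; the state names and populations are placeholder values invented purely for illustration, not census data.

# Minimal sketch: splitting a block-grant pool among states in proportion to population.
# The $100 billion pool is from the essay; the states and populations below are
# hypothetical placeholders used only to show the arithmetic.

POOL = 100_000_000_000  # dollars

population_millions = {
    "State A": 39.0,
    "State B": 26.0,
    "State C": 10.0,
    "State D": 1.0,
}

total_millions = sum(population_millions.values())
per_capita = POOL / (total_millions * 1_000_000)

for state, pop in population_millions.items():
    share = POOL * pop / total_millions
    print(f"{state}: ${share / 1e9:,.1f} billion "
          f"({pop:.1f}M people at ${per_capita:,.0f} per person)")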

If it worked, then slowly more money could be transferred to the states in these bureaucracy-free block grants. Gradually the bureaucracies of the Federal government would shrink as more and more responsibility was shifted to local supervision of education, health, welfare and infrastructure.

In the midst of our current Washington quagmire this vision of a growing American middle class may seem like a distant mirage. But it is clear that the establishment consensus on foreign policy, defense spending, domestic spying and corporate welfare has died in the last 12 months. The old top-down establishment order is clearly dying, but just how we build a new order based on a bottom-up, networked society that works for the 90%, not just the establishment, is the question of our age.

Hanging out with the disgruntled guys who babysit our aging nuclear missiles—and hate every second of it.

Death Wears Bunny Slippers

Illustration by Tavis Coburn

Along a lonely state highway on central Montana’s high plains, I approach what looks like a ranch entrance, complete with cattle guard. “The first ace in the hole,” reads a hand-etched cedar plank hanging from tall wooden posts. “In continuous operation for over 50 years.” I drive up the dirt road to a building surrounded by video cameras and a 10-foot-tall, barbed-wire-topped fence stenciled with a poker spade. “It is unlawful to enter this area,” notes a sign on the fence, whose small print cites the Subversive Activities Control Act of 1950, a law that once required communist organizations to register with the federal government. “Use of deadly force authorized.”

I’m snapping photos when a young airman appears. “You’re not taking pictures, are you?” he asks nervously.

“Yeah, I am,” I say. “The signs don’t say that I can’t.”

“Well, we might have to confiscate your phone.”

Maybe he should. We’re steps away from the 10th Missile Squadron Alpha Missile Alert Facility, an underground bunker capable of launching several dozen nuclear-tipped Minuteman III intercontinental ballistic missiles (ICBMs), with a combined destructive force 1,000 times that of the Hiroshima bomb.

Another airman comes out of the ranch house and asks for my driver’s license. He’s followed by an older guy clad in sneakers, maroon gym shorts, and an air of authority. “I’m not here to cause trouble,” I say, picturing myself in a brig somewhere.

“Just you being here taking photos is causing trouble,” he snaps.

An alarm starts blaring from inside the building. One airman turns to the other. “Hey, there’s something going off in there.”

Six hours earlier, I was driving through Great Falls with a former captain in the Air Force’s 341st Missile Wing. Aaron, as I’ll call him, had recently completed a four-year stint at the Alpha facility. Had President Obama ordered an attack with ICBMs, Aaron could have received a coded message, authenticated it, and been expected to turn a launch key.

Also read: “That Time We Almost Nuked North Carolina”—a timeline of near-misses, mishaps, and scandals from our atomic arsenal.

We kept passing unmarked blue pickup trucks with large tool chests—missile maintenance guys. The Air Force doesn’t like to draw attention to the 150 silos dotting the surrounding countryside, and neither does Great Falls. With about 4,000 residents and civilian workers and a $219 million annual payroll, Malmstrom Air Force Base drives the local economy, but you won’t see any missile-themed bars or restaurants. “We get some people that have no idea that there’s even an Air Force base here,” one active-duty missileer told me.

It’s not just Great Falls practicing selective amnesia. The days of duck-and-cover drills, fallout shelters, and No Nukes protests are fading memories—nowhere more so than in the defense establishment. At a July 2013 forum in Washington, DC, Lt. General James Kowalski, who commands all of the Air Force’s nuclear weapons, said a Russian nuclear attack on the United States was such “a remote possibility” that it was “hardly worth discussing.”

But then Kowalski sounded a disconcerting note that has a growing number of nuclear experts worried. The real nuclear threat for America today, he said, “is an accident. The greatest risk to my force is doing something stupid.”

Lt. General James Kowalski (Air Force)

“You can’t screw up once—and that’s the unique danger of these machines,” points out investigative journalist Eric Schlosser, whose recent book, Command and Control, details the Air Force’s stunning secret history of nuclear near-misses, from the accidental release of a hydrogen bomb that would have devastated North Carolina to a Carter-era computer glitch that falsely indicated a shower of incoming Soviet nukes. “In this business, you need a perfect safety record.”


And a perfect record, in a homeland arsenal made up of hundreds of missiles and countless electronic and mechanical systems that have to operate flawlessly—to say nothing of the men and women at the controls—is a very hard thing to achieve. Especially when the rest of the nation seems to have forgotten about the whole thing. “The Air Force has not kept its ICBMs manned or maintained properly,” says Bruce Blair, a former missileer and cofounder of the anti-nuclear group Global Zero. Nuclear bases that were once the military’s crown jewels are now “little orphanages that get scraps for dinner,” he says. And morale is abysmal.

Blair’s organization wants to eliminate nukes, but he argues that while we still have them, it’s imperative that we invest in maintenance, training, and personnel to avoid catastrophe: An accident resulting from human error, he says, may actually be more likely today because the weapons are so unlikely to be used. Without the urgent sense of purpose the Cold War provided, the young men (and a handful of women) who work with the world’s most dangerous weapons are left logging their 24-hour shifts under subpar conditions — with all the dangers that follow.

In August 2013, Air Force commanders investigated two officers in the ICBM program suspected of using ecstasy and amphetamines. A search of the officers’ phones revealed more trouble: They and other missileers were sharing answers for the required monthly exams that test their knowledge of things like security procedures and the proper handling of classified launch codes. Ultimately, 98 missileers were implicated in cheating or in failing to report it. Nine officers were stripped of their commands, and Colonel Robert Stanley, the commander of Malmstrom’s missile wing, resigned.


The Air Force claimed the cheating only went as far back as November 2011, but three former missileers told me it was the norm at Malmstrom when they arrived there back in 2007, and that the practice was well established. (Blair told me that cheating was even common when he served at Malmstrom in the mid-1970s.) Missileers would check each other’s tests before turning them in and share codes indicating the correct proportion of multiple-choice answers on a given exam. If the nuclear program’s top brass, who all began their careers as missileers, weren’t aware of it, the men suggested, then they were willfully looking the other way. “You know in Casablanca, when that inspector was ‘absolutely shocked’ that there was gambling at Rick’s? It’s that,” one recently retired missileer told me. “Everybody has cheated on those tests.”

Cheating is just one symptom of what Lt. Colonel Jay Folds, then the commander of the nuclear missile wing at North Dakota’s Minot Air Force Base, called “rot” in the atomic force. Last November, Associated Press reporter Robert Burns obtained a RAND study commissioned by the Air Force. It concluded that the typical launch officer was exhausted, cynical, and distracted on the job. ICBM airmen also had high rates of sexual assault, suicide, and spousal and child abuse, and more than double the rates of courts-martial than Air Force personnel as a whole.

The morale problems were well known to Michael Carey, the two-star general who led the program at the time the cheating was revealed. Indeed, he pointed them out to other Americans during an official military cooperation trip to Moscow, before spending the rest of his three-day visit on a drunken bender, repeatedly insulting his Russian military hosts and partying into the wee hours with “suspect” foreign women, according to the Air Force’s inspector general. He later confessed to chatting for most of a night with the hotel’s cigar sales lady, who was asking questions “about physics and optics”—and thinking to himself: “Dude, this doesn’t normally happen.” Carey was stripped of his command in October 2013.

The embarrassments just keep coming. Last week, the Air Force fired two more nuclear commanders, including Col. Carl Jones, the No. 2 officer in the 90th Missile Wing at Wyoming’s Warren Air Force Base, and disciplined a third, for a variety of leadership failures, including the maltreatment of subordinates. In one instance, two missileers were sent to the hospital after exposure to noxious fumes at a control center—they had remained on duty for fear of retaliation by their commander, Lt. Col. Jimmy “Keith” Brown. This week, the Pentagon is expected to release a comprehensive review of the nuclear program that details “serious problems that must be addressed urgently.”


Stung by the recent bad press, the Air Force has announced pay raises, changes to the proficiency tests, and nearly $400 million in additional spending to increase staffing and update equipment. In the long term, Congress and the administration are debating a trillion-dollar suite of upgrades to the nuclear program, which could include replacing the existing ICBMs and warheads with higher-tech versions.

But outside experts say none of the changes will address the core of the problem: obsolescence. “There is a morale issue,” says Hans Kristensen, who directs the Federation of American Scientists’ Nuclear Information Project, “that comes down to the fundamental question: How is the ICBM force essential? It’s hard to find that [answer] if you sit in the hole out there. Their buddies from the B-52s and B-2s tell them all sorts of exciting stories about doing real things in Afghanistan and Iraq. They end up feeling superfluous.”

A missile commander’s launch switches. (National Park Service)

Indeed, on my first night in town, over beer and bison burgers, Aaron had introduced me to “Brent,” another former missileer, recently out of the service, who looks more like a surfer now that his military crew cut is all grown out. Brent lost faith in his leaders early on, he told me, when he saw the way they tolerated, if not encouraged, a culture of cheating. He’d resisted the impulse, he said, and his imperfect test scores disqualified him from promotions. But the worst part of the gig, the guys agreed, might be the stultifying tedium of being stuck in a tiny room all day and night waiting for an order you knew would never come. “Any TV marathon you can stumble upon is good,” Brent said. “Even if it’s something you hate. It’s just that ability to zone out and lose time.”

 

CONTINUED:  http://www.motherjones.com/politics/2014/11/air-force-missile-wing-minuteman-iii-nuclear-weapons-burnout

William Gibson: I never imagined Facebook

The brilliant science-fiction novelist who imagined the Web tells Salon how writers missed social media’s rise

William Gibson (Credit: Putnam/Michael O’Shea)

Even if you’ve never heard of William Gibson, you’re probably familiar with his work. Arguably the most important sci-fi writer of his generation, Gibson has a cyber-noir imagination that has shaped everything from the Matrix aesthetic to geek culture to the way we conceptualize virtual reality. In a 1982 short story, Gibson coined the term “cyberspace.” Two years later, his first and most famous novel, “Neuromancer,” helped launch the cyberpunk genre. By the 1990s, Gibson was writing about big data, imagining Silk Road-esque Internet enclaves, and putting his characters on reality TV shows — a full four years before the first episode of “Big Brother.”

Prescience is flashy, but Gibson is less an oracle than a kind of speculative sociologist. A very contemporary flavor of dislocation seems to be his specialty. Gibson’s heroes shuttle between wildly discordant worlds: virtual paradises and physical squalor; digital landscapes and crumbling cities; extravagant wealth and poverty.

In his latest novel, “The Peripheral,” which came out on Tuesday, Gibson takes this dislocation to new extremes. Set in mid-21st century Appalachia and far-in-the-future London, “The Peripheral” is partly a murder mystery, and partly a time-travel mind-bender. Gibson’s characters aren’t just dislocated in space, now. They’ve become unhinged from history.

Born in South Carolina, Gibson has lived in Vancouver since the 1960s. Over the phone, we spoke about surveillance, celebrity and the concept of the eternal now.

You’re famous for writing about hackers, outlaws and marginal communities. But one of the heroes of “The Peripheral” is a near-omniscient intelligence agent. She has surveillance powers that the NSA could only dream of. Should I be surprised to see you portray that kind of character so positively?

Well, I don’t know. She’s complicated, because she is this kind of terrifying secret police person in the service of a ruthless global kleptocracy. At the same time, she seems to be slightly insane and rather nice. It’s not that I don’t have my serious purposes with her, but at the same time she’s something of a comic turn.

Her official role is supposed to be completely terrifying, but at the same time her role is not a surprise. It’s not like, “Wow, I never even knew that that existed.”



Most of the characters in “The Peripheral” assume that they’re being monitored at all times. That assumption is usually correct. As a reader, I was disconcerted by how natural this state of constant surveillance felt to me.

I don’t know if it would have been possible 30 years ago to convey that sense to the reader effectively, without the reader already having some sort of cultural module in place that can respond to that. If we had somehow been able to read this text 30 years ago, I don’t know how we would even register that. It would be a big thing for a reader to get their head around without a lot of explaining. It’s a scary thing, the extent to which I don’t have to explain why [the characters] take that surveillance for granted. Everybody just gets it.

You’re considered a founder of the cyberpunk genre, which tends to feature digital cowboys — independent operators working on the frontiers of technology. Is the counterculture ethos of cyberpunk still relevant in an era when the best hackers seem to be working for the Chinese and U.S. governments, and our most famous digital outlaw, Edward Snowden, is under the protection of Vladimir Putin?

It’s seemed to me for quite a while now that the most viable use for the term “cyberpunk” is in describing artifacts of popular culture. You can say, “Did you see this movie? No? Well, it’s really cyberpunk.” Or, “Did you see the cyberpunk pants she was wearing last night?”

People know what you’re talking about, but it doesn’t work so well describing human roles in the world today. We’re more complicated. I think one of the things I did in my early fiction, more or less for effect, was to depict worlds where there didn’t really seem to be much government. In “Neuromancer,” for example, there’s no government really on the case of these rogue AI experiments that are being done by billionaires in orbit. If I had been depicting a world in which there were governments and law enforcement, I would have depicted hackers on both sides of the fence.

In “Neuromancer,” I don’t think there’s any evidence of anybody who has any parents. It’s kind of a very adolescent book that way.

In “The Peripheral,” governments are involved on both sides of the book’s central conflict. Is that a sign that you’ve matured as a writer? Or are you reflecting changes in how governments operate?

I hope it’s both. This book probably has, for whatever reason, more of my own, I guess I could now call it adult, understanding of how things work. Which, I suspect, is as it should be. People in this book live under governments, for better or worse, and have parents, for better or worse.

In 1993, you wrote an influential article about Singapore for Wired magazine, in which you wondered whether the arrival of new information technology would make the country more free, or whether Singapore would prove that “it is possible to flourish through the active repression of free expression.” With two decades of perspective, do you feel like this question has been answered?

Well, I don’t know, actually. The question was, when I asked it, naive. I may have innocently posed a false dichotomy, because some days when you’re looking out at the Internet, both things are possible simultaneously, in the same place.

So what do you think is a better way to phrase that question today? Or what would have been a better way to phrase it in 1993?

I think you would end with something like “or is this just the new normal?”

Is there anything about “the new normal” in particular that surprises you? What about the Internet today would you have been least likely to foresee?

It’s incredible, the ubiquity. I definitely didn’t foresee the extent to which we would all be connected almost all of the time without needing to be plugged in.

That makes me think of “Neuromancer,” in which the characters are always having to track down a physical jack, which they then use to plug themselves into this hyper-futuristic Internet.

Yes. It’s funny, when the book was first published, when it was just out — and it was not a big deal the first little while it was out, it was just another paperback original — I went to a science fiction convention. There were guys there who were, by the standards of 1984, far more computer-literate than I was. And they very cheerfully told me that I got it completely wrong, and I knew nothing. They kept saying over and over, “There’s never going to be enough bandwidth, you don’t understand. This could never happen.”

So, you know, here I am, this many years later with this little tiny flat thing in my hand that’s got more bandwidth than those guys thought was possible for a personal device to ever have, and the book is still resonant for at least some new readers, even though it’s increasingly hung with the inevitable obsolescence of having been first published in 1984. Now the resonance isn’t really in the particulars, but in the broader outline.

You wrote “Neuromancer” on a 1927 Hermes typewriter. In an essay of yours from the mid-1990s, you specifically mention choosing not to use email. Does being a bit removed from digital culture help you critique it better? Or do you feel that you’re immersed in that culture, now?

I no longer have the luxury of being as removed from it as I was then. I was waiting for it to come to me. When I wrote [about staying off email], there was a learning curve involved in using email, a few years prior to the Web.

As soon as the Web arrived, I was there, because there was no learning curve. The interface had been civilized, and I’ve basically been there ever since. But I think I actually have a funny kind of advantage, in that I’m not generationally of [the Web]. Just being able to remember the world before it, some of the perspectives are quite interesting.

Drones and 3-D printing play major roles in “The Peripheral,” but social networks, for the most part, are obsolete in the book’s fictional future. How do you choose which technological trends to amplify in your writing, and which to ignore?

It’s mostly a matter of which ones I find most interesting at the time of writing. And the absence of social media in both those futures probably has more to do with my own lack of interest in that. It would mean a relatively enormous amount of work to incorporate social media into both those worlds, because it would all have to be invented and extrapolated.

Your three most recent novels, before “The Peripheral,” take place in some version of the present. You’re now returning to the future, which is where you started out as a writer in the 1980s. Futuristic sci-fi often feels more like cultural criticism of the present than an exercise in prediction. What is it about the future that helps us reflect on the contemporary world?

When I began to write science fiction, I already assumed that science fiction about the future is only ostensibly written about the future, that it’s really made of the present. Science fiction has wound up with a really good cultural toolkit — an unexpectedly good cultural toolkit — for taking apart the present and theorizing on how it works, in the guise of presenting an imagined future.

The three previous books were basically written to find out whether or not I could use the toolkit that I’d acquired writing fictions about imaginary futures on the present, but use it for more overtly naturalistic purposes. I have no idea at this point whether my next book will be set in an imaginary future or the contemporary present or the past.

Do you feel as if sci-fi has actually helped dictate the future? I was speaking with a friend earlier about this, and he phrased the question well: Did a book like “Neuromancer” predict the future, or did it establish a dress code for it? In other words, did it describe a future that people then tried to live out?

I think that the two halves of that are in some kind of symbiotic relationship with one another. Science fiction ostensibly tries to predict the future. And the people who wind up making the future sometimes did what they did because they read a piece of science fiction. “Dress code” is an interesting way to put it. It’s more like … it’s more like attitude, really. What will our attitude be toward the future when the future is the present? And that’s actually much more difficult to correctly predict than what sort of personal devices people will be carrying.

How do you think that attitude has changed since you started writing? Could you describe the attitude of our current moment?

The day the Apple Watch was launched, late in the day someone on Twitter announced that it was already over. They cited some subject, they linked to something, indicating that our moment of giddy future shock was now over. There’s just some sort of endless now, now.

Could you go into that a little bit more, what you mean by an “endless now”?

Fifty years ago, I think now was longer. I think that the cultural and individual concept of the present moment was a year, or two, or six months. It wasn’t measured in clicks. Concepts of the world and of the self couldn’t change as instantly or in some cases as constantly. And I think that has resulted in there being a now that’s so short that in a sense it’s as though it’s eternal. We’re just always in the moment.

And it takes something really horrible, like some terrible, gripping disaster, to lift us out of that, or some kind of extra-strong sense of outrage, which we know that we share with millions of other people. Unfortunately, those are the things that really perk us up. This is where we get perked up, perked up for longer than over a new iPhone, say.

The worlds that you imagine are enchanting, but they also tend to be pretty grim. Is it possible to write good sci-fi that doesn’t have some sort of dystopian edge?

I don’t know. It wouldn’t occur to me to try. The world today, considered in its totality, has a considerable dystopian edge. Perhaps that’s always been true.

I often work in a form of literature that is inherently fantastic. But at the same time that I’m doing that, I’ve always shared concerns with more naturalistic forms of writing. I generally try to make my characters emotionally realistic. I do now, at least; I can’t say I always have done that. And I want the imaginary world they live in and the imaginary problems that they have to reflect the real world, and to some extent real problems that real people are having.

It’s difficult for me to imagine a character in a work of contemporary fiction who wouldn’t have any concerns with the more dystopian elements of contemporary reality. I can imagine one, but she’d be a weird … she’d be a strange character. Maybe some kind of monster. Totally narcissistic.

What makes this character monstrous? The narcissism?

Well, yeah, someone sufficiently self-involved. It doesn’t require anything like the more clinical forms of narcissism. But someone who’s sufficiently self-involved as to just not be bothered with the big bad things that are happening in the world, or the bad things — regular-size bad things — that are happening to one’s neighbors. There certainly are people like that out there. The Internet is full of them. I see them every day.

You were raised in the South, and you live in Vancouver, but, like Philip K. Dick, you’ve set some of your most famous work in San Francisco. What is the appeal of the city for technological dreamers? And how does the Silicon Valley of today fit into that Bay Area ethos?

I’m very curious to go back to San Francisco while on tour for this book, because it’s been a few years since I’ve been there, and it was quite a few years before that when I wrote about San Francisco in my second series of books.

I think one of the reasons I chose it was that it was a place that I would get to fairly frequently, so it would stay fresh in memory, but it also seemed kind of out of the loop. It was kind of an easy canvas for me, an easier canvas to set a future in than Los Angeles. It seemed to have fewer moving parts. And that’s obviously no longer the case, but I really know contemporary San Francisco now more by word of mouth than I do from first-person experience. I really think it sounds like a genuinely new iteration of San Francisco.

Do you think that Google and Facebook and this Silicon Valley culture are the heirs to the Internet that you so presciently imagined in the 1980s? Or do they feel like they’ve taken the Web in different directions than what you expected?

Generally it went in directions that didn’t occur to me. It seems to me now that if I had been a very different kind of novelist, I would have been more likely to foresee something like Facebook. But you know, if you try to imagine that somebody in 1982 writes this novel that totally and accurately predicted what it would be like to be on Facebook, and then tries to get it published? I don’t know if you would be able to get it published. Because how exciting is that, or what kind of crime story could you set there?

Without even knowing it, I was limited by the kind of fiction of the imaginary future that I was trying to write. I could use detective gangster stories, and there is a real world of the Internet that’s like that, you know? Very much like that. Although the crimes are so different. The ace Russian hacker mobs are not necessarily crashing into the global corporations. They’re stealing your Home Depot information. If I’d put that as an exploit in “Neuromancer,” nobody would have gotten it. Although it would have made me seem very, very prescient.

You’ve written often and eloquently about cults of celebrity and the surrealness of fame. By this point you’re pretty famous yourself. Has writing about fame changed the way you experience it? Does experiencing fame change the way you write about it?

Writers in our society, even today, have a fairly homeopathic level of celebrity compared to actors and really popular musicians, or Kardashians. I think in [my 1993 novel] “Virtual Light,” I sort of predicted Kardashian. Or there’s an implied celebrity industry in that book that’s very much like that. You become famous just for being famous. And you can keep it rolling.

But writers, not so much. Writers get just a little bit of it on a day-to-day basis. Writers are in an interesting place in our society to observe how that works, because we can be sort of famous, but not really famous. Partly I’d written about fame because I’d seen little bits of it, but the bigger reason is the extent to which it seems that celebrity is the essential postmodern product, and the essential post-industrial product. The so-called developed world pioneered it. So it’s sort of inherently in my ballpark. It would be weird if it wasn’t there.

You have this reputation of being something of a Cassandra. I don’t want to put you on the spot and ask for predictions. But I’m curious: For people who are trying to understand technological trends, and social trends, where do you recommend they look? What should they be observing?

I think the best advice I’ve ever heard on that was from Samuel R. Delany, the great American writer. He said, “If you want to know how something works, look at one that’s broken.” I encountered that remark of his before I began writing, and it’s one of my fridge magnets for writing.

Anything I make, and anything I’m describing in terms of its workings — even if I were a non-literary futuristic writer of some kind — I think that statement would be very resonant for me. Looking at the broken ones will tell you more about what the thing actually does than looking at one that’s perfectly functioning, because then you’re only seeing the surface, and you’re only seeing what its makers want you to see. If you want to understand social media, look at troubled social media. Or maybe failed social media, things like that.

Do you think that’s partly why so much science fiction is crime fiction, too?

Yeah, it might be. Crime fiction gives the author the excuse to have a protagonist who gets her nose into everything and goes where she’s not supposed to go and asks questions that will generate answers that the author wants the reader to see. It’s a handy combination. Detective fiction is in large part related to literary naturalism, and literary naturalism was quite a radical concept that posited that you could use the novel to explore existing elements of society which had previously been forbidden, like the distribution of capital and class, and what sex really was. Those were all naturalistic concerns. They also yielded detective fiction. Detective fiction and science fiction are an ideal cocktail, in my opinion.

 

http://www.salon.com/2014/11/09/william_gibson_i_never_imagined_facebook/?source=newsletter

Why “Psychological Androgyny” Is Essential for Creativity


“Creative individuals are more likely to have not only the strengths of their own gender but those of the other one, too.”

Despite the immense canon of research on creativity — including its four stages, the cognitive science of the ideal creative routine, the role of memory, and the relationship between creativity and mental illness — very little has focused on one of life’s few givens that equally few of us can escape: gender and the genderedness of the mind.

In Creativity: The Psychology of Discovery and Invention (public library) — one of the most important, insightful, and influential books on creativity ever written — pioneering psychologist Mihaly Csikszentmihalyi examines a curious, under-appreciated yet crucial aspect of the creative mindset: a predisposition to psychological androgyny.

In all cultures, men are brought up to be “masculine” and to disregard and repress those aspects of their temperament that the culture regards as “feminine,” whereas women are expected to do the opposite. Creative individuals to a certain extent escape this rigid gender role stereotyping. When tests of masculinity/femininity are given to young people, over and over one finds that creative and talented girls are more dominant and tough than other girls, and creative boys are more sensitive and less aggressive than their male peers.

Illustration by Yang Liu from ‘Man Meets Woman,’ a pictogram critique of gender stereotypes.

Csikszentmihalyi points out that this psychological tendency toward androgyny shouldn’t be confused with homosexuality — it deals not with sexual constitution but with a set of psychoemotional capacities:

Psychological androgyny is a much wider concept, referring to a person’s ability to be at the same time aggressive and nurturant, sensitive and rigid, dominant and submissive, regardless of gender. A psychologically androgynous person in effect doubles his or her repertoire of responses and can interact with the world in terms of a much richer and varied spectrum of opportunities. It is not surprising that creative individuals are more likely to have not only the strengths of their own gender but those of the other one, too.

Citing his team’s extensive interviews with 91 individuals who scored high on creativity in various fields — including pioneering astronomer Vera Rubin, legendary sociobiologist E.O. Wilson, philosopher and marginalia champion Mortimer Adler, universe-disturber Madeleine L’Engle, social science titan John Gardner, poet extraordinaire Denise Levertov, and MacArthur genius Stephen Jay Gould — Csikszentmihalyi writes:

It was obvious that the women artists and scientists tended to be much more assertive, self-confident, and openly aggressive than women are generally brought up to be in our society. Perhaps the most noticeable evidence for the “femininity” of the men in the sample was their great preoccupation with their family and their sensitivity to subtle aspects of the environment that other men are inclined to dismiss as unimportant. But despite having these traits that are not usual to their gender, they retained the usual gender-specific traits as well.

Illustration from the 1970 satirical book ‘I’m Glad I’m a Boy! I’m Glad I’m a Girl!’

Creativity: The Psychology of Discovery and Invention is a revelatory read in its entirety, featuring insights on the ideal conditions for the creative process, the key characteristics of the innovative mindset, how aging influences creativity, and invaluable advice to the young from Csikszentmihalyi’s roster of 91 creative luminaries. Complement this particular excerpt with Ursula K. Le Guin on being a man — arguably the most brilliant meditation on gender ever written, by one of the most exuberantly creative minds of our time.

http://www.brainpickings.org/2014/11/07/psychological-androginy-creativity-csikszentmihalyi/