Lies, Damn Lies, and Tech Diversity Statistics


 

Some of the world’s leading data scientists are on the payrolls of Microsoft, Google, Facebook, Yahoo, and Apple. So it would be interesting to get their take on the infographics the tech giants have passed off as diversity data disclosures. Microsoft, for example, reported that its workforce is 29% female, which isn’t great, but if one takes the trouble to run the numbers on a linked EEO-1 filing snippet (PDF), some things look even worse: only 23.35% of its reported white U.S. employee workforce is female (Microsoft, like Google, footnotes that “Gender data are global, ethnicity data are US only”). And while Google and Facebook blame their companies’ lack of diversity on the demographics of U.S. computer science grads, neither CS-grad nor nationality breakouts were provided as part of their diversity disclosures. Also, the EEOC notes that EEO-1 numbers reflect “any individual on the payroll of an employer who is an employee for purposes of the employer’s withholding of Social Security taxes,” further muddying the disclosures of companies relying on imported talent, like the H-1B-dependent Facebook. So, were the diversity disclosure mea culpas less about providing meaningful data for analysis, and more about deflecting criticism and convincing lawmakers there’s a need for education and immigration legislation (aka Microsoft’s National Talent Strategy) that’s in tech’s interest?
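For anyone who wants to reproduce that back-of-the-envelope arithmetic, here is a minimal Python sketch. The head counts are hypothetical placeholders chosen only so the calculation lands on the 23.35% figure cited above; they are not values from Microsoft’s actual EEO-1 filing.

```python
# Back-of-the-envelope check on an EEO-1-style gender breakdown.
# NOTE: these counts are made-up placeholders, not Microsoft's real
# filing data; substitute the numbers from the actual EEO-1 snippet.
eeo1_us_white = {"male": 38_300, "female": 11_670}

total = sum(eeo1_us_white.values())
female_share = eeo1_us_white["female"] / total
print(f"Female share of white U.S. workforce: {female_share:.2%}")
# -> Female share of white U.S. workforce: 23.35%
```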

 

Slashdot

Top Secret report details FBI mass surveillance


By Thomas Gaist
14 January 2015

The Federal Bureau of Investigation (FBI) has been overseeing and co-directing mass surveillance programs run by the National Security Agency (NSA) since at least 2008, a newly declassified document from the Office of the Inspector General (OIG) of the Department of Justice (DOJ) shows.

The classified Top Secret report, “A Review of the Federal Bureau of Investigation’s Activities Under Section 702 of the Foreign Intelligence Surveillance Act Amendments Act of 2008,” acquired by the New York Times this week through a Freedom of Information Act lawsuit submitted in 2012, found that the FBI has amassed large quantities of electronic communications data through its involvement in NSA surveillance operations run under Section 702 of the FISA Amendments Act (FAA) of 2008.

The report is based on OIG interviews with some 45 FBI employees and officials, as well as officials from the National Security Agency (NSA) and the Office of the Director of National Intelligence (ODNI) and lawyers from the DOJ’s National Security Division. OIG also examined “thousands of documents related to the FBI’s 702 activities.”

Beginning in 2008, the bureau received daily emailed reports listing new targets being added to the NSA’s mass spying programs. By 2009, the FBI was receiving a continuous feed of unprocessed data from the NSA “to analyze for its own purposes,” partly through accessing the NSA’s PRISM program, the report states. The FBI did not report its involvement in 702 data collection to Congress until 2012, the report found.

The NSA sends surveillance target lists to the FBI’s spy units using “a system called PRISM,” the document states. After the word PRISM, the rest of the paragraph, a total of 8 lines of text, is completely blacked out.

No further references to PRISM are visible. By opening up PRISM to the FBI, the NSA has placed virtually all electronic communications sent by ordinary Internet users around the world at the fingertips of the FBI. Yet, aside from this one instance, “the Justice Department had redacted all the other references to PRISM in the report,” the Times confirmed.

PRISM collects bulk data directly from leading technology and communications corporations, including Yahoo!, Google, Facebook, YouTube, Skype, AOL and Apple, constantly vacuuming up communications data from hundreds of millions of users around the world, if not more, according to documents leaked by Edward Snowden in 2013.

NSA slides describe PRISM as the agency’s “number one source of raw intelligence,” and note that PRISM captures some 90 percent of electronic data acquired by the NSA spy programs. During a single two-month period in 2012, NSA PRISM operations collected some 70 million communications from the French population.

This immensely powerful surveillance machinery, supposedly needed to target foreign terrorist conspirators and enemy states, has increasingly been placed at the full disposal of the top domestic police agency. Through PRISM, federal police agents can view communications data in real time and search through stored data, such as email archives, at will.

The OIG report states reassuringly that the FBI is “doing a good job in making sure that the email accounts targeted for warrantless collection belonged to non-citizens abroad.”

The endless redactions throughout the OIG document underscore the contempt of the US elite for even minimal forms of democratic accountability.

Redactions are present on nearly every page, and countless paragraphs are entirely blacked out. An entire page of the Table of Contents is redacted.

Section headers and central facts are redacted to an extent that is almost humorous, such as:

** “REDACTED provides operational support to the FBI’s investigative units at the Headquarters and in the field.”

** “The FBI [REDACTED] from participating providers and transmits them in the form of raw unminimized data to the NSA and, at the NSA’s direction, to the FBI and the CIA.”

** “The FBI retains 702 data in its [REDACTED] for analysis and dissemination.”

** “The second basic activity that the FBI conducts in the 702 Program is to [REDACTED]”

** “Findings and Recommendations Relating to the FBI’s [REDACTED]”

** “Findings Relating to Access to and Purging of 702-Data Retained in [REDACTED]”

A full page is dedicated to “The [REDACTED] Factor,” referenced by NSA analysts to justify some 10 percent of additions to the agency’s surveillance lists. Every word that might even suggest the nature of the “factor” has been blacked out.

Descriptions of CIA involvement in the warrantless mass surveillance are all redacted. A typical example reads, “The Central Intelligence Agency (CIA) participates in Section 702 [REDACTED].”

Nonetheless, significant conclusions can still be drawn from partially redacted portions of the document.

The FBI also gathers surveillance data from its own sources, unnamed “participating parties,” and disseminates this information to other government agencies, the document shows.

“The FBI acquires [several words REDACTED] from the participating parties and routes the raw unminimized data to the NSA and, at the NSA’s direction, to the CIA and to the FBI’s [second half of sentence REDACTED]. The FBI retains a portion of the raw data for analysis and dissemination as finished intelligence products,” the report reads.

Secret FBI spy units, referred to collectively by the OIG as the “702 Team,” manage the bureau’s data acquisition and dissemination efforts. The actual names of the units are redacted from the report.

The FBI’s 702 Team is made up of “personnel in the Counterterrorism Division’s [second half of sentence REDACTED]. These personnel are drawn primarily from the [two full lines REDACTED] two of five units within [word REDACTED],” the OIG report reads.

For years, the OIG report shows, FBI officials have been reviewing and signing off on long lists of potential new NSA surveillance targets. The entire review process is conducted “in consultation with” the NSA, and the FBI “shows considerable deference to the NSA’s targeting judgments,” the OIG reported.

The OIG report testifies to the speed with which the US ruling class has overturned core principles of the US Bill of Rights during the past decade and a half.

When the Bush administration began collecting bulk US telephone and Internet data under the President’s Surveillance Program (PSP), in October 2001, the program’s existence was not even publicly acknowledged, so flagrant were its violations of the Fourth Amendment. Within less than a decade, warrantless searches and seizures of user data gained full Congressional approval.

The PSP coordinated secret mass surveillance of US telephone and computer-based communications, establishing secret rooms in major corporate facilities where government agents tapped directly into the companies’ hardware.

After the New York Times first reported on the PSP in 2005, the administration made initial moves to legalize the operation, “persuading” a FISA court judge to issue a ruling ordering the telecoms to cooperate, a ruling that, the Bush administration claimed, would legalize the operations.

Mild objections by a single FISA judge to expanded spy powers led the intelligence establishment to demand that Congress immediately pass legislation to place the warrantless spying on a firmer legal foundation, according to the OIG report.

“Judge Vinson’s resistance led Congress to enact, in August 2007, the Protect America Act, a temporary law permitting warrantless surveillance of foreigners from domestic network locations,” the OIG report states.

The NSA and FBI were able “to accelerate the government’s efforts” to pass the PAA legislation by insisting that existing laws prevented the agency from spying on enough targets, the document states. The 2007 Protect America Act (PAA) served to temporarily legalize warrantless surveillance until permanent amendments could be added to the decades-old FISA legislation in the form of the 2008 Foreign Intelligence Surveillance Amendments Act (FAA). The FAA gave Congressional approval to the NSA’s warrantless bulk spying and data capture operations by employing “a novel and expansive interpretation of the FISA statute,” the OIG notes.

 

http://www.wsws.org/en/articles/2015/01/14/fbis-j14.html

How Facebook Killed the Internet

Death by Ten Billion Status Updates


by DAVID ROVICS

Facebook killed the internet, and I’m pretty sure that the vast majority of people didn’t even notice.

I can see the look on many of your faces, and hear the thoughts. Someone’s complaining about Facebook again.  Yes, I know it’s a massive corporation, but it’s the platform we’re all using.  It’s like complaining about Starbucks.  After all the independent cafes have been driven out of town and you’re an espresso addict, what to do?  What do you mean “killed”?  What was killed?

I’ll try to explain.  I’ll start by saying that I don’t know what the solution is.  But I think any solution has to start with solidly identifying the nature of the problem.

First of all, Facebook killed the internet, but if it wasn’t Facebook, it would have been something else.  The evolution of social media was probably as inevitable as the development of cell phones that could surf the internet.  It was the natural direction for the internet to go in.

Which is why it’s so especially disturbing.  Because the solution is not Znet or Ello.  The solution is not better social media, better algorithms, or social media run by a nonprofit rather than a multibillion-dollar corporation.  Just as the solution to the social alienation caused by everybody having their own private car is not more electric vehicles.  Just as the solution to the social alienation caused by everyone having their own cell phone to stare at is not a collectively-owned phone company.

Many people from the grassroots to the elites are thrilled about the social media phenomenon.  Surely some of the few people who will read this are among them.  We throw around phrases like “Facebook revolution” and we hail these new internet platforms that are bringing people together all over the world.  And I’m not suggesting they don’t have their various bright sides.  Nor am I suggesting you should stop using social media platforms, including Facebook.  That would be like telling someone in Texas they should bike to work, when the whole infrastructure of every city in the state is built for sports utility vehicles.

But we should understand the nature of what is happening to us.

From the time that newspapers became commonplace up until the early 1990’s, for the overwhelming majority of the planet’s population, the closest we came to writing in a public forum were the very few of us who ever bothered to write a letter to the editor.  A tiny, tiny fraction of the population were authors or journalists who had a public forum that way on an occasional or a regular basis, depending.  Some people wrote up the pre-internet equivalent of an annual Christmas-time blog post which they photocopied and sent around to a few dozen friends and relatives.

In the 1960s there was a massive flowering of independent, “underground” press in towns and cities across the US and other countries.  There was a vastly increased diversity of views and information that could be easily accessed by anyone who lived near a university and could walk to a news stand and had an extra few cents to spend.

In the 1990s, with the development of the internet – websites, email lists – there was an explosion of communication that made the underground press of the 60’s pale in comparison.  Most people in places like the US virtually stopped using phones (to actually talk on), from my experience.  Many people who never wrote letters or much of anything else started using computers and writing emails to each other, and even to multiple people at once.

Those very few of us who were in the habit in the pre-internet era of sending around regular newsletters featuring our writing, our thoughts, our list of upcoming gigs, products or services we were trying to sell, etc., were thrilled with the advent of email, and the ability to send our newsletters out so easily, without spending a fortune on postage stamps, without spending so much time stuffing envelopes.  For a brief period of time, we had access to the same audience, the same readers we had before, but now we could communicate with them virtually for free.

This, for many of us, was the internet’s golden age – 1995-2005 or so.  There was the increasing problem of spam of various sorts.  Like junk mail, only more of it.  Spam filters started getting better, and largely eliminated that problem for most of us.

The listservs that most of us bothered to read were moderated announcements lists.  The websites we used the most were interactive, but moderated, such as Indymedia.  In cities throughout the world, big and small, there were local Indymedia collectives.  Anyone could post stuff, but there were actual people deciding whether it should get published, and if so, where.  As with any collective decision-making process, this was challenging, but many of us felt it was a challenge that was worth the effort.  As a result of these moderated listservs and moderated Indymedia sites, we all had an unprecedented ability to find out about and discuss ideas and events that were taking place in our cities, our countries, our world.

Then came blogging, and social media.  Every individual with a blog, Facebook page, Twitter account, etc., became their own individual broadcaster.  It’s intoxicating, isn’t it?  Knowing that you have a global audience of dozens or hundreds, maybe thousands of people (if you’re famous to begin with, or something goes viral) every time you post something.  Being able to have conversations in the comments sections with people from around the world who will never physically meet each other.  Amazing, really.

But then most people stopped listening.  Most people stopped visiting Indymedia.  Indymedia died, globally, for the most part.  Newspapers – right, left and center – closed, and are closing, whether offline or online ones.  Listservs stopped existing.  Algorithms replaced moderators.  People generally began to think of librarians as an antiquated phenomenon.

Now, in Portland, Oregon, one of the most politically plugged-in cities in the US, there is no listserv or website you can go to that will tell you what is happening in the city in any kind of readable, understandable format.  There are different groups with different websites, Facebook pages, listservs, etc., but nothing for the progressive community as a whole.  Nothing functional, anyway.  Nothing that approaches the functionality of the announcements lists that existed in cities and states throughout the country 15 years ago.

Because of the technical limitations of the internet at the time, there was for a few years a happy medium between a small elite providing most of the written content that most people in the world read, and the situation we now find ourselves in: drowning in Too Much Information, most of it meaningless drivel, white noise, fog that prevents you from seeing anywhere further than the low beams can illuminate at a given time.

It was a golden age, but for the most part an accidental one, and a very brief one.  As it became easy for people to start up a website, a blog, a Myspace or Facebook page, to post updates, etc., the new age of noise began, inevitably, the natural evolution of the technology.

And most people didn’t notice that it happened.

Why do I say that?  First of all, I didn’t just come up with this shit.  I’ve been talking to a lot of people for many years, and a lot of people think social media is the best thing since sliced bread.  And why shouldn’t they?

The bottom line is, there’s no reason most people would have had occasion to notice that the internet died, because they weren’t content providers (as we call authors, artists, musicians, journalists, organizers, public speakers, teachers, etc. these days) in the pre-internet age or during the first decade or so of the internet as a popular phenomenon.  And if you weren’t a content provider back then, why would you know that anything changed?

I and others like me know – because the people who used to read and respond to stuff I sent out on my email list aren’t there anymore.  They don’t open the emails anymore, and if they do, they don’t read them.  And it doesn’t matter what medium I use – blog, Facebook, Twitter, etc.  Of course some people do, but most people are now doing other things.

What are they doing?  I spent most of last week in Tokyo, going all over town, spending hours each day on the trains.  Most people sitting in the trains back during my first visit to Japan in 2007 were sleeping, as they are now.  But those who weren’t sleeping, seven years ago, were almost all reading books.  Now, there’s hardly a book to be seen.  Most people are looking at their phones.  And they’re not reading books on their phones.  (Yes, I peeked.  A lot.)  They’re playing games or, more often, looking at their Facebook “news feeds.”  And it’s the same in the US and everywhere else that I have occasion to travel to.

Is it worth it to replace moderators with algorithms?  Editors with white noise?  Investigative journalists with pictures of your cat?  Independent record labels and community radio stations with a multitude of badly-recorded podcasts?  Independent Media Center collectives with a million Facebook updates and Twitter feeds?

I think not.  But that’s where we’re at.  How do we get out of this situation, and clear the fog, and use our brains again?  I wish I knew.

David Rovics is a singer/songwriter based in Portland, Oregon.

 

http://www.counterpunch.org/2014/12/24/how-facebook-killed-the-internet/

Neglecting the Lessons of Cypherpunk History

 

Over the course of the Snowden revelations, a number of high-profile figures have praised the merits of encryption as a remedy to the quandary of mass interception. Companies like Google and Apple have been quick to publicize their adoption of cryptographic countermeasures in an effort to maintain quarterly earnings. This marketing campaign has even convinced less credulous onlookers like Glenn Greenwald. For example, in a recent Intercept piece, Greenwald claimed:

“It is well-established that, prior to the Snowden reporting, Silicon Valley companies were secret, eager and vital participants in the growing Surveillance State. Once their role was revealed, and they perceived those disclosures threatening to their future profit-making, they instantly adopted a PR tactic of presenting themselves as Guardians of Privacy. Much of that is simply self-serving re-branding, but some of it, as I described last week, are genuine improvements in the technological means of protecting user privacy, such as the encryption products now being offered by Apple and Google, motivated by the belief that, post-Snowden, parading around as privacy protectors is necessary to stay competitive.”

So, while he concedes the role of public relations in the ongoing cyber security push, Greenwald concurrently believes encryption is a “genuine” countermeasure. In other words, what we’re seeing is mostly marketing hype… except for the part about strong encryption.

With regard to the promise of encryption as a privacy cure-all, history tells a markedly different story. Guarantees of security through encryption have often proven illusory, a magic act. Seeking refuge in a technical quick fix can be hazardous for a number of reasons.

Monolithic corporations aren’t our saviors — they’re the central part of the problem.

Tech Companies Are Peddling a Phony Version of Security, Using the Govt. as the Bogeyman


This week the USA Freedom Act was blocked in the Senate as it failed to garner the 60 votes required to move forward. Presumably the bill would have imposed limits on NSA surveillance. Careful scrutiny of the bill’s text however reveals yet another mere gesture of reform, one that would codify and entrench existing surveillance capabilities rather than eliminate them.

Glenn Greenwald, commenting from his perch at the Intercept, opined:

“All of that illustrates what is, to me, the most important point from all of this: the last place one should look to impose limits on the powers of the U.S. government is . . . the U.S. government. Governments don’t walk around trying to figure out how to limit their own power, and that’s particularly true of empires.”

Anyone who followed the sweeping deregulation of the financial industry during the Clinton era, the Gramm–Leach–Bliley Act of 1999 which effectively repealed Glass-Steagall and the Commodity Futures Modernization Act of 2000, immediately sees through Greenwald’s impromptu dogma. Let’s not forget the energy market deregulation in California and the subsequent manipulation that resulted in blackouts throughout the state. Ditto the latest rollback of arms export controls that opened up markets for the defense industry. And never mind all those hi-tech companies that want to loosen H-1B restrictions.

The truth is that the government is more than happy to cede power and authority… just as long as doing so serves the corporate factions that have achieved state capture. The “empire” Greenwald speaks of is a corporate empire. In concrete analytic results that affirm Thomas Ferguson’s Investment Theory of Party Competition, researchers from Princeton and Northwestern University conclude that:

“Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.”

Glenn’s stance reveals a broader libertarian theme. One that the Koch brothers would no doubt find amenable: the government is suspect and efforts to rein in mass interception must therefore arise from the corporate entities. Greenwald appears to believe that the market will solve everything. Specifically, he postulates that consumer demand for security will drive companies to offer products that protect user privacy, adopt “strong” encryption, etc.

The Primacy of Security Theater

Certainly large hi-tech companies care about quarterly earnings. That definitely explains all of the tax evasion, wage ceilings, and the slave labor. But these same companies would be hard pressed to actually protect user privacy because spying on users is a fundamental part of their business model. Like government spies, corporate spies collect and monetize oceans of data.

Furthermore hi-tech players don’t need to actually bullet-proof their products to win back customers. It’s far more cost effective to simply manufacture the perception of better security: slap on some crypto, flood the news with public relations pieces, and get some government officials (e.g. James Comey, Robert Hannigan, and Stewart Baker) to whine visibly about the purported enhancements in order to lend the marketing campaign credibility. The techno-libertarians of Silicon Valley are masters of Security Theater.

Witness, if you will, Microsoft’s litany of assurances about security over the years, followed predictably by an endless train of critical zero-day bugs. Faced with such dissonance it becomes clear that “security” in high-tech is viewed as a public relations issue, a branding mechanism to boost profits. Greenwald is underestimating the contempt that CEOs have for the credulity of their user base, much less their own workers.

Does allegedly “strong” cryptography offer salvation? Cryptome’s John Young thinks otherwise:

“Encryption is a citizen fraud, bastard progeny of national security, which offers malware insecurity requiring endless ‘improvements’ to correct the innately incorrigible. Its advocates presume it will empower users rather than subject them to ever more vulnerability to shady digital coders complicit with dark coders of law in exploiting fear, uncertainty and doubt.”

Business interests, having lured customers in droves with a myriad of false promises, will go back to secretly cooperating with government spies as they always have: introducing subtle weaknesses into cryptographic protocols, designing backdoors that double as accidental zero-day bugs, building rootkits which hide in plain sight, and handing over user data. In other words all of the behavior that was described by Edward Snowden’s documents. Like a jilted lover, consumers will be pacified with a clever sales pitch that conceals deeper corporate subterfuge.
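To make concrete what a backdoor that “doubles as an accidental zero-day bug” can look like, here is a deliberately toy Python sketch in the spirit of well-documented incidents such as the Debian OpenSSL weak-key bug; it is an illustration of the general category, not code from any actual vendor or leaked document.

```python
# Toy illustration (hypothetical, not any vendor's code) of a backdoor
# disguised as an ordinary bug: key material is "randomly" generated,
# but the generator is seeded from the current second, shrinking the
# effective keyspace to a handful of guessable seeds.
import random
import time

def generate_key(nbytes: int = 16) -> bytes:
    rng = random.Random(int(time.time()))  # reads as sloppiness, works as a backdoor
    return bytes(rng.getrandbits(8) for _ in range(nbytes))

key = generate_key()
print(key.hex())

# Anyone who can guess the approximate time of generation can replay
# the candidate seeds offline and recover the "random" key, while an
# auditor sees only a plausible programming mistake.
```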

Ultimately it’s a matter of shared class interest. The private sector almost always cooperates with the intelligence services because American spies pursue the long-term prerogatives of neoliberal capitalism: open markets and access to resources the world over. Or perhaps someone has forgotten the taped phone call of Victoria Nuland selecting the next prime minister of Ukraine as the IMF salivates over austerity measures? POTUS caters to his constituents, the corporate ruling class, which transitively conveys its wishes to clandestine services like the CIA. Recall Ed Snowden’s open letter to Brazil:

“These programs were never about terrorism: they’re about economic spying, social control, and diplomatic manipulation. They’re about power.”

To confront the Deep State Greenwald is essentially advocating that we elicit change by acting like consumers instead of constitutionally endowed citizens. This is a grave mistake because profits can be decoupled from genuine security in a society defined by secrecy, propaganda, and state capture. Large monolithic corporations aren’t our saviors. They’re the central part of the problem. We shouldn’t run to the corporate elite to protect us. We should engage politically to retake and remake our republic.

 

Bill Blunden is an independent investigator whose current areas of inquiry include information security, anti-forensics, and institutional analysis.

http://www.alternet.org/tech-companies-are-peddling-phony-version-security-using-govt-bogeyman?akid=12501.265072.yCLOb-&rd=1&src=newsletter1027620&t=29&paging=off&current_page=1#bookmark

You should actually blame America for everything you hate about internet culture

November 21

The tastes of American Internet-users are both well-known and much-derided: Cat videos. Personality quizzes. Lists of things that only people from your generation/alma mater/exact geographic area “understand.”

But in France, it turns out, even viral-content fiends are a bit more … sophistiqués.

“In France, articles about cats do not work,” Buzzfeed’s Scott Lamb told Le Figaro, a leading Parisian paper. Instead, he explained, Buzzfeed’s first year in the country has shown it that “the French love sharing news and politics on social networks – in short, pretty serious stuff.”

This is interesting for two reasons: first, as conclusive proof that the French are irredeemable snobs; second, as a crack in the glossy, understudied facade of what we commonly call “Internet culture.”

When the New York Times’s David Pogue tried to define the term in 2009, he ended up with a series of memes: the “Star Wars” kid, the dancing baby, rickrolling, the exploding whale. Likewise, if you look to anyone who claims to cover the Internet culture space — not only Buzzfeed, but Mashable, Gawker and, yeah, yours truly — their coverage frequently plays on what Lamb calls the “cute and positive” theme. They’re boys who work at Target and have swoopy hair, videos of babies acting like “tiny drunk adults,” hamsters eating burritos and birthday cakes.

That is the meaning we’ve assigned to “Internet culture,” itself an ambiguous term: It’s the fluff and the froth of the global Web.

But Lamb’s observations on Buzzfeed’s international growth would actually seem to suggest something different. Cat memes and other frivolities aren’t the work of an Internet culture. They’re the work of an American one.

American audiences love animals and “light content,” Lamb said, but readers in other countries have reacted differently. Germans were skeptical of the site’s feel-good frivolity, he said, and some Australians were outright “hostile.” Meanwhile, in France — land of la mode and le Michelin — critics immediately complained, right at Buzzfeed’s French launch, that the articles were too fluffy and poorly translated. Instead, Buzzfeed quickly found that readers were more likely to share articles about news, politics and regional identity, particularly in relation to the loved/hated Paris, than they were to share the site’s other fare.

A glance at Buzzfeed’s French page would appear to bear that out. Right now, its top stories “Ça fait le buzz” — that’s making the buzz, for you Américains — are “21 photos that will make you laugh every time” and “26 images that will make you rethink your whole life.” They’re not making much buzz, though. Neither has earned more than 40,000 clicks — a pittance for the reigning king of virality, particularly in comparison to Buzzfeed’s versions on the English site.

All this goes to show that the things we term “Internet culture” are not necessarily born of the Internet, itself — the Internet is everywhere, but the insatiable thirst for cat videos is not. If you want to complain about dumb memes or clickbait or other apparent instances of socially sanctioned vapidity, blame America: We started it, not the Internet.

Appelons un chat un chat. (Let’s call a cat a cat.)

Caitlin Dewey runs The Intersect blog, writing about digital and Internet culture. Before joining the Post, she was an associate online editor at Kiplinger’s Personal Finance.
http://www.washingtonpost.com/news/the-intersect/wp/2014/11/21/you-should-actually-blame-america-for-everything-you-hate-about-internet-culture/

William Gibson: I never imagined Facebook

The brilliant science-fiction novelist who imagined the Web tells Salon how writers missed social media’s rise


Even if you’ve never heard of William Gibson, you’re probably familiar with his work. Arguably the most important sci-fi writer of his generation, Gibson’s cyber-noir imagination has shaped everything from the Matrix aesthetic to geek culture to the way we conceptualize virtual reality. In a 1982 short story, Gibson coined the term “cyberspace.” Two years later, his first and most famous novel, “Neuromancer,” helped launch the cyberpunk genre. By the 1990s, Gibson was writing about big data, imagining Silk Road-esque Internet enclaves, and putting his characters on reality TV shows — a full four years before the first episode of “Big Brother.”

Prescience is flashy, but Gibson is less an oracle than a kind of speculative sociologist. A very contemporary flavor of dislocation seems to be his specialty. Gibson’s heroes shuttle between wildly discordant worlds: virtual paradises and physical squalor; digital landscapes and crumbling cities; extravagant wealth and poverty.

In his latest novel, “The Peripheral,” which came out on Tuesday, Gibson takes this dislocation to new extremes. Set in mid-21st century Appalachia and far-in-the-future London, “The Peripheral” is partly a murder mystery, and partly a time-travel mind-bender. Gibson’s characters aren’t just dislocated in space, now. They’ve become unhinged from history.

Born in South Carolina, Gibson has lived in Vancouver since the 1960s. Over the phone, we spoke about surveillance, celebrity and the concept of the eternal now.

You’re famous for writing about hackers, outlaws and marginal communities. But one of the heroes of “The Peripheral” is a near-omniscient intelligence agent. She has surveillance powers that the NSA could only dream of. Should I be surprised to see you portray that kind of character so positively?

Well, I don’t know. She’s complicated, because she is this kind of terrifying secret police person in the service of a ruthless global kleptocracy. At the same time, she seems to be slightly insane and rather nice. It’s not that I don’t have my serious purposes with her, but at the same time she’s something of a comic turn.

Her official role is supposed to be completely terrifying, but at the same time her role is not a surprise. It’s not like, “Wow, I never even knew that that existed.”



Most of the characters in “The Peripheral” assume that they’re being monitored at all times. That assumption is usually correct. As a reader, I was disconcerted by how natural this state of constant surveillance felt to me.

I don’t know if it would have been possible 30 years ago to convey that sense to the reader effectively, without the reader already having some sort of cultural module in place that can respond to that. If we had somehow been able to read this text 30 years ago, I don’t know how we would even register that. It would be a big thing for a reader to get their head around without a lot of explaining. It’s a scary thing, the extent to which I don’t have to explain why [the characters] take that surveillance for granted. Everybody just gets it.

You’re considered a founder of the cyberpunk genre, which tends to feature digital cowboys — independent operators working on the frontiers of technology. Is the counterculture ethos of cyberpunk still relevant in an era when the best hackers seem to be working for the Chinese and U.S. governments, and our most famous digital outlaw, Edward Snowden, is under the protection of Vladimir Putin?

It’s seemed to me for quite a while now that the most viable use for the term “cyberpunk” is in describing artifacts of popular culture. You can say, “Did you see this movie? No? Well, it’s really cyberpunk.” Or, “Did you see the cyberpunk pants she was wearing last night?”

People know what you’re talking about, but it doesn’t work so well describing human roles in the world today. We’re more complicated. I think one of the things I did in my early fiction, more or less for effect, was to depict worlds where there didn’t really seem to be much government. In “Neuromancer,” for example, there’s no government really on the case of these rogue AI experiments that are being done by billionaires in orbit. If I had been depicting a world in which there were governments and law enforcement, I would have depicted hackers on both sides of the fence.

In “Neuromancer,” I don’t think there’s any evidence of anybody who has any parents. It’s kind of a very adolescent book that way.

In “The Peripheral,” governments are involved on both sides of the book’s central conflict. Is that a sign that you’ve matured as a writer? Or are you reflecting changes in how governments operate?

I hope it’s both. This book probably has, for whatever reason, more of my own, I guess I could now call it adult, understanding of how things work. Which, I suspect, is as it should be. People in this book live under governments, for better or worse, and have parents, for better or worse.

In 1993, you wrote an influential article about Singapore for Wired magazine, in which you wondered whether the arrival of new information technology would make the country more free, or whether Singapore would prove that “it is possible to flourish through the active repression of free expression.” With two decades of perspective, do you feel like this question has been answered?

Well, I don’t know, actually. The question was, when I asked it, naive. I may have posed innocently a false dichotomy, because some days when you’re looking out at the Internet both things are possible simultaneously, in the same place.

So what do you think is a better way to phrase that question today? Or what would have been a better way to phrase it in 1993?

I think you would end with something like “or is this just the new normal?”

Is there anything about “the new normal” in particular that surprises you? What about the Internet today would you have been least likely to foresee?

It’s incredible, the ubiquity. I definitely didn’t foresee the extent to which we would all be connected almost all of the time without needing to be plugged in.

That makes me think of “Neuromancer,” in which the characters are always having to track down a physical jack, which they then use to plug themselves into this hyper-futuristic Internet.

Yes. It’s funny, when the book was first published, when it was just out — and it was not a big deal the first little while it was out, it was just another paperback original — I went to a science fiction convention. There were guys there who were, by the standards of 1984, far more computer-literate than I was. And they very cheerfully told me that I got it completely wrong, and I knew nothing. They kept saying over and over, “There’s never going to be enough bandwidth, you don’t understand. This could never happen.”

So, you know, here I am, this many years later with this little tiny flat thing in my hand that’s got more bandwidth than those guys thought was possible for a personal device to ever have, and the book is still resonant for at least some new readers, even though it’s increasingly hung with the inevitable obsolescence of having been first published in 1984. Now it’s not really in the pale, but in the broader outline.

You wrote “Neuromancer” on a 1927 Hermes typewriter. In an essay of yours from the mid-1990s, you specifically mention choosing not to use email. Does being a bit removed from digital culture help you critique it better? Or do you feel that you’re immersed in that culture, now?

I no longer have the luxury of being as removed from it as I was then. I was waiting for it to come to me. When I wrote [about staying off email], there was a learning curve involved in using email, a few years prior to the Web.

As soon as the Web arrived, I was there, because there was no learning curve. The interface had been civilized, and I’ve basically been there ever since. But I think I actually have a funny kind of advantage, in that I’m not generationally of [the Web]. Just being able to remember the world before it, some of the perspectives are quite interesting.

Drones and 3-D printing play major roles in “The Peripheral,” but social networks, for the most part, are obsolete in the book’s fictional future. How do you choose which technological trends to amplify in your writing, and which to ignore?

It’s mostly a matter of which ones I find most interesting at the time of writing. And the absence of social media in both those futures probably has more to do with my own lack of interest in that. It would mean a relatively enormous amount of work to incorporate social media into both those worlds, because it would all have to be invented and extrapolated.

Your three most recent novels, before “The Peripheral,” take place in some version of the present. You’re now returning to the future, which is where you started out as a writer in the 1980s. Futuristic sci-fi often feels more like cultural criticism of the present than an exercise in prediction. What is it about the future that helps us reflect on the contemporary world?

When I began to write science fiction, I already assumed that science fiction about the future is only ostensibly written about the future, that it’s really made of the present. Science fiction has wound up with a really good cultural toolkit — an unexpectedly good cultural toolkit — for taking apart the present and theorizing on how it works, in the guise of presenting an imagined future.

The three previous books were basically written to find out whether or not I could use the toolkit that I’d acquired writing fictions about imaginary futures on the present, but use it for more overtly naturalistic purposes. I have no idea at this point whether my next book will be set in an imaginary future or the contemporary present or the past.

Do you feel as if sci-fi has actually helped dictate the future? I was speaking with a friend earlier about this, and he phrased the question well: Did a book like “Neuromancer” predict the future, or did it establish a dress code for it? In other words, did it describe a future that people then tried to live out?

I think that the two halves of that are in some kind of symbiotic relationship with one another. Science fiction ostensibly tries to predict the future. And the people who wind up making the future sometimes did what they did because they read a piece of science fiction. “Dress code” is an interesting way to put it. It’s more like … it’s more like attitude, really. What will our attitude be toward the future when the future is the present? And that’s actually much more difficult to correctly predict than what sort of personal devices people will be carrying.

How do you think that attitude has changed since you started writing? Could you describe the attitude of our current moment?

The day the Apple Watch was launched, late in the day someone on Twitter announced that it was already over. They cited some subject, they linked to something, indicating that our moment of giddy future shock was now over. There’s just some sort of endless now, now.

Could you go into that a little bit more, what you mean by an “endless now”?

Fifty years ago, I think now was longer. I think that the cultural and individual concept of the present moment was a year, or two, or six months. It wasn’t measured in clicks. Concepts of the world and of the self couldn’t change as instantly or in some cases as constantly. And I think that has resulted in there being a now that’s so short that in a sense it’s as though it’s eternal. We’re just always in the moment.

And it takes something really horrible, like some terrible, gripping disaster, to lift us out of that, or some kind of extra-strong sense of outrage, which we know that we share with millions of other people. Unfortunately, those are the things that really perk us up. This is where we get perked up, perked up for longer than over a new iPhone, say.

The worlds that you imagine are enchanting, but they also tend to be pretty grim. Is it possible to write good sci-fi that doesn’t have some sort of dystopian edge?

I don’t know. It wouldn’t occur to me to try. The world today, considered in its totality, has a considerable dystopian edge. Perhaps that’s always been true.

I often work in a form of literature that is inherently fantastic. But at the same time that I’m doing that, I’ve always shared concerns with more naturalistic forms of writing. I generally try to make my characters emotionally realistic. I do now, at least; I can’t say I always have done that. And I want the imaginary world they live in and the imaginary problems that they have to reflect the real world, and to some extent real problems that real people are having.

It’s difficult for me to imagine a character in a work of contemporary fiction who wouldn’t have any concerns with the more dystopian elements of contemporary reality. I can imagine one, but she’d be a weird … she’d be a strange character. Maybe some kind of monster. Totally narcissistic.

What makes this character monstrous? The narcissism?

Well, yeah, someone sufficiently self-involved. It doesn’t require anything like the more clinical forms of narcissism. But someone who’s sufficiently self-involved as to just not be bothered with the big bad things that are happening in the world, or the bad things — regular-size bad things — that are happening to one’s neighbors. There certainly are people like that out there. The Internet is full of them. I see them every day.

You were raised in the South, and you live in Vancouver, but, like Philip K. Dick, you’ve set some of your most famous work in San Francisco. What is the appeal of the city for technological dreamers? And how does the Silicon Valley of today fit into that Bay Area ethos?

I’m very curious to go back to San Francisco while on tour for this book, because it’s been a few years since I’ve been there, and it was quite a few years before that when I wrote about San Francisco in my second series of books.

I think one of the reasons I chose it was that it was a place that I would get to fairly frequently, so it would stay fresh in memory, but it also seemed kind of out of the loop. It was kind of an easy canvas for me, an easier canvas to set a future in than Los Angeles. It seemed to have fewer moving parts. And that’s obviously no longer the case, but I really know contemporary San Francisco now more by word of mouth than I do from first-person experience. I really think it sounds like a genuinely new iteration of San Francisco.

Do you think that Google and Facebook and this Silicon Valley culture are the heirs to the Internet that you so presciently imagined in the 1980s? Or do they feel like they’ve taken the Web in different directions than what you expected?

Generally it went in directions that didn’t occur to me. It seems to me now that if I had been a very different kind of novelist, I would have been more likely to foresee something like Facebook. But you know, if you try to imagine that somebody in 1982 writes this novel that totally and accurately predicted what it would be like to be on Facebook, and then tried to get it published? I don’t know if you would be able to get it published. Because how exciting is that, or what kind of crime story could you set there?

Without even knowing it, I was limited by the kind of fiction of the imaginary future that I was trying to write. I could use detective gangster stories, and there is a real world of the Internet that’s like that, you know? Very much like that. Although the crimes are so different. The ace Russian hacker mobs are not necessarily crashing into the global corporations. They’re stealing your Home Depot information. If I’d put that as an exploit in “Neuromancer,” nobody would have gotten it. Although it would have made me seem very, very prescient.

You’ve written often and eloquently about cults of celebrity and the surrealness of fame. By this point you’re pretty famous yourself. Has writing about fame changed the way you experience it? Does experiencing fame change the way you write about it?

Writers in our society, even today, have a fairly homeopathic level of celebrity compared to actors and really popular musicians, or Kardashians. I think in [my 1993 novel] “Virtual Light,” I sort of predicted Kardashian. Or there’s an implied celebrity industry in that book that’s very much like that. You become famous just for being famous. And you can keep it rolling.

But writers, not so much. Writers get just a little bit of it on a day-to-day basis. Writers are in an interesting place in our society to observe how that works, because we can be sort of famous, but not really famous. Partly I’d written about fame because I’d seen little bits of it, but the bigger reason is the extent to which it seems that celebrity is the essential postmodern product, and the essential post-industrial product. The so-called developed world pioneered it. So it’s sort of inherently in my ballpark. It would be weird if it wasn’t there.

You have this reputation of being something of a Cassandra. I don’t want to put you on the spot and ask for predictions. But I’m curious: For people who are trying to understand technological trends, and social trends, where do you recommend they look? What should they be observing?

I think the best advice I’ve ever heard on that was from Samuel R. Delany, the great American writer. He said, “If you want to know how something works, look at one that’s broken.” I encountered that remark of his before I began writing, and it’s one of my fridge magnets for writing.

Anything I make, and anything I’m describing in terms of its workings — even if I were a non-literary futuristic writer of some kind — I think that statement would be very resonant for me. Looking at the broken ones will tell you more about what the thing actually does than looking at one that’s perfectly functioning, because then you’re only seeing the surface, and you’re only seeing what its makers want you to see. If you want to understand social media, look at troubled social media. Or maybe failed social media, things like that.

Do you think that’s partly why so much science fiction is crime fiction, too?

Yeah, it might be. Crime fiction gives the author the excuse to have a protagonist who gets her nose into everything and goes where she’s not supposed to go and asks questions that will generate answers that the author wants the reader to see. It’s a handy combination. Detective fiction is in large part related to literary naturalism, and literary naturalism was quite a radical concept that posed that you could use the novel to explore existing elements of society which had previously been forbidden, like the distribution of capital and class, and what sex really was. Those were all naturalistic concerns. They also yielded to detective fiction. Detective fiction and science fiction are an ideal cocktail, in my opinion.

 

http://www.salon.com/2014/11/09/william_gibson_i_never_imagined_facebook/?source=newsletter
