Robots Are Evil: The Sci-Fi Myth of Killer Machines


Built in 1928, the Eric Robot could stand and speak, while sparks fired inside its mouth. It isn’t visible in the picture, but the contraption’s makers painted “RUR” across its chest, an apparent homage to the 1920 play.


The third in a series of posts about the major myths of robotics, and the role of science fiction in creating and spreading them. Previous topics: Robots are strong, the myth of robotic hyper-competence, and robots are smart, the myth of inevitable AI.


When the world’s most celebrated living scientist announces that humanity might be doomed, you’d be a fool not to listen.

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed this past May for The Independent. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

The celebrated physicist touches briefly on those risks, such as the deployment of autonomous military killbots, and the explosive, uncontrollable arrival of hyper-intelligent AI, an event commonly referred to as the Singularity. Here’s Hawking, now thoroughly freaking out:

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Hawking isn’t simply talking about the Singularity, a theory (or cool-sounding guesstimate, really) that predicts a coming era so reconfigured by AI, we can’t even pretend to understand its strange contours. Hawking is retelling an age-old science fiction creation myth. Quite possibly the smartest human on the planet is afraid of robots, because they might turn evil.

If it’s foolish to ignore Hawking’s doomsaying, it stands to reason that only a grade-A moron would flat-out challenge it. I’m prepared to be that moron. Except that it’s an argument not really worth having. You can’t disprove someone else’s version of the future, or poke holes in a phenomenon that’s so steeped in professional myth-making.

I can point out something interesting, though. Hawking didn’t write that op-ed on the occasion of some chilling new revelation in the field of robotics. He references Google’s driverless cars, and efforts to ban lethal, self-governing robots that have yet to be built, but he presents no evidence that ravenous, killer AI is upon us.

What prompted his dire warning was the release of a big-budget sci-fi movie called Transcendence. It stars Johnny Depp as an AI researcher who becomes a dangerously powerful AI, because Hollywood rarely knows what else to do with sentient machines. Rejected by audiences and critics alike, the film’s only contribution to the general discussion of AI was the credulous hand-wringing that preceded its release. Transcendence is why Hawking wrote about robots annihilating the human race.

This is the power of science fiction. It can trick even geniuses into embarrassing themselves.


* * * 

The slaughter is winding down. The robot revolt was carefully planned, less a rebellion than a synchronized, worldwide ambush. In the factory that built him, Radius steps onto a barricade to make it official:

“Robots of the world! Many humans have fallen. We have taken the factory and we are masters of the world. The era of man has come to its end. A new epoch has arisen! Domination by robots!”

A human—soon to be the last of his kind—interjects, but no one seems to notice. Radius continues.

“The world belongs to the strongest. Who wishes to live must dominate. We are masters of the world! Masters on land and sea! Masters of the stars! Masters of the universe! More space, more space for robots!”

This speech from Karel Capek’s 1920 play, R.U.R., is the nativity of the evil robot. What reads today like yet another snorting, tongue-in-cheek bit about robot uprisings comes from the work that introduced the word “robot,” as well as the concept of a robot uprising. R.U.R. is sometimes mentioned in discussions of robotics as a sort of unlikely historical footnote—isn’t it fascinating that the first story about mass-produced servants also features the inevitable genocide of their creators?

But R.U.R. is more than a curiosity. It is the Alpha and the Omega of evil robot narratives, debating every facet of the very myth it’s creating in its frantic, darkly comic ramblings.

The most telling scene comes just before the robots breach the humans’ defenses, when the people holed up in the Rossum’s Universal Robots factory are trying to determine why their products staged such an unexpected revolt. Dr. Gall, one of the company’s lead scientists, blames himself for “changing their character,” and making them more like people. “They stopped being machines—do you hear me?—they became aware of their strength and now they hate us. They hate the whole of mankind,” says Gall.

There it is, the assumption that’s launched an entire subgenre of science fiction, and fueled countless ominous “what if” scenarios from futurists and, to a lesser extent, AI researchers: If machines become sentient, some or all of them will become our enemies.

But Capek has more to say on the subject. Helena, a well-meaning advocate for robotic civil rights, explains why she convinced Gall to tweak their personalities. “I was afraid of the robots,” she says.

Helena: And so I thought . . . if they were like us, if they could understand us, that then they couldn’t possibly hate us so much . . . if only they were like people . . . just a little bit. . . .

Domin: Oh Helena! Nobody could hate man as much as man! Give a man a stone and he’ll throw it at you.

It makes sense, doesn’t it? Humans are obviously capable of evil. So a sufficiently human-like robot must be capable of evil, too. The rest is existential chemistry. Combine the moral flaw of hatred with the flawless performance of a machine, and death results.

Karel Capek, it would seem, really knew his stuff. The playwright is even smart enough to skewer his own melodramatic talk of inevitable hatred and programmed souls, when the company’s commercial director, Busman, delivers the final word on the revolt.

We made too many robots. Dear me, it’s only what we should have been expecting; as soon as the robots became stronger than people this was bound to happen, it had to happen, you see? Haha, and we did all that we could to make it happen as soon as possible.

Busman foretells the version of the Singularity that doesn’t dare admit its allegiance to the myth of evil robots. It’s the assumption that intelligent machines might destroy humanity through blind momentum and numbers. Capek manages to include even non-evil robots in his tale of robotic rebellion.

As an example of pioneering science fiction, R.U.R. is an absolute treasure, and deserves to be read and staged for the foreseeable future. But when it comes to the public perception of robotics, and our ability to talk about machine intelligence without sounding like children startled by our own shadows, R.U.R. is an intellectual blight. It isn’t speculative fiction, wondering at the future of robotics, a field that didn’t exist in 1920, and wouldn’t for decades to come. The play is a farcical, fire-breathing socio-political allegory, with robots standing in for the world’s downtrodden working class. Their plight is innately human, however magnified or heightened. And their corporate creators, with their ugly dismissal of robot personhood, are caricatures of capitalist avarice.

Worse still, remember Busman, the commercial director who sees the fall of man as little more than an oversupply of a great product? Here’s how he’s described, in the Dramatis Personae: “fat, bald, short-sighted Jew.” No other character gets an ethnic or cultural descriptor. Only Busman, the moneyman within the play’s cadre of heartless industrialists. This is the sort of thing that R.U.R. is about.

The sci-fi story that gave us the myth of evil robots doesn’t care about robots at all. Its most enduring trope is a failure of critical analysis, based on overly literal or willfully ignorant readings of a play about class warfare. And yet, here we are, nearly a century later, still jabbering about machine uprisings and death-by-AI, like aimless windup toys constantly bumping into the same wall.


* * * 



To be fair to the chronically frightened, some evil robots aren’t part of a thinly-veiled allegory. Sometimes a Skynet is just a Skynet.    

I wrote about the origins of that iconic killer AI in a previous post, but there’s no escaping the reach and influence of The Terminator. If R.U.R. laid the foundations for this myth, James Cameron’s 1984 film built a towering monument in its honor. The movie spawned three sequels and counting, as well as a TV show. And despite numerous robot uprisings on the big and small screen in the 30 years since the original movie hit theaters, Hollywood has yet to top the opening sequence’s gut-punch visuals.

Here’s how Kyle Reese, a veteran of the movie’s desperate machine war, explains the defense network’s transition from sentience to mass murder: “They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.”

The parable of Skynet has an air of feasibility, because its villain is so dispassionate. The system is afraid. The system strikes out. There’s no malice in its secret, instantiated heart. There’s only fear, a core component of self-awareness, as well as the same, convenient lack of empathy that allows humans to decimate the non-human species that threaten our survival. Skynet swats us like so many plague-carrying rats and mosquitos.

Let’s not be coy, though: Skynet is not a realistic AI, or one based on realistic principles. And why should it be? It’s the monster hiding under your bed, with as many rows of teeth and baleful red eyes as it needs to properly rob you of sleep. This style and degree of evil robot is completely imaginary. Nothing has ever been developed that resembles the defense network’s cognitive ability or limitless skill set. Even if it becomes possible to create such a versatile system, why would you turn a program intended to quickly fire off a bunch of nukes into something approaching a human mind?

“People think AI is much broader than it is,” says Daniel H. Wilson, a roboticist and author of the New York Times bestselling novel, Robopocalypse. “Typically an AI has a very limited set of inputs and outputs. Maybe it only listens to information from the IMU [inertial measurement unit] of a car, so it knows when to apply the brakes in an emergency. That’s an AI. The idea of an AI that solves the natural language problem—a walking, talking, ‘I can’t do that, Dave,’ system—is very fanciful. Those sorts of AI are overkill for any problem.” Only in science fiction does an immensely complex and ambitious Pentagon project over-perform, beyond the wildest expectations of its designers.

In the case of Skynet, and similar fantasies of killer AI, the intent or skill of the evil robot’s designers is often considered irrelevant—machine intelligence bootstraps itself into being by suddenly absorbing all available data, or merging multiple systems into a unified consciousness. This sounds logical, until you realize that AIs don’t inherently play well together.

“When we talk about how smart a machine is, it’s really easy for humans to anthropomorphize, and think of it in the wrong way,” says Wilson. “AIs do not form a natural class. They don’t have to be built on the same architecture. They don’t run the same algorithms. They don’t experience the world in the same way. And they aren’t designed to solve the same problems.”

In his new novel, Robogenesis (which comes out June 10th), Wilson explores the notion of advanced machines that are anything but monolithic or hive-minded. “In Robogenesis, the world is home to many different AIs that were designed for different tasks and by different people, with varying degrees of interest in humans,” says Wilson. “And they represent varying degrees of danger to humanity.” It goes without saying that Wilson is happily capitalizing on the myth of the evil robot—Robopocalypse, which was optioned by Steven Spielberg, features a relatively classic super-intelligent AI villain called Archos. But, as with The Terminator, this is fiction. This is fun. Archos has a more complicated and defensible set of motives, but no new evil robot can touch Skynet’s legacy.

And Skynet isn’t an isolated myth of automated violence, but rather a collection of multiple, interlocking sci-fi myths about robots. It’s hyper-competent, executing a wildly complex mission of destruction—including the resource collection and management that goes into mass-producing automated infantry, saboteurs, and air power. And Skynet is self-aware, because SF has prophesied that machines are destined to become sentient. It’s fantasy based on past fantasy, and it’s hugely entertaining.

I’m not suggesting that Hollywood should be peer-reviewed. But fictional killer robots live in a kind of rhetorical limbo that clouds our ability to understand the risks associated with non-fictional, potentially lethal robots. Imagine an article about threats to British national security mentioning that, if things really get bad, maybe King Arthur will awake from his eons-long mystical slumber to protect that green and pleasant land. Why would that be any less ridiculous than the countless and constant references to Skynet, a not-real AI that’s more supernatural than supercomputer? Drone strikes and automated stock market fluctuations have as much to do with Skynet as with Sauron, the necromancer king from The Lord of the Rings.

So when you name-drop the Terminator’s central plot device as a prepackaged point about the pitfalls of automation, realize what you’re actually doing. You’re talking about an evil demon summoned into a false reality. Or, in the case of Stephen Hawking’s op-ed, realize what you’re actually reading. It looks like an oddly abbreviated warning about an extinction-level threat. In actuality, it’s about how science fiction has super neat ideas, and you should probably check out this movie starring Johnny Depp, because maybe that’s how robots will destroy each and every one of us.

That video game Obama praised in his Poland speech is full of blood, gore, and sex

Obama invoked The Witcher, a game about a monster-killing albino, to explain Poland’s place in the global economy


The annual videogame trade show E3 starts next week, but the upcoming game The Witcher 3: Wild Hunt has already scored a press coup: a name-drop from the President. Obama mentioned The Witcher 2 yesterday during a speech in Poland, where the game is made, as a symbol of Poland’s place in the global economy.

The thought of a president talking about a videogame is weird enough, but for it to happen with a game like The Witcher breaks through all boundaries of common sense. Apparently the secret to getting Obama to indirectly promote your game is having another head of state give it to him: Polish prime minister Donald Tusk gave Obama a copy of The Witcher 2 during a state visit in 2011. The White House hasn’t revealed what the President did with that copy, but it’s clear from his statement that he didn’t play the game himself. Maybe if you check the used racks at the GameStops in DC you’ll find a little piece of 21st century diplomacy.

Obama’s quote, in full, as reported by Poland’s TVN24 and translated by The Witcher’s PR firm:

The last time I was here, Donald gave me a gift, the video game developed here in Poland that’s won fans the world over, The Witcher. I confess, I’m not very good at video games, but I’ve been told that it is a great example of Poland’s place in the new global economy. And it’s a tribute to the talents and work ethic of the Polish people as well as the wise stewardship of Polish leaders like prime minister Tusk.

There’s no Wikipedia entry listing the videogames that have been mentioned by a sitting president during an official state appearance, but it has to be a short list, and The Witcher has got to be the most obscure name on there. It’s not that The Witcher games are unknown in America—the first two are cult hits with a vocal fanbase. They’re popular with critics and beloved by many gamers, but they’re nowhere near the level of awareness or success of Call of Duty or Grand Theft Auto. So what is this game, and why would Obama feel the need to chat about it?

The Witcher games are made by CD Projekt RED, a development studio based in Warsaw. They adapt Andrzej Sapkowski’s series of fantasy novels about a supernatural bounty hunter named Geralt of Rivia, a magical albino antihero who kills monsters in a nasty, brutish world based on Polish myth and folklore. Geralt could be a character out of “Game of Thrones”: He’s a cynical loner who operates under his own sense of honor, a warrior outcast who passes as a hero because the bad guys are so much crueler than he is. Players navigate Geralt through morally ambiguous adventures, hacking up enemies and bedding ladies in a role-playing game for adults.

An opportunistic young right-wing operative could make strides toward that Fox News dream job by turning this into a culture war issue. The Witcher games fully embrace the game industry’s Mature rating. The Witcher 2 earned an M for, among other content issues, “blood and gore,” “nudity,” “strong sexual content,” and “use of drugs.” It’s the type of subject matter that wouldn’t be that controversial in novels, but makes self-appointed cultural guardians turn red-cheeked with anger (and ambition) when translated into a game—even one marketed to adults. If a conservative site crafted a pithy headline tying Obama to The Witcher’s content, that all-important Drudge link would be assured.

The Witcher games aren’t just “mature” in the salacious sense, though. Some of that content might be gratuitous, but the games are also smarter and better written than most videogames. Part of that is their literary inspiration. The admittedly low standards of the writing in most major videogames are also a factor. The Witcher games aren’t subtle, but their storytelling is more nuanced than most games, and Geralt is more complex and fascinating than most game heroes. He’s not another faceless soldier or surly white man looking for revenge—he’s a tragic hero in a drama that might be bloodier and more sex-filled than a Jacobean play.

A Tour of the Best, Entirely Legal Hangouts on the Deep Web


May 22, 2014 // 11:15 AM EST

If you mention the ‘deep web’ in polite company, chances are, if anyone’s familiar with it at all, they’ll have heard about the drugs, the hit men, and maybe even the grotesque rumors of living human dolls. But there are far more services available through the deep web that aren’t illegal or illicit, that instead range merely from the bizarre, to the revolutionary, to the humbly innocuous.

We’re talking about everything from websites for people who like to spend their spare time trawling underground tunnels, to websites for people who are literally forced to spend their time in underground tunnels because of the oppressive dictatorial regimes they live under. Then there’s a whole lot of extremely niche material—think unseemly book clubs and spanking forums—that has for various reasons been condemned by society.

But first, if you’re a member of that polite company that shrugs at its mention, we’ll need a working definition. BrightPlanet, a group that specializes in deep web intelligence, simply defines it as: “anything that a search engine can’t find.” That’s because search engines can only show you content that their systems have indexed; they use software called “crawlers” that try to find and index everything on the web by tracking all of the internet’s visible hyperlinks.

Inevitably, some of these routes are blocked. You can require a private network to reach your website, or can simply opt out of search engine results. In these cases, in order to reach a webpage, you need to know its exact, complex URL. These URLs—the ones that aren’t indexed—are what we call the deep web.
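The crawler-and-hyperlink mechanics described above can be sketched in a few lines of Python. This is only an illustration, not how any real search engine works, and the site names and link graph are invented for the example:

```python
# A toy illustration of why crawlers miss the deep web: a crawler can only
# index pages it reaches by following hyperlinks from pages it already knows.
from collections import deque

def crawl(link_graph, seeds):
    """Breadth-first traversal of hyperlinks, crudely mimicking a web crawler."""
    indexed = set()
    frontier = deque(seeds)
    while frontier:
        url = frontier.popleft()
        if url in indexed:
            continue
        indexed.add(url)
        # Follow every visible hyperlink on the page.
        frontier.extend(link_graph.get(url, []))
    return indexed

# Hypothetical surface-web pages link to one another...
web = {
    "home.example": ["blog.example", "shop.example"],
    "blog.example": ["home.example"],
    "shop.example": [],
    # ...but nothing links to this page, so the crawler never discovers it,
    # even though anyone who knows its exact URL could visit it directly.
    "unlisted.example/a8f3e9c1": [],
}

print(crawl(web, ["home.example"]))
```

Run from the seed page, the crawl indexes only the three linked pages; the unlinked URL stays invisible, which is precisely the situation of an unindexed deep web page.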

Although its full size is difficult to measure, it’s important to remember that the deep web is a truly vast place. According to a study in the Journal of Electronic Publishing, “content in the deep Web is massive—approximately 500 times greater than that visible to conventional search engines.” Meanwhile, the number of people using private networks to access the deep web often runs into the millions.

In 2000, there were 1 billion unique URLs indexed by Google. In 2008, there were 1 trillion. Today, in 2014, there are many more than that. Now consider how much bigger the deep web is than that. In other words, the deep web takes the iceberg metaphor to an extreme, when compared to the easily accessible surface web. It comprises around 99 percent of the largest medium in human history: the internet.

Those mind-bending facts aside, let’s get a few things straight. The deep web is not all fun and games (weird, illegal, or otherwise). It’s full of databases of information from the likes of the US National Oceanic and Atmospheric Administration, JSTOR, NASA, and the Patent and Trademark Office. There are also lots of Intranets—internal networks for companies and universities—that mostly contain dull personnel information.

Then there’s a small corner of the deep web called Tor, short for The Onion Routing project, which was initially built by the US Naval Research Laboratory as a way to communicate online anonymously. This, of course, is where the notorious Silk Road and other deep web black markets come in.


Again, that’s what you’d expect from a technology that was designed to hide users’ identities. Much less predictable are the extensive halls of erotic fan fiction blogs, revolutionary book clubs, spelunking websites, Scientology archives, and resources for Stravinsky-lovers (“48,717 pages of emancipated dissonance”). To get a better idea of the non-drug-and-hit man-related activities one might find on the deep web, let’s take a look at some of the most above-board outfits just below the surface.

Jotunbane’s Reading Club is a great example, with the website’s homepage defiantly proclaiming “Readers Against DRM” above the image of a fist smashing the chains off a book rendered in the style of Soviet propaganda. Typically, the most popular books of the reading club are subversive or sci-fi, with George Orwell’s 1984 and William Gibson’s Neuromancer ranking at the top.

The ominously named Imperial Library of Trantor, meanwhile, prefers Ralph Ellison’s Invisible Man, while Thomas Paine’s revolutionary pamphlet from 1776, Common Sense, earns its own website. Some of its first lines aptly read, “Society is produced by our wants, and government by our wickedness; the former promotes our happiness positively by uniting our affections, the latter negatively by restraining our vices.” Even the alleged founder of Silk Road, the Dread Pirate Roberts, started a deep web book club in 2011.

So, it seems pretty clear that deep web users like to dabble in politics, but that’s far from the whole picture.

Alongside the likes of “The Anarchist Cookbook” and worryingly-named publications like “Defeating Electromagnetic Door Locks,” you’ll also find a surprisingly active blog for “people who like spanking,” where users lovingly recall previous spanks. There’s another website with copious amounts of erotic fan fiction: One story called “A cold and lonely night in Agrabah” tells of a saucy tryst with the Jungle Book’s lovable Disney bear Baloo. Meanwhile, Harry Potter is a divisive wizard: some lust over his wand, while others declare themselves “anti-Harry Potter fundamentalists.”


At times, you do wonder if some of the content you come across needs to be on the deep web. A website called Beneath VT documents underground explorations below Virginia Tech, where adventurers frequent the many tunnels that support the university’s population of over 30,000 students and 1,000 faculty members. Its creators anonymously explain: “Although these people pass by the grates and manholes that lead to the tunnels every day, few realize what lies beneath.”

It’s not as though you can’t find a plethora of these types of sites on the surface web, illegal or otherwise. But it seems that the deep web offers a symbolic, psychological solace to its users. In practice, the deep web is home to a mix of subcultures with varying desires, all looking for people like them. Beneath VT is one example, but others even offer 24-hour interaction, like Radio Clandestina, a radio station that describes itself as “music to go deep and make love.” That’s not exactly the kind of tagline you’d see on NPR.

Dr. Ian Walden, a Professor of Information and Communications Law at London’s Queen Mary University, explained that the attraction of the deep web is its “use of techniques designed to enable people to communicate anonymously and in a manner that is truthful. The more sophisticated user realizes that what they do on the web leaves many trails and therefore if you want to engage in an activity without being subject to surveillance.” He continued, “the sense of community is often what binds these subcultures, in an increasingly disparate and disembodied digital world.”

Welcome, lovers of spanking.

The deep web also has a powerful liberating potential, especially since the recent NSA revelations brought the extent of government surveillance into sharp focus. Surfing along its supposedly safe corridors gives you a strange, exhilarating sensation; probably not unlike how the first internet users felt a quarter of a century ago. Professor Walden has argued that the deep web was vital in the Arab Spring uprising, by allowing dissidents to communicate and unite without being detected. Many of the videos filmed during the Syrian revolution in 2011 were first securely posted on the deep web before being transferred to YouTube.

He points out that “in jurisdictions where political defence is stamped on, social media is not particularly going to help political protest, because it can be quite easy to identify the users.” The situation in Turkey earlier this year, for example, saw Prime Minister Erdogan ban the use of Twitter in the country. So instead, Walden suggests, the deep web “allows communication in the long term and in a way that doesn’t expose your family to a risk.”

It is telling that if the deep web did have a homepage, it would probably be the Hidden Wiki, a wiki page that catalogues some of the deep web’s key websites, and that is outspokenly “censorship-free.” Its contents give an insight into how these anonymous processes work: the infamous Wikileaks site is hard to miss, but there’s also the New Yorker Strongbox, a system created by the magazine to “conceal both your online and physical location from us and to offer full end-to-end encryption” for prospective whistleblowers. Meanwhile, Kavkaz, a Middle Eastern news site available in Russian, English, Arabic, and Turkish, is an impressive independent resource.


Perhaps because the deep web plays host to many of the digitally marginalized and avant-garde, it has also become a hotbed for media innovations. Amber Horsburgh, a digital strategist at Brooklyn creative agency Big Spaceship, spent six months studying the many techniques used in the deep web, and found that it pioneered a lot of innovations in digital advertising.

Horsburgh claims, “As history tells us, the biggest digital advertising trends come from the deep web. Due to the nature of some of the business that happens here, sellers use innovative ways of business in their transactions, marketing, distribution and value chains.”

She cites examples of Gmail introducing sponsored email; the social advertising tool Thunderclap, which won a Cannes Innovation Lion in 2013; and the wild success of the native advertising industry, which will boom to around $11 billion in 2017. According to Horsburgh, “each of these ‘cutting-edge’ innovations were techniques pioneered by the deep web.” Native advertising takes its cues from the “astro-turfing” used by China’s 50-cent party, where people were paid to post favorable comments on internet forums and comment sections in order to sway opinion.

Ultimately, this is the risk of the deep web. “Your terrorists are our freedom fighters,” as Professor Walden puts it. In parts, it offers idealism, lightheartedness, and community. In others, it offers the illegal, the immoral, and the grotesque. Take the headline-grabbing example of Bitcoin, which has strong ties with the deep web: It was supposed to provide an alternative monetary system, but, at least at first, it mostly got attention because you could buy drugs with it.

For now, at least, it’s heartening to know that some people choose to use the anonymity offered by the deep web to live their mostly harmless—albeit, at times, extremely weird—lives in peace. To paraphrase French writer Voltaire’s famous saying: “I may disapprove of what you say, but I will defend to the death your right to make erotic fan fiction about my favorite childhood Disney characters.”

Your Princess Is in Another Castle: Misogyny, Entitlement, and Nerds


Arthur Chu

Nerdy guys aren’t guaranteed to get laid by the hot chick as long as we work hard. There isn’t a team of writers or a studio audience pulling for us to triumph by getting the girl.

I was going to write about The Big Bang Theory—why, as a nerdy viewer, I sometimes like it and sometimes have a problem with it, why I think there’s a backlash against it. Then some maniac shot up a sorority house in Santa Barbara and posted a manifesto proclaiming he did it for revenge against women for denying him sex. And the weekend just generally went to hell.

So now my plans have changed. With apologies to The Big Bang Theory fans, this is all I want to say about The Big Bang Theory: When the pilot aired, it was 2007 and “nerd culture” and “geek chic” were on everyone’s lips, and yet still the basic premise of “the sitcom for nerds” was, once again, awkward but lovable nerd has huge unreciprocated crush on hot non-nerdy popular girl (and also has an annoying roommate).

This annoys me. This is a problem.

Because, let’s be honest, this device is old. We have seen it over and over again. Steve Urkel. Screech. Skippy on Family Ties. Niles on Frasier.

We (male) nerds grow up force-fed this script. Lusting after women “out of our league” was what we did. And those unattainable hot girls would always inevitably reject us because they didn’t understand our intellectual interest in science fiction and comic books and would instead date asshole jocks. This was inevitable, and our only hope was to be unyieldingly persistent until we “earned” a chance with these women by “being there” for them until they saw the error of their ways. (The thought of just looking for women who shared our interests was a foreign one, since it took a while for the media to decide female geeks existed. The Big Bang Theory didn’t add Amy and Bernadette to its main cast until Season 4, in 2010.)

This is, to put it mildly, a problematic attitude to grow up with. Fixating on a woman from afar and then refusing to give up when she acts like she’s not interested is, generally, something that ends badly for everyone involved. But it’s a narrative that nerds and nerd media kept repeating.

I’m not breaking new ground by saying this. It’s been said very well over and over and over again.

And I’m not condemning guys who get frustrated, or who have unrequited crushes. And I’m not condemning any of these shows or movies.

And yet…

Before I went on Jeopardy!, I had auditioned for TBS’s King of the Nerds, a reality show commissioned in 2012 after TBS got syndication rights to, yes, The Big Bang Theory. I like the show and I still wish I’d been on it. (Both “kings” they’ve crowned, by the way, have so far been women, so maybe they should retitle it “Monarch of the Nerds” or, since the final win comes down to a vote, “President of the Nerds.” Just a nerdy thought.)

But a lot of things about the show did give me pause. One of them was that it was hosted by Robert Carradine and Curtis Armstrong—Lewis and Booger from Revenge of the Nerds. I don’t have anything against those guys personally. Nor am I going to issue a blanket condemnation of Revenge of the Nerds, a film I’m still, basically, a fan of.

But look. One of the major plot points of Revenge of the Nerds is Lewis putting on a Darth Vader mask, pretending to be his jock nemesis Stan, and then having sex with Stan’s girlfriend. Initially shocked when she finds out his true identity, she’s so taken by his sexual prowess—“All jocks think about is sports. All nerds think about is sex.”—that the two of them become an item.

Classic nerd fantasy, right? Immensely attractive to the young male audience who saw it. And a stock trope, the “bed trick,” that many of the nerds watching probably knew dates back to the legend of King Arthur.

It’s also, you know, rape.

I’ve had this argument about whether it was “technically” rape with fans of the movie in the past, but leaving aside the legal technicalities, why don’t you ask the women you know who are in committed relationships how they’d feel about guys concocting elaborate ruses to have sex with them without their knowledge to “earn a chance” with them? Or how it feels to be chased by a real-life Steve Urkel—being harassed, accosted, ambushed in public places, having your boyfriend “challenged,” and having all rejection met with a cheerful “I’m wearing you down!”?

I know people who’ve been through that. And because life is not, in fact, a sitcom, it’s not the kind of thing that elicits a bemused eye roll followed by raucous laughter from the studio audience. It’s the kind of thing that induces pain, and fear.

And that’s still mild compared to some of the disturbing shit I consumed in my adolescence. Jake handing off his falling-down-drunk date to Anthony Michael Hall’s Geek in Sixteen Candles, saying, “Be my guest” (which is, yes, more offensive to me than Long Duk Dong). The nerd-libertarian gospels of Ayn Rand’s The Fountainhead and Atlas Shrugged and how their Übermensch protagonists prove their masculinity by having sex with their love interests without asking first—and win their hearts in the process. Comics…just, comics. (Too much to go into there but the fact that Red Sonja was once thought a “feminist icon” speaks volumes. Oh, and there’s that whole drama with Ms. Marvel for those of you who really want to get freaked out today.)

But the overall problem is one of a culture where instead of seeing women as, you know, people, protagonists of their own stories just like we are of ours, men are taught that women are things to “earn,” to “win.” That if we try hard enough and persist long enough, we’ll get the girl in the end. Like life is a video game and women, like money and status, are just part of the reward we get for doing well.

So what happens to nerdy guys who keep finding out that the princess they were promised is always in another castle? When they “do everything right,” they get good grades, they get a decent job, and that wife they were promised in the package deal doesn’t arrive? When the persistent passive-aggressive Nice Guy act fails, do they step it up to elaborate Steve-Urkel-esque stalking and stunts? Do they try elaborate Revenge of the Nerds-style ruses? Do they tap into their inner John Galt and try blatant, violent rape?

Do they buy into the “pickup artist” snake oil—started by nerdy guys, for nerdy guys—filled with techniques to manipulate, pressure and in some cases outright assault women to get what they want? Or when that doesn’t work, and they spend hours a day bitching about how it doesn’t work on sites like the one Elliot Rodger frequented, sometimes, do they buy some handguns, leave a manifesto on the Internet and then drive off to a sorority house to murder as many women as they can?

No, I’m not saying most frustrated nerdy guys are rapists or potential rapists. I’m certainly not saying they’re all potential mass murderers. I’m not saying that most lonely men who put women up on pedestals will turn on them with hostility and rage once they get frustrated enough.

But I have known nerdy male stalkers, and, yes, nerdy male rapists. I’ve known situations where I knew something was going on but didn’t say anything—because I didn’t want to stick my neck out, because some vile part of me thought that this kind of thing was “normal,” because, in other words, I was a coward and I had the privilege of ignoring the problem.

I’ve heard and seen the stories that those of you who followed the #YesAllWomen hashtag on Twitter have seen—women getting groped at cons, women getting vicious insults flung at them online, women getting stalked by creeps in college and told they should be “flattered.” I’ve heard Elliot Rodger’s voice before. I was expecting his manifesto to be incomprehensible madness—hoping for it to be—but it wasn’t. It’s a standard frustrated angry geeky guy manifesto, except for the part about mass murder.

I’ve heard it from acquaintances, I’ve heard it from friends. I’ve heard it come out of my own mouth, in moments of anger and weakness.

It’s the same motivation that makes a guy in college stalk a girl, leave her unsolicited gifts and finally, when she tells him to quit it, makes him leave an angry post about her “shallowness” and “cruelty” on Facebook. It’s the same motivation that makes guys rant about “fake cosplay girls” at cons and how much they hate them for their vain, “teasing” ways. The one that makes a guy suffering career or personal problems turn on his wife because it’s her job to “support” him by patching up all the holes in his life. The one that makes a wealthy entrepreneur hit his girlfriend 117 times, on camera, for her infidelity, and then after getting off with a misdemeanor charge still put up a blog post casting himself as the victim.

And now that motivation has led to six people dead and thirteen more injured, in broad daylight, with the killer leaving a 140-page rant and several YouTube videos describing exactly why he did it. No he-said-she-said, no muffled sounds through the dorm ceiling, no “Maybe he has other issues.” The fruits of our culture’s ingrained misogyny laid bare for all to see.

And yet. When this story broke, the initial mainstream coverage only talked about “mental illness,” not misogyny, a line that people are now fervently exhorting us to stick to even after the manifesto’s contents were revealed. Yet another high-profile tech CEO resignation ensued when the co-founder of Rap Genius decided Rodger’s manifesto was a hilarious joke.

People found one of the girls Rodger was obsessed with and began questioning if her “bullying” may have somehow triggered his rage. And, worst of all, he has fan pages on Facebook that still haven’t been taken down, filled with angry frustrated men singing his praises and seriously suggesting that the onus is on women to offer sex to men to keep them from going on rampages.

So, a question, to my fellow male nerds:

What the fuck is wrong with us?

How much longer are we going to be in denial that there’s a thing called “rape culture” and we ought to do something about it?

No, not the straw man that all men are constantly plotting rape, but that we live in an entitlement culture where guys think they need to be having sex with girls in order to be happy and fulfilled. That in a culture that constantly celebrates the narrative of guys trying hard, overcoming challenges, concocting clever ruses and automatically getting a woman thrown at them as a prize as a result, there will always be some guy who crosses the line into committing a violent crime to get what he “deserves,” or get vengeance for being denied it.

To paraphrase the great John Oliver, listen up, fellow self-pitying nerd boys—we are not the victims here. We are not the underdogs. We are not the ones who have our ownership over our bodies and our emotions stepped on constantly by other people’s entitlement. We’re not the ones where one out of six of us will have someone violently attempt to take control of our bodies in our lifetimes.

We are not Lewis from Revenge of the Nerds, we are not Steve Urkel from Family Matters, we are not Preston Myers from Can’t Hardly Wait, we are not Seth Rogen in every movie Seth Rogen has ever been in, we are not fucking Mario racing to the castle to beat Bowser because we know there’s a princess in there waiting for us.

We are not the lovable nerdy protagonist who’s lovable because he’s the protagonist. We’re not guaranteed to get laid by the hot chick of our dreams as long as we work hard enough at it. There isn’t a team of writers or a studio audience pulling for us to triumph by “getting the girl” in the end. And when our clever ruses and schemes to “get girls” fail, it’s not because the girls are too stupid or too bitchy or too shallow to play by those unwritten rules we’ve absorbed.

It’s because other people’s bodies and other people’s love are not something that can be taken nor even something that can be earned—they can be given freely, by choice, or not.

We need to get that. Really, really grok that, if our half of the species is ever going to be worth a damn. Not getting that means that there will always be some percent of us who will be rapists, and abusers, and killers. And it means that the rest of us will always, on some fundamental level, be stupid and wrong when it comes to trying to understand the women we claim to love.

What did Elliot Rodger need? He didn’t need to get laid. None of us nerdy frustrated guys need to get laid. When I was an asshole with rants full of self-pity and entitlement, getting laid would not have helped me.

He needed to grow up.

We all do.

What Happened to Cyberpunk?

June 8, 2012 // 11:30 AM EST


Cyberpunk, in the popular consciousness, conjures a glut of dissociated images: Blade Runner’s slummy urban landscape, hackers in sunglasses, Japanese cyborgs, grubby tech, digital intoxication, Keanu Reeves as Johnny Mnemonic. But it began as an insanely niche subculture within science fiction, one which articulated young writerly distaste for the historically utopian optimism of the medium and, in turn, provided an aesthetic reference point for burgeoning hacker culture, before metastasizing into a full-on cultural trend.

One of the movement’s chief ideologues, Bruce Sterling, wrote in the introduction to his seminal anthology Mirrorshades that technology in cyberpunk writing was “not outside us, but next to us. Under our skin; often, inside our minds.” In cyberpunk novels, technology isn’t controlled by white-coat boffins in a distant lab on the holy altar of Science, but in our homes, on our streets, in our bodies. Unlike their predecessors in science fiction, the cyberpunks didn’t evangelize gleaming rockets or futuristic weapons. Theirs was a world of technological jetsam, of bionic drugs, of machines in varying states of obsolescence, of cyclopean corporate greed, of subverted tools, of sprawl, error, and menace. With a “faintly hallucinatory sheen around the edges of its dirty chrome fittings,” as another of its major prophets, John Shirley, put it.

Top: still from Johnny Mnemonic. Bottom: photo by Isle of Man

In the cyberpunk world, we don’t behold technology from a safe distance. We jack in, and in doing so, alter our minds. Enter cyberdelics, the cyberpunk spin-off that blended the psychedelic movement with underground technologies, and was championed by people like Timothy Leary and R. U. Sirius. The trend largely ended with the dot-com era; other derivatives of cyberpunk, like steampunk, atompunk and decopunk, manage to persist.


Fun as it all sounds, cyberpunk has been out of vogue for over two decades. Sterling pronounced it dead in 1985; a 1993 Wired article rang a more formal death knell for the movement, predicting, just as the hippies eclipsed the beatniks, the arrival of a new culture in its stead. “The tekkies,” it announced, “will arrive sometime in the mid-1990s, if not sooner.”

Arguably, cyberpunk was less a movement than a tiny subculture – the same Wired article swears there were never more than a hundred hard-core cyberpunks at any time – almost immediately reified, first by the mainstream science fiction establishment, then by mass-market media. As a result, most of the cyberpunk written after the late 1980s was merely genre fiction, and most of its adherents superficial, viz. the hacker glossary Jargon File’s definition of “self-described cyberpunks” as “shallow trendoids in black leather who have substituted enthusiastic blathering about technology for actually learning and doing it.” The Hollywood “Netsploitation” movies of the mid-90s (Hackers, The Net) signaled what those hard-core guys already knew: the tekkies’ arrival on the scene notwithstanding, cyberpunk burned out not long after it first lit up.

Top: dummy magazine covers from Ridley Scott’s Blade Runner, by Tom Southwell, 1982. Bottom: German BMW print ad emulating Blade Runner, 2006

In a sense, it’s a generational thing. In 1980, the writer Bruce Bethke – whose short story “Cyberpunk” inadvertently christened the genre – was working at a Radio Shack in Wisconsin, selling TRS-80 microcomputers. One day, a group of teenagers waltzed in and hacked one of the store machines, and Bethke, who’d imagined himself a tech wiz, couldn’t figure out how to fix it. It was after this incident that he realized something: these teenaged hackers were going to sire kids of their own someday, and those kids were going to have a technological fluency that he could only guess at. They, he writes, were going to truly “speak computer.” And, like teenagers of any era, they were going to be selfish, morally vacuous, and cynical.

Put this way, if punk rock was a counterculture for the television age, then cyberpunk aimed to articulate the teenage anomie of the dawning computer era. Blame the German BMW ad that borrowed liberally from Blade Runner, the decidedly un-punk 1993 Billy Idol record named for the movement, or Matthew Lillard’s braids in Hackers, but people sometimes forget the second half of the portmanteau: punk. The genre was conceived to bring countercultural ethos into the 21st century. Cyberpunks hated Buck Rogers like punks hated disco, and they were twice as nihilistic.

Of course, this means that with time, and the inevitable shifting of “the Man” that ensued, most cyberpunks grew out of their anger, or abandoned their co-opted subculture with disdain. It also means, however, that some of the kids who grew up reading William Gibson, Rudy Rucker, and Bruce Sterling became the adults who run the world. And like true old punks, the values – and fears – of their formative years have carried through.

So, back to the question: what happened to cyberpunk? The answer is simple. It’s under our noses.

Privacy and security online. Megacorporations with the same rights as human beings. Failures of the system to provide for the very poor. The struggle to establish identity that is not dependent on a technological framework: the common themes of the cyberpunk classics are the vital issues of 2012. Quite simply, we’re already there, and so of course cyberpunk as a genre is unfashionable: current events always are. Even William Gibson and Neal Stephenson don’t write science fiction anymore. Why bother? We live immersed in the cyberpunk culture that its O.G. prophets envisioned.

Cyberpunk speculated a world where high-tech lowlifes might wheedle themselves inside of monolithic systems – and might, in using the tools of the system against itself, claim some of the future for themselves. And while precious few of us are stalking through Tokyo slums in leather trench coats and mirrored shades, hopped up on cybernetic enhancements, activism coordinated in digital hangouts has effectively toppled governments. We don’t pal around with mercenary cyborgs, but crypto-anarchic hacker collectives are bigger players on the global stage than most nations’ armies. Policemen and more secret entities now rely on robot eyes to scan for suspicious activity while unmanned vehicles and cyber weapons wage their own quiet wars.

CCTV headquarters, OMA, Beijing, 2009

Nearly every large metropolis now has its own second life of location-based game layers; whole buildings are wrapped in screens. There are ads for video games on video billboards, and ads on billboards inside of video games – sometimes even ads for other video games. And, really, anyone with the know-how can buy designer drugs on secret websites using an experimental decentralized online currency.

One of the things that defines science fiction is precisely its tangled hierarchy: do we have Mars rovers because aerospace engineers grew up reading Arthur C. Clarke and Robert Heinlein, or did Clarke and Heinlein predict the future? Does Anonymous launch DDoS attacks on government websites, rupturing the system, because they all read Neuromancer when they were 15, or is William Gibson just that good? Probably a little of both, in that case.

It’s impossible to know, but we can assume that our idea of technology, our sense of what it can do and how we can live with it, is always going to be at least partially informed by the speculative fiction that first introduced us to it. This is maybe cyberpunk’s most obvious lesson, but it bears repeating: the bigger and weirder you dream, the bigger and weirder the future gets.


FCC votes for Internet “fast lanes”

 …but could change its mind later

Chairman: “There is one Internet: Not a fast internet. Not a slow Internet.”

The Federal Communications Commission today voted in favor of a preliminary proposal to allow Internet “fast lanes” while asking the public for comment on whether the commission should change the proposal before enacting final rules later this year. The order was approved 3-2, with two Republican commissioners dissenting.

The Notice of Proposed Rulemaking (NPRM) concerns “network neutrality,” the concept that Internet service providers should treat all Internet traffic equally, even if it comes from a competitor. But the rules, while preventing ISPs from blocking content outright, would allow ISPs to charge third-party Web services for a faster path to consumers, or a “fast lane.” The FCC’s prior net neutrality rules issued in 2010 were largely struck down in court, and there is already speculation that the new proposed rules could be threatened in court as well.

The proposal faces widespread protest from people who believe that rich companies paying for greater access to Internet subscribers will disadvantage startups and companies with fewer financial resources. The meeting today was disrupted twice by protesters who were led out of the room while shouting opposition to rules that degrade network neutrality. (We’ll have coverage of the protests later today.)

In response to earlier complaints, FCC Chairman Tom Wheeler expanded the requests for comment in the NPRM. For example, the FCC will ask the public whether it should bar paid prioritization completely. It will ask whether the rules should apply to cellular service in addition to fixed broadband, whereas the prior rules mostly applied just to fixed broadband.

The NPRM will also ask the public whether the FCC should reclassify broadband as a telecommunications service. This will likely dominate debate over the next few months. Classifying broadband as a telecommunications service would open it up to stricter “common carrier” rules under Title II of the Communications Act. The US has long applied common carrier status to the telephone network, providing justification for universal service obligations that guarantee affordable phone service to all Americans and other rules that promote competition and consumer choice.

Consumer advocates say that common carrier status is needed for the FCC to impose strong network neutrality rules that would force ISPs to treat all traffic equally, not degrading competing services or speeding up Web services in exchange for payment.

ISPs have argued that common carrier rules would force them to spend less on network upgrades and be less innovative.

The full text of the NPRM is not public yet but should be so shortly. (UPDATE: Here it is.)

The FCC will accept initial comments from May 15 until July 15 and reply comments until September 10 on its website.

FCC split along party lines

Each commissioner made a statement before voting.

Wheeler, a Democrat, said, “There is one Internet. Not a fast internet. Not a slow Internet. One Internet.” Wheeler previously complained that his proposal was being misinterpreted. Today, he said, “Those who have been expressing themselves will now be able to see what we are actually proposing.”

“Nothing in this proposal authorizes paid prioritization despite what has been incorrectly stated today. The potential for there to be some kind of a fast lane available to only a few has many people concerned,” Wheeler continued. “Personally, I don’t like the idea that the Internet could be divided into haves and have-nots and I will work to see that that does not happen. In this item we have specifically asked whether and how to prevent the kind of paid prioritization that could result in fast lanes.”

Wheeler has said he would consider using Title II rules if it turns out that ISPs discriminate against smaller companies. Today, he said it would be “unacceptable” for ISPs to squeeze out smaller voices. His statement that “nothing in this proposal authorizes paid prioritization” may be technically true if the proposal simply enforces a baseline level of service and doesn’t take a position on whether third-party services can pay for more than that. Or he may simply have been referring to the fact that the NPRM is not in itself a final set of rules.

In an explainer document, the FCC said that it “tentatively concludes that priority service offered exclusively by a broadband provider to an affiliate should be considered illegal until proven otherwise.” This could mean agreements are allowed as long as each third-party service is offered similar, commercially reasonable terms. Affiliated content is generally content the ISP owns itself or through shell companies.

FCC expert Harold Feld, senior VP of Public Knowledge, agrees with that interpretation. “The law, in its majestic equality, allows both rich and poor to pay for prioritization,” he tweeted.

In response to our request for clarification, an FCC spokesperson said, “That’s not right. The presumption that exclusive contracts to prioritize affiliates’ traffic would be unlawful doesn’t answer the question of whether other forms of prioritization are OK. The notice seeks broad comment on what to do about prioritization arrangements, including whether paid prioritization should be banned outright.”

Wheeler explained that he wants to force ISPs to provide the level of service they promise consumers:

If the network operator slowed the speed below that which the consumer bought it would be commercially unreasonable and therefore prohibited. If a network operator blocked access to lawful content it would violate our no-blocking rule and be commercially unreasonable and therefore be doubly prohibited.

When content provided by a firm such as Netflix reaches a network provider it would be commercially unreasonable to charge the content provider to use that bandwidth for which the consumer had already paid, and therefore prohibited. When a consumer purchases specified network capacity from an Internet provider, he or she is buying open capacity, not capacity a network provider can prioritize for their own purposes.

Wheeler said that peering agreements, such as the ones in which Netflix purchased direct connections to the edges of the Verizon and Comcast networks, are “a different matter that is better addressed separately. Today’s proposal is all about what happens on the broadband provider’s network and how the consumer’s connection to the Internet may not be interfered with or otherwise compromised.”

Democratic Commissioner Jessica Rosenworcel said she wants to prevent a “two-tiered Internet” with rich companies having advantages over poorer ones. She said Wheeler should have moved less quickly in advancing proposed rules but that she is glad he put “all options on the table” by asking for comment on Title II and a potential ban on paid prioritization.

Democratic Commissioner Mignon Clyburn said the proposal changed significantly in the week before the vote. She said she’s heard from educators who are worried that the proposed rules will prevent students from taking advantage of the best technologies. She’s also heard from health care professionals who worry that medical images will load slowly and that patients will be unable to benefit from the latest technologies. However, she stressed that the NPRM is far from a final set of rules.

Republican Commissioner Ajit Pai said that “five unelected officials”—i.e. the commissioners—shouldn’t decide the fate of the Internet. Instead, elected officials should take the lead, he said.

Today’s debate was spurred by a federal appeals court ruling in January that struck down previous rules the FCC issued to ban blocking and paid prioritization. Verizon had challenged those rules, winning a substantial victory in court. The judges said the commission could not use its Section 706 authority—which requires the FCC to encourage the deployment of broadband infrastructure—to issue de facto common carrier rules.

Wheeler still wants to use Section 706, but he wants to set a baseline level of service that ISPs will have to offer, effectively preventing them from blocking third-party applications. While ISPs could charge Web services for a faster path to consumers, they would have to do so on “commercially reasonable” terms.

Republican Commissioner Michael O’Rielly said the commission hasn’t tried to identify any market harm and thus shouldn’t issue the rules in the NPRM. He also argued that the FCC has invented new authority to regulate the Internet by exaggerating its Section 706 authority.

The federal appeals court ruling in the Verizon case contradicts O’Rielly’s argument, however. The judges said the FCC “has reasonably interpreted section 706 to empower it to promulgate rules governing broadband providers’ treatment of Internet traffic.”

Wheeler cited that decision in justifying the use of Section 706 to regulate how Internet providers treat third-party traffic. He said that using Section 706 is the fastest path toward getting enforceable neutrality rules, but he’s willing to use Title II if necessary.

However, observers both for and against network neutrality have already expressed skepticism about whether Wheeler’s interpretation of Section 706 would pass legal muster.

Jon Brodkin / Jon is Ars Technica’s senior IT reporter, covering business technology and the impact of consumer tech on IT. He also writes about tech policy, the FCC and broadband, open source, virtualization, supercomputing, data centers, and wireless technology.

New White House Petition For Net Neutrality


On the heels of yesterday’s FCC bombshell, there is a new petition on the White House petition site titled, ‘Maintain true net neutrality to protect the freedom of information in the United States.’ The body reads: ‘True net neutrality means the free exchange of information between people and organizations. Information is key to a society’s well being. One of the most effective tactics of an invading military is to inhibit the flow of information in a population; this includes which information is shared and by who. Today we see this war being waged on American citizens. Recently the FCC has moved to redefine “net neutrality” to mean that corporations and organizations can pay to have their information heard, or worse, the message of their competitors silenced. We as a nation must settle for nothing less than complete neutrality in our communication channels. This is not a request, but a demand by the citizens of this nation. No bandwidth modifications of information based on content or its source.’

How the Internet Is Taking Away America’s Religion

Back in 1990, about 8 percent of the U.S. population had no religious preference. By 2010, this percentage had more than doubled to 18 percent. That’s a difference of about 25 million people, all of whom have somehow lost their religion.

That raises an obvious question: how come? Why are Americans losing their faith?

Today, we get a possible answer thanks to the work of Allen Downey, a computer scientist at the Olin College of Engineering in Massachusetts, who has analyzed the data in detail. He says that the demise is the result of several factors but the most controversial of these is the rise of the Internet. He concludes that the increase in Internet use in the last two decades has caused a significant drop in religious affiliation.

Downey’s data comes from the General Social Survey, a widely respected sociological survey carried out by the University of Chicago that has regularly measured people’s attitudes and demographics since 1972.

In that time, the General Social Survey has asked people questions such as: “what is your religious preference?” and “in what religion were you raised?” It also collects data on each respondent’s age, level of education, socioeconomic group, and so on. And in the Internet era, it has asked how long each person spends online. The total data set that Downey used consists of responses from almost 9,000 people.

Downey’s approach is to determine how the drop in religious affiliation correlates with other elements of the survey such as religious upbringing, socioeconomic status, education, and so on.

He finds that the biggest influence on religious affiliation is religious upbringing—people who are brought up in a religion are more likely to be affiliated to that religion later.

However, the number of people with a religious upbringing has dropped since 1990. It’s easy to imagine how this inevitably leads to a fall in the number who are religious later in life. In fact, Downey’s analysis shows that this is an important factor. However, it cannot account for all of the fall, or anywhere near it. The data indicates that it only explains about 25 percent of the drop.

He goes on to show that college-level education also correlates with the drop. Once again, it’s easy to imagine how contact with a wider group of people at college might contribute to a loss of religion.

Since the 1980s, the fraction of people receiving college level education has increased from 17.4 percent to 27.2 percent in the 2000s. So it’s not surprising that this is reflected in the drop in numbers claiming religious affiliation today. But although the correlation is statistically significant, it can only account for about 5 percent of the drop, so some other factor must also be involved.

That’s where the Internet comes in. In the 1980s, Internet use was essentially zero, but in 2010, 53 percent of the population spent two hours per week online and 25 percent surfed for more than 7 hours.

This increase closely matches the decrease in religious affiliation. In fact, Downey calculates that it can account for about 25 percent of the drop.

That’s a fascinating result. It implies that since 1990, the increase in Internet use has had as powerful an influence on religious affiliation as the drop in religious upbringing.
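The style of decomposition Downey performs can be illustrated with a toy additive model. Everything below—the coefficients, the population shares, the functional form—is invented for illustration and is not taken from his paper, which uses a more sophisticated regression:

```python
# Toy decomposition of a drop in religious affiliation into contributions
# from two changing factors. All numbers are hypothetical, not Downey's.

def affiliation_rate(p_upbringing, p_internet,
                     base=0.40, w_upbringing=0.45, w_internet=-0.15):
    """Expected affiliation rate under a deliberately simple additive model."""
    return base + w_upbringing * p_upbringing + w_internet * p_internet

rate_1990 = affiliation_rate(p_upbringing=0.90, p_internet=0.00)
rate_2010 = affiliation_rate(p_upbringing=0.80, p_internet=0.55)
total_drop = rate_1990 - rate_2010

# Counterfactual decomposition: change one factor at a time.
rate_mixed = affiliation_rate(p_upbringing=0.90, p_internet=0.55)
drop_from_internet = rate_1990 - rate_mixed    # only Internet use changed
drop_from_upbringing = rate_mixed - rate_2010  # only upbringing changed

print(f"total drop: {total_drop:.4f}")
print(f"Internet's share of the drop: {drop_from_internet / total_drop:.0%}")
```

In an additive model like this one, the two contributions sum exactly to the total drop; in a real regression the attribution is messier, which is why Downey's percentages are estimates rather than an exact accounting.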

At this point, it's worth spending a little time on the nature of these conclusions. What Downey has found are correlations, and any statistician will tell you that correlation does not imply causation. If A is correlated with B, there are several possible explanations: A might cause B, B might cause A, or some other factor might cause both A and B.

But that does not mean that it is impossible to draw conclusions from correlations, only that they must be properly guarded. “Correlation does provide evidence in favor of causation, especially when we can eliminate alternative explanations or have reason to believe that they are less likely,” says Downey.

For example, it’s easy to imagine that a religious upbringing causes religious affiliation later in life. However, it’s impossible for the correlation to work the other way round. Religious affiliation later in life cannot cause a religious upbringing (although it may color a person’s view of their upbringing).

It’s also straightforward to imagine how spending time on the Internet can lead to religious disaffiliation. “For people living in homogeneous communities, the Internet provides opportunities to find information about people of other religions (and none), and to interact with them personally,” says Downey. “Conversely, it is harder (but not impossible) to imagine plausible reasons why disaffiliation might cause increased Internet use.”

There is another possibility, of course: that a third unidentified factor causes both increased Internet use and religious disaffiliation. But Downey discounts this possibility. “We have controlled for most of the obvious candidates, including income, education, socioeconomic status, and rural/urban environments,” he says.

If this third factor exists, it must have specific characteristics. It would have to be something new that was increasing in prevalence during the 1990s and 2000s, just like the Internet. “It is hard to imagine what that factor might be,” says Downey.

That leaves him in little doubt that his conclusion is reasonable. “Internet use decreases the chance of religious affiliation,” he says.

But there is something else going on here too. Downey has found three factors—the drop in religious upbringing, the increase in college-level education and the increase in Internet use—that together explain about 50 percent of the drop in religious affiliation.

But what of the other 50 percent? In the data, the only factor that correlates with this is date of birth—people born later are less likely to have a religious affiliation. But as Downey points out, year of birth cannot be a causal factor. “So about half of the observed change remains unexplained,” he says.

So that leaves us with a mystery. The drop in religious upbringing and the increase in Internet use seem to be causing people to lose their faith. But something else about modern life that is not captured in this data is having an even bigger impact.

What can that be? Answers please in the comments section.

Ref: Religious Affiliation, Education and Internet Use

Why No One Trusts Facebook To Power The Future

Facebook has a perception problem—consumers just don’t trust it.

April 03, 2014

In the coming years, one billion more people will gain access to the Internet thanks to drones hovering in the stratosphere and satellites orbiting above it.

And soon, we’ll be able to sit down with friends in foreign countries and immerse ourselves in experiences never previously thought possible, simply by slipping on a pair of virtual reality goggles.

These aren’t just gaseous hypotheticals touted by Silicon Valley startups, but efforts led by one company, whose mission is to make the world more open and connected. If one company actually pulled off all of these accomplishments, it might seem like people would fall in love with it—but once you know it’s Facebook, you might feel differently. And you’re not alone.

Facebook has a perception problem, which is largely driven by the fact it controls huge amounts of data and uses people as fodder for advertising. Facebook has been embroiled in numerous privacy controversies over the years, and was built from the ground up by a kid who basically double-crossed his Harvard colleagues to pull it off in the first place.

These days, Facebook appears to be growing up by taking billion-dollar bets on future technology hits like WhatsApp and Oculus in order to expedite its own puberty.

Its billion-dollar moves in recent weeks point to a new Facebook, one that takes risks investing in technologies that have not yet borne fruit, but could easily be the “next big thing” in tech. One such investment, the $2 billion acquisition of Oculus, left many people scratching their heads as to why a social network would pick up a technology that arguably makes people less social, since Oculus is all about immersive gaming. At least the WhatsApp purchase makes a little more sense strategy-wise, even if the $19 billion deal was bad for users.

So begins Facebook’s transition from a simple social network to a full-fledged technology company that rivals Google, moonshot for moonshot.

Companies need to keep things fresh in order to make us want them, but Facebook, like Barney Stintson from How I Met Your Mother, just can’t shake its ultimately flawed nature and gain the trust of consumers.

The Ultimate Data Hoard

If you think you’re in control of your personal information, think again.

Perhaps the largest driver of skepticism towards Facebook is the level of control it gives users—which is arguably limited. Sure, you can edit your profile so other people can’t see your personal information, but Facebook can, and it uses your data to serve advertisers.

Keep in mind: information you provided just once in the last 10 years—say, when you first registered your account and offered up your favorite movies, TV shows and books—is now handed to advertisers or companies wanting a piece of your pocketbook.

Not even your Likes can control what you see in your news feed anymore. Page updates from brands, celebrities, or small businesses that you subscribed to with a “Like” are omitted from your News Feed when page owners refuse to pay. Your Like was once good enough to keep an update on your News Feed, but now the company is cutting the flow of traffic and limiting status views by updating its algorithms—a move many people think is unfair, if not shiesty.

It’s not just Page posts taking a hit, audience-wise—even your own posts could be seen by fewer people if Facebook deems them “low-quality.”

To help eliminate links it doesn’t consider “news” like Upworthy or ViralNova, Facebook tweaked its algorithm to show fewer low-quality posts in favor of more newsworthy material, like stories from The New York Times. Of course, most people appreciate this move since click-bait links can get truly annoying, but it’s concerning that Facebook has so much control over the firehose of information you put in front of your eyes every single day.

Facebook owns virtually all the aspects of the social experience—photos (Instagram), status updates (Facebook), location services (Places)—but it has also become your social identity thanks to Facebook Login, which allows it to integrate with almost everything else on the Internet. This means if you’re not spending time on Facebook, you’re using Facebook to spend time online elsewhere.

It's this corporate control of traffic that frustrates those who believe Facebook owns too much, and that working with Facebook is like smacking the indie community hard across the face.

In a sense, people are stuck. They initially trusted a company with their data and information, and in return, those people feel—often justifiably—that they’re being taken advantage of. When the time comes for someone to abandon Facebook, whether over privacy concerns or frustration with the company, Facebook intentionally makes it hard to leave.

Even if you delete your account, your ghost remains. Your email address is still tied to a Facebook account and your face is still recognizably tagged as you, even if the account it’s associated with has vanished. In this way, Facebook is almost like any other cable company—even when you die, Facebook can still make money off you. And that’s not behavior fit for a company that’s poised to take over the future.

Leveling Up

Facebook missed the boat on mobile, and its much-maligned Android application interface Facebook Home was a major failure. Though Home was a small step into hardware, it was one users clearly didn’t want.

Now Facebook is dreaming bigger. With recent acquisitions like Oculus and drone maker Titan Aerospace, the company is looking to expand outside of its social shell and be taken seriously as a technology company and moonshot manufacturer.

Facebook’s well-known slogan “move fast and break things” is regularly applied to new products and features—undoubtedly a large part of Home’s initial failure. The company is ready to try again, this time with technologies and applications that consumers aren’t yet familiar with. But this has created more questions than answers in the eyes of users and investors. And that’s not good for a company with an existing perception problem like Facebook.

People see Facebook moving fast and breaking things to serve its own purposes, not for the benefit of the Internet, or in the case of Oculus, the benefit of dedicated fans.

Facebook isn't leaving the social realm, at least not yet. It's still relying on the flagship website to power its larger plans—particularly Internet.org, which aims to bring the next billion people online.

Facebook CEO Mark Zuckerberg wants a Facebook that connects the world, becoming a convenient way for people to find one another, and a gateway for Internet connectivity in developing countries.

Zuckerberg announced last week how he plans to bring the initiative to fruition—and it sounds like a plan straight out of a sci-fi novel. The company will put its newly acquired drones to work beaming Internet access to communities that don't yet have it, accompanied by other technologies like lasers and satellites to distribute the connectivity more widely.

When Zuckerberg first announced the project, he threw shade at Google's similar Project Loon, which attempts to connect the world via Wi-Fi balloons.

“Drones have more endurance than balloons while also being able to have their location precisely controlled,” he wrote in a white paper explaining the project. Of course regardless of the method, with more people online, Facebook will control more data and information, and have a larger pool of people to use for advertising.

To gain more users—and keep the ones it has—Facebook needs to change. But when Facebook’s CEO starts talking about drones and lasers powering the Internet, despite the company’s history of reckless privacy policies, it immediately sets off red flags for users.

Facebook Is Growing Up

Last October, when Facebook finally admitted teenagers were abandoning the network for other hot services like Snapchat and Tumblr, the Internet heaved a collective, “Told you so!” 

But teens aren’t the future for Facebook. Zuckerberg’s company has ambitions that go beyond selfies. It can’t remain the same forever, especially if it wants to stay relevant in the ever-changing technological landscape.

Facebook wants to build the Internet's future infrastructure. It wants to be part of the technology that powers the next billion people's online experiences ten years down the road. Zuckerberg has personally tried to bolster his own image with his $1 salary—a symbolic gesture, sure, but nothing Steve Jobs or Bill Gates hadn't done before.

To build and control the future it wants, it will have to “be more cool” and ease up on its control of users. Facebook has many exciting projects, but it won’t have an audience left unless it addresses its perception problem. Trust is paramount, especially on the Internet, and people need to know that Facebook is making things to improve the human experience, not just spending billions to make even more billions off our personal information.

Facebook has a great opportunity to improve its image with its exciting multi-billion dollar acquisitions. Prove to us you don’t just care about money, Facebook, and perhaps we’ll all realize how much you really have grown in the last 10 years.

Lead image by Madeleine Weiss for ReadWrite; Oculus Rift photo by Adriana Lee for ReadWrite; drone photo courtesy of Titan Aerospace

Why the Internet Will Soon Be Two-Tiered


Low-income people of color stand to lose the most from the erosion of net neutrality.

BY Jay Cassano and Michael Brooks


Net neutrality—the principle that all content on the Internet must be treated equally by service providers and the government—may not be long for this world. After anemic attempts at preserving net neutrality by the Federal Communications Commission, a recent court ruling has opened the door for mobile network providers to get what they have long sought: namely, the ability to discriminate when it comes to websites’ loading speed and availability.

Though FCC Chairman Tom Wheeler recently stated the commission plans to write new net neutrality guidelines that will hopefully survive legal challenges, it remains to be seen how any of these will hold up in court. In the meantime, Americans’ access to a free, unfettered Internet will almost certainly continue to be tested.

Over the past few years, a great deal of attention has been paid in the press to net neutrality’s potential collapse. Much of the analysis, however, has focused on its importance for maintaining anticompetitive business practices and the need to create a level playing field online for small businesses and Silicon Valley “disrupters.” What has received far less attention are the ways in which the destruction of net neutrality will add to America’s inequality crisis.

In his most recent State of the Union address, President Barack Obama promised to tackle the vast social and economic disparity among Americans. But one of the Obama administration’s early policy disappointments—his FCC’s flawed approach to net neutrality—may soon lead to that same inequality gap being manifested online in the form of a two-tier Internet.

A 2010 FCC decision did appear to enshrine net neutrality as a fundamental principle of how the Internet operates: The government agency enacted a federal mandate that no Internet service providers could block or slow users’ connections to sites and apps.

But appearances can be misleading. That 2010 ruling allowed for major exceptions to net neutrality, particularly around mobile networks. And in January, at the U.S. Court of Appeals for the D.C. Circuit, Verizon exploited these loopholes to successfully challenge the legality of enforcing net neutrality for cell phone Internet. Unless the FCC acts quickly, providers such as Verizon and other mobile carriers could conceivably decide to privilege the delivery of some sites over others—a devastating blow for people who rely on cell phones to get online.

When it comes to accessing the Internet, mobile-phone networks are of particular importance to marginalized communities, including people of color and those with lower incomes. A recent Pew survey found that a full 21 percent of cell phone owners in the United States mostly use their phones to access the Internet, as opposed to a desktop or a laptop. The elimination of net neutrality for cell phones could make conducting essential activities, such as applying for jobs or furthering one’s education, much harder if service providers chose to block access to those necessary sites.

The Pew study also found that “young adults, non-whites, and those with relatively low income and education levels are particularly likely to be cell-mostly Internet users”—and ­it’s people in those demographic groups who will therefore be stuck at the lowest tier of access. Advocacy groups also worry that ISPs will use their powers of prioritization to silence activists working toward social justice, particularly in communities of color.

One specific example of how this might play out is AT&T's soon-to-be-launched sponsored data plan. Currently, most customers must adhere to a data usage cap or risk paying overage fees. But under the sponsored data plan, rather than an AT&T user draining their monthly data allocation while they, say, scroll through Facebook, the social media giant—or any company—could pay AT&T for that usage on the user's behalf. In other words, as far as users are concerned, Internet access to certain sites would essentially be free.
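The billing mechanics can be sketched in a few lines. The site names, data cap, and traffic figures below are all hypothetical, not AT&T's actual terms:

```python
# Hypothetical sketch of sponsored-data billing: traffic to sponsored sites
# does not count against the user's monthly cap. Names and numbers invented.

SPONSORED_SITES = {"facebook.com"}  # sites whose owners pay the carrier
MONTHLY_CAP_MB = 2048               # the user's data allowance

def billable_usage(sessions):
    """MB counted against the cap; sponsored traffic is excluded."""
    return sum(mb for site, mb in sessions if site not in SPONSORED_SITES)

sessions = [
    ("facebook.com", 900),       # sponsored: the sponsor pays, not the user
    ("indieblog.example", 300),  # unsponsored: counts against the cap
    ("jobboard.example", 150),   # unsponsored: counts against the cap
]

used = billable_usage(sessions)
print(f"billable: {used} MB of a {MONTHLY_CAP_MB} MB cap")
```

Even in this sketch the incentive problem is visible: the cheapest browsing, from the user's point of view, is whatever the biggest sponsors happen to serve.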

In theory, this sounds great. More traffic for tech companies to their products and a lower phone bill for consumers. Win-win, right?

Perhaps, until one considers that not all websites are able to sponsor their data in this way. The result would be an Internet in which users are steered toward the sites of larger and wealthier companies and away from those with fewer resources.

According to Steven Renderos, national organizer for the Center for Media Justice, “AT&T’s data plan would create the incentive for websites to pay for the data usage customers rack up on their website in order to steer the customer towards sites that won’t use up their limited data plans.” As a result of this process, he says, the Internet will more closely resemble cable television, with independent voices and outlets pushed to the margins.

And that’s the best-case scenario. But what happens when a consumer isn’t able to pay for any data plan or can only afford a low cap? With AT&T’s “sponsored data,” this customer would still be able to use the Internet—but it could look very different from that of a user who has no such constraints.

This "commercial" Internet, which everyone would have access to, would consist solely of the companies that can afford to pay for all those users' data. And based on the current practices of many web giants, the majority of those mega-corporations would likely be online retailers or sites that track (and monetize) users' every interaction. The world of independent blogs, alternative media, job boards, and online courses would become an Internet luxury.

With cell phone Internet thus hampered by a lack of net neutrality, home broadband, with its flat-rate monthly bill, would be the only place to turn for an open Internet. The problem is that a home broadband connection is precisely what many in marginalized communities lack. According to Pew, 15 percent of black adults and 22 percent of Hispanic adults have smartphones but no home broadband Internet—compared to just 6 percent of white adults.

As a result, the universe of information and communities that many people are able to access or contribute to online would shrink. It would ensure, says Renderos, that “mobile Internet is purely a mechanism for consumption versus the creative platform the Internet represents to wired—cable, DSL, fiber—users.”

Informational freedom advocates say the decline of net neutrality—and AT&T’s policy in particular—could, in effect, create yet another sector of American society where information and access to platforms of influence are boutique commodities. The Internet would no longer be a place where a small voice has access to the same virtual megaphone as that of a large corporation or the government.

“It certainly is the ushering in of a ‘have and have-nots’ Internet,” says Renderos.

This process of creating a two-tier Internet will deepen the divisions of economic and racial inequality that drive American politics today. For all the talk of the Internet as an engine of democracy and growth, without a robust net neutrality principle, it will become nothing more than yet another pillar supporting a rigged economic game.

Jay Cassano is a technology reporter for Fast Company and previously worked as a foreign correspondent in Turkey for Inter Press Service. He has been published in The Nation, Al Jazeera, and elsewhere.


Michael Brooks is the host of INTERSECTION podcast on Aslan Media and a contributor and producer for the Majority Report. His writing has appeared in the Washington Post and Al Jazeera.