Mouthbreathing Machiavellis Dream of a Silicon Reich


One day in March of 2014, a Google engineer named Justine Tunney created a strange and ultimately doomed petition at the White House website. The petition proposed a three-point national referendum, as follows:

1. Retire all government employees with full pensions.
2. Transfer administrative authority to the tech industry.
3. Appoint [Google executive chairman] Eric Schmidt CEO of America.

This could easily be written off as a stunt, a flamboyant act of corporate kiss-assery, which, on one level, it probably was. But Tunney happened to be serious. “It’s time for the U.S. Regime to politely take its exit from history and do what’s best for America,” she wrote. “The tech industry can offer us good governance and prevent further American decline.”

Welcome to the latest political fashion among the California Confederacy: total corporate despotism. It is a potent and bitter ideological mash that could have only been concocted at tech culture’s funky smoothie bar—a little Steve Jobs here, a little Ayn Rand there, and some Ray Kurzweil for color.

Tunney was at one time a prominent and divisive fixture of the Occupy Wall Street movement. Lately, though, her views have . . . evolved. How does an anticapitalist “tranarchist” (transgender anarchist) become a hard-right seditionist?

“Read Mencius Moldbug,” Tunney told her Twitter followers last month, referring to an aggressively dogmatic blogger with a reverent following in certain tech circles.

Keanu cartoon by Pete Simon

Tunney’s advice is easier said than done, for Moldbug is as prolific as he is incomprehensible. His devotees, many of whom are also bloggers, describe themselves as the “neoreactionary” vanguard of a “Dark Enlightenment.” They oppose popular suffrage, egalitarianism and pluralism. Some are atheists, while others affect obscure orthodox beliefs, but most are youngish white males embittered by “political correctness.” As best I can tell, their ideal society best resembles Blade Runner, but without all those Asian people cluttering up the streets. Neoreactionaries like to see themselves as the heroes of another sci-fi movie, in fact, sometimes boasting that they have been “redpilled,” like Keanu Reeves’s character in The Matrix—a movie Moldbug regards as “genius.”

“Moldbug.” The name sounds like it belongs to a troll who belches from the depths of an Internet rabbit hole. And so it does. Mencius Moldbug is the blogonym of Curtis Guy Yarvin, a San Francisco software developer and frustrated poet. (Here he is reading a poem at a 1997 open mic.)

According to Yarvin, the child of federal civil servants, he dropped out of a graduate computer science program at U. C. Berkeley in the early 1990s (he has self-consciously noted that he is the only man in his immediate family without a PhD) yet managed to make a small pile of money in the original dot-com bubble. Yarvin betrayed an endearingly strange sense of humor in his student days, posting odd stories and absurdist jokes on bulletin board services, contributing to Wired and writing cranky letters to alternative weekly newspapers.

Yet even as a student at Brown in 1991, Yarvin’s preoccupations with domineering strongmen were evident: “I wonder if the Soviet power ladder of vicious bureaucratic backbiting brings stronger men to the top than the American system of feel-good soundbites,” he wrote in one board discussion.

Yarvin’s public writing tapered off as his software career solidified. In 2007, he reemerged under an angry pseudonym, Moldbug, on a humble Blogspot blog called “Unqualified Reservations.” As might be expected of a “DIY ideology . . . designed by geeks for other geeks,” his political treatises are heavily informed by the works of J.R.R. Tolkien and George Lucas. What set Yarvin apart from the typical keyboard kook was his archaic, grandiose tone, which echoed the snippets Yarvin cherry-picked from obscure old reactionary tracts. Yarvin told one friendly interviewer that he spent $500 a month on books.

Elsewhere he confessed to having taken a grand total of five undergraduate humanities courses (history and creative writing). The lack of higher ed creds hasn’t hurt his confidence. On his blog, Yarvin holds forth on everything from the intricacies of Korean history to contemporary Pakistani politics, from the proper conduct of a counterinsurgency operation to macroeconomic theory and fiscal policy, and he never gives an inch. “The neat thing about primary sources is that often, it takes only one to prove your point,” he writes.

In short, Moldbug reads like an overconfident autodidact’s imitation of a Lewis Lapham essay—if Lewis Lapham were a fascist teenage Dungeon Master.

Yarvin’s most toxic arguments come snugly wrapped in purple prose and coded language. (For instance, “The Cathedral” is Moldbuggian for the oppressive nexus of liberal newspapers, universities and the State Department, where his father worked after getting a PhD in philosophy from Brown.) By so doing, Moldbug has been able to attract an audience that welcomes the usual teeth-gnashing white supremacists who haunt the web while also leaving room for a more socially acceptable assortment of “men’s rights” advocates, gun nuts, transhumanist libertarians, disillusioned Occupiers and well-credentialed Silicon Valley entrepreneurs.

When Justine Tunney posted her petition online, the press treated it like comic relief that came from nowhere. In fact, it is straight Moldbug. Item one, “retire all government employees,” comes verbatim from a 2012 talk that Yarvin gave to an approving crowd of California techies (see video below). In his typical smarmy, meandering style, Yarvin concluded by calling for “a national CEO [or] what’s called a dictator.”

“If Americans want to change their government, they’re going to have to get over their dictator phobia,” Yarvin said in his talk. He conceded that, given the current political divisions, it might be better to have two dictators, one for Red Staters and one for Blue Staters. The trick would be to “make sure they work together.” (Sure. Easy!)

“There’s really no other solution,” Yarvin concluded. The crowd applauded.

This plea for autocracy is the essence of Yarvin’s work. He has concluded that America’s problems come not from a deficit of democracy but from an excess of it—or, as Yarvin puts it, “chronic kinglessness.” Incredible as it sounds, absolute dictatorship may be the least objectionable tenet espoused by the Dark Enlightenment neoreactionaries.

Moldbug is the widely acknowledged lodestar of the movement, but he’s not the only leading figure. Another is Nick Land, a British former academic now living in Shanghai, where he writes admiringly of Chinese eugenics and the impending global reign of “autistic nerds, who alone are capable of participating effectively in the advanced technological processes that characterize the emerging economy.”

These imaginary Übermenschen have inspired a sprawling network of blogs, subreddits and meetups aimed at spreading their views. Apart from their reverence for old-timey tyrants, they espouse a belief in “human biodiversity,” which is basically racism in a lab coat. This scientific-sounding euphemism invariably refers to supposed differences in intelligence across races. It is so spurious that the Wikipedia article on human biodiversity was deleted because, in the words of one editor, it is “purely an Internet theory.” Censored once again by The Cathedral, alas.

“I am not a white nationalist, but I do read white-nationalist blogs, and I’m not afraid to link to them . . . I am not exactly allergic to the stuff,” Yarvin writes. He also praises a blogger who advocated the deportation of Muslims and the closure of mosques as “probably the most imaginative and interesting right-wing writer on the planet.” Hectoring a Swarthmore history professor, Yarvin rhapsodizes on colonial rule in Southern Africa, and suggests that black people had it better under apartheid. “If you ask me to condemn [mass murderer] Anders Breivik, but adore Nelson Mandela, perhaps you have a mother you’d like to fuck,” Yarvin writes.

His jargon may be novel, but whenever Mencius Moldbug descends to the realm of the concrete, he offers familiar tropes of white victimhood. Yarvin’s favorite author, the nineteenth-century Scottish writer Thomas Carlyle, is perhaps best known for his infamous slavery apologia, “Occasional Discourse on the Negro Question.” “If there is one writer in English whose name can be uttered with Shakespeare’s, it is Carlyle,” Yarvin writes. Later in the same essay Yarvin calls slavery “a natural human relationship” akin to “that of patron and client.”

As I soldiered through the Moldbug canon, my reactions numbed. Here he is expressing sympathy for poor, persecuted Senator Joe McCarthy. Big surprise. Here he claims “America is a communist country.” Sure, whatever. Here he doubts that Barack Obama ever attended Columbia University. You don’t say? After a while, Yarvin’s blog feels like the pseudo-intellectual equivalent of a Gwar concert, one sick stunt after another, calculated to shock. To express revulsion and disapproval is to grant the attention he so transparently craves.

Yet the question inevitably arrives: Do we need to take this stuff seriously? The few mainstream assessments of the neoreactionaries have been divided on the question.

Sympathetic citations are spreading in the Daily Caller, The American Conservative and National Review. Yet the conservative press remains generally dismissive. The American Spectator’s Matthew Walther calls neoreactionism “silly not scary” and declares that “all of these people need to relax: spend some time with P.G. Wodehouse, watch a football game, get drunk, whatever.”

TechCrunch, which first introduced me to Moldbug, treats the “Geeks for Monarchy” movement as an Internet curio. But The Telegraph says, yes, this is “sophisticated neo-fascism” and must be confronted. Vocativ, which calls it “creepy,” agrees that it should be taken seriously.

The science fiction author David Brin goes further in his comment on a Moldbug blog post, accusing the blogger of auditioning for the part of Machiavelli to some future-fascist dictator:

The world oligarchy is looking for boffins to help them re-establish their old – pyramidal – social order. And your screeds are clearly interview essays. “Pick me! Pick me! Look! I hate democracy too! And I will propagandize for people to accept your rule again, really I will! See the fancy rationalizations I can concoct????”

But your audition materials are just . . . too . . . jibbering . . . loopy. You will not get the job.

As strange as it sounds, Brin may be closest to the truth. Neoreactionaries are explicitly courting wealthy elites in the tech sector as the most receptive and influential audience. Why bother with mass appeal, when you’re rebuilding the ancien régime?

Moldbuggism, for now, remains mostly an Internet phenomenon. Which is not to say it is “merely” an Internet phenomenon. This is, after all, a technological age. Last November, Yarvin claimed that his blog had received 500,000 views. It is not the quantity of his audience that matters so much as its nature, however. And the neoreactionaries do seem to be influencing the drift of Silicon Valley libertarianism, which is no small force today. This is why I have concluded, sadly, that Yarvin needs answering.

If the Koch brothers have proved anything, it’s that no matter how crazy your ideas are, if you put serious money behind those ideas, you can seize key positions of authority and power and eventually bring large numbers of people around to your way of thinking. Moreover, the radicalism may intensify with each generation. Yesterday’s Republicans and Independents are today’s Libertarians. Today’s Libertarians may be tomorrow’s neoreactionaries, whose views flatter the prejudices of the new Silicon Valley elite.

In a widely covered secessionist speech at a Silicon Valley “startup school” last year, there was more than a hint of Moldbug (see video below). The speech, by former Stanford professor and Andreessen Horowitz partner Balaji Srinivasan, never mentioned Moldbug or the Dark Enlightenment, but it was suffused with neoreactionary rhetoric and ideas. Srinivasan used the phrase “the paper belt” to describe his enemies, namely the government, the publishing industries, and universities. The formulation mirrored Moldbug’s “Cathedral.” Srinivasan’s central theme was the notion of “exit”—as in, exit from democratic society, and entry into any number of corporate mini-states whose arrival will leave the world looking like a patchwork map of feudal Europe.

Forget universal rights; this is the true “opt-in society.”

An excerpt:

We want to show what a society run by Silicon Valley would look like. That’s where “exit” comes in . . . . It basically means: build an opt-in society, ultimately outside the US, run by technology. And this is actually where the Valley is going. This is where we’re going over the next ten years . . . [Google co-founder] Larry Page, for example, wants to set aside a part of the world for unregulated experimentation. That’s carefully phrased. He’s not saying, “take away the laws in the U.S.” If you like your country, you can keep it. Same with Marc Andreessen: “The world is going to see an explosion of countries in the years ahead—doubled, tripled, quadrupled countries.”

Srinivasan ticked through the signposts of the neoreactionary fantasyland: Bitcoin as the future of finance, corporate city-states as the future of government, Detroit as a loaded symbol of government failure and 3D-printed firearms as an example of emerging technology that defies regulation.

The speech succeeded in promoting the anti-democratic authoritarianism at the core of neoreactionary thought, while glossing over the attendant bigotry. This has long been a goal of some in the movement. One such moderate—if the word can be used in this context—is Patri Friedman, grandson of the late libertarian demigod Milton Friedman. The younger Friedman expressed the need for “a more politically correct dark enlightenment” after a public falling out with Yarvin in 2009.

Friedman has lately been devoting his time (and leveraging his family name) to raise money for the SeaSteading Institute, which, as the name suggests, is a blue-sea libertarian dream to build floating fiefdoms free of outside regulation and law. Sound familiar?

The principal backer of the SeaSteading project, Peter Thiel, is also an investor in companies run by Balaji Srinivasan and Curtis Yarvin. Thiel is a co-founder of PayPal, an original investor in Facebook and a hedge fund manager, as well as the inspiration for a villainous investor on the satirical HBO series Silicon Valley. Thiel’s extreme libertarian advocacy is long and storied, beginning with his days founding the Collegiate Network-backed Stanford Review. Lately he’s been noticed writing big checks for Ted Cruz.

He has invested in Yarvin’s current startup, Tlon, and has personally backed Tlon co-founder John Burnham. In 2011, at age 18, Burnham accepted $100,000 from Thiel to skip college and go directly into business. Instead of mining asteroids as he originally intended, Burnham wound up working on obscure networking software with Yarvin, whose title at Tlon is, appropriately enough, “benevolent dictator for life.”

California libertarian software developers inhabit a small and shallow world. It should be no surprise, then, that although Thiel has never publicly endorsed Yarvin’s side project specifically, or the neoreactionary program in general, there is definitely a whiff of something Moldbuggy in Thiel’s own writing. For instance, Thiel echoed Moldbug in an infamous 2009 essay for the Cato Institute in which he explained that he had moved beyond libertarianism. “I no longer believe that freedom and democracy are compatible,” Thiel wrote.

Thiel’s eponymous foundation funds, among other things, an institute to advance the ideas of a conservative Stanford academic, René Girard, under whom Thiel studied as an undergraduate. In 2012 Thiel delivered a lecture at Stanford that explained his views regarding the divine rights of Silicon Valley CEOs. The lecture did address some of Girard’s ideas about historical “mimetics,” but it also contained a heavy dose of Moldbuggian thought. Thiel says:

A startup is basically structured as a monarchy. We don’t call it that, of course. That would seem weirdly outdated, and anything that’s not democracy makes people uncomfortable. We are biased toward the democratic-republican side of the spectrum. That’s what we’re used to from civics classes. But the truth is that startups and founders lean toward the dictatorial side because that structure works better for startups.

Might a dictatorial approach, in Thiel’s opinion, also work better for society at large? He doesn’t say so in his Stanford lecture (although he does cast tech CEOs as the heirs to mythical “god-kings” such as Romulus). But Thiel knows where to draw the line in mixed company. Ordinary people get so “uncomfortable” when powerful billionaires start talking about the obsolescence of participatory government and “the unthinking demos,” as he put it in his Cato essay. Stupid proles! They don’t deserve our brilliance! “The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom,” Thiel wrote.

It is clear that Thiel sees corporations as the governments of the future and capitalists such as himself as the kings, and it is also clear that this is a shockingly common view in Thiel’s cohort. In a 2011 New Yorker profile, George Packer wrote:

Thiel and his circle in Silicon Valley may be able to imagine a future that would never occur to other people precisely because they’ve refused to leave that stage of youthful wonder which life forces most human beings to outgrow . . . . He wants to live forever, have the option to escape to outer space or an oceanic city-state, and play chess against a robot that can discuss Tolkien, because these were the fantasies that filled his childhood imagination.

Packer is perhaps too generous to his subject. But he captures the fundamental problem with these mouthbreathers’ dreams of monarchy. They’ve never role-played the part of the peasant.

Corey Pein is a writer and reporter in Brighton, England.

Is the dotcom bubble about to burst (again)?


In Silicon Valley, millions of dollars change hands every day as investors hunt the next big thing – the ‘unicorn’, or billion-dollar tech firm. There are now almost 150, but can they all succeed?

“Have you heard the story about the tip from the shoeshine boy?” a Brit called James Pallot asks me on my last day at TechCrunch Disrupt. I have, I say, though later I Google it to get the facts straight.

It’s attributed to Joseph Kennedy, paterfamilias of the Kennedy clan, who in 1929 was getting his shoes shined by a young boy who was also making confident predictions about which stocks would rise. For Kennedy, it was a moment of revelation. He sold his portfolio. Not long afterwards, Wall Street crashed and the world was plunged into the greatest depression ever seen. So a tip from the shoeshine boy is a sign that the bubble is about to burst. That the wave of confidence will finally crash upon the shore. That the jig is up.

Pallot used to be the digital editorial director of Condé Nast in New York and now he has a startup. But then, we’re at the world’s biggest startup conference in San Francisco, a few miles down the road from Silicon Valley where the world’s greatest concentration of technology startups first started up.

His company is in the booming field of VR, or virtual reality, which is to 2015 roughly what Rubik’s Cubes were to 1982, though with rather bigger potential consequences. Pallot claims it’s the logical next step for journalistic content. In 20 years’ time, you won’t be reading this on the page; I’ll probably be leading you by the hand through a 3D rendering of a virtual TechCrunch conference floor. Or, more likely, you’ll be leading yourself and I’ll be claiming jobseeker’s allowance.

But anyway. In the meantime, Pallot asks me if I’ve heard of the tip from the shoeshine boy. I have, I say, and tell him it’s been on my mind. Because for three days, I’ve been hearing about “unicorns” – a Silicon Valley term for companies that have been valued at more than $1bn. When this usage was first coined, less than two years ago, there were 39 of them. Today, there are 147. Or as Matthew Wong, a senior analyst at CB Insights, tells me: “The funding is at levels that we haven’t seen since 2000.”

As those with longer memories will recall, that was the year the dotcom bubble burst. It needs explaining because there are an awful lot of people at TechCrunch whose memories simply don’t go back that far: the typical startup founder is male and in his 20s. Back in 2000, Google was less than 18 months old and Facebook wasn’t even a glimmer in Mark Zuckerberg’s eye – he was still at high school. (At 31, he’s now practically Silicon Valley’s elder statesman.)

Everything has changed. And is changing at an ever-faster pace. Eight years ago, TechCrunch launched its Disrupt conference with 45 startups. This year, there are 5,000 of them. Over three days I talk to founders of companies from San Francisco and Texas and Uruguay and Beirut and Stockholm and Tel Aviv and Warsaw. There are apps for crowdfunded mortgages and cheaper divorces and better sex. There’s “Expedia for golf” and “Facebook for cars” and “Nest for water” and “Tinder for dogs”. There’s a virtual reality teddy bear, a device that claims it will be able to read your emotions via a contact lens in your eye and another that will automate your home cannabis farm (marijuana is a big deal in Silicon Valley right now). I miss the panel on nuclear fusion startups but they’re around.

They’ve all paid upwards of $3,000 (£1,900) to be here and they’re all trying to attract the attention of Silicon Valley’s biggest beasts. The VCs – venture capitalists to you and me. The money guys.

“How do you spot them?” I ask Peter Becronis, the founder of a real estate startup called Owner’s Vault. “Oh, it’s easy,” he says. “They’re all men, older guys who are in jeans and brown boots and perhaps a blue jacket. Oh, and a good watch. They’re the ones who shuffle past you trying not to catch your eye.”

It’s a long shot for the likes of Becronis to be here, but not a total pipe dream. Because hundreds of startups are being funded each month. Vast sums of money are changing hands. Crunchbase, TechCrunch’s sister site, lists the deals that are being done on a daily basis. On the day I write this, I check it and find 24 companies that have just received funding, including Kreditech, which got $92m (it uses “big data and complex machine-learning algorithms to credit score everyone worldwide”) and Medium, which received $57m (it’s a platform that has found another new business model that seems to involve not paying journalists).

Every month the amount of money being invested in early-stage startups goes up. And every month, more and more people are starting to use the B-word. Bubble. The last time this amount of money was swilling around, we know how it ended. “Back then, a lot of websites launched but that’s all they were, websites,” Mike Butcher, TechCrunch’s editor-at-large, tells me. “Now in 2015, all those technologies that were predicted – AI, drones, VR – have all turned up. The innovation is real. And it just continues to get bigger and bigger. There are more VC firms here than you can poke a stick at.

“Is it a bubble?” he asks and then answers the question himself, vividly, if not entirely clearly. “It depends. How many unicorns can you fit through the eye of a needle? Anyway, unicorns are over. It’s all about decacorns now. Companies that are worth tens of billions of dollars.”

In 2000 the bubble was in publicly listed companies – organisations like the then upstart AOL, which bought Time Warner for $164bn in the largest merger in American business history, and then the most spectacular blow-up. Or, in Britain, a dotcom whose share price peaked at 511p before crashing to 80p a month later. Both companies survived, unlike many, but it was a long struggle back up for both of them. (In a neat bit of circularity, AOL bought TechCrunch along the way.) In 2015, it’s private money flowing into companies that may or may not go public one day.

The shoeshine boy wouldn’t be tipping stocks in 2015, but what would he be doing? I ask Ned Desmond, the chief operating officer of TechCrunch. He thinks for a moment. “He would probably be an Uber driver who has his own angel investment line,” he says.

But James Pallot tops that. He’s flown in from JFK and had his shoes shined in the airport. “And the guy had a startup. I literally got a tip from the shoeshine boy! He was trying to find an investor for his national shoeshine franchise.” But then, in many ways, there has never been a better time to be a startup. Niko Bonatsos, a VC with General Catalyst Partners, tells me that the sheer number of companies at TechCrunch “speaks volumes about how the barriers to entry have been removed. It’s really easy to start a company. And lots of companies from other parts of the world see this as a lottery ticket. And for some of them, it will be. It’s the survival of the fittest. And the luckiest.”

Pallot and his co-founder are currently “bootstrapping” their company, Emblematic Group, which is creating virtual reality news content. “Bootstrapping” is Silicon Valley jargon. It means getting by with what you’ve got. It’s how people have set up companies since the dawn of capitalism. You start a business with a bit of money you already have and you try to attract customers and build it from there.

“Bootstrapping” is how you figure out if there’s a market and, if so, how you reach it. It’s also, like, totally 20th century. The reason 5,000 companies pay $3,000-plus to come to TechCrunch is because Silicon Valley has another model. People – strangers – will give you vast sums of cash to build your company into a global brand overnight. If you can deliver the killer pitch. The pitch that convinces the valley’s top VCs that you are the next Facebook, the next Uber, the next Airbnb.

“It doesn’t work like this in the rest of the world,” Ned Desmond tells me. “In Indonesia or Turkey or wherever, normal business culture demands collateral and security. Venture investing has none of that. You are investing in potential.” You’re gambling, basically. Silicon Valley, in 2015, is a giant casino. And the bets are so large because the potential payoffs are so huge. The next Google has to start somewhere.

So is it a bubble? “Everything is cyclical,” says Desmond. Does he remember the last crash? “I was there! I was in it. It was terrible. We had just launched a magazine, Business 2.0. Even the name sounds so cringeworthy now. We launched in May 2000 with a record number of advertisements. We had 150 ad pages. A year on, we had 15.”

This is not exactly an answer, so I try again. Is it a bubble? “We published a graph showing the unicorns. It’s a hockey stick. It’s near vertical growth.”



New App Lets You Rate and Review Actual Human Beings

Imagine all your worst ideas poured into an app and you’ve basically got it.


If there is one thing that absolutely no one has been asking for, it is a social media app that lets you rate people as if they were products or restaurants. But a Calgary-based company isn’t letting that major issue get in the way. Instead, it’s developed an app called Peeple, which allows anyone age 21 or over who has your phone number to rate you on a scale from 1 to 5, and to give you a review.

Sounds like just what the Internet needs, right? Another way for people to voice their unfiltered and unsolicited opinions on something — or someone, in this case — just because!

Here’s how this awful, no-good idea, which cofounders Nicole McCullough and Julia Cordray say will launch in November, will work: Users will log into Peeple via Facebook and enter their phone numbers to demonstrate they aren’t bots and to verify their identity. Then, to rate a person, they’ll have to pick a category that defines the nature of their shared relationship: personal, professional or romantic. From there, they can issue a rating and write a review, the way you might on Yelp or Amazon, only about a flesh-and-blood human being.

But wait, there’s more! Even if you don’t sign up for the app, someone else can create a profile for you. According to the Peeple site, “[i]f the person you are searching for is not in the app you can add their name, profile picture, and start their profile by rating them.” All you need is said person’s cell phone number. And once you have a profile in the Peeple app — even one you yourself didn’t create — it’s there for good.

Peeple co-founder Nicole McCullough, speaking to the Calgary Herald, said, “The aim of our platform is to showcase a person’s true character. I came up with this idea over a year and a half ago from wanting to find a good babysitter in my neighborhood. We tend to trust referrals and so we wanted to create a platform that allowed people to refer each other in several different ways.”

“People do so much research when they buy a car or make those kinds of decisions,” co-founder Julia Cordray told the Washington Post. “Why not do the same kind of research on other aspects of your life?”

The short answer is, because people are not cars or objects. Summing a person up on a scale of 1-5 seems irresponsible and overly simplistic, not to mention completely unnecessary. (You want to know what someone’s like? It sounds crazy, but you might try talking to that person.) If, as McCullough suggests, the company was simply interested in creating a means of vetting service providers — e.g., baby sitters — why not build a site like Healthgrades or Rate My Professor, which focus on rating a person in a capacity that it makes sense for a reviewer to comment on, or a potential customer to know? Why would anyone think that essentially inviting any acquaintance — from old lovers to former co-workers you mostly avoided — to weigh in with thoughts on a person is a good idea? Knowing what we know about the Internet, and how people behave online, who wouldn’t see this as a devolutionary step in social media? It’s all just so obviously made to go terribly awry.

What’s more, the Washington Post also notes that, even under the best of circumstances, users of rating sites and apps exhibit inevitable human biases:

[A]ll rating apps, from Yelp to Rate My Professor, have a demonstrated problem with self-selection. (The only people who leave reviews are the ones who love or hate the subject.) In fact, as repeat studies of Rate My Professor have shown, ratings typically reflect the biases of the reviewer more than they do the actual skills of the teacher: On RMP, professors whom students consider attractive are way more likely to be given high ratings, and men and women are evaluated on totally different traits.

McCullough and Cordray point to Peeple’s terms and conditions, which rule out things like bullying, abuse, hateful content, sexism and more, but I think we’ve all seen how effective that is in practice on any number of sites. Still, there may be one way to avoid the inevitable downsides of this whole thing. Positive ratings will post on a profile the instant they’re submitted, but negative ratings will be withheld for 48 hours while the parties involved attempt to settle the issue. If you aren’t registered for the site, you can’t engage in that process, and your page will therefore only display positive comments. (You can also respond to negative comments, Yelp-style, but I say not registering seems like the best route for everyone.)

Ahead of their launch, McCullough and Cordray are speaking with angel investors and venture capitalists to raise funds. The Post estimates the company is currently valued at $7.6 million.

“Peeple will revolutionize the way an individual is seen in this world through their relationships,” Cordray said to the Calgary Herald. “When social graces are becoming lost to the past, we want to revive this forgotten manner and bring attention to how a person appears to others.”

It’s an ironic statement, considering the app seems to eschew the very social graces Cordray suggests it was created to promote.

The Myth of Trickle-Down Innovation

Why is the world working on clickbait instead of going to Mars?

Here’s a tiny proposition: innovation is in danger of becoming a word that means something like making even more instant instant microwavable noodles.


On-demand butlers, maids, chauffeurs, dog-walkers. Pet spas. Tap for a drone-delivered artisanally crafted designer taco. Swipe right for a date with a better profile pic. Swipe up for the economy’s next great, earth-shaking innovation…same day delivery of everything you had to wait two days for.

Let’s take a moment to be painfully honest. The above are fripperies, trivialities, a piffling of the human spirit. Let me present you with a list of great endeavors humanity’s boundless ingenuity should be devoted to. Reversing climate change. Curing cancer (and the like). Ending poverty. Fixing the ills of democracy. Giving every child a life-changing education.

So how did we end up with a generation’s brightest minds slaving furiously over the colossal, world-changing idea of…same day delivery? Right you are: largely, because of short-termism, growthism, and “shareholder” (read: hedge-fund bots programmed to earn a penny more profit even during the implosion of the known universe) pressure.

But those exist in the first place because of a great myth. The Myth of Trickle-Down Innovation. I’d bet that you’ve heard it before, often from venture capitalists high-fiving each other in congratulatory blog posts (“aww shucks, Bob. No, you’re the Thomas Edison of the twenty-first century!!”). It goes like this: today’s luxuries are tomorrow’s necessities. What the rich enjoy today, so the poor will enjoy tomorrow. Hey, presto! Innovate!! Rinse, apply, repeat, problem (aka all of humanity’s greatest and most pressing challenges, issues, and dilemmas) solved.

The problem is that the Myth of Trickle-Down Innovation isn’t really true. Like all great myths, it hides a greater truth — and symbolizes an article of faith that we ritualistically repeat, mostly to comfort one another that we are moral, just beings. While it’s certainly true that the majority of innovations trickle downwards through the strata of the economy, to the very bedrock, it’s truer that many don’t — and they are often precisely the ones that should; or worse still, that on its voyage through the strata of the economy, what was once the pure, clean water of prosperity gets polluted into something more like toxic sludge.

I just bought a new TV. Wow! It’s the kind of miraculous gadget six year old me could only have dreamt of in his widest-eyed fantasies. It’s huge, paper-thin, and does wondrous things like making everything on it 3-D. Amazing, right? Right. Innovation trickling down to a humble nobody like me. But. If the hidden cost of my new TV is that I don’t enjoy stability, mobility, opportunity, retirement — which should I want? I know, it’s a tough choice.

Here are three more examples, to make my point. Cars. Everyone has one today. But only because society invested heavily in a groundbreaking (at the time) set of highways. No highways — fewer cars. Lesson: innovation doesn’t trickle down by some magical, unstoppable alchemy — even when it does, it often requires help, a gentle nudge, a spark (read: investment, laws, social norms). Food. It’s true that people today enjoy a cornucopia in their local supermarket. But it’s truer that food deserts exist, and much food is riddled with additives and preservatives: sure, innovation trickled down — but the high-fructose-saturated food-like products many can afford aren’t quite the pure, clean Whole Foods the rich enjoy. Education. If you’re very rich, you can send your kids to a liberal arts school, where they’ll enjoy careful, personalized instruction in classes of a few dozen. But if you’re not…well, fear not, innovation’s trickling down. You might enjoy online classes (read: Powerpoint presentations with canned voice-overs) with thousands of other students, with maybe a scratchy Skype connection and a few multiple-choice quizzes to boot. Not quite the same thing, are they? What’s trickling down at the very bottom isn’t the pure, clean water of life at the top of the economic mountain.

Still don’t accept my tiny theory? Here are just a few things that the richest have, that the middle class doesn’t, and probably won’t in the foreseeable future. Wealth managers, private jets, members’ clubs, private islands, property portfolios, designer yachts. Some things are more like caviar than water: they don’t trickle down the economic mountain at all.

The converse is also true. If what’s trickling down from the top of the mountain is champagne, but the people at the bottom are thirsty for water…then you’re probably not innovating in any meaningful sense. We can make all the on-demand masseuses and pet spas and same-day delivered designer sheets in the world — but they’re not going to benefit people as much as high quality jobs, incomes, savings, rights, mobility, opportunity…happiness, meaning, a planet. While some innovations never trickle down, and some turn into sludge along the way — often, innovations that do trickle down are of little benefit in the first place. Doggy butlers trickling down when most Americans can’t afford $400 for an emergency expense is like smiling and giving a person dying of thirst a designer straw to suck air through.

The trickle-down theory of innovation is essentially the discredited ideology of trickle-down economics restated using gadgets instead of formulas. The latter argues that prosperity will trickle down (it hasn’t), and the former suggests that prosperity trickles down through goods magically getting cheaper (instead of turning toxic, pointless, or simply disappearing along the way). But just as trickle-down economics has been squarely debunked, repudiated even by the IMF, for example, so it’s time for us to update our tired, rusting mental models of innovation.

Rather than employing the illogic of trickle-down innovation, you and I should ask a wiser question: what are the long-term real social benefits of this product, service, idea, project? What does it add in human terms — does it make people smarter, fitter, wiser, closer, happier? Will it change anyone’s life, in even a small way, for the better, and how many lives can it realistically change thus — or is it just another coal in the bonfire of the vanities?

Why? Because the truth is that we don’t get too many shots at groundbreaking things — and it’s those shots that come to define the worthiness of our days. If we’re going to spend our time, effort, money, imagination, our best minds and our brightest spirits, on the grand challenge of…delivering stuff we don’t really need with money we don’t really have to impress people we don’t really like to live lives we don’t really want…a few microseconds sooner…then we’re surely not investing our very selves wisely: spending our brief time on earth accomplishing things that truly matter. And lest you suggest I’m an idealist, the simple fact is: that is how great institutions, leaders, and lives, those that earn our respect, love, and admiration — and so lend our brief days a sense of greater meaning, higher purpose, and abiding worth — are built.

September 2015

Kids Who Use Computers Heavily in School Have Lower Test Scores

In top performing nations, teachers, not students, use technology.

For those of us who worry that Google might be making us stupid, and that, perhaps, technology and education don’t mix well, here’s a new study to confirm that anxiety.

The Organization for Economic Cooperation and Development (OECD) looked at computer use among 15-year-olds across 31 nations and regions, and found that students who used computers more at school had both lower reading and lower math scores, as measured by PISA or Program for International Student Assessment. The study, published September 15, 2015, was actually conducted back in 2012, when the average student across the world, for example, was using the Internet once a week, doing software drills once a month, and emailing once a month. But the highest-performing students were using computers in the classroom less than that.

“Those that use the Internet every day do the worst,” said Andreas Schleicher, OECD Director for Education and Skills and author of “Students, Computers and Learning: Making the Connection,” the OECD’s first report to look at the digital skills of students around the world. The study controlled for income and race; between two similar students, the one who used computers more generally scored worse.*

Home computer use, by contrast, wasn’t as harmful to academic achievement. Students in many high-performing nations reported spending between one and two hours a day on a computer outside of school. Across the 31 nations and regions, the average 15-year-old spent more than two hours a day on the computer.

Back in the classroom, however, school systems with more computers tended to be improving less, the study found. Those with fewer computers were seeing larger educational gains, as measured by PISA test score changes between 2009 and 2012.

“That’s pretty sobering for us,” said Schleicher in a press briefing. “We all hope that integrating more and more technology is going to help us enhance learning environments, make learning more interactive, introduce more experiential learning, and give students access to more advanced knowledge. But it doesn’t seem to be working like this.”

Schleicher openly worried that if students end up “cutting and pasting information from Google” into worksheets with “prefabricated” questions, “then they’re not going to learn a lot.”

“There are countless examples of where the appropriate use of technology has had and is having a positive impact on achievement,” said Bruce Friend, the chief operating officer of iNACOL, a U.S.-based advocacy group for increasing the use of technology in education. “We shouldn’t use this report to think that technology doesn’t have a place.”

Friend urges schools in the U.S. and elsewhere to train teachers more in how to use technology, especially in how to analyze real-time performance data from students so that instruction can be modified and tailored to each student.

“Lots of technological investments are not translating into immediate achievement increases. If raising student achievement was as easy as giving every student a device, we would have this solved. It’s not easy,”  Friend added.

In a press briefing on the report, Schleicher noted that many of the top 10 scoring countries and regions on the PISA test, such as Singapore and Shanghai, China, are cautious about giving computers to students during the school day. But they have sharply increased computer use among teachers. Teachers in Shanghai, Schleicher explained, are expected to upload lesson plans to a database and are partly evaluated by how much they contribute. In other Asian countries, it is common for teachers to collaborate electronically in writing lessons. And technology is used for video observations of classrooms and feedback. “Maybe that’s something we can learn from,” said Schleicher.

In addition to comparing computer use at schools with academic achievement, the report also released results from a 2012 computerized PISA test that assessed digital skills. U.S. students, it turns out, are much better at “digital reading” than they are at traditional print reading. The U.S. ranked among the group of top performing nations in this category. In math, the U.S. was near the worldwide average on the digital test, whereas it usually ranks below average on the print test.

The digital reading test assesses slightly different skills than the print test. For example, students are presented with a simulated website and asked to answer questions from it. Astonishingly, U.S. students are rather good at remaining on task, clicking strategically and getting back on track after an errant click. By contrast, students in many other nations were more prone to click around aimlessly.

Interestingly, there wasn’t a positive correlation between computer usage at school and performance on the digital tests. Some of the highest scoring nations on the digital tests don’t use computers very much at school.

In the end, 15-year-old students need good comprehension and analysis skills to do well in either the print or the digital worlds. This study leaves me thinking that technology holds a lot of promise, but that it’s hard to implement properly. Yes, maybe there are superstar teachers in Silicon Valley who never get rattled by computer viruses, inspire their students with thrilling lab simulations and connect their classroom with Nobel Prize-winning researchers. But is it realistic to expect the majority of teachers to do that? Is the typical teacher’s attempt to use technology in the classroom so riddled with problems that it’s taking away valuable instructional time that could otherwise be spent teaching how to write a well-structured essay?

Perhaps it’s best to invest the computer money in hiring, paying and training good teachers.

* In reading, students who used the computer a little bit did score better than those who never used a computer. But then as computer use increased beyond that little bit, reading performance declined. In math, the highest performing students didn’t use computers at all.

Jill Barshay, a contributing editor, is the founding editor and writer of Education By The Numbers, The Hechinger Report’s blog about education data. Previously she was the New York bureau chief for Marketplace, a national business show on public radio stations.

Disaster capitalism is a permanent state of life for too many Americans

According to the Department of Homeless Services, the number of homeless people in New York City has risen by more than 20,000 over the past five years. Photograph: Spencer Platt/Getty Images

In the United States, disaster has become our most common mode of life. The recognition that our daily existence is a simmering, smoldering disaster has historically been held somewhat at bay by the myth that hard work guarantees some kind of subsistence living. For the more deluded amongst us, this ‘American dream’ even got us to believe we could be something called ‘middle class’. We were deceived.

For those not yet woke, I don’t see how y’all can stay asleep when story after story proves how screwed we are.

The New York Post, no bastion of bleeding heart liberalism, reported on Monday that “Hundreds of full-time city workers are homeless”. These are people who clean our trash and make our city, the heart of American capitalism, safe and livable, including for those who plunder the globe from Wall Street. These are men and women, living in shelters and out of their cars, who have government jobs – the kind of workers conservatives love to paint as greedy, gluttonous pigs.

When a full time government worker can’t “find four walls and a roof to call his own” in the city he serves, we are living in a perpetual state of disaster capitalism.

Across the country, the San Francisco Chronicle told the tale of the “Tech bus drivers forced to live in cars to make ends meet”. It’s arguable whether living in your car can really be considered “making ends meet”, but what can you expect of a newspaper serving a city where tech is supposed to answer all of our needs, and where housing is even more stupidly expensive than in New York City.

This, too, is perpetual disaster capitalism, creating havoc and inflicting disaster upon individual souls for corporate greed without even needing the pretense of a crisis for an excuse.

In her 2007 book The Shock Doctrine: The Rise of Disaster Capitalism, Naomi Klein defined “disaster capitalism” as “orchestrated raids on the public sphere in the wake of catastrophic events, combined with the treatment of disasters as exciting marketing opportunities”. She was riffing on neoconservatives using Hurricane Katrina as an excuse for a New Orleans land grab. She witnessed the same phenomenon in the 2004 Asian Tsunami and in the aftermath of the US invasion of Iraq.

The concept of public plunder after disaster has been embraced in similar linguistic terms by Democrats and Republicans alike. Condoleezza Rice famously called 9/11 an “enormous opportunity”, and indeed it was a profitable one, for war contractors anyway. Similarly, White House Chief of Staff Rahm Emanuel once said: “You never want a serious crisis to go to waste. And what I mean by that is an opportunity to do things you think you could not do before”. Emanuel was true to his word. While American workers lost their jobs, lost their homes and even took their own lives as a result of the 2008 financial meltdown, the Obama White House instituted financial “reforms” that arrested no Wall Street executives, and left even Forbes predicting “ten reasons why there will be another systematic financial crisis”.

When our daily life is one of a state of chaos – and with hundreds slaughtered by police annually, and folks who work full time unable to stave off homelessness, and white anchors shot on live TV, and black worshippers shot up in church, and incarcerated victims behind bars “taking their own lives” daily, it’s hard to say that it’s not – the continuous state of disaster justifies disaster capitalism continuously, and we’re barely able to notice it, and powerless to stop it.

We live in such an interminable state of disaster, we barely see the locusts for the plague. Take the other major sad story this week: that Silicon Valley investor Martin Shkreli has bought the drug Daraprim, raising its price 5,000%. No crisis necessitated this increase. The drug is 62 years old, and its initial costs had long ago been absorbed.

It’s easy to be angry at Shkreli, his smug smile and his greedy choices that may well equal the deaths of those priced out from the malaria, Aids and cancer medicine they need. But Shkreli is just a tool. He lives in a world where disaster capitalism will reward him. He now says he will make the drug “more affordable,” but the richest nation on earth can’t stop him from deciding what “affordable” will mean. He may repulse us, but he represents our American way of disastrous living. Disaster capitalism no longer just reacts to chaos for profit, or even creates chaos for profit. It creates the conditions by which the spectre of social, spiritual and biological death hangs over our heads on a daily basis so oppressively, the crises become seamless.

And it asks us to accept that when you work full time driving workers to the richest corporation in the history of the human race and must live in your car, you should be grateful that you’re “making ends meet”, keep calm and carry on.


Robert Reich: How Silicon Valley Giants Are Destroying U.S. Capitalism

Reich’s new book, ‘Saving Capitalism,’ calls for sweeping antitrust action.


Throughout American history, whatever industries have dominated the economy have also had outsized control of the political system—until something shifted and their monopoly power was broken.

“Two centuries ago slaves were among the nation’s most valuable assets, and after the Civil War, perhaps land was,” wrote former Secretary of Labor Robert Reich in an excerpt in the New York Times from his forthcoming book, Saving Capitalism. “Then factories, machines, railroads and oil transformed America. By the 1920s most working Americans were employees, and the most contested property issue was their freedom to organize into unions.”

Reich knows that government and the private sector are not separate entities, but deeply related. He notes the federal government has intervened over the decades to restrain and rebalance capitalism’s excesses. And he says that America is again at one of those moments when the economy is overrun by monopolies — with Silicon Valley’s giants as the top example.

Today, the most valuable asset in America is digitized information pulsing through the portals that Americans tap when sitting at a computer or using a phone, and the underlying infrastructure created by the Internet’s giants: Google, Apple, Facebook, telecoms, cable TV, etc.

“Now information and ideas are the most valuable forms of property,” Reich writes. “The most valuable intellectual properties are platforms so widely used that everyone else has to use them, too. Think of standard operating systems like Microsoft’s Windows or Google’s Android; Google’s search engine; Amazon’s shopping system; and Facebook’s communication network.”

Reich points out that Silicon Valley has a concentration of well-known monopolies. Google runs two-thirds of all Internet searches in the U.S., he notes. Amazon now sells almost half of all new books. Facebook has nearly 1.5 billion global monthly users. This, he said, is “where the money is.”

And just as the biggest slave owners found many ways to keep slavery growing in colonial America and the young nation, so too have high-tech giants convinced the government to keep its regulatory hands off.

“Antitrust laws used to fight this sort of market power,” Reich writes. “In the 1990s, the federal government accused Microsoft of illegally bundling its popular Windows operating system with its Internet Explorer browser to create an industry standard that stifled competition. Microsoft settled the case by agreeing to share its programming interfaces with other companies. But since then Big Tech has been almost immune to serious antitrust scrutiny, even though the largest tech companies have more market power than ever. Maybe that’s because they’ve accumulated so much political power.”

Reich believes the time has already come for the historic regulatory pendulum to start swinging the other way — that is, for the federal government to rein in monopolistic excess.

“As has happened before with other forms of property, the most politically influential owners of the new property are doing their utmost to increase their profits by creating monopolies that must eventually be broken up,” he writes. “Whenever markets become concentrated, consumers end up paying more than they otherwise would, and innovations are squelched. Sure, big platforms let creators showcase and introduce new apps, songs, books, videos and other content. But almost all of the profits go to the platforms’ owners, who have all of the bargaining power.”

Reich points to numerous economic statistics that show that since the late 1970s, “the rate at which new businesses have formed in the United States has slowed markedly.” This is especially true in Silicon Valley, he said, as “Big Tech’s sweeping patents, standard platforms, fleets of lawyers to litigate against potential rivals and armies of lobbyists have created formidable barriers to new entrants.”

Arcane areas of the federal government — such as the U.S. Patent Office — helped make this so, assisting giants like Google and Apple in acquiring near-monopoly control over their respective profit centers. His prescription, of course, is for federal policymakers to reverse course and stand up for the little guy or gal.

“The underlying issue has little to do with whether one prefers the “free market” or government,” he writes, seeking to debunk the notion that these companies exist in a sphere immune from public accountability.

“The real question is how government organizes the market, and who has the most influence over its decisions,” Reich said. “We are now in a new gilded age similar to the first Gilded Age, when the nation’s antitrust laws were enacted. As then, those with great power and resources are making the “free market” function on their behalf. Big Tech — along with the drug, insurance, agriculture and financial giants — dominates both our economy and our politics.”

Reich says the time has come for federal power to break up the 21st century’s newest monopolies for the benefit of the rest of the economy. But it will take clear thinking to see the American economy for what its biggest actors have largely become — modern monopolies.

“Yet as long as we remain obsessed by the debate over the relative merits of the “free market” and “government,” we have little hope of seeing what’s occurring and taking the action that’s needed to make our economy work for the many, not the few.”


Steven Rosenfeld covers national political issues for AlterNet, including America’s retirement crisis, democracy and voting rights, and campaigns and elections. He is the author of “Count My Vote: A Citizen’s Guide to Voting” (AlterNet Books, 2008).