March 10, 2014
Jan Koum and Brian Acton, the founders of the messaging software WhatsApp, have ample reason to celebrate this week’s 25th anniversary of the Internet’s World Wide Web. The pair have just become billionaires.
Facebook’s Mark Zuckerberg, once the world’s youngest self-made billionaire, gave Koum and Acton that distinction last month when he gobbled up WhatsApp for $19 billion. The three of them have lots of online company in the global billionaire club. Forbes now counts 120 other high-tech whizzes with billion-dollar fortunes.
The Internet has become, in effect, the fastest billionaire-minting machine in world history. Should we consider this incredible concentration of wealth an outcome preordained? Did the last 25 years of online history have to leave us so much more unequal? We ask that question in this week’s Too Much.
And what about the future? How might we change the online world to help create a more equal real world? We ask that question, too.
GREED AT A GLANCE
New European Union rules adopted last year limit banker bonuses to no more than double a banker’s base salary, and then only with shareholder approval. Philippe Lamberts, a Belgian Green Party lawmaker, led the drive for that cap, and now he’s struggling to stop the British government from letting UK bankers sidestep the EU’s modest new pay restrictions. One such sidestep: banking giant HSBC is now paying its CEO Stuart Gulliver an extra $53,500 every month as an “allowance.” Lamberts wants the EU to take the UK to court. Barclays CEO Antony Jenkins, for his part, is defending his bank’s bonus culture. Nearly 500 power suits at Barclays, half of them in the United States, are taking home at least $1.6 million a year. Paying them any less, says Jenkins, would force his bank into a “death spiral.” To do business in America, adds the CEO, we must “reconcile ourselves to pay high levels of compensation.”
Don’t talk to Miami maritime lawyer Jim Walker about greedy bankers. The truly greedy, says Walker, are running cruise lines and incorporating their operations offshore to avoid U.S. taxes and wage laws. The greediest cruise character of them all? That might be Carnival Cruise chair Micky Arison. He’s now selling off 10 million of his Carnival shares, a sale that figures to bring in $395 million and still leave his family holding over $6 billion in Carnival stock. Arison’s share sale began as passengers left adrift last year on a faulty Carnival cruise ship were testifying in the lawsuit they’ve filed against the company. That incident subjected over 4,200 passengers and crew to five days of overflowing toilets and rotting food. Carnival is dismissing the suit as “an opportunistic attempt to benefit financially” from “alleged emotional distress.”
You won’t find any billionaires standing in line to get their family heirlooms appraised on the hit public TV series Antiques Roadshow. But the world’s ultra rich, luxury editor Tara Loader Wilkinson noted last week, are definitely “going gaga for antiques.” The average billionaire, calculates a new billionaire census from the Swiss bank UBS and researchers at the Singapore-based Wealth-X, holds $14 million worth of antiques and collectibles. Why are so many uber rich hungering for antiques? French antiques expert Mikael Kraemer notes that “anyone with enough money can buy a jet.” Not everyone, he adds, has “what sets one billionaire apart from another”: enough “culture and knowledge” to find and buy something like an 18th century antique royal chandelier.
Quote of the Week
“Do you recall a time in America when the income of a single school teacher or baker or salesman or mechanic was enough to buy a home, have two cars, and raise a family? I remember.”
PETULANT PLUTOCRAT OF THE WEEK
Fracking can be a dirty business. Big Energy CEOs don’t mind. Can’t let those environmentalists endanger our energy security, they like to insist. Unless the environment at risk happens to be their own. ExxonMobil CEO Rex Tillerson has joined a lawsuit that’s trying to stop the construction of a 160-foot water tower, slated to supply fracking operations, near his manse north of Dallas. Tillerson isn’t talking about his lawsuit. But an Exxon flack says his boss doesn’t object to the tower “for its potential use for water and gas operations for fracking.” He’s just upset because the tower would be “much taller” than originally proposed. If the lawsuit fails, Tillerson might have to buy the tower site himself to prevent the fracking eyesore. He can afford it. His 2012 Exxon take-home: $40.3 million.
IMAGES OF INEQUALITY
The promoters of the new “Wealth Badge” are either cynically exploiting a new luxury niche or acutely exposing the cultural depravity of our unequal times. Their new Web site offers the affluent a metal pin that reads “Because I can.” The cost: $5,000. Explains the Wealth Badge pitch: “The idea is simple: If you buy something just because you can, you are truly rich.” The site claims to have sold 61 badges — and features photos of privileged people showing them off. The pins have begun drawing media play. But no one has so far answered the basic question: Is the Wealth Badge crew trying to make money or a point?
PROGRESS AND PROMISE
Few people have contributed as much to our online world as Jaron Lanier, the computer scientist who helped pioneer — and label — what we call “virtual reality.” Lanier has been wondering of late about his fellow contributors, those hundreds of millions of Internet users who donate — for free — the information that a tiny cohort of tycoons has been able to crunch into billion-dollar fortunes. In his latest book, Lanier envisions a “universal micropayment system” that pays people who go online for whatever information of value their online clicks may create, “no matter what kind of information is involved or whether a person intended to provide it or not.” Such a system, says Lanier, would help us “see a less elite distribution of economic benefits.”
Youth groups eager to help young people understand inequality’s impact on how we relate to each other can download the Theatre and Education Resource Guide from the London-based Equality Trust, part of a package of interactive learning materials.
INEQUALITY BY THE NUMBERS
Stat of the Week
The world’s “ultra high net worth” crowd — individuals worth at least $30 million — now numbers 167,669, says a new Knight Frank report. Their total wealth: $20.1 trillion in 2013, almost half as much as the combined net worth of the 4.2 billion global adults who hold less than $100,000 in wealth.
A Thought for the Web’s Silver Anniversary
Let’s learn from our not-so-distant past and share the gold. New technologies don’t have to bring us new inequalities.
Exactly 25 years ago this week the British computer scientist Tim Berners-Lee conceptually “invented” the World Wide Web — and began a process that would rather rapidly make the online world an essential part of our daily lives.
By 1995, 14 percent of Americans were surfing the Web. The level today: 87 percent. And among young adults, the Pew Research Center notes in a just-published silver anniversary report, the Internet has reached “near saturation.” Some 97 percent of Americans 18 to 29 are now going online.
Americans young and old alike are using the Web to work wonders few people 25 years ago could have ever imagined. We’re talking face-to-face with people thousands of miles away. We’re finding soulmates who share our passions and problems. We’re organizing political movements to change the world.
Life with the Web has become, for hundreds of millions of us, substantially richer. Not literally richer, of course. The same 25 years that have seen the Web explode into our consciousness have seen most of us struggle to stay even economically. The Internet and inequality have grown together.
Tim Berners-Lee never saw this inequality coming. The ground-breaking research he published on March 12, 1989, the paper that proposed the system that became the Web, carried no price tag. Berners-Lee would go on to release the code for his system for free. He didn’t invent the Web to get rich.
But others certainly have become rich via the Web. Fabulously rich. Forbes magazine last week released its annual list of global billionaires. Some 123 of them, Forbes calculates, owe their fortunes to high-tech ventures. The top 15 of these high-tech billionaires hold a collective $382 billion in personal net worth.
Numbers like these don’t particularly bother — or alarm — many of today’s economists. Grand new technologies, their conventional wisdom holds, always bring forth grand new personal fortunes for the entrepreneurs who lead the way.
In the 19th century, points out this standard narrative of American economic progress, the coming of the railroads dotted our landscape with the fortunes of railroad tycoons. In the early 20th century, the new automobile age created huge piles of wealth for car makers like Henry Ford and the oilmen who supplied the juice that kept his auto engines humming along.
Why should the Internet age, mainstream economists wonder, be any different? A new technology comes along that alters the fabric of daily life. That new technology gives rise to a new rich. The one outcome naturally follows the other. No need to get bent out of shape by the resulting inequality.
But epochal new technology doesn’t always automatically generate grand new fortunes. The prime example from our relatively recent past: television.
TV burst onto the American scene even more rapidly — and thoroughly — than the Internet. In 1948, only 1 percent of American households owned a TV. Within seven years, televisions graced 75 percent of American homes.
These TV sets didn’t just drop down into those homes. They had to be designed, manufactured, packaged, distributed, marketed. Programming had to be produced. Imaginations had to be captured. All of this demanded an enormous outlay of entrepreneurial energy.
But this outlay would produce no jaw-dropping grand fortunes, no billionaires, even after adjusting for inflation. That would be no accident. The American people, by the 1950s, had put in place a set of economic rules that made the accumulation of grand new private fortunes almost impossible.
Taxes played a key role here. Income over $400,000 faced a 91 percent tax rate throughout the 1950s. Regulations played an important role as well. In television’s early heyday, for instance, government regs limited how many commercials could run on children’s TV programming. TV’s original corporate execs could only squeeze so much out of their new medium.
And television’s early kingpins couldn’t squeeze their workers all that much either. Most of their employees, from the workers who manufactured TV sets to the technicians who staffed broadcast studios, belonged to unions. TV’s early movers and shakers had to share the wealth their new medium was creating.
Today’s Internet movers and shakers, by contrast, have to share nothing. In an America where less than 7 percent of private-sector workers carry union cards, online corporate giants seldom need to bargain with their employees.
In a deregulated U.S. economy, meanwhile, these Internet kingpins face precious few public-interest rules that keep them from charging whatever the market will bear — and rigging markets to squeeze out even more.
And taxes? Today’s Internet billionaires face tax rates well under half the rates that early TV kingpins faced.
We can’t — and shouldn’t — fault Tim Berners-Lee for any of this. He freely shared, after all, his invention with the world.
“I wanted to build a creative space,” Berners-Lee observed in an interview a few years ago, “something like a sandpit where everyone could play together.”
Some people didn’t play nice.
Isaiah Poole, Paul Ryan Misses Top Reason We Haven’t ‘Won’ the War on Poverty, OurFuture.Org, March 4, 2014. That reason: policy decisions that concentrate wealth in the top 1 percent.
Joseph Olanyo, African growth fails to bridge inequality gap, Observer, March 4, 2014. Nations need tax systems that could redistribute wealth more fairly.
J.D. Alt, Forget The 1%, New Economic Perspectives, March 5, 2014. They serve no useful social function.
Wayne Besen, Will Economic Inequality Undermine LGBT Equality? Falls Church News-Press, March 5, 2014. Growing divides spawn powerful right-wing movements that scapegoat minorities.
Kathleen Geier, The IMF (Finally) Admits That Inequality Slows Growth, Nation, March 6, 2014. Good background on an important new IMF study.
Colin Gordon, Our Inequality: An Introduction, Dissent, March 6, 2014. Exploring U.S. inequality and antidotes to it.
Yves Smith, Tax Havens Make US and Europe Look Poorer than They Are, Naked Capitalism, March 6, 2014. Around 8 percent of global financial wealth is now sitting in tax havens.
James Kwak, Posturing from Weakness, Baseline Scenario, March 6, 2014. The tax hikes on the rich in the new Obama budget: only for show?
“Make room for The Rich Don’t Always Win on your bookshelf right next to Howard Zinn’s A People’s History of the United States.”
NEW AND NOTABLE
A Statistical Guide to Our New ‘Plutonomy’
Sherle Schwenninger and Samuel Sherraden, The U.S. Economy After The Great Recession, New America Foundation, Washington, D.C., March 4, 2014.
Need to better understand how the Great Recession — and the political responses to it — have played out? This no-nonsense set of slides brings together, in one place, the key trends that have defined the U.S. economy since the Great Recession hit in 2008. Just a few of the report’s many choice tidbits . . .
Good times at the top: From 2009 to 2012, America’s top 1 percent incomes grew by 31.4 percent. Bottom 99 percent incomes rose all of 0.4 percent.
Shrinking returns to labor: From 2007’s fourth quarter to 2013’s third, the labor compensation share of national income declined from 64 percent to 61 percent. If this labor share of national income had remained at the 2007 level, American workers would have earned $520 billion more in 2013 than they actually did.
Enter the “plutonomy”: The U.S. economy is revolving ever more around consumption by the rich. In 2012 the top 5 percent of American income earners accounted for 38 percent of domestic consumption, up from 28 percent in 1995.
- The Observer, Saturday 8 March 2014
1 The importance of “permissionless innovation”
The thing that is most extraordinary about the internet is the way it enables permissionless innovation. This stems from two epoch-making design decisions made by its creators in the early 1970s: that there would be no central ownership or control; and that the network would not be optimised for any particular application: all it would do is take in data-packets from an application at one end, and do its best to deliver those packets to their destination.
It was entirely agnostic about the contents of those packets. If you had an idea for an application that could be realised using data-packets (and were smart enough to write the necessary software) then the network would do it for you with no questions asked. This had the effect of dramatically lowering the bar for innovation, and it resulted in an explosion of creativity.
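To make that agnosticism concrete, here is a minimal sketch in Python — my illustration, not anything drawn from the network’s actual machinery, and the address and payload are placeholders:

```python
# A UDP socket hands bytes to the network; the network makes no
# judgment about what they mean.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The payload could be chat text, a video frame, or a protocol nobody
# has invented yet -- the packets are all treated exactly the same.
sock.sendto(b"any bytes at all", ("127.0.0.1", 9999))
sock.close()
```

Nothing in the call above names an application: the delivery mechanism and the idea riding on it are fully decoupled, which is the whole point.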
What the designers of the internet created, in effect, was a global machine for springing surprises. The web was the first really big surprise and it came from an individual – Tim Berners-Lee – who, with a small group of helpers, wrote the necessary software and designed the protocols needed to implement the idea. And then he launched it on the world by putting it on the Cern internet server in 1991, without having to ask anybody’s permission.
2 The web is not the internet
Although many people (including some who should know better) often confuse the two. Google is not the internet either, and neither is Facebook. Think of the net as analogous to the tracks and signalling of a railway system, and applications – such as the web, Skype, file-sharing and streaming media – as kinds of traffic which run on that infrastructure. The web is important, but it’s only one of the things that runs on the net.
3 The importance of having a network that is free and open
The internet was created by government and runs on open source software. Nobody “owns” it. Yet on this “free” foundation, colossal enterprises and fortunes have been built – a fact that the neoliberal fanatics who run internet companies often seem to forget. Berners-Lee could have been as rich as Croesus if he had viewed the web as a commercial opportunity. But he didn’t – he persuaded Cern that it should be given to the world as a free resource. So the web in its turn became, like the internet, a platform for permissionless innovation. That’s why a Harvard undergraduate was able to launch Facebook on the back of the web.
4 Many of the things that are built on the web are neither free nor open
Mark Zuckerberg was able to build Facebook because the web was free and open. But he hasn’t returned the compliment: his creation is not a platform from which young innovators can freely spring the next set of surprises. The same holds for most of the others who have built fortunes from exploiting the facilities offered by the web. The only real exception is Wikipedia.
5 Tim Berners-Lee is Gutenberg’s true heir
In 1455, with his revolution in printing, Johannes Gutenberg single-handedly launched a transformation in mankind’s communications environment – a transformation that has shaped human society ever since. Berners-Lee is the first individual since then to have done anything comparable.
6 The web is not a static thing
The web we use today is quite different from the one that appeared 25 years ago. In fact it has been evolving at a furious pace. You can think of this evolution in geological “eras”. Web 1.0 was the read-only, static web that existed until the late 1990s. Web 2.0 is the web of blogging, Web services, mapping, mashups and so on – the web that American commentator David Weinberger describes as “small pieces, loosely joined”. The outlines of web 3.0 are only just beginning to appear as web applications that can “understand” the content of web pages (the so-called “semantic web”), the web of data (applications that can read, analyse and mine the torrent of data that’s now routinely published on websites), and so on. And after that there will be web 4.0 and so on ad infinitum.
7 Power laws rule OK
In many areas of life, the law of averages applies – most things are statistically distributed in a pattern that looks like a bell. This pattern is called the “normal distribution”. Take human height. Most people are of average height, and there are relatively small numbers of very tall and very short people. But very few – if any – online phenomena follow a normal distribution. Instead they follow what statisticians call a power law distribution, which is why a very small number of the billions of websites in the world attract the overwhelming bulk of the traffic while the long tail of other websites has very little.
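For readers who would rather see the contrast than take it on faith, here is a small illustrative sketch in Python; the sample sizes and distribution parameters are arbitrary assumptions, chosen only to make the shapes visible:

```python
# Compare a bell curve (like human height) with a power law (like web
# traffic). In the normal case the extreme value sits close to the
# average; in the power-law case a single outlier dwarfs it.
import random

heights = [random.gauss(170, 10) for _ in range(100_000)]      # normal
traffic = [random.paretovariate(1.1) for _ in range(100_000)]  # power law

print(max(heights) / (sum(heights) / len(heights)))  # roughly 1.3
print(max(traffic) / (sum(traffic) / len(traffic)))  # typically in the thousands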
8 The web is now dominated by corporations
Despite the fact that anybody can launch a website, the vast majority of the top 100 websites are run by corporations. The only real exception is Wikipedia.
9 Web dominance gives companies awesome (and unregulated) powers
Take Google, the dominant search engine. If a Google search doesn’t find your site, then in effect you don’t exist. And this will get worse as more of the world’s business moves online. Every so often, Google tweaks its search algorithms in order to thwart those who are trying to “game” them in what’s called search engine optimisation. Every time Google rolls out the new tweaks, however, entrepreneurs and organisations find that their online business or service suffers or disappears altogether. And there’s no real comeback for them.
10 The web has become a memory prosthesis for the world
Have you noticed how you no longer try to remember some things because you know that if you need to retrieve them you can do so just by Googling?
11 The web shows the power of networking
The web is based on the idea of “hypertext” – documents in which some terms are dynamically linked to other documents. But Berners-Lee didn’t invent hypertext – Ted Nelson did in 1963 and there were lots of hypertext systems in existence long before Berners-Lee started thinking about the web. But the existing systems all worked by interlinking documents on the same computer. The twist that Berners-Lee added was to use the internet to link documents that could be stored anywhere. And that was what made the difference.
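A toy illustration of that twist, in Python — the local path is invented, while the URL is the address of the first web page, which CERN has since restored:

```python
# A pre-web hypertext link could only name a document on the same
# machine. A URL also names the host, so the linked document can live
# on any computer on the internet.
from urllib.parse import urlparse

local_link = "/docs/chapter2.txt"  # meaningful only on this computer
web_link = "http://info.cern.ch/hypertext/WWW/TheProject.html"

print(urlparse(web_link).netloc)  # 'info.cern.ch': the machine that serves it
```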
12 The web has unleashed a wave of human creativity
Before the web, “ordinary” people could publish their ideas and creations only if they could persuade media gatekeepers (editors, publishers, broadcasters) to give them prominence. But the web has given people a global publishing platform for their writing (Blogger, WordPress, Typepad, Tumblr), photographs (Flickr, Picasa, Facebook), audio and video (YouTube, Vimeo); and people have leapt at the opportunity.
13 The web should have been a read-write medium from the beginning
Berners-Lee’s original desire was for a web that would enable people not only to publish, but also to modify, web pages, but in the end practical considerations led to the compromise of a read-only web. Anybody could publish, but only the authors or owners of web pages could modify them. This led to the evolution of the web in a particular direction and it was probably the factor that guaranteed that corporations would in the end become dominant.
14 The web would be much more useful if web pages were machine-understandable
Web pages are, by definition, machine-readable. But machines can’t understand what they “read” because they can’t do semantics. So they can’t easily determine whether the word “Casablanca” refers to a city or to a movie. Berners-Lee’s proposal for the “semantic web” – ie a way of restructuring web pages to make it easier for computers to distinguish between, say, Casablanca the city and Casablanca the movie – is one approach, but it would require a lot of work upfront and is unlikely to happen on a large scale. What may be more useful are increasingly powerful machine-learning techniques that will make computers better at understanding context.
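One flavour of that upfront work already exists: schema.org-style structured data. A hedged sketch, using Python dictionaries to mirror the JSON-LD a page might embed — the particular fields here are illustrative, not a complete or authoritative schema:

```python
# Two records that disambiguate "Casablanca" for a machine. A page
# embedding data like this tells a crawler whether it means the 1942
# film or the Moroccan city.
movie = {"@context": "https://schema.org", "@type": "Movie",
         "name": "Casablanca", "datePublished": "1942"}
city = {"@context": "https://schema.org", "@type": "City",
        "name": "Casablanca", "containedInPlace": "Morocco"}

for record in (movie, city):
    print(record["name"], "->", record["@type"])
```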
15 The importance of killer apps
A killer application is one that makes the adoption of a technology a no-brainer. The spreadsheet was the killer app for the first Apple computer. Email was the first killer app for the Arpanet – the internet’s precursor. The web was the internet’s first killer app. Before the web – and especially before the first graphical browser, Mosaic, appeared in 1993 – almost nobody knew or cared about the internet (which had been running since 1983). But after the web appeared, suddenly people “got” it, and the rest is history.
16 WWW is linguistically unique
Well, perhaps not, but Douglas Adams claimed that it was the only set of initials that took longer to say than the thing it was supposed to represent.
17 The web is a startling illustration of the power of software
Software is pure “thought stuff”. You have an idea; you write some instructions in a special language (a computer program); and then you feed it to a machine that obeys your instructions to the letter. It’s a kind of secular magic. Berners-Lee had an idea; he wrote the code; he put it on the net, and the network did the rest. And in the process he changed the world.
18 The web needs a micro-payment system
In addition to being read-only, the web had another initial drawback: it offered no mechanism for rewarding people who published on it. That was because no efficient online payment system existed for securely processing very small transactions at large volumes. (Credit-card systems are too expensive and clumsy for small transactions.) But the absence of a micro-payment system led the web to evolve in a dysfunctional way: companies offered “free” services that carried a hidden and undeclared cost, namely the exploitation of users’ personal data. This led to the grossly tilted playing field that we have today, in which online companies get users to do most of the work while only the companies reap the financial rewards.
19 We thought that the HTTPS protocol would make the web secure. We were wrong
HTTP is the protocol (agreed set of conventions) that normally regulates conversations between your web browser and a web server. But it’s insecure because anybody monitoring the interaction can read it. HTTPS (stands for HTTP Secure) was developed to encrypt in-transit interactions containing sensitive data (eg your credit card details). The Snowden revelations about US National Security Agency surveillance suggest that the agency may have deliberately weakened this and other key internet protocols.
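For the concretely minded, a minimal sketch of the difference in practice, using only Python’s standard library; example.com is a placeholder host that happens to answer on both schemes:

```python
# The URL scheme decides whether the conversation is encrypted in
# transit. Anyone on the path can read the first exchange; the second
# is wrapped in TLS (in principle).
from urllib.request import urlopen

plain = urlopen("http://example.com")    # readable by any eavesdropper
secure = urlopen("https://example.com")  # encrypted in transit
print(plain.status, secure.status)
```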
20 The web has an impact on the environment. We just don’t know how big it is
The web is largely powered by huge server farms located all over the world that need large quantities of electricity for computers and cooling. (Not to mention the carbon footprint and natural resource costs of the construction of these installations.) Nobody really knows what the overall environmental impact of the web is, but it’s definitely non-trivial. A couple of years ago, Google claimed that its carbon footprint was on a par with that of Laos or the United Nations. The company now claims that each of its users is responsible for about eight grams of carbon dioxide emissions every day. Facebook claims that, despite its users’ more intensive engagement with the service, it has a significantly lower carbon footprint than Google.
21 The web that we see is just the tip of an iceberg
The web is huge – nobody knows how big it is, but what we do know is that the part of it that is reached and indexed by search engines is just the surface. Most of the web is buried deep down – in dynamically generated web pages, pages that are not linked to by other pages and sites that require logins – which are not reached by these engines. Most experts think that this deep (hidden) web is several orders of magnitude larger than the 2.3 billion pages that we can see.
22 Tim Berners-Lee’s boss was the first of many people who didn’t get it initially
Berners-Lee’s manager at Cern scribbled “vague but interesting” on the first proposal Berners-Lee submitted to him. Most people confronted with something that is totally new probably react the same way.
23 The web has been the fastest-growing communication medium of all time
One measure is how long a medium takes to reach the first 50 million users. It took broadcast radio 38 years and television 13 years. The web got there in four.
24 Web users are ruthless readers
The average page visit lasts less than a minute. The first 10 seconds are critical for users’ decision to stay or leave. The probability of their leaving is very high during these seconds. They’re still highly likely to leave during the next 20 seconds. It’s only after they have stayed on a page for about 30 seconds that the chances improve that they will finish it.
25 Is the web making us stupid?
Writers like Nick Carr are convinced that it is. He thinks that fewer people engage in contemplative activities because the web distracts them so much. “With the exception of alphabets and number systems,” he writes, “the net may well be the single most powerful mind-altering technology that has ever come into general use.” But technology giveth and technology taketh away. For every techno-pessimist like Carr, there are thinkers like Clay Shirky, Jeff Jarvis, Yochai Benkler, Don Tapscott and many others (including me) who think that the benefits far outweigh the costs.
John Naughton’s From Gutenberg to Zuckerberg is published by Quercus
I haven’t seen “Her,” the Oscar-nominated movie about a man who has an intimate relationship with a Scarlett Johansson-voiced computer operating system. I have, however, read Susan Schneider’s “The Philosophy of ‘Her’,” a post on The Stone blog at the New York Times looking into the possibility, in the pretty near future, of avoiding death by having your brain scanned and uploaded to a computer. Presumably you’d want to Dropbox your brain file (yes, you’ll need to buy more storage) to avoid death by hard-drive crash. But with suitable backups, you, or an electronic version of you, could go on living forever, or at least for a very, very long time, “untethered,” as Ms. Schneider puts it, “from a body that’s inevitably going to die.”
This idea isn’t the loopy brainchild of sci-fi hacks. Researchers at Oxford University have been on the path to human digitization for a while now, and way back in 2008 the Future of Humanity Institute at Oxford released a 130-page technical report entitled Whole Brain Emulation: A Roadmap. Of the dozen or so benefits of whole-brain emulation listed by the authors, Anders Sandberg and Nick Bostrom, one stands out:
If emulation of particular brains is possible and affordable, and if concerns about individual identity can be met, such emulation would enable back-up copies and “digital immortality.”
Scanning brains, the authors write, “may represent a radical new form of human enhancement.”
Hmm. Immortality and radical human enhancement. Is this for real? Yes:
It appears feasible within the foreseeable future to store the full connectivity or even multistate compartment models of all neurons in the brain within the working memory of a large computing system.
“Foreseeable future” means not in our lifetimes, right? Think again. If you expect to live to 2050 or so, you could face this choice. And your beloved Labrador may be ready for upload by, say, 2030:
A rough conclusion would nevertheless be that if electrophysiological models are enough, full human brain emulations should be possible before mid-century. Animal models of simple mammals would be possible one to two decades before this.
Interacting with your pet via a computer interface (“Hi Spot!”/“Woof!”) wouldn’t be quite the same as rolling around the backyard with him while he slobbers on your face or watching him dash off after a tennis ball you toss into a pond. You might be able to simulate certain aspects of his personality with computer extensions, but the look in his eyes, the cock of his head and the feel and scent of his coat will be hard to reproduce electronically. No longer having to scoop up his messes or feed him heartworm pills would probably not make up for all these limitations. The electro-pet might also make you miss the real Spot unbearably as you try to recapture his consciousness on your home PC.
But what about you? Does the prospect of uploading your own brain allay your fear of abruptly disappearing from the universe? Is it the next best thing to finding the fountain of youth? Ms. Schneider, a philosophy professor at the University of Connecticut, counsels caution. First, she writes, we might find our identity warped in disturbing ways if we pour our brains into massive digital files. She describes the problem via an imaginary guy named Theodore:
If Theodore were to truly upload his mind (as opposed to merely copy its contents), then he could be downloaded to multiple other computers. Suppose that there are five such downloads: Which one is the real Theodore? It is hard to provide a nonarbitrary answer. Could all of the downloads be Theodore? This seems bizarre: As a rule, physical objects and living things do not occupy multiple locations at once. It is far more likely that none of the downloads are Theodore, and that he did not upload in the first place.
This is why the Oxford futurists included the caveat “if concerns about individual identity can be met.” It is the nightmare of infinitely reproducible individuals — a consequence that would, in an instant, undermine and destroy the very notion of an individual.
But Ms. Schneider does not come close to appreciating the extent of the moral failure of brain uploads. She is right to observe an apparent “categorical divide between humans and programs.” Human beings, she writes, “cannot upload themselves to the digital universe; they can upload only copies of themselves — copies that may themselves be conscious beings.” The error here is screamingly obvious: brains are parts of us, but they are not “us.” A brain contains the seed of consciousness, and it is both the bank for our memories and the fount of our rationality and our capacity for language, but a brain without a body is fundamentally different from the human being that possessed both.
It sounds deeply claustrophobic to be housed (imprisoned?) forever in a microchip, unable to dive into the ocean, taste chocolate or run your hands through your loved one’s hair. Our participation in these and infinite other emotive and experiential moments is the bulk of what constitutes our lives, or at least our meaningful lives. Residing forever in the realm of pure thought and memory and discourse doesn’t sound like life, even if it is consciousness. Especially if it is consciousness.
So I cannot agree with Ms. Schneider’s conclusion when she writes that brain uploads may be choiceworthy for the benefits they can bring to our species or for the solace they provide to dying individuals who “wish to leave a copy of [themselves] to communicate with [their] children or complete projects that [they] care about.” It may be natural, given the increasingly virtual lives many of us live in this pervasively Internet-connected world, to think of ourselves mainly in terms of avatars and timelines and handles and digital faces. Collapsing our lives into our brains and offloading the contents of our brains to a supercomputer is a fascinating idea. It does not sound to me, though, like a promising recipe for preserving our humanity.
Apple and Amazon’s big lie: The rebel hacker and hipster nerd is a capitalist stooge
To this point it appears that the zeitgeist memes for 2014 are, first, the assumption that the future of our economy belongs to robots (see Tyler Cowen’s book “Average Is Over”) and, second, that what’s left of the workforce — those whom Cowen calls “freestylers” working in synchrony with the ’bots — will have their job performance improved if they meditate.
Robots and meditation wouldn’t seem at first glance to have a lot to do with each other, but closer inspection reveals that they do.
This is so largely because the way in which the business world understands meditation (specifically forms derived from what Buddhists call “mindfulness”) is driven not by Buddhism but by science. Mindfulness Based Stress Reduction (MBSR) was developed in 1979 by Jon Kabat-Zinn, an MIT-trained scientist. In her cover story for the February 3 issue of Time magazine, Kate Pickert quoted Kabat-Zinn: “It was always my intention that mindfulness move into the mainstream. This is something that people are now finding compelling in many countries and many cultures. The reason is the science.”
And then there was the annual World Economic Forum in Davos where, according to Otto Scharmer (another MIT man), writing for the Huffington Post, corporate mindfulness is at the “tipping point.” Scharmer writes, “Mindfulness practices like meditation are now used in technology companies such as Google and Twitter (amongst others), in traditional companies in the car and energy sectors, in state-owned enterprises in China, and in UN organizations, governments, and the World Bank.”
So, the narrative conjunction of these two zeitgeist themes would seem to be this: In the future, “high earners” will work with “intelligent machines” (aka robots); the robots will drive them crazy; but they will have happy, productive lives thanks to neuroscience-certified mindfulness.
This narrative leaves out what few people are commenting on: corporate economics and Buddhism are two very different ways of thinking. For all of their countercultural pretensions, mega-corporations like Google, Amazon and Apple are still corporations. They seek profits, they try to maximize their monopoly power, they externalize costs, and, of course, they exploit labor. Apple’s dreadful labor practices in China are common knowledge, and those Amazon packages with the sunny smile issue forth from warehouses that are more like Blake’s “dark satanic mills” than they are the new employment model for the Internet age.
The technology industry has manufactured images of the rebel hacker and hipster nerd, of products that empower individual and social change, of new ways of doing business, and now of a mindful capitalism. Whatever truth might attach to any of these, the fact is that these are impressions that are carefully managed to get us to keep buying their products. In that very basic sense, it is business as usual.
As for Buddhism, contrary to popular reports it is not primarily about stress reduction for middle management. According to the Search Inside Yourself program developed at Google, mindfulness training builds “the core emotional intelligence skills needed for peak performance and effective leadership…. We help professionals at all levels adapt, management teams evolve, and leaders optimize their impact and influence.”
Mindfulness is enabling corporations to “optimize impact”? In this view of things, mindfulness can be extracted from a context of Buddhist meanings, values and purposes. Meditation and mindfulness are not part of a whole way of life but only a spiritual technology, a mental app that is the same regardless of how it is used and what it is used for.
Buddhism has its own orienting perspectives, attitudes and values, as does American corporate culture. And not only are they very different from each other, they are often fundamentally opposed to each other. Indeed, one of the foundations of Buddhism is the idea of right livelihood, which entails engaging in trades or occupations that cause minimal harm to other living beings. And yet in the literature of mindfulness as stress reduction for business, we’ve seen no suggestion that employees ought to think about — be mindful of — whether they or the company they work for practice right livelihood. Corporate mindfulness takes something that has the capacity to be oppositional, Buddhism, and redefines it. Mindfulness becomes just another aspect of “workforce preparation.” Eventually, we forget that it ever had its own meaning.
While Tyler Cowen’s top 15 percent of earners struggle on with the stress of working with their robot comrades, pausing now and then to focus on their breath, where’s everybody else? For corporate mindfulness, the world looks a lot like it does in Spike Jonze’s “Her”: there are no poor people; everyone is walking around with the latest high-tech gizmo stuck in their ears. In short, mindfulness is for the world of the winners. As for the workers in Amazon warehouses, their only stress reduction will be what it’s always been: a beer after work. And it won’t be a craft beer, either. That’s for their betters.
When my partner Billy Agan first told me the story, he called them “Google Goggles.”
“Matt Hunt keeps coming in to Telegraph wearing those Google Goggles and he won’t take them off. It’s like if someone came in holding a camera at eye level — I’d tell them to put that away, too. But he won’t do it.”
“I would normally avoid someone with them, but I’m at work, he’s staring right at me, and I can’t go anywhere.” Billy has worked in Oakland bars and restaurants since 2009.
“I saw other people getting creeped out by it. And because he was a regular, I thought I could tell him to take them off while he was in there. I didn’t think it was going to be that big of a deal.”
On three occasions before requiring Hunt to leave, Billy had asked him directly to remove the device. I once witnessed this: Hunt walked in wearing Glass, Billy asked that he remove it, Hunt laughed, walked behind the bar, and poured himself a beer. Matt ran Telegraph’s social media accounts in exchange for free food and drink, and took the same liberties afforded to actual employees.
Billy just sighed. Matt later told me, “I thought he was joking.”
A few days later, Billy had had enough. He tells it this way: Matt walked up to the bar to order a beer on a busy Friday night. Billy demanded he remove the Glass or leave. Billy yelled. He stood on top of a box and yelled some more. Matt ignored him until Billy grabbed him by the arm and delivered him to restaurant security, who escorted him out.
Later Matt said that Billy, as staff and not owner, had no right to ask him to remove the Glass or leave, and that while, yes, he was ejected for wearing Glass, Billy assaulted him and called him a “faggot.” Witnesses don’t support the claim, and the police report Matt filed against Billy later that night is essentially blank, but he maintains his version of events.
“I didn’t use any slurs,” says Billy. “I called him an ‘asshole.’”
Two witnesses do recall Matt telling Billy, though, just before he was escorted out:
Over the last year, much has been written about the changes coming to the Bay Area through an influx of new money and influence from a once-again burgeoning technology sector. Symbols of a new, disruptive, tech-driven wealth have come in the unlikely form of, among other things, luxury buses and head-mounted computers.
It would be fair to say that lately, urban techies and their attendant trappings have come under attack. When PR writer Sarah Slocum’s Google Glass sparked an altercation in San Francisco bar Molotov’s last week, her supporters and detractors fell along familiar and well-worn battle lines: “Cyborgs” vs. “Luddites;” “techies” vs. the rest of us.
Slocum went so far as to call the incident a “hate crime” against her. She didn’t start this fire, but Slocum reveled in the resulting attention, and despite claiming not to use her Glass to film unsuspecting bar patrons and staff, she then released a video of herself doing exactly that.
But despite recent tensions, these relations are not strictly new. This has always been a tale of two cities, of with and without. Someone has always felt entitled, someone has always felt aggrieved. And one form or another of an intellectual, “creative” class has always thought its labor and cause a higher kind than that of others.
In the days after Billy asked Matt to leave the bar, Billy fretted over the potential loss of his job. Matt had taken the restaurant’s social accounts hostage, and Billy’s boss was receiving hate mail.
“I had to dance with the devil to get my accounts back. I told him whatever he wanted to hear — that I’d fire Billy, that I’d do whatever,” Telegraph owner John Mardikian tells me. “I tell my employees that if someone or something is making them uncomfortable, they should do what they feel is appropriate. I didn’t have an issue with Google Glass before, but I wasn’t there. I investigated this myself. I wasn’t going to fire Billy just because Matt was embarrassed.”
Tech still thinks it’s the scrappy rebel when it’s looking more like the ruling class: A white man with a $1,500 face computer trying to cost a brown man his minimum wage job.
When Google Glass first became available last spring, the publicity was positive, but the public reaction was mixed. Some said that, whatever its usability or its creepy spy capacity, it just looked too aggressively goofy for the broader public to embrace.
“To be fair, there’s every possibility that Google Glass will change society just as deeply and profoundly as did the Segway, a technologically nifty machine that now serves primarily to identify its owner as a complete dork with far too much money,” Chris Clarke wrote at KCET last year.
Google readily admits that Glass is in a beta stage. While users aren’t trading in their hardware regularly, there are monthly software updates, and the company hopes that a new prescription eyeglass interface will make the technology look more, well, normal.
And if they’re not hoping, they’re politicking. Last week Reuters revealed that Google had hired lobbyists to fight distracted driving legislation in several states that are attempting to ban Glass on the road.
“While Glass is currently in the hands of a small group of Explorers, we find that when people try it for themselves they better understand the underlying principle that it’s not meant to distract but rather connect people more with the world around them,” Google told Reuters.
(This was, word-for-word, the same prepared statement I received from a Google spokesperson when I asked about the technology’s unexpected social consequences.)
To say nothing of their alleged incompetence behind the wheel, Glass “Explorers” have undoubtedly become connected to one another. The devices are still rare, and can’t be readily bought (though purchase codes now go for as low as $25 on Craigslist); that scarcity binds the Explorers together into an exclusive community. Explorers not only use the devices, but develop software and hardware improvements for them, solving one another’s problems. But this specialness also promotes the idea that each user is an ambassador for the product, the kind of relationship one wouldn’t usually expect — or perhaps want — with the manufacturer of one’s consumer technology.
To this end, Google recently released suggested “do’s and don’ts” aimed at, well, connecting those Explorers a little more to the world around them, a world that is still largely bereft of face computers projecting an augmented experience of reality.
There’s a case to be made that wearable technology can connect wearers to their environment more than it isolates them, by providing context that we otherwise wouldn’t see. But often it’s the rest of the world that bears the burden of that. The data one’s face collects could be used and monetized by Google or the third-party applications Glass runs. While that may be the choice of the wearer, there is little to no agency on the other side of the Glass eye prism.
“Wearing Google Glass automatically means that all social interaction you have must be not just on yours, but Google’s terms,” Adrian Chen wrote at Gawker almost a year ago, when we all first cringed in fear.
“Glass has become second nature to me,” says Washington, D.C.-based early adopter and Glass developer Noble Ackerson. “I have yet to have a terrible experience publicly wearing Glass and I have worn the device at least every day for nearly a year.” Ackerson bought his Glass about a month after Chen popularized the term “Glasshole.”
“There are, however, times that I find it is either polite or convenient to park or leave Glass behind. Polite in situations like meetings, interviews and generally gatherings where I wouldn’t need my smart phone either,” he says. “Convenient to use my best judgement and caution on occasions where the street, subway, or bar may warrant some keen situational awareness.”
Glass evangelists point to a Time piece from last spring dismissing fears of these new face-phones as unfounded. The author pointed to late-19th century paranoia that Kodak cameras would chip away at our personal space in public.
But Kodak cameras did play a part in that chipping, as did the next 120 years of advances in camera technology. Those gadgets melted into our lives. There are now whole websites devoted to making cruel fun of embarrassing photos of people taken in public, likely without their permission or knowledge. At best, we take this behavior for granted; at worst, we laugh along with it.
Still, I perhaps naively thought that I could avoid the influence of Glass in my own life.
I was wrong.
“He yelled, ‘All you have to do is take them off!’” Billy’s coworker Zach Keiler-Bradshaw tells me a week later. “Matt was just ignoring it. I saw Billy touch him — it was after a lot. But no slurs were said.”
A week after the incident, the Telegraph restaurant Twitter account went on a hateful anti-tech, anti-gay tirade for several hours. Besides the owner, Matt was the only other party with access to the account, and there’s strong evidence that he sent the tweets. Mardikian pursued legal action against him, and Telegraph now has an explicit “no Glass” policy.
One after another, customers wearing Glass have been more quietly asked to remove them or leave other Oakland bars and San Francisco coffee shops. Other restaurants and cafes across the Bay Area have banned the devices preemptively. These weren’t stunts for media attention, but attempts to head off the kind of disruption other establishments have reluctantly weathered, and to keep camera-shy staff and customers comfortable.
At Nabeel Silmi’s Grand Coffee in San Francisco last week, a customer had to be asked to leave because they refused to take off their Glass.
“We ask that guests, whether using a disposable camera or wearing Glass, ask permission before photographing,” Silmi says. “This gives anyone who does not want to be in the shot a chance to leave the frame.”
When the Glass came in the mail, Billy was more excited than I was to try it out. (There are dozens of the devices available on Craigslist in the Bay Area for less than what Google sells them for, but ours was a generous loan from certified Explorer Molly Crabapple.)
While Billy and I shot videos and searched for cat pictures with our new face gadget, some are applying Glass’ capabilities to more professional endeavors.
North Carolina firefighter Patrick Jackson developed an app that routes relevant information directly to his eyeballs in case of an emergency — everything from maps to urgent communications. For now, Glass isn’t compatible with the oxygen masks firefighters are required to wear while in action, but specialized designs could solve this problem in the future.
Similarly, some doctors are using Glass in order to better serve their patients. Charts and scans can be queued up on one’s eye prism more quickly and painlessly than rifling through papers in front of a nervous patient, and the head-mounted camera can serve as a learning tool by recording a surgeon’s vision while performing complicated tasks. The New York police are currently trying out the new technology, and other police departments are considering it, too. Even some “professional activists” are trying to get their hands on Glass, “a potential force multiplier,” “a powerful weapon.”
But for everyday use, I’m left wondering: What is the point? After several weeks using Glass, I still struggle to see the appeal and the particular specialness. The voice activation isn’t Siri-smart. The prism’s display blurs, and I strain my eyes trying to read the small text. It doesn’t seem less rude to glance up and right to the tiny screen than to look down and away at my smart phone to check an incoming text or email.
In public, I am far more self-conscious. Even in situations where I might like to use the Glass — to read a sign, take a picture — I often decline when I see others staring, looks of trepidation on their faces more than judgment. In close quarters, the voice commands become less convenient and more irritating, a public announcement that I have a $1,500 face gadget and no, it can’t always understand me very well.
I truly feel naked with the technology, but in less of a digital-utopia and more of an emperor’s-new-clothes kind of way.
“In its obviousness, it announces an entitlement. It doesn’t have the decency to realize it’s being creepy,” one Glass user in the tech industry told me on condition of anonymity. “I had no bias against it when I got it. I just realized it’s good for basically nothing except being a jerk.”
Do you know how often you’re surveilled in a single day? It’s probably hard to even count.
Since the National Security Agency’s digital dealings became public last spring by way of Edward Snowden’s leaks, it’s become harder to delineate what our expectation of privacy can be in 2014 America, Bill of Rights or not.
But for all the anxiety about its use as a covert surveillance tool, Glass is not actually very good at that. Snapping pictures is simple and extremely discreet, but when recording a video, the prism illuminates. Any number of other, cheaper cameras would make for a better mode of secret filming.
Still, on its surface, the gadget perpetuates a dynamic that looks like a privileged class — both private citizens and corporations as well as secretive government forces — purchasing the tools to surveil those without means.
“Glass is definitely for now a plaything for a privileged few. And I think that, coupled with how deeply weird and noticeable it is, is what makes it a class divide on your face,” Wired writer Mat Honan tells me. “Glass is a terrible surveillance tool, at least in its current form. Absolutely useless.”
This future is nearly here. The city of Oakland is currently finalizing a plan for a surveillance fusion center that would combine public camera footage with social media streams, license plate reader photos and other forms of data, in a project bankrolled by the Department of Homeland Security.
It’s hardly a paranoid fantasy to imagine that Google Glass users in the city might find their “lifecasting” streams directed into this big data pool. This dystopian vision only grows darker with the potential for tying in facial recognition software.
It takes me a while to build up the courage to take the Glass out into the general population. It’s not just the reactions I fear — I don’t feel like myself.
At Dogwood bar in Oakland, Billy and I take turns wearing the Glass for a couple minutes until we notice how uncomfortable nearby patrons are getting. As we’re leaving, the man working the door asks us what it is. Billy explains.
“Oh, yeah, I’d ask someone to take that off next time.”
Private-public gathering spaces like bars and restaurants play a huge part in our social lives, and present all kinds of potential privacy problems. Many have internal closed-circuit camera systems, ostensibly to protect against theft or to exonerate the establishment in cases of allegedly over-served customers. The San Francisco police department even tried to force bars in the city to film their customers at all times as a requirement for a liquor license, though it has relented on this policy with some privacy-minded bar owners.
These aren’t truly public houses — private owners can dictate their own private rules, presuming they do not discriminate against protected classes, and presuming they still have enough customers who want to play by the rules they set in order to stay in business.
In 2011, one startup attempted to set up dozens of San Francisco bars to livestream their occupants, providing the rest of the public with a view inside a previously closed-door nightlife scene from the comfort of their own homes. The concept didn’t go over well with bar patrons or the American Civil Liberties Union. Less than three years later, Barspace.tv is now a curated selection of search-engine-optimization garbage. The project appears to be dead.
We stop at a taco truck on our way home. I am still wearing the Glass and I am more conscious of it than ever, after midnight in this neighborhood with a median household income of less than $30,000. After we order, an Oakland police patrol car rolls up, and a young cop steps out.
Billy approaches him with confidence.
“Hey, have you seen these before?” Billy holds out the Glass for the officer to inspect, but he looks incredulous.
“No, what is that?”
“It’s Google Glass. It’s like a small computer, that can take photos and video. The NYPD has them now. Maybe you will soon too.”
The cop looks uncomfortable and shuffles backward a half step. “Oh, I hope not.” Then he smirks and gingerly holds up his department-issued chest camera, the kind local police departments are required to use (but don’t always).
While tools can certainly facilitate bad behavior, technology does not breed human monsters.
This is essentially the defense of the aggressive, entitled Glass-wearer: We’ve already decided against privacy, we’ve given it up, there’s nothing left to preserve, and to wish or work toward any other future is to be an enemy of technology’s promise.
In many ways this particular new tech does not necessitate new fundamental relations — it just reveals how deeply we’ve already broken those relations and how much we’ve already lost.
We do not need to redefine etiquette for a new century of innovation — society needs to decide where its values truly lie.
Caught in large-scale government and corporate surveillance dragnets, we often have little to no choice in how we, our images, our data, ourselves, are mined, commodified, used for purposes beyond our control. But in our daily personal relations, in this, perhaps, we still do. At least sometimes. At least I’d like to think so.
When all was said and done, Billy wasn’t fired. He still works at Telegraph, and still worries about what Matt’s claims might do to his reputation. When all was said and done, he doesn’t think he had a choice.
Ray Kurzweil, the director of engineering at Google, believes that the tech behemoth will soon know you even better than your spouse does.
Kurzweil, whom Bill Gates has reportedly called “the best person [he knows] at predicting the future of artificial intelligence,” told the Observer in a recent interview that he is working with Google to create a computer system that will be able to intimately understand human beings.
“I have a one-sentence spec which is to help bring natural language understanding to Google,” the 66-year-old tech whiz told the news outlet of his job. “My project is ultimately to base search on really understanding what the language means.”
“When you write an article, you’re not creating an interesting collection of words,” he continued. “You have something to say and Google is devoted to intelligently organizing and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would want them to read everything on the web and every page of every book, then be able to engage in intelligent dialogue with the user to be able to answer their questions.”
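Kurzweil’s one-sentence spec is easy to state and famously hard to build. As a toy illustration of the gap he is describing (the documents, the synonym table, and every name below are invented for this sketch and bear no relation to Google’s actual systems), compare literal keyword matching with even a crude attempt at matching meaning:

```python
# Toy sketch only: invented data, nothing like Google's real pipeline.
# A keyword engine misses documents that express the query's idea in
# other words; even a tiny hand-built synonym map narrows that gap.

DOCS = {
    "doc1": "physicians warn that sleep loss harms memory",
    "doc2": "the recipe calls for two cups of flour",
    "doc3": "doctors say rest improves recall",
}

# Hypothetical stand-in for real language understanding.
SYNONYMS = {
    "doctors": {"physicians"},
    "rest": {"sleep"},
    "recall": {"memory"},
}

def keyword_search(query):
    # Return docs sharing at least one literal word with the query.
    words = set(query.lower().split())
    return [d for d, text in DOCS.items() if words & set(text.split())]

def meaning_search(query):
    # Expand each query word with its synonyms before matching.
    words = set(query.lower().split())
    for w in list(words):
        words |= SYNONYMS.get(w, set())
    return [d for d, text in DOCS.items() if words & set(text.split())]

query = "doctors say rest helps recall"
print(keyword_search(query))  # ['doc3']: literal overlap only
print(meaning_search(query))  # ['doc1', 'doc3']: the paraphrase is found too
```

A real system would have to learn such associations at web scale rather than from a hand-built table, which is roughly the project Kurzweil is describing.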
In short, the Observer writes, Kurzweil believes that Google will soon “know the answer to your question before you have asked it. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.”
As creepy as this may sound to some, Kurzweil — who has long contended that computers will outsmart us by 2029 — believes that the improvement of artificial intelligence is merely the next step in our evolution.
“[Artificial intelligence] is not an intelligent invasion from Mars,” he told the Montecito Journal in 2012, per a post on his website. “These are brain extenders that we have created to expand our own mental reach. They are part of our civilization. They are part of who we are. So over the next few decades our human-machine civilization will become increasingly dominated by its non-biological component.”
- theguardian.com, Saturday 22 February 2014 08.30 EST
As San Francisco’s class war rages on, contested rumors go a-flying. Google just allegedly snapped up a large building in the Mission District. The Mission, sometimes called “gentrification ground zero,” is a historically Latino community that’s been attracting lots of tech newcomers – and lots of anti-tech protests. One San Francisco-born friend commented wryly about the purchase: “Google just doesn’t get it, do they?”
Before I continue, I’ll note that I love the internet – and some internet-colonizing companies – with an absurd passion. When I served in the Peace Corps, I spent one-sixth of my stipend on smartphone airtime because there was no local wired internet. My Peace Corps friends used to (nicely) tease me about it: After an accident destroyed most of my possessions, my Country Director first asked whether my electronics had been spared.
I am now privileged to live in San Francisco and work for an internet company. From the home I share with a nest of techies in the Mission, I have a front-row seat on the clash between “high tech” and “local San Francisco.”
Nowadays, I pay a lot less for my internet fix. But I still recall frameworks I learned from the Peace Corps and other social justice organizations. With humility – and with the understanding that none of us have The One Right Answer – I want to outline what I see here and offer thoughts about moving forward.
We already know that complex local problems need better handling. The gentrification juggernaut has led to enormous financial hardship, including unjust evictions. Local writer Rebecca Solnit recently described one eviction map as “a map of bruises, like a city being punched by money.” Activists paste posters on doors and sidewalks that say: “A family has been evicted from this home.”
Two blocks from my home, a mural was recently constructed: First, the wall was papered with activists’ family eviction posters. Then a silhouetted funeral procession was painted on top of the posters, with dark pallbearers carrying a coffin labeled “La Misión.”
Other activists have turned to angrier symbolism, picketing the Twitter building on the day of the company’s $25bn initial public offering and publicly destroying effigies of Google’s luxury buses.
Gentrification is just a single, local example of how High Tech is starting to feel like a “public enemy.” And “public enemy” is a tough pill to swallow for a largely optimistic industry, where “change the world” is such a common phrase that it’s even mocked and used ironically.
I talk regularly to tech industry people who feel shocked by San Franciscan anger, who are struggling to figure out how to feel and what to do. A lot are trying to understand how to help. Yet signs of this clash surfaced long before any tech buses got boarded by shouting protesters. And Bay Area High Tech would have seen the signs sooner if it weren’t so out of touch with the communities surrounding it.
I’m not saying that everyone in local High Tech has been paying zero attention to the non-tech world. Just most of us. The buses that ferry employees of Google, Apple, Genentech and other tech corporations are potent symbols for several reasons, but the biggest thing that makes them hate-able is that they are so very exclusive. They are visibly luxurious and can only be used by the tech elite. The buses highlight class and culture separations between riders and other San Franciscans.
In the Peace Corps, the main lesson that was hammered into us was that we wouldn’t do any good without understanding and participating in the communities we served. You can’t just move in and say you’re going to Change The World. If you try, you rarely accomplish much, and also you look like an arrogant jerk.
What would it be like if Bay Area High Tech, as a community, started thinking about improving its members’ cultural awareness before Changing The World? I don’t just mean individuals: How could companies take this on, beyond a few token donations?
If we can improve our connection with the larger communities around us, then maybe we’ll spot cultural problems soon enough to mitigate them, and maybe we’ll see larger problems we can help with. (For a company like Google or Facebook, whose product is used by just about everybody, this could even be seen as a market-centered approach!)
Real community engagement is unbelievably hard; if it were easy, more people would do it. Simply finding a starting point is hard. But one place we might learn from is the SF-based organization Code for America, which is a bit like a tech Peace Corps (although it primarily serves American cities). CFA exposes its year-long Fellows directly to the workings of various governments and local communities.
Crucially, Fellows gain months of exposure before they even start thinking about solutions they can build. As Catherine Bracy, CFA’s Director for Community Organizing, told me: “The point is to start with the problem and not with the technology.”
“My fellowship year clarified more problems than solutions,” writes one Code for America Fellow in a blog post. I felt encouraged reading that, because it exalts a cautious learning process.
As far as I know, the CFA Fellowship is the only program of its kind, and it’s overwhelmed with applications. Hundreds of people apply for 30 year-long Fellowship slots. So CFA also runs a program it calls the Code for America Brigade, which helps tech folks learn more about and contribute to local communities for a few hours every week. Code for America is not the only game in town, but it’s worth knowing about.
I often think about how lucky I am to be part of this vibrant, gorgeous, extraordinary city. I also believe myself lucky to be in this industry. Techies work hard, and sometimes create truly great things. But this industry could also be a source of great social change – if we’re able to listen carefully to the worlds we’re changing.
Narcissistic, Machiavellian, psychopathic, and sadistic.
In the past few years, the science of Internet trollology has made some strides. Last year, for instance, we learned that by hurling insults and inciting discord in online comment sections, so-called Internet trolls (who are frequently anonymous) have a polarizing effect on audiences, leading to politicization, rather than deeper understanding of scientific topics.
That’s bad, but it’s nothing compared with what a new psychology paper has to say about the personalities of trolls themselves. The research, conducted by Erin Buckels of the University of Manitoba and two colleagues, sought to directly investigate whether people who engage in trolling are characterized by personality traits that fall in the so-called Dark Tetrad: Machiavellianism (willingness to manipulate and deceive others), narcissism (egotism and self-obsession), psychopathy (the lack of remorse and empathy), and sadism (pleasure in the suffering of others).
It is hard to overstate the results: The study found correlations, sometimes quite strong, between these traits and trolling behavior. What’s more, it also found a relationship between all Dark Tetrad traits (except for narcissism) and the overall time an individual spent commenting on the Internet each day.
In the study, trolls were identified in a variety of ways. One was by simply asking survey participants what they “enjoyed doing most” when on online comment sites, offering five options: “debating issues that are important to you,” “chatting with others,” “making new friends,” “trolling others,” and “other.” The researchers then matched these commenting preferences against responses to questions designed to identify Dark Tetrad traits.
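The analysis behind such comparisons boils down to a standard correlation between two sets of survey scores. Here is a minimal sketch of that kind of calculation, using invented numbers rather than the study’s data:

```python
# Minimal sketch of a trait-behavior correlation. The scores below are
# invented for illustration; they are not the study's data.
from statistics import mean, stdev

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical 1-5 survey responses from ten respondents.
sadism_scores  = [1, 2, 1, 3, 2, 4, 5, 4, 5, 3]
trolling_enjoy = [1, 1, 2, 2, 3, 4, 4, 5, 5, 2]

print(round(pearson_r(sadism_scores, trolling_enjoy), 2))  # 0.83: strongly positive
```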
To be sure, only 5.6 percent of survey respondents actually specified that they enjoyed “trolling.” By contrast, 41.3 percent of Internet users were “non-commenters,” meaning they didn’t like engaging online at all. So trolls are, as has often been suspected, a minority of online commenters, and an even smaller minority of overall Internet users.
The researchers conducted multiple studies, drawing samples from Amazon’s Mechanical Turk as well as from college students, to try to understand why the act of trolling seems to attract this type of personality. They even constructed their own survey instrument, which they dubbed the Global Assessment of Internet Trolling, or GAIT, containing the following items:
I have sent people to shock websites for the lulz.
I like to troll people in forums or the comments section of websites.
I enjoy griefing other players in multiplayer games.
The more beautiful and pure a thing is, the more satisfying it is to corrupt.
Yes, some people actually say they agree with such statements. And again, doing so was correlated with sadism in its various forms, with psychopathy, and with Machiavellianism. Overall, the authors found that the relationship between sadism and trolling was the strongest, and that indeed, sadists appear to troll because they find it pleasurable. “Both trolls and sadists feel sadistic glee at the distress of others,” they wrote. “Sadists just want to have fun … and the Internet is their playground!”
The study comes as websites, particularly at major media outlets, are increasingly weighing steps to rein in trollish behavior. Last year Popular Science did away with its comments sections completely, citing research on the deleterious effects of trolling, and YouTube also moved to discourage trolling.
But study author Buckels actually isn’t sure that fix is a realistic one. “Because the behaviors are intrinsically motivating for sadists, comment moderators will likely have a difficult time curbing trolling with punishments (e.g., banning users),” she said by email. “Ultimately, the allure of trolling may be too strong for sadists, who presumably have limited opportunities to express their sadistic interests in a socially-desirable manner.”
Chris Mooney is the author of The Republican War on Science and, with Sheril Kirshenbaum, Unscientific America: How Scientific Illiteracy Threatens Our Future.
Glenn Greenwald reports at his new independent news site The Intercept that, according to a former drone operator for the military’s Joint Special Operations Command (JSOC), the NSA often identifies targets based on controversial metadata analysis and cell-phone tracking technologies. In one tactic, the NSA “geolocates” the SIM card or handset of a suspected terrorist’s mobile phone, enabling the CIA and U.S. military to conduct night raids and drone strikes to kill or capture the individual in possession of the device.
The technology has been credited with taking out terrorists and networks of people facilitating improvised explosive device attacks against U.S. forces in Afghanistan. But the former operator also says that innocent people have “absolutely” been killed as a result of the NSA’s increasing reliance on the tactic. One problem is that targets have grown aware of the NSA’s reliance on geolocation and have moved to thwart it. Some have as many as 16 different SIM cards associated with their identity within the High Value Target system, while top Taliban leaders, knowing of the NSA’s targeting method, have purposely and randomly distributed SIM cards among their units in order to elude their trackers. As a result, even when the agency correctly identifies and targets a SIM card belonging to a terror suspect, the phone may actually be carried by someone else, who is then killed in a strike.
The Bureau of Investigative Journalism, which uses a conservative methodology to track drone strikes, estimates that at least 2,400 people in Pakistan, Yemen, and Somalia have been killed by unmanned aerial assaults under the Obama administration. Greenwald’s source says he has come to believe that the drone program amounts to little more than death by unreliable metadata. “People get hung up that there’s a targeted list of people. It’s really like we’re targeting a cell phone. We’re not going after people – we’re going after their phones, in the hopes that the person on the other end of that missile is the bad guy.”
Whether or not Obama is fully aware of the errors built into the program of targeted assassination, he and his top advisers have repeatedly made clear that the president himself directly oversees the drone operation and takes full responsibility for it.
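A back-of-the-envelope sketch makes the operator’s point concrete. Under the simplest possible assumption (the numbers below are invented for illustration, not drawn from the reporting), a SIM card that circulates among a unit is with the intended target only a fraction of the time:

```python
# Back-of-the-envelope illustration with invented numbers. If a tracked SIM
# circulates among a group rather than staying with the target, a strike on
# the SIM's location reaches the target only when he happens to carry it.

def chance_sim_is_with_target(carriers):
    # Simplifying assumption: the SIM is equally likely to be with any carrier.
    return 1.0 / carriers

for n in (1, 4, 16):
    pct = chance_sim_is_with_target(n)
    print(f"{n:>2} possible carriers -> {pct:.0%} chance the target has the phone")
```

On that assumption, a strike keyed to a card shared among sixteen people would reach the intended target roughly one time in sixteen.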
PUNTA CANA–Costin Raiu is a cautious man. He measures his words carefully, says exactly what he means, and is not given to hyperbole. Raiu is the driving force behind much of the intricate research into APTs and targeted attacks that Kaspersky Lab’s Global Research and Analysis Team has been doing for the last few years, and he has first-hand knowledge of the depth and breadth of the tactics that top-tier attackers are using.
So when Raiu says he conducts his online activities under the assumption that his movements are being monitored by government hackers, it is not meant as a scare tactic. It is a simple statement of fact.
“I operate under the principle that my computer is owned by at least three governments,” Raiu said during a presentation he gave to industry analysts at the company’s analyst summit here on Thursday.
The comment drew some chuckles from the audience, but Raiu was not joking. Security experts for years have been telling users–especially enterprise users–to assume that their network or PC is compromised. The reasoning is that if you assume you’re owned then you’ll be more cautious about what you do. It’s the technical equivalent of telling a child to behave as if his mother is watching everything he does. It doesn’t always work, but it can’t hurt.
Raiu and his fellow researchers around the world are obvious targets for highly skilled attackers of all stripes. They spend their days analyzing new attack techniques and working out methods for countering them. Intelligence agencies, APT groups and cybercrime gangs all would love to know what researchers know and how they get their information. Just about every researcher has a story about being attacked or compromised at some point. It’s an occupational hazard.
But one of the things that the events of the last year have made clear is that the kind of paranoia and caution that Raiu and others who draw the attention of attackers employ as a matter of course should now be the default setting for the rest of us, as well. As researcher Claudio Guarnieri recently detailed, the Internet itself is compromised. Not this bit or that bit. The entire network. We now know that intelligence agencies have spent the last decade systematically penetrating virtually every portion of the Internet and are conducting surveillance and exploitation on a scale that a year ago would have seemed inconceivable to all but the most paranoid among us.
Email? Broken. Mobile communications? Broken. Web traffic? Really broken. Crypto? So, so broken.
It would be understandable, even natural, for most casual observers to have grown so completely overwhelmed by the inundation of stories about government surveillance and exploitation techniques that they tuned it out months ago. Why get worked up about something you can’t change? It’s like getting mad at the weather.
And that’s exactly the attitude that attackers want. Indeed, they depend on it. Complacency and indifference to clear threats are their lifeblood. Attackers can’t operate effectively without them.
The best response, of course, isn’t panic or indulging the urge to throw your laptop out the window and drop off the grid, as tempting as that might be. Rather, the best course of action is to follow Raiu’s simple advice. You’re being watched at all times; act accordingly.