As Silicon Valley money continues to pour into San Francisco, wealthy tech workers are displacing longtime residents at record pace. The conflict, referred to by former mayor Willie Brown as “a war brewing in the streets,” was once viewed as a typical case of gentrification. However, it now appears many of these evictions have been carried out by a small group of landowners who, according to California Secretary of State filings, may be linked to a single company.
The activist group Anti-Eviction Mapping has identified 12 landlords it dubs “serial evictors” for their repeated use of no-fault evictions, which have tripled since 2009. By tracing their records through CorporationWiki, Newsweek found that at least seven are associated with LLCs bearing names like “CA1 REAL ESTATE,” “CAS REAL ESTATE” and “CA1 INVESTMENT.”
In all, Newsweek found a dozen similarly named LLCs linked to landlords and companies involved in the evictions. Because the LLCs are private, it is difficult to confirm whether they are in fact all linked to the same company, or what their exact role in the evictions, if any, might be. “It’s so weird,” says Erin McElroy of Anti-Eviction Mapping. “We can’t figure it out.”
The evictions are carried out using a controversial California renting law known as the Ellis Act, which was written to allow landowners to evict tenants if they chose to exit the rental market. According to Walter Baczkowski, the CEO of the San Francisco Association of Realtors, the act is necessary to ensure that no property owner can be compelled to be a landlord.
Baczkowski says the rising costs of living in San Francisco coupled with strict rent control laws have forced many longtime landlords to evict tenants and sell their buildings in order to avoid bankruptcy. “They keep saying everyone’s a speculator,” he says, referring to tenant rights activists now pushing California to reform the law. “Now they want to restrict more of the private owners’ rights.”
If, in fact, many of these evictions are linked to–and receiving investments from–a single company, it would seem to refute Baczkowski’s argument that longtime building owners in financial straits are responsible for the increase in Ellis evictions.
Dean Preston, the founder of the advocacy group Tenants Together, called Baczkowski’s bankruptcy argument “a fiction.” He points out that half of the Ellis evictions are carried out within a year after a property is purchased, indicating that it isn’t longtime owners bailing out; it’s speculators looking to turn a quick profit by exploiting a legal loophole that removes the city’s rent-control constraints.
The result is that San Francisco’s already scant housing becomes unaffordable to the city’s older, less affluent residents. “I desperately want to stay in my neighborhood,” says Theresa Flandrich, a longtime resident who was recently served with an Ellis eviction. Her only hope, she says, is to find an older landlord who may have a vacancy. “What’s important to them is the neighborhood.”
Jan Koum and Brian Acton, the founders of the messaging software WhatsApp, have ample reason to celebrate this week’s 25th anniversary of the Internet’s World Wide Web. The pair have just become billionaires.
Facebook’s Mark Zuckerberg, the youngest billionaire ever, gave Koum and Acton that distinction last month when he gobbled up WhatsApp for $19 billion. The three of them have lots of online company in the global billionaire club. Forbes now counts 120 other high-tech whizzes with billion-dollar fortunes.
The Internet has become, in effect, the fastest billionaire-minting machine in world history. Should we consider this incredible concentration of wealth an outcome preordained? Did the last 25 years of online history have to leave us so much more unequal? We ask that question in this week’s Too Much.
And what about the future? How might we change the online world to help create a more equal real world? We ask that question, too.
GREED AT A GLANCE
New European Union rules adopted last year limit banker bonuses to no more than triple their base salary. Philippe Lamberts, a Belgian Green Party lawmaker, led the drive for that cap, and now he’s struggling to stop the British government from letting UK bankers sidestep the EU’s modest new pay restrictions. One such sidestep: Banking giant HSBC is now paying its CEO Stuart Gulliver an extra $53,500 every month as an “allowance.” Lamberts wants the EU to take the UK to court. Barclays CEO Antony Jenkins, for his part, is defending his bank’s bonus culture. Nearly 500 power suits at Barclays, half of them in the United States, are taking home at least $1.6 million a year. Paying them any less, says Jenkins, would force his bank into a “death spiral.” To do business in America, adds the CEO, we must “reconcile ourselves to pay high levels of compensation.”
Don’t talk to Miami maritime lawyer Jim Walker about greedy bankers. The truly greedy, says Walker, are the people running cruise lines and incorporating their operations offshore to avoid U.S. taxes and wage laws. The greediest cruise character of them all? That might be Carnival Cruise chair Micky Arison. He’s now selling off 10 million of his Carnival shares, a sale that figures to bring in $395 million and still leave his family holding over $6 billion in Carnival stock. Arison’s share sale began just as passengers who were left adrift on a faulty Carnival cruise ship last year were testifying in the lawsuit they’ve filed against the company. That incident subjected over 4,200 passengers and crew to five days of overflowing toilets and rotting food. Carnival is dismissing the suit as “an opportunistic attempt to benefit financially” from “alleged emotional distress.”
You won’t find any billionaires standing in line to get their family heirlooms appraised on the hit public TV series Antiques Roadshow. But the world’s ultra rich, luxury editor Tara Loader Wilkinson noted last week, are definitely “going gaga for antiques.” The average billionaire, calculates a new billionaire census from the Swiss bank UBS and researchers at the Singapore-based Wealth-X, holds $14 million worth of antiques and collectibles. Why are so many uber rich hungering for antiques? French antiques expert Mikael Kraemer notes that “anyone with enough money can buy a jet.” Not everyone, he adds, has “what sets one billionaire apart from another”: enough “culture and knowledge” to find and buy something like an 18th century antique royal chandelier.
Quote of the Week
“Do you recall a time in America when the income of a single school teacher or baker or salesman or mechanic was enough to buy a home, have two cars, and raise a family? I remember.”
Robert Reich, former U.S. labor secretary, The Great U-Turn, March 6, 2014
PETULANT PLUTOCRAT OF THE WEEK
Fracking can be a dirty business. Big Energy CEOs don’t mind. Can’t let those environmentalists endanger our energy security, they like to insist. Unless the environment at risk happens to be their own. ExxonMobil CEO Rex Tillerson has joined a lawsuit that’s trying to stop the construction of a 160-foot water tower, intended to supply fracking operations, near his manse north of Dallas. Tillerson isn’t talking about his lawsuit. But an Exxon flack says his boss doesn’t object to the tower “for its potential use for water and gas operations for fracking.” He’s just upset because the tower would be “much taller” than originally proposed. If the lawsuit fails, Tillerson might have to buy the tower site himself to prevent the fracking eyesore. He can afford it. His 2012 Exxon take-home: $40.3 million.
IMAGES OF INEQUALITY
The promoters of the new “Wealth Badge” are either cynically exploiting a new luxury niche or acutely exposing the cultural depravity of our unequal times. Their new Web site offers the affluent a metal pin that reads “Because I can.” The cost: $5,000. Explains the Wealth Badge pitch: “The idea is simple: If you buy something just because you can, you are truly rich.” The site claims to have sold 61 badges — and features photos of privileged people showing them off. The pins have begun drawing media play. But no one has so far answered the basic question: Is the Wealth Badge crew trying to make money or a point?
PolluterWatch/ This Greenpeace site zeroes in on the “pollutocrats” now “poisoning the debate about policies that would lower our greenhouse gas emissions and kickstart a clean energy revolution.”
PROGRESS AND PROMISE
Few people have contributed as much to our online world as Jaron Lanier, the computer scientist who helped pioneer — and label — what we call “virtual reality.” Lanier has been wondering of late about his fellow contributors, those hundreds of millions of Internet users who donate — for free — the information that a tiny cohort of tycoons has been able to crunch into billion-dollar fortunes. In his latest book, Lanier envisions a “universal micropayment system” that pays people who go online for whatever information of value their online clicks may create, “no matter what kind of information is involved or whether a person intended to provide it or not.” Such a system, says Lanier, would help us “see a less elite distribution of economic benefits.”
The world’s “ultra high net worth” crowd — individuals worth at least $30 million — now numbers 167,669, says a new Knight Frank report. Their total wealth: $20.1 trillion in 2013, almost half as much as the combined net worth of the 4.2 billion global adults who hold less than $100,000 in wealth.
A Thought for the Web’s Silver Anniversary
Let’s learn from our not-so-distant past and share the gold. New technologies don’t have to bring us new inequalities.
Exactly 25 years ago this week the British computer scientist Tim Berners-Lee conceptually “invented” the World Wide Web — and began a process that would rather rapidly make the online world an essential part of our daily lives.
By 1995, 14 percent of Americans were surfing the Web. The level today: 87 percent. And among young adults, the Pew Research Center notes in a just-published silver anniversary report, the Internet has reached “near saturation.” Some 97 percent of Americans 18 to 29 are now going online.
Americans young and old alike are using the Web to work wonders few people 25 years ago could have ever imagined. We’re talking face-to-face with people thousands of miles away. We’re finding soulmates who share our passions and problems. We’re organizing political movements to change the world.
Life with the Web has become, for hundreds of millions of us, substantially richer. Not literally richer, of course. The same 25 years that have seen the Web explode into our consciousness have seen most of us struggle to stay even economically. The Internet and inequality have grown together.
Tim Berners-Lee never saw this inequality coming. The ground-breaking research he published on March 12, 1989, the paper that proposed the system that became the Web, carried no price tag. Berners-Lee would go on to release the code for his system for free. He didn’t invent the Web to get rich.
But others certainly have become rich via the Web. Fabulously rich. Forbes magazine last week released its annual list of global billionaires. Some 123 of them, Forbes calculates, owe their fortunes to high-tech ventures. The top 15 of these high-tech billionaires hold a collective $382 billion in personal net worth.
Numbers like these don’t particularly bother — or alarm — many of today’s economists. Grand new technologies, their conventional wisdom holds, always bring forth grand new personal fortunes for the entrepreneurs who lead the way.
In the 19th century, points out this standard narrative of American economic progress, the coming of the railroads dotted our landscape with the fortunes of railroad tycoons. In the early 20th century, the new automobile age created huge piles of wealth for car makers like Henry Ford and the oilmen who supplied the juice that kept his auto engines humming along.
Why should the Internet age, mainstream economists wonder, be any different? A new technology comes along that alters the fabric of daily life. That new technology gives rise to a new rich. The one outcome naturally follows the other. No need to get bent out of shape by the resulting inequality.
But epochal new technology doesn’t always automatically generate grand new fortunes. The prime example from our relatively recent past: television.
TV burst onto the American scene even more rapidly — and thoroughly — than the Internet. In 1948, only 1 percent of American households owned a TV. Within seven years, televisions graced 75 percent of American homes.
These TV sets didn’t just drop down into those homes. They had to be designed, manufactured, packaged, distributed, marketed. Programming had to be produced. Imaginations had to be captured. All of this demanded an enormous outlay of entrepreneurial energy.
But this outlay would produce no jaw-dropping grand fortunes, no billionaires, even after adjusting for inflation. That would be no accident. The American people, by the 1950s, had put in place a set of economic rules that made the accumulation of grand new private fortunes almost impossible.
Taxes played a key role here. Income over $400,000 faced a 91 percent tax rate throughout the 1950s. Regulations played an important role as well. In television’s early heyday, for instance, government regs limited how many commercials could run on children’s TV programming. TV’s original corporate execs could only squeeze so much out of their new medium.
And television’s early kingpins couldn’t squeeze their workers all that much either. Most of their employees, from the workers who manufactured TV sets to the technicians who staffed broadcast studios, belonged to unions. TV’s early movers and shakers had to share the wealth their new medium was creating.
Today’s Internet movers and shakers, by contrast, have to share nothing. In an America where less than 7 percent of private-sector workers carry union cards, online corporate giants seldom ever need to bargain with their employees.
In a deregulated U.S. economy, meanwhile, these Internet kingpins face precious few public-interest rules that keep them from charging whatever the market can bear — and rigging markets to squeeze out even more.
And taxes? Today’s Internet billionaires face tax rates that run well less than half the rates that early TV kingpins faced.
We can’t — and shouldn’t — fault Tim Berners-Lee for any of this. He freely shared, after all, his invention with the world.
“I wanted to build a creative space,” Berners-Lee observed in an interview a few years ago, “something like a sandpit where everyone could play together.”
Sherle Schwenninger and Samuel Sherraden, The U.S. Economy After The Great Recession, New America Foundation, Washington, D.C., March 4, 2014.
Need to better understand how the Great Recession — and the political responses to it — have played out? This no-nonsense set of slides brings together, in one place, the key trends that have defined the U.S. economy since the Great Recession hit in 2008. Just a few of the report’s many choice tidbits . . .
Good times at the top: From 2009 to 2012, America’s top 1 percent incomes grew by 31.4 percent. Bottom 99 percent incomes rose all of 0.4 percent.
Shrinking returns to labor: From 2007’s fourth quarter to 2013’s third, the labor compensation share of national income declined from 64 percent to 61 percent. If this labor share of national income had remained at the 2007 level, American workers would have earned $520 billion more in 2013 than they actually did.
Enter the “plutonomy”: The U.S. economy is revolving ever more around consumption by the rich. In 2012 the top 5 percent of American income earners accounted for 38 percent of domestic consumption, up from 28 percent in 1995.
Briton Tim Berners-Lee, the inventor of the world wide web, at the opening ceremony of the London 2012 Olympic Games. Photograph: Wang Lili/xh/Xinhua Press/Corbis
1 The importance of “permissionless innovation”
The thing that is most extraordinary about the internet is the way it enables permissionless innovation. This stems from two epoch-making design decisions made by its creators in the early 1970s: that there would be no central ownership or control; and that the network would not be optimised for any particular application: all it would do is take in data-packets from an application at one end, and do its best to deliver those packets to their destination.
It was entirely agnostic about the contents of those packets. If you had an idea for an application that could be realised using data-packets (and were smart enough to write the necessary software) then the network would do it for you with no questions asked. This had the effect of dramatically lowering the bar for innovation, and it resulted in an explosion of creativity.
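To make the packet-agnosticism point concrete, here is a minimal sketch in Python using the standard socket library. The local port and the two payloads are arbitrary illustrations invented for this example; the point is only that the network layer delivers bytes without caring which application they belong to.

```python
import socket

HOST, PORT = "127.0.0.1", 9999   # hypothetical local endpoint for the demo

# A stand-in "network endpoint": it simply receives whatever bytes arrive.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind((HOST, PORT))

# Two completely different "applications" hand packets to the same network.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"GET /index.html HTTP/1.0", (HOST, PORT))            # looks like a web request
client.sendto(b'{"call": "video", "codec": "vp8"}', (HOST, PORT))   # looks like something else entirely

for _ in range(2):
    data, addr = server.recvfrom(4096)
    # The network delivered the packet without inspecting it; only the
    # applications at each end assign the bytes a meaning.
    print(f"delivered {len(data)} bytes from {addr}: {data!r}")

client.close()
server.close()
```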
What the designers of the internet created, in effect, was a global machine for springing surprises. The web was the first really big surprise and it came from an individual – Tim Berners-Lee – who, with a small group of helpers, wrote the necessary software and designed the protocols needed to implement the idea. And then he launched it on the world by putting it on the Cern internet server in 1991, without having to ask anybody’s permission.
2 The web is not the internet
Many people (including some who should know better) often confuse the two. But Google is not the internet, and neither is Facebook. Think of the net as analogous to the tracks and signalling of a railway system, and applications – such as the web, Skype, file-sharing and streaming media – as kinds of traffic which run on that infrastructure. The web is important, but it’s only one of the things that runs on the net.
3 The importance of having a network that is free and open
The internet was created by government and runs on open source software. Nobody “owns” it. Yet on this “free” foundation, colossal enterprises and fortunes have been built – a fact that the neoliberal fanatics who run internet companies often seem to forget. Berners-Lee could have been as rich as Croesus if he had viewed the web as a commercial opportunity. But he didn’t – he persuaded Cern that it should be given to the world as a free resource. So the web in its turn became, like the internet, a platform for permissionless innovation. That’s why a Harvard undergraduate was able to launch Facebook on the back of the web.
4 Many of the things that are built on the web are neither free nor open
Mark Zuckerberg was able to build Facebook because the web was free and open. But he hasn’t returned the compliment: his creation is not a platform from which young innovators can freely spring the next set of surprises. The same holds for most of the others who have built fortunes from exploiting the facilities offered by the web. The only real exception is Wikipedia.
5 Tim Berners-Lee is Gutenberg’s true heir
In 1455, with his revolution in printing, Johannes Gutenberg single-handedly launched a transformation in mankind’s communications environment – a transformation that has shaped human society ever since. Berners-Lee is the first individual since then to have done anything comparable.
6 The web is not a static thing
The web we use today is quite different from the one that appeared 25 years ago. In fact it has been evolving at a furious pace. You can think of this evolution in geological “eras”. Web 1.0 was the read-only, static web that existed until the late 1990s. Web 2.0 is the web of blogging, Web services, mapping, mashups and so on – the web that American commentator David Weinberger describes as “small pieces, loosely joined”. The outlines of web 3.0 are only just beginning to appear as web applications that can “understand” the content of web pages (the so-called “semantic web”), the web of data (applications that can read, analyse and mine the torrent of data that’s now routinely published on websites), and so on. And after that there will be web 4.0 and so on ad infinitum.
7 Power laws rule OK
In many areas of life, the law of averages applies – most things are statistically distributed in a pattern that looks like a bell. This pattern is called the “normal distribution”. Take human height. Most people are of average height, and there are relatively few very tall or very short people. But very few – if any – online phenomena follow a normal distribution. Instead they follow what statisticians call a power law distribution, which is why a very small number of the billions of websites in the world attract the overwhelming bulk of the traffic while the long tail of other websites has very little.
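The contrast is easy to see in a toy simulation. In the Python sketch below, the parameters are made-up assumptions (heights drawn from a bell curve with mean 170 cm, site traffic assumed to fall off as 1/rank^1.1): the tallest person barely exceeds the average, while the top 1 percent of sites soak up most of the traffic.

```python
import random

random.seed(42)

# "Heights": a bell curve, clustered around the average (assumed mean 170 cm, sd 8 cm).
heights = [random.gauss(170, 8) for _ in range(100_000)]
average = sum(heights) / len(heights)
print(f"tallest person is only ~{max(heights) / average:.2f}x the average height")

# "Website traffic": a rank-based power law, traffic proportional to 1 / rank**alpha.
alpha = 1.1                      # assumed exponent for the sketch
n_sites = 100_000
traffic = [1 / rank**alpha for rank in range(1, n_sites + 1)]
top_1_percent = sum(traffic[: n_sites // 100]) / sum(traffic)
print(f"top 1% of sites attract ~{top_1_percent:.0%} of all traffic")
```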
8 The web is now dominated by corporations
Despite the fact that anybody can launch a website, the vast majority of the top 100 websites are run by corporations. The only real exception is Wikipedia.
9 Web dominance gives companies awesome (and unregulated) powers
Take Google, the dominant search engine. If a Google search doesn’t find your site, then in effect you don’t exist. And this will get worse as more of the world’s business moves online. Every so often, Google tweaks its search algorithms in order to thwart those who are trying to “game” them in what’s called search engine optimisation. Every time Google rolls out the new tweaks, however, entrepreneurs and organisations find that their online business or service suffers or disappears altogether. And there’s no real comeback for them.
10 The web has become a memory prosthesis for the world
Have you noticed how you no longer try to remember some things because you know that if you need to retrieve them you can do so just by Googling?
11 The web shows the power of networking
The web is based on the idea of “hypertext” – documents in which some terms are dynamically linked to other documents. But Berners-Lee didn’t invent hypertext – Ted Nelson did in 1963 and there were lots of hypertext systems in existence long before Berners-Lee started thinking about the web. But the existing systems all worked by interlinking documents on the same computer. The twist that Berners-Lee added was to use the internet to link documents that could be stored anywhere. And that was what made the difference.
12 The web has unleashed a wave of human creativity
Before the web, “ordinary” people could publish their ideas and creations only if they could persuade media gatekeepers (editors, publishers, broadcasters) to give them prominence. But the web has given people a global publishing platform for their writing (Blogger, WordPress, Typepad, Tumblr), photographs (Flickr, Picasa, Facebook), audio and video (YouTube, Vimeo); and people have leapt at the opportunity.
13 The web should have been a read-write medium from the beginning
Berners-Lee’s original desire was for a web that would enable people not only to publish, but also to modify, web pages, but in the end practical considerations led to the compromise of a read-only web. Anybody could publish, but only the authors or owners of web pages could modify them. This led to the evolution of the web in a particular direction and it was probably the factor that guaranteed that corporations would in the end become dominant.
14 The web would be much more useful if web pages were machine-understandable
Web pages are, by definition, machine-readable. But machines can’t understand what they “read” because they can’t do semantics. So they can’t easily determine whether the word “Casablanca” refers to a city or to a movie. Berners-Lee’s proposal for the “semantic web” – ie a way of restructuring web pages to make it easier for computers to distinguish between, say, Casablanca the city and Casablanca the movie – is one approach, but it would require a lot of work upfront and is unlikely to happen on a large scale. What may be more useful are increasingly powerful machine-learning techniques that will make computers better at understanding context.
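As a rough illustration of what machine-understandable structure looks like, here is a sketch using schema.org-style types expressed as JSON-LD-shaped Python dictionaries. The two records are invented for the example; the point is that an explicit type lets software distinguish the two Casablancas without guessing.

```python
import json

# Two invented records describing different things that share the name "Casablanca".
city = {
    "@context": "https://schema.org",
    "@type": "City",
    "name": "Casablanca",
    "containedInPlace": {"@type": "Country", "name": "Morocco"},
}
movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Casablanca",
    "datePublished": "1942",
}

def describe(entity):
    # With an explicit @type, software never has to guess the semantics
    # that plain prose on a web page leaves implicit.
    return f'"{entity["name"]}" is a {entity["@type"]}'

for entity in (city, movie):
    print(describe(entity))
    print(json.dumps(entity, indent=2))
```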
15 The importance of killer apps
A killer application is one that makes the adoption of a technology a no-brainer. The spreadsheet was the killer app for the first Apple computer. Email was the first killer app for the Arpanet – the internet’s precursor. The web was the internet’s first killer app. Before the web – and especially before the first graphical browser, Mosaic, appeared in 1993 – almost nobody knew or cared about the internet (which had been running since 1983). But after the web appeared, suddenly people “got” it, and the rest is history.
16 WWW is linguistically unique
Well, perhaps not, but Douglas Adams claimed that it was the only set of initials that took longer to say than the thing it was supposed to represent.
17 The web is a startling illustration of the power of software
Software is pure “thought stuff”. You have an idea; you write some instructions in a special language (a computer program); and then you feed it to a machine that obeys your instructions to the letter. It’s a kind of secular magic. Berners-Lee had an idea; he wrote the code; he put it on the net, and the network did the rest. And in the process he changed the world.
18 The web needs a micro-payment system
In addition to being just a read-only system, the other initial drawback of the web was that it did not have a mechanism for rewarding people who published on it. That was because no efficient online payment system existed for securely processing very small transactions at large volumes. (Credit-card systems are too expensive and clumsy for small transactions.) But the absence of a micro-payment system led to the evolution of the web in a dysfunctional way: companies offered “free” services that had a hidden and undeclared cost, namely the exploitation of the personal data of users. This led to the grossly tilted playing field that we have today, in which online companies get users to do most of the work while only the companies reap the financial rewards.
19 We thought that the HTTPS protocol would make the web secure. We were wrong
HTTP is the protocol (agreed set of conventions) that normally regulates conversations between your web browser and a web server. But it’s insecure because anybody monitoring the interaction can read it. HTTPS (stands for HTTP Secure) was developed to encrypt in-transit interactions containing sensitive data (eg your credit card details). The Snowden revelations about US National Security Agency surveillance suggest that the agency may have deliberately weakened this and other key internet protocols.
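A minimal sketch of the difference, using Python’s standard library: the first request travels as readable plaintext, while the second wraps the same conversation in TLS so an observer on the path sees only ciphertext. The host example.org is just a placeholder.

```python
import http.client
import ssl

# Plain HTTP: anyone monitoring the wire can read the request and the response.
plain = http.client.HTTPConnection("example.org", 80, timeout=10)
plain.request("GET", "/")
print("HTTP status:", plain.getresponse().status)
plain.close()

# HTTPS: the same request, but negotiated inside an encrypted TLS session.
context = ssl.create_default_context()   # also verifies the server's certificate
secure = http.client.HTTPSConnection("example.org", 443, timeout=10, context=context)
secure.request("GET", "/")
print("HTTPS status:", secure.getresponse().status)
secure.close()
```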
20 The web has an impact on the environment. We just don’t know how big it is
The web is largely powered by huge server farms located all over the world that need large quantities of electricity for computers and cooling. (Not to mention the carbon footprint and natural resource costs of the construction of these installations.) Nobody really knows what the overall environmental impact of the web is, but it’s definitely non-trivial. A couple of years ago, Google claimed that its carbon footprint was on a par with that of Laos or the United Nations. The company now claims that each of its users is responsible for about eight grams of carbon dioxide emissions every day. Facebook claims that, despite its users’ more intensive engagement with the service, it has a significantly lower carbon footprint than Google.
21 The web that we see is just the tip of an iceberg
The web is huge – nobody knows how big it is, but what we do know is that the part of it that is reached and indexed by search engines is just the surface. Most of the web is buried deep down – in dynamically generated web pages, pages that are not linked to by other pages and sites that require logins – which are not reached by these engines. Most experts think that this deep (hidden) web is several orders of magnitude larger than the 2.3 billion pages that we can see.
22 Tim Berners-Lee’s boss was the first of many people who didn’t get it initially
Berners-Lee’s manager at Cern scribbled “vague but interesting” on the first proposal Berners-Lee submitted to him. Most people confronted with something that is totally new probably react the same way.
23 The web has been the fastest-growing communication medium of all time
One measure is how long a medium takes to reach the first 50 million users. It took broadcast radio 38 years and television 13 years. The web got there in four.
24 Web users are ruthless readers
The average page visit lasts less than a minute. The first 10 seconds are critical for users’ decision to stay or leave. The probability of their leaving is very high during these seconds. They’re still highly likely to leave during the next 20 seconds. It’s only after they have stayed on a page for about 30 seconds that the chances improve that they will finish it.
25 Is the web making us stupid?
Writers like Nick Carr are convinced that it is. He thinks that fewer people engage in contemplative activities because the web distracts them so much. “With the exception of alphabets and number systems,” he writes, “the net may well be the single most powerful mind-altering technology that has ever come into general use.” But technology giveth and technology taketh away. For every techno-pessimist like Carr, there are thinkers like Clay Shirky, Jeff Jarvis, Yochai Benkler, Don Tapscott and many others (including me) who think that the benefits far outweigh the costs.
It’s hard to pinpoint the exact moment that San Francisco morphed into bizarro-world New York, when it went from being the city’s dorky, behoodied West Coast cousin to being, in many ways, more New York–ish than New York itself—its wealth more impressive, its infatuation with power and status more blinding. Maybe it was this past November, when New York elected a tax-the-rich progressive as mayor and, two days later, Twitter, a company that had been courted by San Francisco politicians with a Bloombergian combination of municipal tax breaks and mayoral flattery, went public at around a $25 billion valuation. Maybe it was when, after the crash, bonus-starved Wall Street bankers started quitting their jobs and heading to the Bay Area in droves to join the start-up gold rush. Or maybe it was when San Francisco became the new American capital of real-estate kvetching, thanks to supra-Manhattan rents and gentrification at a pace that would make Bushwick blush.
For me, the epiphany came in December, when I attended a party at a seven-story San Francisco townhouse. The house—used as an office and party pad by a young entrepreneur who had sold his start-up for millions a few years earlier—was the kind of bachelor pad Richie Rich might have set up for himself, had he been 23 and a Burning Man regular. The walls were covered in inspirational phrases (FOLLOW YOUR HEART, HOLISTIC MINDFULNESS & WELLNESS), and the party was centered on a split-level pool and hot tub that took up the entire middle section of the house. Five inflatable killer whales floated idly in the water. A bearded man was giving out back massages at water’s edge using a pair of repurposed automotive buffers, one in each hand. And loaner swimsuits—washed between wearings, we were assured—were provided for all.
As the hours ticked on and the booze kicked in, some shed their Louboutin heels and jumped in the pool; others marinated in the hot tub and told start-up war stories. It was the kind of bash you’d have found in Easthampton circa 2006, or West Egg circa 1922. And as if to cement San Francisco’s newfound place at the center of a certain social universe, the person greeting newcomers at the door was Julia Allison, the notorious glam blogger, whose smile had dotted the New York party scene just a few years earlier.
It’s no secret that New York is having a bit of an identity crisis these days. Wall Street lost its swagger during the crash and hasn’t gotten it back despite the market’s broader recovery. Big banks are adding employees in Bangalore and Salt Lake City while cutting them in Manhattan. New York City’s budget wonks expect the city to add only 67,000 jobs this year, a sluggish number that faster-growing cities like Denver and Austin will look upon with pity. The city’s culture seems to be changing, too: Greenpoint and “normcore” are in, stilettos and pinstripes are out; junior bankers now get Saturdays off; “work-life balance” is no longer a euphemism for sloth.
Meanwhile, certain pockets of San Francisco have become the sort of gilded playground that New York once was. Brand-new Teslas with vanity plates like DISRUPTD drift down the streets of the Mission District, where pawnshops and porn stores used to be. Paper millionaires spend their nights at the Battery, a members-only club with a tech-heavy roster and a $10,000-per-night penthouse suite. Upscale restaurants pop up at regular intervals, each with a more elite clientele and a more Portlandia-esque menu—everything from the $4 artisanal toast that sparked a citywide craze to the underground supper clubs serving kombucha pairings with sustainable-seafood dinners. Finding an affordable apartment in the city has become, as one tech worker lamented to me recently, “a Hunger Games scenario.”
In many ways, San Francisco is the nation’s new success theater. It’s the city where dreamers go to prove themselves—the place where just being able to afford a normal life serves as an indicator of pluck and ability. I had lunch the other day with a Harvard Business School student who belonged to a 90-person section, of whom 12 were start-up entrepreneurs. You can imagine the whole dozen packing their bags for the West Coast after collecting their M.B.A.’s, thinking: If I can make it there, I’ll make it anywhere.
Which isn’t to say that San Francisco has pulled off this transition effortlessly. The city still has its lefty legacy, after all, and as the tech sector has grown into an economic powerhouse, so has resentment toward its elites. Protesters, angry about Silicon Valley’s effect on the local economy, are blockading tech-employee shuttles in the streets; in Oakland last year, a Google bus had its window shattered by a rock. San Francisco Mayor Ed Lee, long suspected of being in the tech industry’s pocket, is accused of not doing enough to help the working class cope with rising costs and widening inequality. Although most right-thinking one-percenters cringed when venture capitalist Tom Perkins compared the treatment of the rich in San Francisco to the treatment of Jews by Nazis on Kristallnacht, the hostility he felt is real. Silicon Valley is exploding, as Wall Street did in the 1980s, as Detroit did in the 1940s. And as in those booms, not everyone is going along for the ride.
Of course, San Francisco won’t truly become New York, and not just because New York’s economy is nearly twice as big as the country’s next biggest (that’s L.A.’s, not San Francisco’s, which ranks eighth). San Francisco is too earnest, too eager to be liked, to truly wallow in its wealth like Bloomberg’s New York. (If Martin Scorsese had made The Wolf of Silicon Valley, it would have been two hours of Leonardo DiCaprio apologizing for spilling the Dom Pérignon.) The utopian streak of the tech sector paints a thick veneer of do-gooderism over even the rawest capitalistic conquests, and coupled with a desire to appease the locals, it’s what keeps San Francisco’s ruling class from really letting go.
My New York friends tend to brush off what’s happening in San Francisco with one word: bubble. After all, people flocked to Silicon Valley in 1999, they say, only to be flung back to New York when the start-up scene burst. But what if this tech bubble doesn’t end in sock puppets and Schadenfreude? What if, as MIT professors Erik Brynjolfsson and Andrew McAfee recently wrote, we’re not just dealing with a temporary tech craze but the dawn of a “second machine age” that will fundamentally realign the entire global economy? And what if most of the technology that powers that revolution is made in California?
Whatever the Silicon Valley gold rush has done or will do, it’s already given us an entirely new species of yuppie mogul: the one who stockpiles bitcoin and speaks in hacker pidgin, the one who wears Uniqlo on a Gulfstream and obsesses over single-origin coffees. The kind, in other words, who plays the underdog even while sitting on top of the world.
Technology promises to improve people’s quality of life, and what could be a better example of that than sending robots instead of humans into dangerous situations? Robots can help conduct research in deep oceans and harsh climates, or deliver food and medical supplies to disaster areas.
As the science advances, it’s becoming increasingly possible to dispatch robots into war zones alongside or instead of human soldiers. Several military powers, including the United States, the United Kingdom, Israel and China, are already using partially autonomous weapons in combat and are almost certainly pursuing other advances in private, according to experts.
The idea of a killer robot, as a coalition of international human rights groups has dubbed the autonomous machines, conjures a humanoid Terminator-style robot. The humanoid robots Google recently bought are neat, but most machines being used or tested by national militaries are, for now, more like robotic weapons than robotic soldiers. Still, the line between useful weapons with some automated features and robot soldiers ready to kill can be disturbingly blurry.
Whatever else they do, robots that kill raise moral questions far more complicated than those posed by probes or delivery vehicles. Their use in war would likely save lives in the short run, but many worry that they would also result in more armed conflicts and erode the rules of war — and that’s not even considering what would happen if the robots malfunctioned or were hacked.
Seeing a slippery slope ahead, human rights groups began lobbying last year for lethal robots to be added to the list of prohibited weapons that includes chemical weapons. And the U.N., driven in part by a 2013 report by Special Rapporteur Christof Heyns, has set a meeting in May for nations to explore that and other limits on the technology.
“Robots should not have the power of life and death over human beings,” Heyns wrote in the report.
There’s no doubt that major military powers are moving aggressively into automation. Late last year, Gen. Robert Cone, head of the U.S. Army’s Training and Doctrine Command, suggested that up to a quarter of the service’s boots on the ground could be replaced by smarter and leaner weaponry. In January, the Army successfully tested a robotic self-driving convoy that would reduce the number of personnel exposed to roadside explosives in war zones like Iraq and Afghanistan.
According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.
The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.
And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.
The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”
No one knows for sure what other technologies may be in development.
“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.
At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently established a non-binding ban for member states on using or developing robots that can kill without human participation.
Yet, even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their user experience made it easier to accept than reject automated targeting.
What if, for example, a robot tasked with destroying an unmanned military installation instead destroyed a school? Robotic sensing technology can only barely identify big, obvious targets in clutter-free environments. For that reason, the open ocean is the first place robots are firing on targets. In more cluttered environments like the cities where most recent wars have been fought, the sensing becomes less accurate.
The U.S. Department of Defense directive, which insists that humans make kill decisions, nonetheless addresses the risk of “unintended engagements,” as a spokesman put it in an email interview with Singularity Hub.
Sensing and artificial intelligence technologies are sure to improve, but there are some risks that military robot operators may never be able to eliminate.
Some issues are the same ones that plague the adoption of any radically new technology: the chance of hacking, for instance, or the legal question of who’s responsible if a war robot malfunctions and kills civilians.
“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.
Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.
“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.
For instance, when two sellers recently unknowingly competed for business on Amazon, the interactions of their two algorithms resulted in prices in the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
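The feedback loop is easy to reproduce in a toy simulation. In the Python sketch below, the starting price and the two repricing multipliers are illustrative assumptions, not the actual values from the Amazon incident; the point is only that two individually sensible rules can compound into absurd prices.

```python
# Two sellers reprice against each other once a "day" with fixed rules.
price_a, price_b = 100.0, 100.0       # both start at an assumed $100

for day in range(1, 31):
    price_a = 1.27 * price_b          # seller A always asks a premium over B
    price_b = 0.998 * price_a         # seller B undercuts A by a sliver
    print(f"day {day:2d}: A = ${price_a:>12,.2f}   B = ${price_b:>12,.2f}")

# Neither rule is crazy on its own, but together each round multiplies prices
# by about 1.27 * 0.998, so after 30 rounds they have grown more than a
# thousandfold.
```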
An even likelier outcome would be that human enemies would target the weaknesses of the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.
NASA is building humanoid robots.
Or consider a real scenario. The U.S. sometimes programs its semi-autonomous drones to locate a terrorist based on his cell phone SIM card. The terrorists, knowing that, often offload used SIM cards to unwitting civilians. Would an autonomous killing machine be able to plan for such deception? Even if robots plan for particular deceptions, the history of the web suggests that terrorists could find others.
Of course, most technologies stumble at first and many turn out okay. The militaries developing war-fighting robots are assuming this model and starting with limited functions and use cases. But they are almost certainly working toward exploring disruptive options, if only to keep up with their enemies.
Sharkey argues that, given the lack of any clear delineation between limited automation and killer robots, a hard ban on robots capable of making kill decisions is the only way to ensure that machines never have the power of life and death over human beings.
“Once you’ve put in billions of dollars of investment, you’ve got to use these things,” he said.
Few expect the U.N. meeting this spring to result in an outright ban, but it will begin to lay the groundwork for the role robots will play in war.
Photos: Lockheed Martin, QinetiQ North America, NASA
I haven’t seen “Her,” the Oscar-nominated movie about a man who has an intimate relationship with a Scarlett Johansson-voiced computer operating system. I have, however, read Susan Schneider’s “The Philosophy of ‘Her’,” a post on The Stone blog at the New York Times looking into the possibility, in the pretty near future, of avoiding death by having your brain scanned and uploaded to a computer. Presumably you’d want to Dropbox your brain file (yes, you’ll need to buy more storage) to avoid death by hard-drive crash. But with suitable backups, you, or an electronic version of you, could go on living forever, or at least for a very, very long time, “untethered,” as Ms. Schneider puts it, “from a body that’s inevitably going to die.”
This idea isn’t the loopy brainchild of sci-fi hacks. Researchers at Oxford University have been on the path to human digitization for a while now, and way back in 2008 the Future of Humanity Institute at Oxford released a 130-page technical report entitled Whole Brain Emulation: A Roadmap. Of the dozen or so benefits of whole-brain emulation listed by the authors, Anders Sandberg and Nick Bostrom, one stands out:
If emulation of particular brains is possible and affordable, and if concerns about individual identity can be met, such emulation would enable back‐up copies and “digital immortality.”
Scanning brains, the authors write, “may represent a radical new form of human enhancement.”
Hmm. Immortality and radical human enhancement. Is this for real? Yes:
It appears feasible within the foreseeable future to store the full connectivity or even multistate compartment models of all neurons in the brain within the working memory of a large computing system.
Foreseeable future means not in our lifetimes, right? Think again. If you expect to live to 2050 or so, you could face this choice. And your beloved labrador may be ready for upload by, say, 2030:
A rough conclusion would nevertheless be that if electrophysiological models are enough, full human brain emulations should be possible before mid‐century. Animal models of simple mammals would be possible one to two decades before this.
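To get a feel for the scale behind claims like “storing the full connectivity of all neurons in working memory,” here is a rough back-of-envelope estimate in Python. The neuron and synapse counts are commonly cited ballpark figures, and the bytes-per-synapse value is an assumption made for this sketch, not a number taken from the Oxford roadmap.

```python
NEURONS = 86e9              # ~86 billion neurons in a human brain (ballpark)
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron (ballpark)
BYTES_PER_SYNAPSE = 8       # assumed: one 64-bit weight/state value per synapse

total_synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = total_synapses * BYTES_PER_SYNAPSE
print(f"synapses: {total_synapses:.1e}")
print(f"storage just for connectivity weights: {total_bytes / 1e15:.1f} petabytes")
# Even this crude estimate lands in the petabyte range, which is why the
# report talks about the working memory of a *large* computing system.
```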
Interacting with your pet via a computer interface (“Hi Spot!”/“Woof!”) wouldn’t be quite the same as rolling around the backyard with him while he slobbers on your face or watching him dash off after a tennis ball you toss into a pond. You might be able to simulate certain aspects of his personality with computer extensions, but the look in his eyes, the cock of his head and the feel and scent of his coat will be hard to reproduce electronically. No longer having to scoop up his messes or feed him heartworm pills would probably not make up for all these limitations. The electro-pet might also make you miss the real Spot unbearably as you try to recapture his consciousness on your home PC.
But what about you? Does the prospect of uploading your own brain allay your fear of abruptly disappearing from the universe? Is it the next best thing to finding the fountain of youth? Ms. Schneider, a philosophy professor at the University of Connecticut, counsels caution. First, she writes, we might find our identity warped in disturbing ways if we pour our brains into massive digital files. She describes the problem via an imaginary guy named Theodore:
If Theodore were to truly upload his mind (as opposed to merely copy its contents), then he could be downloaded to multiple other computers. Suppose that there are five such downloads: Which one is the real Theodore? It is hard to provide a nonarbitrary answer. Could all of the downloads be Theodore? This seems bizarre: As a rule, physical objects and living things do not occupy multiple locations at once. It is far more likely that none of the downloads are Theodore, and that he did not upload in the first place.
This is why the Oxford futurists included the caveat “if concerns about individual identity can be met.” It is the nightmare of infinitely reproducible individuals — a consequence that would, in an instant, undermine and destroy the very notion of an individual.
But Ms. Schneider does not come close to appreciating the extent of the moral failure of brain uploads. She is right to observe an apparent “categorical divide between humans and programs.” Human beings, she writes, “cannot upload themselves to the digital universe; they can upload only copies of themselves — copies that may themselves be conscious beings.” The error here is screamingly obvious: brains are parts of us, but they are not “us.” A brain contains the seed of consciousness, and it is both the bank for our memories and the fount of our rationality and our capacity for language, but a brain without a body is fundamentally different from the human being that possessed both.
It sounds deeply claustrophobic to be housed (imprisoned?) forever in a microchip, unable to dive into the ocean, taste chocolate or run your hands through your loved one’s hair. Our participation in these and infinite other emotive and experiential moments is the bulk of what constitutes our lives, or at least our meaningful lives. Residing forever in the realm of pure thought and memory and discourse doesn’t sound like life, even if it is consciousness. Especially if it is consciousness.
So I cannot agree with Ms. Schneider’s conclusion when she writes that brain uploads may be choiceworthy for the benefits they can bring to our species or for the solace they provide to dying individuals who “wish to leave a copy of [themselves] to communicate with [their] children or complete projects that [they] care about.” It may be natural, given the increasingly virtual lives many of us live in this pervasively Internet-connected world, to think of ourselves mainly in terms of avatars and timelines and handles and digital faces. Collapsing our lives into our brains and offloading their contents to a supercomputer is a fascinating idea. It does not sound to me, though, like a promising recipe for preserving our humanity.
‘This is the same argument that the NSA made in the face of public outcry about its collection of telephone metadata.’
Do corporations have a legal right to track your car? If you think that is a purely academic question, think again. Working with groups like the American Civil Liberties Union, states are considering laws to prevent private companies from continuing to mass photograph license plates.
This is one of the backlashes to the news about mass surveillance. However, this backlash is now facing legal pushback from the corporations who take the photographs and then sell the data gleaned from the images.
In a lawsuit against the state of Utah, Digital Recognition Network, Inc. and Vigilant Solutions are attempting to appropriate the ACLU’s own pro-free speech arguments for themselves. They argue that a recent Utah law banning them from using automated cameras to collect images, locations and times of license plates is a violation of their own free speech rights. Indeed, in an interview, DRN’s counsel Michael Carvin defends this practice by noting, “Everyone has a First Amendment right to take these photographs and disseminate this information.”
He argues that a license plate is an inherently public piece of information.
“The only purpose of license plate information is to identify a vehicle to members of the public,” he says. “The government has no problem with people taking pictures of license plates in a particular location. But for some irrational reason it has a problem with people taking high speed photographs of those license plates.”
The analogy to an individual’s right to take photos only goes so far, though. Vigilant’s website notes that “DRN fuels a national network of more than 550 affiliates,” its tracking “technology is used in every major metropolitan area” and it “captures data on over 50 million vehicles each month.”
“This is a complicated area where we are going to need to carefully balance First Amendment rights of corporations versus individuals’ privacy rights,” says ACLU attorney Catherine Crump. “The mere fact that an individual has a First Amendment right doesn’t mean that right is unlimited. There are circumstances under which the government is free to regulate speech.”
Crump cited the Fair Credit Reporting Act and laws regulating the dissemination of health information as examples of legal privacy-related restrictions of speech rights.
“One could argue that the privacy implications of a private individual taking a picture of a public place is sufficiently less than a company collecting millions of license plate images,” Crump says. “Especially with technology becoming more widespread and databases going back in time, there may be justification for regulation.”
The Wall Street Journal reports that DRN’s own website boasted to its corporate clients that it can “combine automotive data such as where millions of people drive their cars … with household income and other valuable information” so companies can “pinpoint consumers more effectively.” Yet, in announcing its lawsuit, DRN and Vigilant argue that their methods do not violate individual privacy because the “data collected, stored or provided to private companies (and) to law enforcement … is anonymous, in the sense that it does not contain personally identifiable information.”
In response, Crump says: “This is the same argument that the NSA made in the face of public outcry about its collection of telephone metadata. The argument was essentially, we’re not collecting information about people, we are collecting info about telephone numbers. But every telephone number is associated with an individual, just like a license plate is.”
The courts could follow corporate personhood precedents and strengthen First Amendment protections for private firms. Alternately, the courts could more narrowly rule on whether individuals’ license plate information is entitled to any minimal privacy protections.
Either way, the spat epitomizes how the collision of free speech rights, the desire for privacy and the expansion of data-collecting technology is raising huge questions about what is—and is not—public.
David Sirota, an In These Times senior editor and syndicated columnist, is a staff writer at PandoDaily and a bestselling author whose book Back to Our Future: How the 1980s Explain the World We Live In Now—Our Culture, Our Politics, Our Everything was released in 2011. Sirota, whose previous books include The Uprising and Hostile Takeover, co-hosts “The Rundown” on AM630 KHOW in Colorado. E-mail him at firstname.lastname@example.org, follow him on Twitter @davidsirota or visit his website at http://www.davidsirota.com.
Jeff Bezos, Tim Cook (Credit: Reuters/Gus Ruelas/Robert Galbraith/Salon)
To this point it appears that the zeitgeist memes for 2014 are, first, the assumption that the future of our economy belongs to robots (see Tyler Cowen’s book “Average Is Over”) and, second, that what’s left of the workforce — those whom Cowen calls “freestylers” working in synchrony with the ’bots — will have their job performance improved if they meditate.
Robots and meditation wouldn’t seem at first glance to have a lot to do with each other, but closer inspection reveals that they do.
This is so largely because the way in which the business world understands meditation (specifically forms derived from what Buddhists call “mindfulness”) is driven not by Buddhism but by science. Mindfulness Based Stress Reduction (MBSR) was developed in 1979 by Jon Kabat-Zinn, an MIT-trained scientist. In her cover story for the February 3 issue of Time magazine, Kate Pickert quoted Kabat-Zinn: “It was always my intention that mindfulness move into the mainstream. This is something that people are now finding compelling in many countries and many cultures. The reason is the science.”
And then there was the annual World Economic Forum in Davos where, according to Otto Scharmer (another MIT man), writing for the Huffington Post, corporate mindfulness is at the “tipping point.” Scharmer writes, “Mindfulness practices like meditation are now used in technology companies such as Google and Twitter (amongst others), in traditional companies in the car and energy sectors, in state-owned enterprises in China, and in UN organizations, governments, and the World Bank.”
So, the narrative conjunction of these two zeitgeist themes would seem to be this: In the future, “high earners” will work with “intelligent machines” (aka robots); the robots will drive them crazy; but they will have happy, productive lives thanks to neuroscience-certified mindfulness.
This narrative leaves out what few people are commenting on: corporate economics and Buddhism are two very different ways of thinking. For all of their countercultural pretensions, mega-corporations like Google, Amazon and Apple are still corporations. They seek profits, they try to maximize their monopoly power, they externalize costs, and, of course, they exploit labor. Apple’s dreadful labor practices in China are common knowledge, and those Amazon packages with the sunny smile issue forth from warehouses that are more like Blake’s “dark satanic mills” than they are the new employment model for the Internet age.
The technology industry has manufactured images of the rebel hacker and hipster nerd, of products that empower individual and social change, of new ways of doing business, and now of a mindful capitalism. Whatever truth might attach to any of these, the fact is that these are impressions that are carefully managed to get us to keep buying their products. In that very basic sense, it is business as usual.
As for Buddhism, contrary to popular reports it is not primarily about stress reduction for middle management. For the Search Inside Yourself program developed at Google, mindfulness training builds “the core emotional intelligence skills needed for peak performance and effective leadership…. We help professionals at all levels adapt, management teams evolve, and leaders optimize their impact and influence.”
Mindfulness is enabling corporations to “optimize impact”? In this view of things, mindfulness can be extracted from a context of Buddhist meanings, values and purposes. Meditation and mindfulness are not part of a whole way of life but only a spiritual technology, a mental app that is the same regardless of how it is used and what it is used for.
Buddhism has its own orienting perspectives, attitudes and values, as does American corporate culture. And not only are they very different from each other, they are often fundamentally opposed to each other. Indeed, one of the foundations of Buddhism is the idea of right livelihood, which entails engaging in trades or occupations that cause minimal harm to other living beings. And yet in the literature of mindfulness as stress reduction for business, we’ve seen no suggestion that employees ought to think about — be mindful of — whether they or the company they work for practice right livelihood. Corporate mindfulness takes something that has the capacity to be oppositional, Buddhism, and redefines it. Mindfulness becomes just another aspect of “workforce preparation.” Eventually, we forget that it ever had its own meaning.
While Tyler Cowen’s top 15 percent of earners struggle on with the stress of working with their robot comrades, pausing now and then to focus on their breath, where’s everybody else? For corporate mindfulness, the world looks a lot like it does in Spike Jonze’s “Her”: there are no poor people; everyone is walking around with the latest high-tech gizmo stuck in their ears. In short, mindfulness is for the world of the winners. As for the workers in Amazon warehouses, their only stress reduction will be what it’s always been: a beer after work. And it won’t be a craft beer, either. That’s for their betters.
When my partner Billy Agan first told me the story, he called them “Google Goggles.”
“Matt Hunt keeps coming in to Telegraph wearing those Google Goggles and he won’t take them off. It’s like if someone came in holding a camera at eye level — I’d tell them to put that away, too. But he won’t do it.”
“I would normally avoid someone with them, but I’m at work, he’s staring right at me, and I can’t go anywhere.” Billy has worked in Oakland bars and restaurants since 2009.
“I saw other people getting creeped out by it. And because he was a regular, I thought I could tell him to take them off while he was in there. I didn’t think it was going to be that big of a deal.”
On three occasions before requiring Hunt to leave, Billy had asked him directly. I once witnessed this: Hunt walked in wearing Glass, Billy asked that he remove it, and Hunt laughed, walked behind the bar, and poured himself a beer. Matt ran Telegraph’s social media accounts in exchange for free food and drink, and took the same liberties afforded to actual employees.
Billy just sighed. Matt later told me, “I thought he was joking.”
A few days later, Billy had had enough. He tells it this way: Matt walked up to the bar to order a beer on a busy Friday night. Billy demanded he remove the Glass or leave. Billy yelled. He stood on top of a box and yelled some more. Matt ignored him until Billy grabbed him by the arm and delivered him to restaurant security, who escorted him out.
Later Matt said that Billy, as staff and not owner, had no right to ask him to remove the Glass or leave, and that while, yes, he was ejected for wearing Glass, Billy assaulted him and called him a “faggot.” Witnesses don’t support the claim, and the police report Matt filed against Billy later that night is essentially blank, but he maintains his version of events.
“I didn’t use any slurs,” says Billy. “I called him an ‘asshole.’”
Two witnesses do recall Matt telling Billy, though, just before he was escorted out:
Over the last year, much has been written about the changes coming to the Bay Area through an influx of new money and influence from a once-again burgeoning technology sector. Symbols of a new, disruptive, tech-driven wealth have come in the unlikely form of, among other things, luxury buses and head-mounted computers.
It would be fair to say that lately, urban techies and their attendant trappings have come under attack. When PR writer Sarah Slocum’s Google Glass sparked an altercation in San Francisco bar Molotov’s last week, her supporters and detractors fell along familiar and well-worn battle lines: “Cyborgs” vs. “Luddites;” “techies” vs. the rest of us.
But despite recent tensions, these relations are not strictly new. This has always been a tale of two cities, of with and without. Someone has always felt entitled, someone has always felt aggrieved. And one form or another of an intellectual, “creative” class has always thought its labor and cause a higher kind than that of others.
In the days after he asked Matt to leave the bar, Billy fretted over the potential loss of his job. Matt had taken the restaurant’s social accounts hostage, and Billy’s boss was receiving hate mail.
“I had to dance with the devil to get my accounts back. I told him whatever he wanted to hear — that I’d fire Billy, that I’d do whatever,” Telegraph owner John Mardikian tells me. “I tell my employees that if someone or something is making them uncomfortable, they should do what they feel is appropriate. I didn’t have an issue with Google Glass before, but I wasn’t there. I investigated this myself. I wasn’t going to fire Billy just because Matt was embarrassed.”
Tech still thinks it’s the scrappy rebel when it’s looking more like the ruling class: A white man with a $1,500 face computer trying to cost a brown man his minimum wage job.
When Google Glass first became available last spring, the publicity was positive, but the public reaction was mixed. Some said that, its apparent usability and creepy spy capacity aside, it just looked too aggressively goofy for the broader public to embrace.
“To be fair, there’s every possibility that Google Glass will change society just as deeply and profoundly as did the Segway, a technologically nifty machine that now serves primarily to identify its owner as a complete dork with far too much money,” Chris Clarke wrote at KCET last year.
Google readily admits that Glass is in a beta stage. While users aren’t trading in their hardware regularly, there are monthly software updates, and the company hopes that a new prescription eyeglass interface will make the technology look more, well, normal.
“While Glass is currently in the hands of a small group of Explorers, we find that when people try it for themselves they better understand the underlying principle that it’s not meant to distract but rather connect people more with the world around them,” Google told Reuters.
(This was, word-for-word, the same prepared statement I received from a Google spokesperson when I asked about the technology’s unexpected social consequences.)
To say nothing of their alleged incompetence behind the wheel, Glass “Explorers” have undoubtedly become connected to one another. The devices are still rare, and can’t be readily bought (though purchase codes now go for as low as $25 on Craigslist); that exclusivity binds the Explorers together into a close-knit community. Explorers not only use the devices, but develop software and hardware improvements for them, solving one another’s problems. But this specialness also promotes the idea that each user is an ambassador for the product, the kind of relationship one wouldn’t usually expect — or perhaps want — with the manufacturer of one’s consumer technology.
There’s a case to be made that wearable technology connects its wearer to the environment more than it isolates, by providing context that we otherwise wouldn’t see. But often it’s the rest of the world that bears the burden of that. The data one’s face collects could be used and monetized by Google or the third-party applications Glass runs. While that may be the choice of the wearer, there is little to no agency on the other side of the Glass eye prism.
“Wearing Google Glass automatically means that all social interaction you have must be not just on yours, but Google’s terms,” Adrian Chen wrote at Gawker almost a year ago, when we all first cringed in fear.
“Glass has become second nature to me,” says Washington, D.C.-based early adopter and Glass developer Noble Ackerson. “I have yet to have a terrible experience publicly wearing Glass and I have worn the device at least every day for nearly a year.” Ackerson bought his Glass about a month after Chen popularized the term “Glasshole.”
“There are, however, times that I find it is either polite or convenient to park or leave Glass behind. Polite in situations like meetings, interviews and generally gatherings where I wouldn’t need my smart phone either,” he says. “Convenient to use my best judgement and caution on occasions where the street, subway, or bar may warrant some keen situational awareness.”
Glass evangelists point to a Time piece from last spring that decried fears of these new face-phones as unfounded. The author pointed to late-19th century paranoia that Kodak cameras would chip away at our personal space in public.
But Kodak cameras did play a part in that chipping, as did the next 120 years of advances in camera technology. Those gadgets melted into our lives. There are now whole websites devoted to making cruel fun of embarrassing photos of people taken in public, likely without their permission or knowledge. At best, we take this behavior for granted; at worst, we laugh along with it.
Still, I perhaps naively thought that I could avoid the influence of Glass in my own life.
I was wrong.
“He yelled, ‘All you have to do is take them off!’” Billy’s coworker Zach Keiler-Bradshaw tells me a week later. “Matt was just ignoring it. I saw Billy touch him — it was after a lot. But no slurs were said.”
A week after the incident, the Telegraph restaurant Twitter account went on a hateful anti-tech, anti-gay tirade for several hours. Besides the owner, Matt was the only other party with access to the account, and there’s strong evidence that he sent the tweets. Mardikian pursued legal action against him, and Telegraph now has an explicit “no Glass” policy.
One after another, customers wearing Glass have been more quietly asked to remove the device or leave other Oakland bars and San Francisco coffee shops. Other restaurants and cafes across the Bay Area have banned the devices preemptively. These weren’t stunts for media attention, but attempts to head off the kind of disruption other establishments have reluctantly weathered, in the interest of keeping camera-shy staff and customers comfortable.
At Nabeel Silmi’s Grand Coffee in San Francisco last week, a customer had to be asked to leave because they refused to take off their Glass.
“We ask that guests, whether using a disposable camera or wearing Glass, ask permission before photographing,” Silmi says. “This gives anyone who does not want to be in the shot a chance to leave the frame.”
When the Glass came in the mail, Billy was more excited than I was to try it out. (There are dozens of the devices available on Craigslist in the Bay Area for less than what Google sells them for, but ours was a generous loan from certified Explorer Molly Crabapple.)
While Billy and I shot videos and searched for cat pictures with our new face gadget, some are applying Glass’ capabilities to more professional endeavors.
North Carolina firefighter Patrick Jackson developed an app that routes relevant information directly to his eyeballs in case of an emergency — everything from maps to urgent communications. For now, Glass isn’t compatible with the oxygen masks firefighters are required to wear while in action, but specialized designs could solve this problem in the future.
But for everyday use, I’m left wondering: What is the point? After several weeks using Glass, I still struggle to see the appeal and the particular specialness. The voice activation isn’t Siri-smart. The prism’s display blurs, and I strain my eyes trying to read the small text. It doesn’t seem less rude to glance up and right to the tiny screen than to look down and away at my smart phone to check an incoming text or email.
In public, I am far more self-conscious. Even in situations where I might like to use the Glass — to read a sign, take a picture — I often decline when I see others staring, looks of trepidation on their faces more than judgment. In close quarters, the voice commands become less convenient and more irritating, a public announcement that I have a $1,500 face gadget and no, it can’t always understand me very well.
“In its obviousness, it announces an entitlement. It doesn’t have the decency to realize it’s being creepy,” one Glass user in the tech industry told me on condition of anonymity. “I had no bias against it when I got it. I just realized it’s good for basically nothing except being a jerk.”
Do you know how often you’re surveilled in a single day? It’s probably hard to even count.
Since the National Security Agency’s digital dealings became public last spring by way of Edward Snowden’s leaks, it’s become harder to delineate what our expectation of privacy can be in 2014 America, Bill of Rights or not.
But for all the anxiety about its use as a covert surveillance tool, Glass is not actually very good at that. Snapping pictures is simple and extremely discreet, but when recording a video, the prism illuminates. Any number of other, cheaper cameras would make for a better mode of secret filming.
Still, on its surface, the gadget perpetuates a dynamic that looks like a privileged class — both private citizens and corporations as well as secretive government forces — purchasing the tools to surveil those without means.
“Glass is definitely for now a plaything for a privileged few. And I think that, coupled with how deeply weird and noticeable it is, is what makes it a class divide on your face,” Wired writer Mat Honan tells me. “Glass is a terrible surveillance tool, at least in its current form. Absolutely useless.”
This future is nearly here. The city of Oakland is currently embroiled in the process of finalizing a plan for a surveillance fusion center that would combine public camera footage with social media streams, license plate reader photos and other forms of data in a project bankrolled by the Department of Homeland Security.
It’s hardly a paranoid fantasy to imagine that Google Glass users in the city might find their “lifecasting” streams directed into this big data pool. This dystopian vision only grows darker with the growing potential for tying in facial recognition software.
It takes me a while to build up the courage to take the Glass out into the general population. It’s not just the reactions I fear — I don’t feel like myself.
At Dogwood bar in Oakland, Billy and I take turns wearing the Glass for a couple minutes until we notice how uncomfortable nearby patrons are getting. As we’re leaving, the man working the door asks us what it is. Billy explains.
“Oh, yeah, I’d ask someone to take that off next time.”
Private-public gathering spaces like bars and restaurants play a huge part in our social lives, and present all kinds of potential privacy problems. Many have internal closed-circuit camera systems, ostensibly to protect against theft or to exonerate the establishment in cases of alleged over-serving. The San Francisco police department even tried to force bars in the city to film their customers at all times as a requirement for a liquor license, though it has relented on this policy with some privacy-minded bar owners.
These aren’t truly public houses — private owners can dictate their own private rules, presuming they do not discriminate against protected classes, and presuming they still have enough customers who want to play by the rules they set in order to stay in business.
In 2011, one startup attempted to set up dozens of San Francisco bars to livestream their occupants, providing the rest of the public with a view inside a previously closed-door nightlife scene from the comfort of their own homes. The concept didn’t go over well with bar patrons or the American Civil Liberties Union. Less than three years later, Barspace.tv is now a curated selection of search-engine-optimization garbage. The project appears to be dead.
We stop at a taco truck on our way home. I am still wearing the Glass and I am more conscious of it than ever, after midnight in this neighborhood with a median household income of less than $30,000. After we order, an Oakland police patrol car rolls up, and a young cop steps out.
Billy approaches him with confidence.
“Hey, have you seen these before?” Billy holds out the Glass for the officer to inspect, but he looks incredulous.
“No, what is that?”
“It’s Google Glass. It’s like a small computer, that can take photos and video. The NYPD has them now. Maybe you will soon too.”
The cop looks uncomfortable and shuffles backward a half step. “Oh, I hope not.” Then he smirks and gingerly holds up his department-issued chest camera, the kind local police departments are required to use (but don’t always).
While tools can certainly facilitate bad behavior, technology does not breed human monsters.
This is essentially the defense of the aggressive, entitled Glass-wearer: We’ve already decided against privacy, we’ve given it up, there’s nothing left to preserve, and to wish or work toward any other future is to be an enemy of technology’s promise.
In many ways this particular new tech does not necessitate new fundamental relations — it just reveals how deeply we’ve already broken those relations and how much we’ve already lost.
We do not need to redefine etiquette for a new century of innovation — society needs to decide where its values truly lie.
Caught in large-scale government and corporate surveillance dragnets, we often have little to no choice in how we, our images, our data, ourselves, are mined, commodified, used for purposes beyond our control. But in our daily personal relations, in this, perhaps, we still do. At least sometimes. At least I’d like to think so.
When all was said and done, Billy wasn’t fired. He still works at Telegraph, and still worries about what Matt’s claims might do to his reputation. When all was said and done, he doesn’t think he had a choice.
Ray Kurzweil, the director of engineering at Google, believes that the tech behemoth will soon know you even better than your spouse does.
Kurzweil, whom Bill Gates has reportedly called “the best person [he knows] at predicting the future of artificial intelligence,” told the Observer in a recent interview that he is working with Google to create a computer system that will be able to intimately understand human beings.
“I have a one-sentence spec which is to help bring natural language understanding to Google,” the 66-year-old tech whiz told the news outlet of his job. “My project is ultimately to base search on really understanding what the language means.”
“When you write an article, you’re not creating an interesting collection of words,” he continued. “You have something to say and Google is devoted to intelligently organizing and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would want them to read everything on the web and every page of every book, then be able to engage in intelligent dialogue with the user to be able to answer their questions.”
In short, the Observer writes, Kurzweil believes that Google will soon “know the answer to your question before you have asked it. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.”
As creepy as this may sound to some, Kurzweil — who has long contended that computers will outsmart us by 2029 — believes that the improvement of artificial intelligence is merely the next step in our evolution.
“[Artificial intelligence] is not an intelligent invasion from Mars,” he told the Montecito Journal in 2012, per a post on his website. “These are brain extenders that we have created to expand our own mental reach. They are part of our civilization. They are part of who we are. So over the next few decades our human-machine civilization will become increasingly dominated by its non-biological component.”