As Silicon Valley money continues to pour into San Francisco, wealthy tech workers are displacing longtime residents at a record pace. The conflict, referred to by former mayor Willie Brown as “a war brewing in the streets,” was once viewed as a typical case of gentrification. However, it now appears many of these evictions have been carried out by a small group of landowners who, according to California Secretary of State filings, may be linked to a single company.
The activist group Anti-Eviction Mapping has identified 12 landlords it dubs “serial evictors” for their repeated use of no-fault evictions, which have tripled since 2009. By tracing their records through CorporationWiki, Newsweek found that at least seven are associated with LLCs bearing names like “CA1 REAL ESTATE,” “CAS REAL ESTATE” and “CA1 INVESTMENT.”
In all, Newsweek found a dozen similarly named LLCs linked to landlords and companies involved in the evictions. Because the LLCs are private, it is difficult to confirm whether they are in fact all linked to the same company, or what role, if any, they play in the evictions. “It’s so weird,” says Erin McElroy of Anti-Eviction Mapping. “We can’t figure it out.”
The evictions are carried out under a controversial California rental law known as the Ellis Act, which allows landowners to evict tenants if they choose to exit the rental market. According to Walter Baczkowski, the CEO of the San Francisco Association of Realtors, the act is necessary to ensure that no property owner can be compelled to be a landlord.
Baczkowski says the rising costs of living in San Francisco coupled with strict rent control laws have forced many longtime landlords to evict tenants and sell their buildings in order to avoid bankruptcy. “They keep saying everyone’s a speculator,” he says, referring to tenant rights activists now pushing California to reform the law. “Now they want to restrict more of the private owners’ rights.”
If, in fact, many of these evictions are linked to – and receiving investments from – a single company, it would seem to refute Baczkowski’s argument that longtime building owners in financial straits are responsible for the increase in Ellis evictions.
Dean Preston, the founder of the advocacy group Tenants Together, called Baczkowski’s bankruptcy argument “a fiction.” He points out that half of Ellis evictions are carried out within a year of a property being purchased, indicating that it isn’t longtime owners bailing out; it’s speculators looking to turn a quick profit by exploiting a legal loophole that removes the city’s rent control constraints.
The result is that San Francisco’s already scant housing becomes unaffordable to the city’s older, less affluent residents. “I desperately want to stay in my neighborhood,” says Theresa Flandrich, a longtime resident who was recently served with an Ellis eviction. Her only hope, she says, is to find an older landlord who may have a vacancy. “What’s important to them is the neighborhood.”
On March 12, music listeners dissatisfied with their iProduct or smartphone’s sound quality will have the chance to pony up $399 on Kickstarter for Neil Young’s PonoMusic. “It’s about the music, real music. We want to move digital music into the 21st century and PonoMusic does that,” Young said in the company’s release. “We couldn’t be more excited — not for ourselves, but for those that are moved by what music means in their lives.”
PonoMusic is not just a portable digital music player (PonoPlayer); it will also have an online music store (PonoMusic.com), where according to the makers you’ll be able to buy the “finest quality, highest-resolution digital music from both major labels and prominent independent labels, curated and archived for discriminating PonoMusic customers.”
The player is in the shape of a triangular prism, rather than the nearly flat, pocket-size design of most players. Its odd configuration allows it to rest on its side in a home or car. PonoPlayer can store between 100 and 500 high-resolution digital-music albums, depending on the size of the album, on its 128GB of memory. It also has an LCD touchscreen for “intuitive” navigation, and promises the highest fidelity of sound, as if you’re hearing it live. If you’re an audiophile, the device seems to bridge the gap between quality and convenience — with Neil Young’s stamp of approval.
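That 100-to-500 album range is simple arithmetic against the 128GB of storage. A rough back-of-the-envelope sketch, using illustrative album sizes that are assumptions rather than Pono’s published figures:

```python
# Back-of-the-envelope check of the 100-500 album claim against 128GB.
# The per-album sizes below are illustrative assumptions: a losslessly
# compressed CD-quality album often runs around 0.25GB, while a
# 192kHz/24-bit high-resolution album can exceed 1GB.
STORAGE_GB = 128

def albums_that_fit(album_size_gb):
    """How many albums of a given size fit in the player's storage."""
    return int(STORAGE_GB / album_size_gb)

print(albums_that_fit(0.25))  # ~512 albums at CD quality
print(albums_that_fit(1.25))  # ~102 albums at high resolution
```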
Jan Koum and Brian Acton, the founders of the messaging software WhatsApp, have ample reason to celebrate this week’s 25th anniversary of the Internet’s World Wide Web. The pair have just become billionaires.
Facebook’s Mark Zuckerberg, the youngest billionaire ever, gave Koum and Acton that distinction last month when he gobbled up WhatsApp for $19 billion. The three of them have lots of online company in the global billionaire club. Forbes now counts 120 other high-tech whizzes with billion-dollar fortunes.
The Internet has become, in effect, the fastest billionaire-minting machine in world history. Should we consider this incredible concentration of wealth an outcome preordained? Did the last 25 years of online history have to leave us so much more unequal? We ask that question in this week’s Too Much.
And what about the future? How might we change the online world to help create a more equal real world? We ask that question, too.
GREED AT A GLANCE
New European Union rules adopted last year limit banker bonuses to no more than triple their base salary. Philippe Lamberts, a Belgian Green Party lawmaker, led the drive for that cap, and now he’s struggling to stop the British government from letting UK bankers sidestep the EU’s modest new pay restrictions. One such sidestep: Banking giant HSBC is now paying its CEO Stuart Gulliver an extra $53,500 every month as an “allowance.” Lamberts wants the EU to take the UK to court. Barclays CEO Antony Jenkins, for his part, is defending his bank’s bonus culture. Nearly 500 power suits at Barclays, half of them in the United States, are taking home at least $1.6 million a year. Paying them any less, says Jenkins, would force his bank into a “death spiral.” To do business in America, adds the CEO, we must “reconcile ourselves to pay high levels of compensation.”
Don’t talk to Miami maritime lawyer Jim Walker about greedy bankers. The truly greedy, says Walker, are the people running cruise lines and incorporating their operations offshore to avoid U.S. taxes and wage laws. The greediest cruise character of them all? That might be Carnival Cruise chair Micky Arison. He’s now selling off 10 million of his Carnival shares, a sale that figures to bring in $395 million and still leave his family holding over $6 billion in Carnival stock. Arison’s share sale began as passengers left adrift last year on a faulty Carnival cruise ship were testifying in the lawsuit they’ve filed against the company. That incident subjected over 4,200 passengers and crew to five days of overflowing toilets and rotting food. Carnival is dismissing the suit as “an opportunistic attempt to benefit financially” from “alleged emotional distress.”
You won’t find any billionaires standing in line to get their family heirlooms appraised on the hit public TV series Antiques Roadshow. But the world’s ultra rich, luxury editor Tara Loader Wilkinson noted last week, are definitely “going gaga for antiques.” The average billionaire, calculates a new billionaire census from the Swiss bank UBS and researchers at the Singapore-based Wealth-X, holds $14 million worth of antiques and collectibles. Why are so many uber rich hungering for antiques? French antiques expert Mikael Kraemer notes that “anyone with enough money can buy a jet.” Not everyone, he adds, has “what sets one billionaire apart from another”: enough “culture and knowledge” to find and buy something like an 18th century antique royal chandelier.
Quote of the Week
“Do you recall a time in America when the income of a single school teacher or baker or salesman or mechanic was enough to buy a home, have two cars, and raise a family? I remember.”
Robert Reich, former U.S. labor secretary, The Great U-Turn, March 6, 2014
PETULANT PLUTOCRAT OF THE WEEK
Fracking can be a dirty business. Big Energy CEOs don’t mind. Can’t let those environmentalists endanger our energy security, they like to insist. Unless the environment at risk happens to be their own. ExxonMobil CEO Rex Tillerson has joined a lawsuit that’s trying to stop the construction of a 160-foot water tower — for fracking — near his manse north of Dallas. Tillerson isn’t talking about his lawsuit. But an Exxon flack says his boss doesn’t object to the tower “for its potential use for water and gas operations for fracking.” He’s just upset because the tower would be “much taller” than originally proposed. If the lawsuit fails, Tillerson might have to buy the tower site himself to prevent the fracking eyesore. He can afford it. His 2012 Exxon take-home: $40.3 million.
IMAGES OF INEQUALITY
The promoters of the new “Wealth Badge” are either cynically exploiting a new luxury niche or acutely exposing the cultural depravity of our unequal times. Their new Web site offers the affluent a metal pin that reads “Because I can.” The cost: $5,000. Explains the Wealth Badge pitch: “The idea is simple: If you buy something just because you can, you are truly rich.” The site claims to have sold 61 badges — and features photos of privileged people showing them off. The pins have begun drawing media play. But no one has so far answered the basic question: Is the Wealth Badge crew trying to make money or a point?
PolluterWatch/ This Greenpeace site zeroes in on the “pollutocrats” now “poisoning the debate about policies that would lower our greenhouse gas emissions and kickstart a clean energy revolution.”
PROGRESS AND PROMISE
Few people have contributed as much to our online world as Jaron Lanier, the computer scientist who helped pioneer — and label — what we call “virtual reality.” Lanier has been wondering of late about his fellow contributors, those hundreds of millions of Internet users who donate — for free — the information that a tiny cohort of tycoons has been able to crunch into billion-dollar fortunes. In his latest book, Lanier envisions a “universal micropayment system” that pays people who go online for whatever information of value their online clicks may create, “no matter what kind of information is involved or whether a person intended to provide it or not.” Such a system, says Lanier, would help us “see a less elite distribution of economic benefits.”
The world’s “ultra high net worth” crowd — individuals worth at least $30 million — now number 167,669, says a new Knight Frank report. Their total wealth: $20.1 trillion in 2013, almost half as much as the combined net worth of the 4.2 billion global adults who hold less than $100,000 in wealth.
A Thought for the Web’s Silver Anniversary
Let’s learn from our not-so-distant past and share the gold. New technologies don’t have to bring us new inequalities.
Exactly 25 years ago this week the British computer scientist Tim Berners-Lee conceptually “invented” the World Wide Web — and began a process that would rather rapidly make the online world an essential part of our daily lives.
By 1995, 14 percent of Americans were surfing the Web. The level today: 87 percent. And among young adults, the Pew Research Center notes in a just-published silver anniversary report, the Internet has reached “near saturation.” Some 97 percent of Americans 18 to 29 are now going online.
Americans young and old alike are using the Web to work wonders few people 25 years ago could have ever imagined. We’re talking face-to-face with people thousands of miles away. We’re finding soulmates who share our passions and problems. We’re organizing political movements to change the world.
Life with the Web has become, for hundreds of millions of us, substantially richer. Not literally richer, of course. The same 25 years that have seen the Web explode into our consciousness have seen most of us struggle to stay even economically. The Internet and inequality have grown together.
Tim Berners-Lee never saw this inequality coming. The ground-breaking research he published on March 12, 1989, the paper that proposed the system that became the Web, carried no price tag. Berners-Lee would go on to release the code for his system for free. He didn’t invent the Web to get rich.
But others certainly have become rich via the Web. Fabulously rich. Forbes magazine last week released its annual list of global billionaires. Some 123 of them, Forbes calculates, owe their fortunes to high-tech ventures. The top 15 of these high-tech billionaires hold a collective $382 billion in personal net worth.
Numbers like these don’t particularly bother — or alarm — many of today’s economists. Grand new technologies, their conventional wisdom holds, always bring forth grand new personal fortunes for the entrepreneurs who lead the way.
In the 19th century, points out this standard narrative of American economic progress, the coming of the railroads dotted our landscape with the fortunes of railroad tycoons. In the early 20th century, the new automobile age created huge piles of wealth for car makers like Henry Ford and the oilmen who supplied the juice that kept his auto engines humming along.
Why should the Internet age, mainstream economists wonder, be any different? A new technology comes along that alters the fabric of daily life. That new technology gives rise to a new rich. The one outcome naturally follows the other. No need to get bent out of shape by the resulting inequality.
But epochal new technology doesn’t always automatically generate grand new fortunes. The prime example from our relatively recent past: television.
TV burst onto the American scene even more rapidly — and thoroughly — than the Internet. In 1948, only 1 percent of American households owned a TV. Within seven years, televisions graced 75 percent of American homes.
These TV sets didn’t just drop down into those homes. They had to be designed, manufactured, packaged, distributed, marketed. Programming had to be produced. Imaginations had to be captured. All of this demanded an enormous outlay of entrepreneurial energy.
But this outlay would produce no jaw-dropping grand fortunes, no billionaires, even after adjusting for inflation. That would be no accident. The American people, by the 1950s, had put in place a set of economic rules that made the accumulation of grand new private fortunes almost impossible.
Taxes played a key role here. Income over $400,000 faced a 91 percent tax rate throughout the 1950s. Regulations played an important role as well. In television’s early heyday, for instance, government regs limited how many commercials could run on children’s TV programming. TV’s original corporate execs could only squeeze so much out of their new medium.
And television’s early kingpins couldn’t squeeze their workers all that much either. Most of their employees, from the workers who manufactured TV sets to the technicians who staffed broadcast studios, belonged to unions. TV’s early movers and shakers had to share the wealth their new medium was creating.
Today’s Internet movers and shakers, by contrast, have to share nothing. In an America where less than 7 percent of private-sector workers carry union cards, online corporate giants seldom need to bargain with their employees.
In a deregulated U.S. economy, meanwhile, these Internet kingpins face precious few public-interest rules that keep them from charging whatever the market can bear — and rigging markets to squeeze out even more.
And taxes? Today’s Internet billionaires face tax rates that run well less than half the rates that early TV kingpins faced.
We can’t — and shouldn’t — fault Tim Berners-Lee for any of this. He freely shared, after all, his invention with the world.
“I wanted to build a creative space,” Berners-Lee observed in an interview a few years ago, “something like a sandpit where everyone could play together.”
Sherle Schwenninger and Samuel Sherraden, The U.S. Economy After The Great Recession, New America Foundation, Washington, D.C., March 4, 2014.
Need to better understand how the Great Recession — and the political responses to it — have played out? This no-nonsense set of slides brings together, in one place, the key trends that have defined the U.S. economy since the Great Recession hit in 2008. Just a few of the report’s many choice tidbits . . .
Good times at the top: From 2009 to 2012, America’s top 1 percent incomes grew by 31.4 percent. Bottom 99 percent incomes rose all of 0.4 percent.
Shrinking returns to labor: From 2007’s fourth quarter to 2013’s third, the labor compensation share of national income declined from 64 percent to 61 percent. If this labor share of national income had remained at the 2007 level, American workers would have earned $520 billion more in 2013 than they actually did.
Enter the “plutonomy”: The U.S. economy is revolving ever more around consumption by the rich. In 2012 the top 5 percent of American income earners accounted for 38 percent of domestic consumption, up from 28 percent in 1995.
1 The importance of “permissionless innovation”
The thing that is most extraordinary about the internet is the way it enables permissionless innovation. This stems from two epoch-making design decisions made by its creators in the early 1970s: that there would be no central ownership or control; and that the network would not be optimised for any particular application: all it would do is take in data-packets from an application at one end, and do its best to deliver those packets to their destination.
It was entirely agnostic about the contents of those packets. If you had an idea for an application that could be realised using data-packets (and were smart enough to write the necessary software) then the network would do it for you with no questions asked. This had the effect of dramatically lowering the bar for innovation, and it resulted in an explosion of creativity.
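That packet-level agnosticism is easy to see at the programming level. In the minimal Python sketch below, the address and port are placeholders (192.0.2.1 is a reserved documentation address, not a real service); the point is simply that the network carries whatever bytes the application hands it:

```python
import socket

# The payload is whatever the application decides it means: a web page,
# voice samples, game state. The network never inspects it.
payload = b"any application-defined bytes"

# UDP sends a single datagram with no setup or negotiation.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("192.0.2.1", 9999))  # placeholder documentation address
sock.close()
```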
What the designers of the internet created, in effect, was a global machine for springing surprises. The web was the first really big surprise and it came from an individual – Tim Berners-Lee – who, with a small group of helpers, wrote the necessary software and designed the protocols needed to implement the idea. And then he launched it on the world by putting it on the Cern internet server in 1991, without having to ask anybody’s permission.
2 The web is not the internet
Although many people (including some who should know better) often confuse the two. Neither is Google the internet, nor Facebook the internet. Think of the net as analogous to the tracks and signalling of a railway system, and applications – such as the web, Skype, file-sharing and streaming media – as kinds of traffic which run on that infrastructure. The web is important, but it’s only one of the things that runs on the net.
3 The importance of having a network that is free and open
The internet was created by government and runs on open source software. Nobody “owns” it. Yet on this “free” foundation, colossal enterprises and fortunes have been built – a fact that the neoliberal fanatics who run internet companies often seem to forget. Berners-Lee could have been as rich as Croesus if he had viewed the web as a commercial opportunity. But he didn’t – he persuaded Cern that it should be given to the world as a free resource. So the web in its turn became, like the internet, a platform for permissionless innovation. That’s why a Harvard undergraduate was able to launch Facebook on the back of the web.
4 Many of the things that are built on the web are neither free nor open
Mark Zuckerberg was able to build Facebook because the web was free and open. But he hasn’t returned the compliment: his creation is not a platform from which young innovators can freely spring the next set of surprises. The same holds for most of the others who have built fortunes from exploiting the facilities offered by the web. The only real exception is Wikipedia.
5 Tim Berners-Lee is Gutenberg’s true heir
In 1455, with his revolution in printing, Johannes Gutenberg single-handedly launched a transformation in mankind’s communications environment – a transformation that has shaped human society ever since. Berners-Lee is the first individual since then to have done anything comparable.
6 The web is not a static thing
The web we use today is quite different from the one that appeared 25 years ago. In fact it has been evolving at a furious pace. You can think of this evolution in geological “eras”. Web 1.0 was the read-only, static web that existed until the late 1990s. Web 2.0 is the web of blogging, Web services, mapping, mashups and so on – the web that American commentator David Weinberger describes as “small pieces, loosely joined”. The outlines of web 3.0 are only just beginning to appear as web applications that can “understand” the content of web pages (the so-called “semantic web”), the web of data (applications that can read, analyse and mine the torrent of data that’s now routinely published on websites), and so on. And after that there will be web 4.0 and so on ad infinitum.
7 Power laws rule OK
In many areas of life, the law of averages applies – most things are statistically distributed in a pattern that looks like a bell. This pattern is called the “normal distribution”. Take human height. Most people are of average height, and there is a relatively small number of very tall and very short people. But very few – if any – online phenomena follow a normal distribution. Instead they follow what statisticians call a power law distribution, which is why a very small number of the billions of websites in the world attract the overwhelming bulk of the traffic while the long tail of other websites has very little.
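To make the contrast concrete, here is a small simulation in Python. It is a toy model rather than real traffic data: 100,000 visits are spread over 1,000 hypothetical sites using Zipf-style 1/rank weights, a simple form of power law.

```python
import random
from collections import Counter

random.seed(1)
num_sites, num_visits = 1000, 100_000

# Zipf-style power law: the site at rank k gets weight 1/k,
# so a handful of top-ranked sites dominate.
weights = [1 / k for k in range(1, num_sites + 1)]
visits = Counter(random.choices(range(num_sites), weights=weights, k=num_visits))

top_10_share = sum(n for _, n in visits.most_common(10)) / num_visits
print(f"Top 10 of {num_sites} sites capture {top_10_share:.0%} of all visits")
# Under an even, bell-curve spread, 10 sites would expect roughly 1% of visits.
```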
8 The web is now dominated by corporations
Despite the fact that anybody can launch a website, the vast majority of the top 100 websites are run by corporations. The only real exception is Wikipedia.
9 Web dominance gives companies awesome (and unregulated) powers
Take Google, the dominant search engine. If a Google search doesn’t find your site, then in effect you don’t exist. And this will get worse as more of the world’s business moves online. Every so often, Google tweaks its search algorithms in order to thwart those who are trying to “game” them in what’s called search engine optimisation. Every time Google rolls out the new tweaks, however, entrepreneurs and organisations find that their online business or service suffers or disappears altogether. And there’s no real comeback for them.
10 The web has become a memory prosthesis for the world
Have you noticed how you no longer try to remember some things because you know that if you need to retrieve them you can do so just by Googling?
11 The web shows the power of networking
The web is based on the idea of “hypertext” – documents in which some terms are dynamically linked to other documents. But Berners-Lee didn’t invent hypertext – Ted Nelson did in 1963 and there were lots of hypertext systems in existence long before Berners-Lee started thinking about the web. But the existing systems all worked by interlinking documents on the same computer. The twist that Berners-Lee added was to use the internet to link documents that could be stored anywhere. And that was what made the difference.
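Berners-Lee’s twist is visible in the anatomy of a link target. A pre-web hypertext link could only name a document on the same machine; a URL also names which machine, anywhere on the internet. A small illustration using Python’s standard library (the URL is the address of the first web page, used here only as an example):

```python
from urllib.parse import urlparse

# A URL bundles the network location together with the document path,
# which is what let hypertext escape the single-computer limitation.
link = urlparse("http://info.cern.ch/hypertext/WWW/TheProject.html")

print(link.scheme)  # 'http' -> how to talk to the host
print(link.netloc)  # 'info.cern.ch' -> which machine on the internet
print(link.path)    # '/hypertext/WWW/TheProject.html' -> which document there
```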
12 The web has unleashed a wave of human creativity
Before the web, “ordinary” people could publish their ideas and creations only if they could persuade media gatekeepers (editors, publishers, broadcasters) to give them prominence. But the web has given people a global publishing platform for their writing (Blogger, WordPress, Typepad, Tumblr), photographs (Flickr, Picasa, Facebook), audio and video (YouTube, Vimeo); and people have leapt at the opportunity.
13 The web should have been a read-write medium from the beginning
Berners-Lee’s original desire was for a web that would enable people not only to publish, but also to modify, web pages, but in the end practical considerations led to the compromise of a read-only web. Anybody could publish, but only the authors or owners of web pages could modify them. This led to the evolution of the web in a particular direction and it was probably the factor that guaranteed that corporations would in the end become dominant.
14 The web would be much more useful if web pages were machine-understandable
Web pages are, by definition, machine-readable. But machines can’t understand what they “read” because they can’t do semantics. So they can’t easily determine whether the word “Casablanca” refers to a city or to a movie. Berners-Lee’s proposal for the “semantic web” – ie a way of restructuring web pages to make it easier for computers to distinguish between, say, Casablanca the city and Casablanca the movie – is one approach, but it would require a lot of work upfront and is unlikely to happen on a large scale. What may be more useful are increasingly powerful machine-learning techniques that will make computers better at understanding context.
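The Casablanca problem can be made concrete with a toy example. The structure below is an illustrative stand-in for semantic-web markup, not actual RDF or schema.org syntax; the point is only that explicit types turn an ambiguous string into something software can reason about:

```python
# To a machine, the bare string is ambiguous: city or movie?
mention = "Casablanca"

# Minimal typed records, standing in for semantic-web-style markup.
records = [
    {"name": "Casablanca", "type": "City", "country": "Morocco"},
    {"name": "Casablanca", "type": "Movie", "released": 1942},
]

# With explicit types attached, disambiguation is a trivial filter.
cities = [r for r in records if r["name"] == mention and r["type"] == "City"]
print(cities)
```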
15 The importance of killer apps
A killer application is one that makes the adoption of a technology a no-brainer. The spreadsheet was the killer app for the first Apple computer. Email was the first killer app for the Arpanet – the internet’s precursor. The web was the internet’s first killer app. Before the web – and especially before the first graphical browser, Mosaic, appeared in 1993 – almost nobody knew or cared about the internet (which had been running since 1983). But after the web appeared, suddenly people “got” it, and the rest is history.
16 WWW is linguistically unique
Well, perhaps not, but Douglas Adams claimed that it was the only set of initials that took longer to say than the thing it was supposed to represent.
17 The web is a startling illustration of the power of software
Software is pure “thought stuff”. You have an idea; you write some instructions in a special language (a computer program); and then you feed it to a machine that obeys your instructions to the letter. It’s a kind of secular magic. Berners-Lee had an idea; he wrote the code; he put it on the net, and the network did the rest. And in the process he changed the world.
18 The web needs a micro-payment system
In addition to being just a read-only system, the other initial drawback of the web was that it did not have a mechanism for rewarding people who published on it. That was because no efficient online payment system existed for securely processing very small transactions at large volumes. (Credit-card systems are too expensive and clumsy for small transactions.) But the absence of a micro-payment system led to the evolution of the web in a dysfunctional way: companies offered “free” services that had a hidden and undeclared cost, namely the exploitation of the personal data of users. This led to the grossly tilted playing field that we have today, in which online companies get users to do most of the work while only the companies reap the financial rewards.
19 We thought that the HTTPS protocol would make the web secure. We were wrong
HTTP is the protocol (agreed set of conventions) that normally regulates conversations between your web browser and a web server. But it’s insecure because anybody monitoring the interaction can read it. HTTPS (stands for HTTP Secure) was developed to encrypt in-transit interactions containing sensitive data (eg your credit card details). The Snowden revelations about US National Security Agency surveillance suggest that the agency may have deliberately weakened this and other key internet protocols.
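The practical difference shows up even in a few lines of Python (example.org is used purely as a stand-in host): the HTTPS request is encrypted in transit and the server must present a certificate that checks out before any data flows, while the HTTP request crosses the network as readable text.

```python
import urllib.request

# Same page, two transports. Anyone on the network path can read the
# plain-HTTP exchange; the HTTPS one is encrypted, and Python verifies
# the server's certificate by default before data is exchanged.
plain = urllib.request.urlopen("http://example.org/")
secure = urllib.request.urlopen("https://example.org/")

print(plain.status, secure.status)  # both 200, but only one was private in transit
```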
20 The web has an impact on the environment. We just don’t know how big it is
The web is largely powered by huge server farms located all over the world that need large quantities of electricity for computers and cooling. (Not to mention the carbon footprint and natural resource costs of the construction of these installations.) Nobody really knows what the overall environmental impact of the web is, but it’s definitely non-trivial. A couple of years ago, Google claimed that its carbon footprint was on a par with that of Laos or the United Nations. The company now claims that each of its users is responsible for about eight grams of carbon dioxide emissions every day. Facebook claims that, despite its users’ more intensive engagement with the service, it has a significantly lower carbon footprint than Google.
21 The web that we see is just the tip of an iceberg
The web is huge – nobody knows how big it is, but what we do know is that the part of it that is reached and indexed by search engines is just the surface. Most of the web is buried deep down – in dynamically generated web pages, pages that are not linked to by other pages and sites that require logins – which are not reached by these engines. Most experts think that this deep (hidden) web is several orders of magnitude larger than the 2.3 billion pages that we can see.
22 Tim Berners-Lee’s boss was the first of many people who didn’t get it initially
Berners-Lee’s manager at Cern scribbled “vague but interesting” on the first proposal Berners-Lee submitted to him. Most people confronted with something that is totally new probably react the same way.
23 The web has been the fastest-growing communication medium of all time
One measure is how long a medium takes to reach the first 50 million users. It took broadcast radio 38 years and television 13 years. The web got there in four.
24 Web users are ruthless readers
The average page visit lasts less than a minute. The first 10 seconds are critical for users’ decision to stay or leave. The probability of their leaving is very high during these seconds. They’re still highly likely to leave during the next 20 seconds. It’s only after they have stayed on a page for about 30 seconds that the chances improve that they will finish it.
25 Is the web making us stupid?
Writers like Nick Carr are convinced that it is. He thinks that fewer people engage in contemplative activities because the web distracts them so much. “With the exception of alphabets and number systems,” he writes, “the net may well be the single most powerful mind-altering technology that has ever come into general use.” But technology giveth and technology taketh away. For every techno-pessimist like Carr, there are thinkers like Clay Shirky, Jeff Jarvis, Yochai Benkler, Don Tapscott and many others (including me) who think that the benefits far outweigh the costs.
This story was done in collaboration with VICE News
CRANBURY, N.J. – Half a century ago, the legendary journalist Edward R. Murrow came to this pancake-flat town in central New Jersey to document the plight of migrant farmworkers for a television special called “Harvest of Shame.”
Today, many of Cranbury’s potato fields have been built up with giant warehouses that form a distribution hub off Exit 8A of the Jersey Turnpike.
But amid this 21st century system of commerce, an old way of labor persists. Temporary workers make a daily migration on buses to this area, just as farmworkers did for every harvest in the 1960s. Temp workers today face many similar conditions in how they get hired, how they live and what they can afford to eat. Adjusted for inflation, many of today’s temp workers earn roughly the same amount as those farmworkers did 50 years ago.
Across the country, farms full of migrant workers have been replaced with warehouses full of temp workers, as American consumers depend more on foreign products, online shopping and just-in-time delivery. It is a story that begins at the ports of Los Angeles and Newark, N.J., follows the railroads to Chicago and ends at your neighborhood box store, or your doorstep.
The temp industry now employs 2.8 million workers – the highest number and highest proportion of the American workforce in history. As the economy continues to recover from the Great Recession, temp work has grown nine times faster than private-sector employment as a whole. Overall, nearly one-sixth of the total job growth since the recession ended has been in the temp sector.
Many temps work for months or years packing and assembling products for some of the world’s largest companies, including Walmart, Amazon and Nestlé. They make our frozen pizzas, cut our vegetables and sort the recycling from our trash. They unload clothing and toys made overseas and pack them to fill our store shelves.
The temp system insulates companies from workers’ compensation claims, unemployment taxes, union drives and the duty to ensure that their workers are citizens or legal immigrants. In turn, temp workers suffer high injury rates, wait unpaid for work to begin and face fees that depress their pay below minimum wage.
Temp agencies consistently rank among the worst large industries for the rate of wage and hour violations, according to a ProPublica analysis of federal enforcement data.
A ProPublica analysis of millions of workers’ comp claims found that in five states, representing more than a fifth of the U.S. population, temps face a significantly greater risk of getting injured on the job than permanent employees. In Florida, for example, temps were about twice as likely as regular employees to suffer crushing injuries, dislocations, lacerations, fractures and punctures. They were about three times as likely to suffer an amputation on the job in Florida and the three other states for which records were available.
The disparity was even worse when we looked just at dangerous occupations, such as manufacturing, construction and warehousing. In Florida, temps in blue-collar workplaces were about six times as likely to be injured as permanent employees doing similar jobs.
Day Davis, 21, was crushed by a machine at a Bacardi bottling plant barely 90 minutes into the first day on the first job of his life. Samir Storey, 39, suffocated from hydrogen sulfide gas on his first day when he was assigned to clean the inside of a tank at a paper mill. Mark Jefferson, 47, died after collapsing from heat stroke after a long day on a garbage route.
Here too, the plight of the lowest level workers has changed little. The workers who reaped the nation’s fruit and vegetables also passed out from working in the heat or became sick from pesticides such as DDT.
In “Harvest of Shame,” two Florida towns – Belle Glade and Immokalee – became symbolic of the plight of farm labor. Today, researchers have identified “temp towns,” such as New Brunswick, N.J., and Little Village in Chicago, Ill. “Temp towns” are often densely populated Latino neighborhoods teeming with temp agencies. Or they are cities where it has become nearly impossible for anyone, even for whites and African-Americans with vocational training, to find blue-collar work without going through a temp firm.
New Jersey has five of the counties – Middlesex, Passaic, Burlington, Camden and Union – with the highest concentration of temp workers in the country.
Lou Kimmel, an organizer for New Labor, a workers advocacy center in New Brunswick, said that when he first started working there, the founder used to say, “We’re all farmworkers in a way.”
For temp workers today, he said, “A lot of the conditions are the same: Low wages, wage theft, unsafe conditions, working with chemicals with no respect and dignity, and no organized effort to try to fight back.”
Murrow opened his documentary with the scene of a “shape up,” in which labor contractors hawk available jobs. Temp agencies today use a similar system that researchers have called a “modern-day shape up.” Temp workers stand on street corners or arrive at agency hiring halls as early as 4 a.m. so the agency’s dispatchers can round up enough to fill an order. In New Brunswick, one agency operated for a while out of a neon-lit beauty salon.
One morning last year, in Little Village, Chicago, workers lined up in an alleyway behind a dentist clinic and a shop selling quinceañera dresses. They knew little of where they were going to work, except that everyone called it los peluches – Spanish for stuffed animals – and that a guy named “Rigo” told them there was work. After following the bus, I discovered the warehouse was run by Ty Inc., one of the largest makers of stuffed animals in the world.
Locations of Temp Workers
Among counties with more than 100,000 workers in 2012, these had the highest concentrations of temporary help service workers. Overall, 2.2 percent of private-sector workers were temps in 2012.
Greenville County, S.C.
Kane County, Ill.
Kent County, Mich.
Middlesex County, N.J.
Shelby County, Tenn.
Lake County, Ill.
Passaic County, N.J.
San Bernardino County, Calif.
Fayette County, Ky.
Burlington County, N.J.
Fulton County, Ga.
Source: ProPublica analysis of Bureau of Labor Statistics’ Quarterly Census of Employment and Wages data; Updated July 1, 2013
Rigo, whose full name is Rigoberto Aguilar, was what’s known in Little Village as a raitero, a Spanglish invention that roughly means “a person who gives rides.” But raiteros do more than that, essentially serving as immigrant labor brokers for the temp agencies. They recruit the workers, often charging them to apply for the job; then round up the workers in the predawn hours, charging them for the obligatory ride to the warehouse or factory. At the end of the week, the raiteros pick up the workers’ paychecks from the agencies and bring them to check-cashing stores, where workers are charged to cash them. If they don’t have the money for the ride, dozens of workers said, they don’t get their paychecks.
In “Harvest of Shame,” the farm laborers had similar brokers known as “crew leaders,” who skimmed money from workers’ wages.
After ProPublica published a story on the raiteros in Chicago, some temp agencies there stopped using them and started providing free transportation for the workers. Many agencies stopped giving the paychecks to the raiteros — although others continue to operate as they have for years. The Illinois and federal labor departments have launched a joint initiative to investigate issues temp workers face on the job, and have since opened investigations into three temp agencies for issues involving the transportation of workers.
Raiteros, however, are barely better off than temp workers. When I knocked on Aguilar’s door one Friday night, he was holding his infant son. He was renting an apartment not much bigger than what the workers have, with peeling paint and mold in the bathroom. He spoke of his own struggles to make ends meet. At one point, his adult son Victor grew angry as we talked about how the temp agency deals with his father.
“They don’t want to pay him,” his son said. “They have all the people come here. They don’t care. Screw you. You take the people. You give them the ride and you charge the fee. We don’t want to have anything to do with you.”
Here again, the past mirrors the present. Officials at temp agencies and the third-party warehouses told me they are squeezed by the retailers and big-name brands at the top of the supply chain. When workers are killed or don’t receive their pay, those companies deny knowledge or responsibility, directing blame at the temp agencies at the bottom.
Such was the case with migrant farmworkers. A farmer told Murrow’s correspondent that he was “trapped between what society expects and his market demands.” He, too, pointed to the supermarket chains at the top for demanding a price that didn’t allow him to improve the poor working conditions.
In “Harvest of Shame,” the farmworkers traveled in buses and packed into the backs of trucks. Today, temp workers travel in buses and pack into vans. Workers say the drivers sometimes carry 22 people in a 15-passenger van.
They sit on the wheel wells, in the trunk space, or on milk crates. Female workers complain that they are forced to sit on the laps of men they do not know. Sometimes, workers must lie on the floor, the other passengers’ feet on top of them.
As before, the products change by the season. But now, instead of picking strawberries, tomatoes and corn, the temp workers pack chocolates for Valentine’s Day, barbecue grills for Memorial Day, turkey pans for Thanksgiving, and clothing and toys for Christmas.
Back then, the farms provided housing, often shacks with shoddy bunk beds. Temp workers rent rooms in rundown houses, sometimes in a basement or attic with not much space other than for a bed. It is not uncommon to find a different family in every room. Rosa Ramirez, a 50-year-old temp worker, rents the living room of an old boarding house in Elgin, Ill. There is a cheap mattress on the floor, and a sheet blocks the French doors that separate her room from the hallway. A trap by her door guards against the rats that have woken her up at night.
“Harvest of Shame” reported that migrant farmworkers in 1960 worked 136 days of the year and earned just $900 – $7,087 in today’s dollars. Many temp workers struggle to find steady work. In 2010, a good year in which she wasn’t out for an injury, Ramirez earned $6,549, according to tax forms she provided.
One of the most memorable scenes in “Harvest of Shame” comes when the correspondent asks the mother of a migrant family, “What is an average dinner for the family?” Surrounded by her children, she replies, “Well, I cook a pot of beans and fry some potatoes.”
Remembering this scene, I began asking the temp workers I met the same question. A conversation with one Chicago man, whose family shares an attic with another family, underscored how very little things have changed. “Frijoles y algunas papitas,” he said: beans and a few potatoes.
It’s hard to pinpoint the exact moment that San Francisco morphed into bizarro-world New York, when it went from being the city’s dorky, behoodied West Coast cousin to being, in many ways, more New York–ish than New York itself—its wealth more impressive, its infatuation with power and status more blinding. Maybe it was this past November, when New York elected a tax-the-rich progressive as mayor and, two days later, Twitter, a company that had been courted by San Francisco politicians with a Bloombergian combination of municipal tax breaks and mayoral flattery, went public at around a $25 billion valuation. Maybe it was when, after the crash, bonus-starved Wall Street bankers started quitting their jobs and heading to the Bay Area in droves to join the start-up gold rush. Or maybe it was when San Francisco became the new American capital of real-estate kvetching, thanks to supra-Manhattan rents and gentrification at a pace that would make Bushwick blush.
For me, the epiphany came in December, when I attended a party at a seven-story San Francisco townhouse. The house—used as an office and party pad by a young entrepreneur who had sold his start-up for millions a few years earlier—was the kind of bachelor pad Richie Rich might have set up for himself, had he been 23 and a Burning Man regular. The walls were covered in inspirational phrases (FOLLOW YOUR HEART, HOLISTIC MINDFULNESS & WELLNESS), and the party was centered on a split-level pool and hot tub that took up the entire middle section of the house. Five inflatable killer whales floated idly in the water. A bearded man was giving out back massages at water’s edge using a pair of repurposed automotive buffers, one in each hand. And loaner swimsuits—washed between wearings, we were assured—were provided for all.
As the hours ticked on and the booze kicked in, some shed their Louboutin heels and jumped in the pool; others marinated in the hot tub and told start-up war stories. It was the kind of bash you’d have found in East Hampton circa 2006, or West Egg circa 1922. And as if to cement San Francisco’s newfound place at the center of a certain social universe, the person greeting newcomers at the door was Julia Allison, the notorious glam blogger, whose smile had dotted the New York party scene just a few years earlier.
It’s no secret that New York is having a bit of an identity crisis these days. Wall Street lost its swagger during the crash and hasn’t gotten it back despite the market’s broader recovery. Big banks are adding employees in Bangalore and Salt Lake City while cutting them in Manhattan. New York City’s budget wonks expect the city to add only 67,000 jobs this year, a sluggish number that faster-growing cities like Denver and Austin will look upon with pity. The city’s culture seems to be changing, too: Greenpoint and “normcore” are in, stilettos and pinstripes are out; junior bankers now get Saturdays off; “work-life balance” is no longer a euphemism for sloth.
Meanwhile, certain pockets of San Francisco have become the sort of gilded playground that New York once was. Brand-new Teslas with vanity plates like DISRUPTD drift down the streets of the Mission District, where pawnshops and porn stores used to be. Paper millionaires spend their nights at the Battery, a members-only club with a tech-heavy roster and a $10,000-per-night penthouse suite. Upscale restaurants pop up at regular intervals, each with a more elite clientele and a more Portlandia-esque menu—everything from the $4 artisanal toast that sparked a citywide craze to the underground supper clubs serving kombucha pairings with sustainable-seafood dinners. Finding an affordable apartment in the city has become, as one tech worker lamented to me recently, “a Hunger Games scenario.”
In many ways, San Francisco is the nation’s new success theater. It’s the city where dreamers go to prove themselves—the place where just being able to afford a normal life serves as an indicator of pluck and ability. I had lunch the other day with a Harvard Business School student who belonged to a 90-person section, of whom 12 were start-up entrepreneurs. You can imagine the whole dozen packing their bags for the West Coast after collecting their M.B.A.’s, thinking: If I can make it there, I’ll make it anywhere.
Which isn’t to say that San Francisco has pulled off this transition effortlessly. The city still has its lefty legacy, after all, and as the tech sector has grown into an economic powerhouse, so has resentment toward its elites. Protesters, angry about Silicon Valley’s effect on the local economy, are blockading tech-employee shuttles in the streets; in Oakland last year, a Google bus had its window shattered by a rock. San Francisco Mayor Ed Lee, long suspected of being in the tech industry’s pocket, is accused of not doing enough to help the working class cope with rising costs and widening inequality. Although most right-thinking one-percenters cringed when venture capitalist Tom Perkins compared the treatment of the rich in San Francisco to the treatment of Jews by Nazis on Kristallnacht, the hostility he felt is real. Silicon Valley is exploding, as Wall Street did in the 1980s, as Detroit did in the 1940s. And as in those booms, not everyone is going along for the ride.
Of course, San Francisco won’t truly become New York, and not just because New York’s economy is nearly twice as big as the country’s next biggest (that’s L.A.’s, not San Francisco’s, which ranks eighth). San Francisco is too earnest, too eager to be liked, to truly wallow in its wealth like Bloomberg’s New York. (If Martin Scorsese had made The Wolf of Silicon Valley, it would have been two hours of Leonardo DiCaprio apologizing for spilling the Dom Pérignon.) The utopian streak of the tech sector paints a thick veneer of do-gooderism over even the rawest capitalistic conquests, and coupled with a desire to appease the locals, it’s what keeps San Francisco’s ruling class from really letting go.
My New York friends tend to brush off what’s happening in San Francisco with one word: bubble. After all, people flocked to Silicon Valley in 1999, they say, only to be flung back to New York when the start-up scene burst. But what if this tech bubble doesn’t end in sock puppets and Schadenfreude? What if, as MIT professors Erik Brynjolfsson and Andrew McAfee recently wrote, we’re not just dealing with a temporary tech craze but the dawn of a “second machine age” that will fundamentally realign the entire global economy? And what if most of the technology that powers that revolution is made in California?
Whatever the Silicon Valley gold rush has done or will do, it’s already given us an entirely new species of yuppie mogul: the one who stockpiles bitcoin and speaks in hacker pidgin, the one who wears Uniqlo on a Gulfstream and obsesses over single-origin coffees. The kind, in other words, who plays the underdog even while sitting on top of the world.
Over the past several days, it has emerged that the Central Intelligence Agency (CIA) has been illegally spying on the US Senate Intelligence Committee—the very legislative body that is charged with overseeing and regulating the agency—in flagrant violation of the law and the constitutional separation of powers.
Among the basic conceptions of the revolutionaries who created the American system of government was the conviction that the natural trajectory of government, left unchecked, was toward executive tyranny. To combat this tendency, the Founders designed a system in which state power was divided among separate branches of government. The separate branches, under a system of “checks and balances,” were meant to limit the powers of the other branches. Legislative oversight of federal agencies, including intelligence agencies, is one historical outgrowth of this conception.
The revelations of CIA spying on Congress underscore the fact that America is run by an unelected, unaccountable military/intelligence apparatus. It is this apparatus, in conjunction with the corporate-financial elite, that dictates official policy in Washington, irrespective of which political party is in power.
The outlook of those who run this apparatus is one of utter impunity and contempt for basic democratic principles. In the day-to-day activities of the intelligence agencies—spying, conspiracy, infiltration, subversion, torture, assassination—the limitations imposed by the Constitution, the Bill of Rights and existing law are seen merely as impediments to be evaded or overridden. This contempt for democratic rights is consistent with a political system that as a whole carries out its reactionary foreign and domestic policies over the heads of the population, which overwhelmingly opposes them.
The CIA spying scandal has its origins in a Senate investigation into the CIA system of abductions (“renditions”), secret prisons (“black sites”) and torture dating from the period following September 11, 2001. It goes without saying that these CIA practices were and remain completely illegal, violating both US and international law.
To date, under the Obama administration’s slogan of “looking forwards not backwards,” not one individual involved has been criminally prosecuted or otherwise held accountable. Instead, the Obama administration has threatened to prosecute (and has actually prosecuted) anyone from within the agency who publicly revealed the agency’s activities. The Senate Intelligence Committee has produced, but not publicly released, a 6,300-page report documenting these crimes.
The CIA lied to the Senate Intelligence Committee in an effort to cover up its activities. In September of last year, CIA Director John Brennan, appointed by President Obama, filed a 122-page answer to the committee’s report that purported to rebut the committee’s findings. This answer was later exposed as a fraud when the Senate committee obtained a document reviewing CIA practices prepared for Brennan’s predecessor, Leon Panetta.
While the CIA granted the Senate Intelligence Committee restricted access to certain documents, requiring Senate staff to physically attend a facility set up by the CIA for that purpose, the CIA had attempted to conceal the Panetta document from the investigation. Senator Mark Udall, a member of the committee, said the Panetta document was “consistent with the [Senate] Intelligence Committee’s report” and “conflicts with the official CIA response to the committee’s report.”
Finally, having committed these crimes and then lied about them, the CIA retaliated against the Senate Intelligence Committee staffers who viewed the Panetta document by spying on them and monitoring their computers.
The revelations of executive spying on Congress bring to mind the Watergate scandal of 1972-74, which involved the Nixon administration’s illegal attempts to spy on and discredit political opponents. In the wake of that scandal, no less than 43 people were prosecuted, convicted and jailed, while Nixon himself was forced to resign in the face of near-certain impeachment and removal from office by Congress.
A far different response greets the exposure of executive criminality 40 years later. The media has expressed indifference to the story and the episode has thus far generated no significant response from anywhere in the political establishment.
The Senate Intelligence Committee did refer the CIA’s actions to the Department of Justice for possible criminal prosecution. The CIA’s provocative response was to demand that instead of prosecuting the CIA, the Department of Justice prosecute the Senate staffers for allegedly gaining “unauthorized access” to “classified” material.
In recent weeks, the American political establishment supported an armed coup in Ukraine by a coalition of far-right and fascist forces, recklessly bringing the world to the brink of a nuclear conflict between the United States and Russia. One of the US-backed parties, Svoboda, has called for the summary execution of all “Russian-speaking intellectuals” and all “members of the anti-Ukrainian political parties,” while publicly denouncing Jews as enemies of the Ukrainian people. A government that forges such alliances abroad is perfectly capable of developing similar forces at home.
The “military-industrial complex” against which Eisenhower warned in 1961 has massively increased its size and power. In numbers, resources, wealth, connections and influence, the 21st century American military/intelligence/corporate-financial complex dwarfs anything Eisenhower could have imagined. Congress is subservient and impotent before it, and the president functions largely as its public relations representative and functionary.
A major factor in the ever more reckless and aggressive foreign policy of the United States is the unprecedented scale of social inequality within the country and the explosive social conditions that it produces. One motivation behind repeated military interventions is the desire to divert social and political opposition outward. At the same time, the rise of an unimaginably rich oligarchy at one pole of society and ever-greater misery and poverty at the other pole is incompatible with democratic forms of rule.
The American ruling class is terrified above all that a movement will develop in the working class against capitalism. The growing list of police state measures—NSA spying, drone assassinations, internment without trial, renditions—is directed against popular opposition.
US Supreme Court Justice Antonin Scalia recently told an audience in Hawaii that “you are kidding yourself if you think” there will not be mass internment in the United States along the lines of the internment of Japanese-Americans during the Second World War. Doubtless, “classified” lists of “enemies of the state” have already been drawn up. While American political functionaries mouth empty phrases about “freedom and democracy,” their open support for out-and-out fascists in the Ukraine indicates where they really stand.

The defense of basic democratic rights necessitates that the military/intelligence complex be permanently broken up and abolished. All of the intelligence agencies must be disbanded, all of their “classified” files must be published, and all of the poison fruit of their illegal spying operations must be destroyed. In order to accomplish these necessary tasks, a confrontation with the capitalist system that has produced this complex cannot be avoided.
The world capitalist crisis has generated untenable levels of social inequality worldwide and in the United States in particular. Social inequality drives the collapse of democracy and the turn towards a police state, together with the bloody and provocative expansion of American militarism abroad. The only means of halting and reversing these processes—which lead inevitably to totalitarianism, mass poverty and world war—is the independent mobilization of the working class on the basis of a socialist program.
Technology promises to improve people’s quality of life, and what could be a better example of that than sending robots instead of humans into dangerous situations? Robots can help conduct research in deep oceans and harsh climates, or deliver food and medical supplies to disaster areas.
As the science advances, it’s becoming increasingly possible to dispatch robots into war zones alongside or instead of human soldiers. Several military powers, including the United States, the United Kingdom, Israel and China, are already using partially autonomous weapons in combat and are almost certainly pursuing other advances in private, according to experts.
The phrase “killer robot,” as a coalition of international human rights groups has dubbed the autonomous machines, conjures images of a humanoid, Terminator-style android. The humanoid robots Google recently bought are neat, but most machines being used or tested by national militaries are, for now, more like robotic weapons than robotic soldiers. Still, the line between useful weapons with some automated features and robot soldiers ready to kill can be disturbingly blurry.
Whatever else they do, robots that kill raise moral questions far more complicated than those posed by probes or delivery vehicles. Their use in war would likely save lives in the short run, but many worry that they would also result in more armed conflicts and erode the rules of war — and that’s not even considering what would happen if the robots malfunctioned or were hacked.
Seeing a slippery slope ahead, human rights groups began lobbying last year for lethal robots to be added to the list of prohibited weapons that includes chemical weapons. And the U.N., driven in part by a 2013 report by Special Rapporteur Christof Heyns, has set a meeting in May for nations to explore that and other limits on the technology.
“Robots should not have the power of life and death over human beings,” Heyns wrote in the report.
There’s no doubt that major military powers are moving aggressively into automation. Late last year, Gen. Robert Cone, head of the U.S. Army’s Training and Doctrine Command, suggested that up to a quarter of the service’s boots on the ground could be replaced by smarter and leaner weaponry. In January, the Army successfully tested a robotic self-driving convoy that would reduce the number of personnel exposed to roadside explosives in war zones like Iraq and Afghanistan.
According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.
The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.
And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.
The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”
No one knows for sure what other technologies may be in development.
“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.
At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently passed a non-binding resolution calling on member states not to use or develop robots that can kill without human participation.
Yet, even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their user experience made it easier to accept than reject automated targeting.
What if, for example, a robot tasked with destroying an unmanned military installation instead destroyed a school? Robotic sensing technology can only barely identify big, obvious targets in clutter-free environments. For that reason, the open ocean is the first place robots are firing on targets. In more cluttered environments like the cities where most recent wars have been fought, the sensing becomes less accurate.
The U.S. Department of Defense directive, which insists that humans make kill decisions, nonetheless addresses the risk of “unintended engagements,” as a spokesman put it in an email interview with Singularity Hub.
Sensing and artificial intelligence technologies are sure to improve, but there are some risks that military robot operators may never be able to eliminate.
Some issues are the same ones that plague the adoption of any radically new technology: the chance of hacking, for instance, or the legal question of who’s responsible if a war robot malfunctions and kills civilians.
“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.
Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.
“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.
For instance, when two sellers recently unknowingly competed for business on Amazon, the interactions of their two algorithms resulted in prices in the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
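The escalation is easy to reproduce. Below is a minimal sketch in Python, with hypothetical multipliers standing in for the two sellers’ actual, unpublished repricing rules; all that matters is that the loop has no upper bound:

```python
# Toy model of two repricing bots locked in a feedback loop.
# The multipliers are hypothetical stand-ins for the sellers'
# unpublished rules; the failure mode is that neither bot
# checks its output against any sanity bound.

price_a = 20.00  # bot A undercuts bot B slightly each round
price_b = 20.00  # bot B prices above bot A, banking on reputation

for cycle in range(1, 51):
    price_a = 0.998 * price_b
    price_b = 1.270 * price_a
    print(f"cycle {cycle:2d}: A=${price_a:,.2f}  B=${price_b:,.2f}")

# Each cycle multiplies both prices by about 0.998 * 1.270 = 1.268,
# so growth is exponential: by cycle 50 both prices top $2 million.
```

An unbounded multiplicative loop is an absurdity when the numbers are book prices; it is something far worse when, as in Sharkey’s scenario, the escalating outputs are military responses.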
An even likelier outcome would be that human enemies would target the weaknesses of the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.
Or consider a real scenario. The U.S. sometimes programs its semi-autonomous drones to locate a terrorist based on his cell phone SIM card. The terrorists, knowing that, often offload used SIM cards to unwitting civilians. Would an autonomous killing machine be able to plan for such deception? Even if robots plan for particular deceptions, the history of the web suggests that terrorists could find others.
Of course, most technologies stumble at first and many turn out okay. The militaries developing war-fighting robots are assuming this model and starting with limited functions and use cases. But they are almost certainly working toward exploring disruptive options, if only to keep up with their enemies.
Sharkey argues that, given the lack of any clear delineation between limited automation and killer robots, a hard ban on robots capable of making kill decisions is the only way to ensure that machines never have the power of life and death over human beings.
“Once you’ve put in billions of dollars of investment, you’ve got to use these things,” he said.
Few expect the U.N. meeting this spring to result in an outright ban, but it will begin to lay the groundwork for the role robots will play in war.
Editor’s note: A number of drug dealers were interviewed over the course of reporting this article. Their names have been changed to protect their identities.
Quality cocaine has a sheen to it, like the paint on a lowrider. Shorts, a drug dealer in Albuquerque, flicks a clump with his nail to show me.
“See?” he says. “Like fish scales.” I’m watching him measure out $20 and $40 bags using a digital metal scale and two business cards. He’s bent one card into a trough and uses the other to scoop up the blow. Two lines he has set aside for himself, from “the good shit.” The good shit we just picked up in a diner parking lot, from a kid in a black Honda Civic. No rims, no underbody glow, 4-door, nothing fancy. Maybe in a place more prosperous than Albuquerque, flashy cars would blend right in, but when parking lots, alleys, and gas stations are your office, it’s best to have transport that seems just like everybody else’s. We’re in an old Subaru wagon.
As Rico, another dealer, tells me, “You have to maintain appearances.”
Rico works a full-time job and only deals as much as he can reasonably use or hide. He lives in the same small house he’s lived in for 12 years, in a down-and-out part of Albuquerque that recently began to “yuppify,” as he puts it.
“I’m not trying to be some rich guy. I’m just trying to get money to enjoy myself. Real-world jobs don’t allow people to do that. I think that’s why a lot of people sell drugs,” Rico says.
His “real-world job” pays a few bucks more than minimum wage. He says that it’s just enough to pay bills and occasionally go out. “You don’t make enough money to do anything: Travel, get your car fixed up. Naw.” He explains that when an hour’s work at minimum wage buys you two gallons of gas, and you spend a gallon each day getting to work, the choice becomes pretty clear. “It’s almost like you work to go to work,” Rico says. “I wanted something else.”
Mention “Albuquerque” and “drugs,” and chances are someone will squeal “Breaking Bad!” Walter White’s transformation from a cancer-stricken chemistry teacher to a successful drug lord made great TV, but for most dealers here in Albuquerque, selling will never be so bloody, nor so profitable. They are cogs in a multi-billion dollar industry. (The United Nations estimates that the drug trade generates $600 billion per year. If the drug trade were its own country, this would put its GDP somewhere between Saudi Arabia and Switzerland.)
And, as in any other Fortune 500 company, there is little opportunity for the lower-level employees to rise to the upper echelons. Finn Selander, a former DEA agent, puts it this way: “There is almost zero chance any of these men will end up an Escobar.”
These men don’t belong to cartels or gangs. They’ve never murdered or physically hurt anyone while selling drugs. They don’t keep guns. With the exception of Shorts, they’ve never been arrested. Each of the dealers I spoke with said that they began selling drugs when they realized that there was no way their jobs would allow them to do what they wanted to do.
Selander sees it as a larger societal problem. “Try to raise a family working at McDonald’s or Wal-Mart. Try to buy a house.”
While dealing is not significantly more lucrative (economic researchers report that independent drug dealers make, on average, $20,000 to $30,000 a year), being self-employed offers these men a freedom unavailable to them at a normal job. Working at McDonald’s or Wal-Mart puts them at the mercy of a system that will ruthlessly replace them should they break any of its rules. Drug dealing, they say, allows them to set their own priorities and schedules.
Shorts had three jobs when he first moved to Albuquerque, doing menial work for the city and in restaurants. “I turned to hustlin’ because it was easy. I saw that it was easy,” he says. “I got wore out working all the time and never getting anywhere.”
“I’m not lazy,” he continues. “They call it hustling for a reason!” He cackles. “But I ain’t dumb enough to wear myself out making someone else money.”
* * *
Shorts sold meth for a short time, he told me, but complains that the people he sold to were unable to wait, and liable to do something crazy. He prefers to deal only with professionals — and, he says, the professionals do cocaine.
“I like to sell to the lawyers, the doctors, you know, people who have something to lose.”
The doctors and lawyers come into the bar where I’ve met Shorts, and I watch them from a distance. I hear them talking about the lines they’re doing or have done, about waking up still fucked up, about 36-hour shifts at the hospital. The drugs ease the stress of lost cases and long shifts. The drugs help them keep up or wind down, make them feel pepped up, ready to go.
Or the drugs make them feel adventurous, post-paperwork. Get a few drinks in ‘em, and these Whole Foods shoppers and REI members, these anesthesiologists, marketers, and engineers, start swapping stories about their exploits. Hearing them talk, you might even think the drugs were legal.
Remember that crazy time we took acid in Moab, that full moon on Molly in Thailand, that night you fell asleep in a park after a whole night of cocaine and car bombs in San Francisco? Duuuuude.
And like the iPhones they’re holding that were put together in Chinese factories, and the clothes they bought at Urban Outfitters that were sewn in sweatshops, they don’t think about how that neat baggy of powder, printed with Batman logos or Playboy bunnies, got to them. Got to us.
Because, in all likelihood, they are not the people who will go to jail for painstakingly measuring out the six-tenths of a gram that fills just a corner of these bags. They aren’t the ones who will die transporting the product from South America.
“The cartels run the odds,” Mark, a former APD narcotics officer, told me. “If they send 10 trucks or 10 mules across the border, and four get caught, they still get six across.” The people driving the trucks who end up spending years in jail? The women with baggies in their stomachs that leak? Too bad. They knew the risks.
So the diversions of the more fortunate are supplied by others’ need to survive. Or their desire to not just get by, but prosper. To not worry about how to both buy food and pay rent. To have money to buy a car, a house, to travel.
“Everybody does drugs, but it’s the poor who go to jail for it,” another dealer, named Cruz, told me.
Cruz had grown up broke. At one point, he, his mom and his brother were living on $9,800 a year. “We tried to go through the bank. No financial institutions would lend to us, because we didn’t have repossess-able assets.”
Without the money Cruz made selling drugs, he never could have opened his legal, and so far successful, business. Once he had the money he needed, he stopped selling blow. When I asked him why, he told me, “If you don’t get addicted to the drugs, you get addicted to the money.”
* * *
Over and over I hear this, from dealers and narcotics officers alike: it’s the money. It all comes down to the money. Maybe that’s obvious when speaking of drug trafficking, a profession encrusted in the popular imagination with $100 bills and iridescent ’64 Impalas. Millions of hit albums have bragged, truthfully or not, about Benzes and gold chains bought with drug profits. Prohibition, among other things, is good for business.
“The drug lords don’t want [drugs] legalized,” Selander explains, “because it would reduce their profits.” A 2012 study by the Mexican Institute of Competitiveness concluded that if weed were made legal in just three American states — Oregon, Washington and Colorado — Mexican cartels would lose $4.6 billion.
But it’s not only criminals that profit. Selander points out that law enforcement agencies would lose millions of dollars and thousands of jobs should the drug war end. “Yes, there is a lot of money on the black market. But there is also a lot of money for those agencies working drugs.”
The federal agencies that hold $1.6 billion in seized assets; the local police forces that make millions off confiscated cash, property and cars; the lucrative private prisons fed by drug convictions? All of them stand to lose millions if drugs are made legal.
Rico puts it this way: “They don’t care about anyone on the streets. They care about getting their pocket money.”
According to Mark, a former cop who worked narcotics for 18 years, the Albuquerque Police Department — like many local police departments — counts on asset seizures to increase its budgets. He said that when he was an officer, the emphasis was on property seizures rather than drug seizures, because the department could then use the profits to buy equipment and cars. Such forfeitures happen before the suspect is convicted, meaning that even if they are innocent, they can still lose their property. The average amount seized annually by police departments of APD’s size is worth more than a million dollars, according to a 2007 Bureau of Justice Statistics report.
Another officer — who asked to remain anonymous because he was violating department policy by speaking to me without supervisor approval — told me that he had left narcotics for a different unit because he felt uncomfortable with the tactics. As a devout Christian, the deceit that surrounded undercover investigations seemed wrong to him.
He admits that the war on drugs isn’t effective, and he acknowledges that the current system often targets the poor and powerless, but argues that even if drugs are legalized, people will still commit crimes to get the money to buy drugs.
What, then, would be effective?
He turns and looks out the window. “A lot of the people I saw, they’re living in the slums or down and out, and the only thing they have to make them feel good is that high. You’d have to get them out of that situation — out of being poor, with nowhere to go — to fix it. And that’s maybe harder than getting them off drugs.”
Selander, the former DEA agent, now works with Law Enforcement Against Prohibition, a group composed of former criminal justice professionals who now support legalization. Working as an agent in Miami and then Albuquerque, he was involved in landmark drug cases, including gathering evidence against former Panamanian dictator Manuel Noriega.
He argues that legalization will undermine both corruption and cartel activity, as well as facilitate treatment for the addiction-fueled horrors that drug officers see first-hand.
“The big operations moved to Mexico. It’s a joke. We closed a bunch of mom-and-pop labs — just making enough for themselves, maybe a few friends — now they’re in jail. They’d be better off in rehab.”
But even Rico has his doubts about legalization, though in many ways it would make his life easier. “Yeah, it’d be great if you could just buy whatever you wanted. But even if the government made a bunch of money taxing it, they wouldn’t use that money to help people. They’d use it to fund wars.”
Rico subscribes to the feeds of several purported cartel members on Instagram. We sit on his couch and flip through them. One is, reportedly, the nephew of Joaquín Guzmán Loera, a leader of the Sinaloa Cartel and one of the richest men in the world. There are pictures taken on yachts, pictures of Lamborghinis, a picture with Paris Hilton, but the man’s face is blurred out in all of them. All the pleasures money can buy, but also, always, fear and worry, always the blurred-out face. “I don’t know if I’d want to live that life,” he says. “I’m paranoid as it is. But I’m fascinated by it.”
“Honestly,” Rico tells me, “If I made more money, I wouldn’t do it.”
“I mean, that’s why so many people are willing to run the risk. What’s the point of surviving if you can’t live?”
I haven’t seen “Her,” the Oscar-nominated movie about a man who has an intimate relationship with a Scarlett Johansson-voiced computer operating system. I have, however, read Susan Schneider’s “The Philosophy of ‘Her’,” a post on The Stone blog at the New York Times looking into the possibility, in the pretty near future, of avoiding death by having your brain scanned and uploaded to a computer. Presumably you’d want to Dropbox your brain file (yes, you’ll need to buy more storage) to avoid death by hard-drive crash. But with suitable backups, you, or an electronic version of you, could go on living forever, or at least for a very, very long time, “untethered,” as Ms. Schneider puts it, “from a body that’s inevitably going to die.”
This idea isn’t the loopy brainchild of sci-fi hacks. Researchers at Oxford University have been on the path to human digitization for a while now, and way back in 2008 the Future of Humanity Institute at Oxford released a 130-page technical report entitled Whole Brain Emulation: A Roadmap. Of the dozen or so benefits of whole-brain emulation listed by the authors, Anders Sandberg and Nick Bostrom, one stands out:
If emulation of particular brains is possible and affordable, and if concerns about individual identity can be met, such emulation would enable back-up copies and “digital immortality.”
Scanning brains, the authors write, “may represent a radical new form of human enhancement.”
Hmm. Immortality and radical human enhancement. Is this for real? Yes:
It appears feasible within the foreseeable future to store the full connectivity or even multistate compartment models of all neurons in the brain within the working memory of a large computing system.
Foreseeable future means not in our lifetimes, right? Think again. If you expect to live to 2050 or so, you could face this choice. And your beloved Labrador may be ready for upload by, say, 2030:
A rough conclusion would nevertheless be that if electrophysiological models are enough, full human brain emulations should be possible before mid-century. Animal models of simple mammals would be possible one to two decades before this.
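To get a rough sense of the scale the authors have in mind, here is a back-of-envelope estimate in Python. The neuron and synapse counts are round, commonly cited figures, and the eight bytes per synapse is purely an assumption, not a number taken from the roadmap:

```python
# Back-of-envelope storage estimate for a connectome-level map.
# All figures are rough approximations; bytes_per_synapse is an
# assumption (a neuron index wide enough for ~10^11 neurons plus
# a connection weight), not a value from the Sandberg-Bostrom report.

neurons = 8.6e10             # commonly cited human neuron count
synapses_per_neuron = 1.0e4  # order-of-magnitude figure
bytes_per_synapse = 8        # assumed: ~5-byte index + ~3-byte weight

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes / 1e15:.1f} petabytes")  # prints: 6.9 petabytes
```

A few petabytes is enormous, but it is not science fiction; storage on that order already exists in large computing clusters, which is presumably why the authors feel entitled to the phrase “foreseeable future.” Simulating the brain’s dynamics, as opposed to merely storing its wiring map, is a separate and much harder problem.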
Interacting with your pet via a computer interface (“Hi Spot!”/“Woof!”) wouldn’t be quite the same as rolling around the backyard with him while he slobbers on your face or watching him dash off after a tennis ball you toss into a pond. You might be able to simulate certain aspects of his personality with computer extensions, but the look in his eyes, the cock of his head and the feel and scent of his coat will be hard to reproduce electronically. No longer having to scoop up his messes or feed him heartworm pills probably wouldn’t make up for all these limitations. The electro-pet might also make you miss the real Spot unbearably as you try to recapture his consciousness on your home PC.
But what about you? Does the prospect of uploading your own brain allay your fear of abruptly disappearing from the universe? Is it the next best thing to finding the fountain of youth? Ms. Schneider, a philosophy professor at the University of Connecticut, counsels caution. First, she writes, we might find our identity warped in disturbing ways if we pour our brains into massive digital files. She describes the problem via an imaginary guy named Theodore:
If Theodore were to truly upload his mind (as opposed to merely copy its contents), then he could be downloaded to multiple other computers. Suppose that there are five such downloads: Which one is the real Theodore? It is hard to provide a nonarbitrary answer. Could all of the downloads be Theodore? This seems bizarre: As a rule, physical objects and living things do not occupy multiple locations at once. It is far more likely that none of the downloads are Theodore, and that he did not upload in the first place.
This is why the Oxford futurists included the caveat “if concerns about individual identity can be met.” It is the nightmare of infinitely reproducible individuals — a consequence that would, in an instant, undermine and destroy the very notion of an individual.
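The puzzle has a mundane analogue in software, sketched below in Python with a toy record standing in for a mind. Every “download” comes out equal to the original in content, yet none of them is the original:

```python
import copy

# Five "downloads" of a toy Theodore. Each deep copy matches the
# original in content, but each is a distinct object in memory.

theodore = {"name": "Theodore", "memories": ["the ocean", "chocolate"]}
downloads = [copy.deepcopy(theodore) for _ in range(5)]

print(all(d == theodore for d in downloads))  # True:  same contents
print(any(d is theodore for d in downloads))  # False: none IS the original
```

Content-equality is trivial to verify; identity simply does not transfer with it, and there is no nonarbitrary way to crown one of five equal copies the real one.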
But Ms. Schneider does not come close to appreciating the extent of the moral failure of brain uploads. She is right to observe an apparent “categorical divide between humans and programs.” Human beings, she writes, “cannot upload themselves to the digital universe; they can upload only copies of themselves — copies that may themselves be conscious beings.” The error here is screamingly obvious: brains are parts of us, but they are not “us.” A brain contains the seed of consciousness, and it is both the bank for our memories and the fount of our rationality and our capacity for language, but a brain without a body is fundamentally different from the human being that possessed both.
It sounds deeply claustrophobic to be housed (imprisoned?) forever in a microchip, unable to dive into the ocean, taste chocolate or run your hands through your loved one’s hair. Our participation in these and infinite other emotive and experiential moments is the bulk of what constitutes our lives, or at least our meaningful lives. Residing forever in the realm of pure thought and memory and discourse doesn’t sound like life, even if it is consciousness. Especially if it is consciousness.
So I cannot agree with Ms. Schneider’s conclusion when she writes that brain uploads may be choiceworthy for the benefits they can bring to our species or for the solace they provide to dying individuals who “wish to leave a copy of [themselves] to communicate with [their] children or complete projects that [they] care about.” It may be natural, given the increasingly virtual lives many of us live in this pervasively Internet-connected world, to think of ourselves mainly in terms of avatars and timelines and handles and digital faces. Collapsing our lives into our brains and offloading their contents to a supercomputer is a fascinating idea. It does not sound to me, though, like a promising recipe for preserving our humanity.