Hedge-fund manager: “We are witnessing our second tech bubble in 15 years”

David Einhorn has a three-pronged thesis as to why we’re in bubble territory — and the result won’t be pretty

 


We’re in a world where venture capital flows to companies without clear sources of revenue, Facebook buys apps for billions and Twitter is a publicly traded company. Amid this status quo, David Einhorn, president of Greenlight Capital Inc., wrote an ominous letter to investors warning of the growing tech bubble and what might “pop” it.

Einhorn, as one may remember, gained a good deal of notoriety for his dealings with Allied Capital, and, of course, for short-selling Lehman Brothers and very publicly calling out its fuzzy practices. (At the 2008 Ira W. Sohn Investment Research Conference he called for federal regulators to “guide Lehman toward a recapitalization and recognition of its losses—hopefully before federal taxpayer assistance is required.”)

Now he has a different warning: an expanding tech bubble. Though he doesn’t name any specific companies, according to the Wall Street Journal he is particularly worried about three issues: the “rejection of conventional valuation methods,” huge first-day IPO pops and “short-sellers forced to cover due to intolerable mark-to-market losses.”

Einhorn is not the only market-watcher worried about the growing tech bubble and how it might burst. Venture capitalist George Zachary went on Bloomberg West earlier this month and warned of a bubble. In March, Chen Zhao, managing editor and chief strategist at the Canadian research firm BCA Research, wrote to his clients about bubble-like conditions.

A portion of Einhorn’s letter can be viewed below via ValleyWag:



“We have repeatedly noted that it is dangerous to short stocks that have disconnected from traditional valuation methods. After all, twice a silly price is not twice as silly; it’s still just silly. This understanding limited our enthusiasm for shorting the handful of momentum stocks that dominated the headlines last year. Now there is a clear consensus that we are witnessing our second tech bubble in 15 years. What is uncertain is how much further the bubble can expand, and what might pop it.

In our view the current bubble is an echo of the previous tech bubble, but with fewer large capitalization stocks and much less public enthusiasm.

Some indications that we are pretty far along include:

  • The rejection of conventional valuation methods;
  • Short-sellers forced to cover due to intolerable mark-to-market losses; and
  • Huge first day IPO pops for companies that have done little more than use the right buzzwords and attract the right venture capital.

And once again, certain “cool kid” companies and the cheerleading analysts are pretending that compensation paid in equity isn’t an expense because it is “non-cash.” Would these companies be able to retain their highly talented workforces if they stopped doling out large amounts of equity? If you are trying to determine the creditworthiness of these ventures, it might make sense to back out non-cash expenses. But if you are an equity holder trying to value the businesses as a multiple of profits, how can you ignore the real cost of future dilution that comes from paying the employees in stock?”

The entire letter can be viewed below via Scribd:


h/t ValleyWag, Wall Street Journal

http://www.salon.com/2014/04/23/hedge_fund_manager_we_are_witnessing_our_second_tech_bubble_in_15_years/?source=newsletter

Google’s plot for world domination now includes your credit cards.


Three Reasons Why You Should Keep Gmail Far Away from Your Credit Card Information

Google wants your money. Or, more precisely, Google wants your bank account and credit card info.

At Quartz, Chris Mims reports that Google appears to be accelerating the roll-out of a service that will allow Gmail users to send money via email to whomever they want, as easily as sending an attachment. Sounds great — but wait, there’s more!

Here’s what’s brilliant about offering the “send money” feature: Google almost certainly doesn’t care whether you use it to send money. What it cares about is getting you to sign up to Google Wallet and capture your bank account and credit-card information. And it’s using Gmail, which has a reach comparable to that of Facebook—425 million as of June 2012, the last time Google released numbers—to do it.

Once Google has your payment info, it can then implement PayPal-like functionality throughout the Google universe — YouTube, search, Maps, you name it. Anywhere you travel online while logged into your Google Account, you will have the ability to click-and-pay.

I can easily see this becoming popular. But here are three reasons to be wary.

1) Your Gmail account is already a hugely tempting target for hackers. Adding your financial info to that account will make it irresistible.

2) Google’s ability to effectively target ads already gives it tremendous power to manipulate consumer behavior. Adding the instant gratification of easy-checkout to those ads will make the company even more powerful.

3) Google already knows far too much about what we want, what we do, where we go, and who we communicate with. Do we really want to complete the chain and give the company our most intimate financial information?

The question posed by Google — and, really, by all online Web services — is this: At what point does convenience become vulnerability?

Young, rich and politically ignorant: Sean Parker and the next generation of libertarian billionaires

A young billionaire adorably thinks he can solve Washington’s problems through centrism! Why our democracy’s a mess
Sean Parker (Credit: Reuters/Gonzalo Fuentes)

“Those who own the country ought to govern it.” -Founder John Jay

That quote may be apocryphal, but the sentiment has been with us since the beginning of the Republic. I think we all know he wasn’t talking about that amorphous mass known as “the people,” don’t you? Much better for the real stakeholders of democracy to be in charge. You know, the people with money and property.

And it appears as though they got what they wanted. According to this blockbuster study from Martin Gilens and Benjamin Page, the fact that the people sometimes have their policy preferences enacted is purely a matter of coincidence: It only happens when those preferences happen to coincide with the preferences of the wealthy. If not, we’re just out of luck. The practical result is that while the wealthy might view certain issues along progressive lines, such as gay rights or maybe even gun control, it’s highly unlikely they will ever allow us to get too far out on the egalitarian limb. They tend to draw the line at anything that affects their own interests. And their interest is to keep most of the money and all of the power within their own control, regardless of whether they see themselves as liberals or conservatives.

Take, for instance, what Thomas Frank in this piece astutely described as liberal Mugwumpery, the “reformist” strain among aristocrats in which they take it upon themselves to reform the state and better the characters of the lower orders. Yes, they may want to clean up government corruption and coerce the “unhealthy” to change their ways, whether through temperance or prohibition, but they cannot be counted upon to fully engage on issues that infringe on their own interests. Frank quotes historian Richard Hofstadter describing the “Mugwump types” of the 19th century:



[T]he most serious abuses of the unfolding economic order of the Gilded Age he either resolutely ignored or accepted complacently as an inevitable result of the struggle for existence or the improvidence and laziness of the masses. As a rule, he was dogmatically committed to the prevailing theoretical economics of laissez faire. . . . He imagined that most of the economic ills that were remediable at all could be remedied by free trade, just as he believed that the essence of government lay in honest dealing by honest and competent men.

Frank applied that term specifically to Michael Bloomberg, who has pledged to spend $50 million to defeat the NRA, a worthy cause if there ever was one. As he points out, however, as much as this particular pledge might benefit the population at large, Bloomberg will also use his power to defeat anything that looks to directly benefit working people economically. Just as the Gilens-Page paper suggests, as long as the billionaires’ interests align with the people’s, there is a chance something might get done. Where they diverge, it might as well be impossible. That is a very odd definition of democracy.

Not all of our wealthy warriors for liberal causes are as openly hostile to the economic reforms at the center of the progressive agenda as a Wall Street billionaire like Bloomberg. In fact, many of them are probably just unaware of it, as they are the scions of great wealth who are flush with the idealism of youth and simply wish to make a difference. These Baby Mugwumps have good intentions, but somehow their focus also tends to be directed at worthy causes that don’t upset the economic apple cart.

For instance, these nice young would-be philanthropists who were invited to the White House last week to share their thoughts on how to fix the problems of the world:

“Moon shots!” one administration official said, kicking off the day on an inspirational note to embrace the White House as a partner and catalyst for putting their personal idealism into practice.

The well-heeled group seemed receptive. “I think it’s fantastic,” said Patrick Gage, a 19-year-old heir to the multibillion-dollar Carlson hotel and hospitality fortune. “I’ve never seen anything like this before.” Mr. Gage, physically boyish with naturally swooping Bieber bangs, wore a conservative pinstripe suit and a white oxford shirt. His family’s Carlson company, which owns Radisson hotels, Country Inns and Suites, T.G.I. Friday’s and other brands, is an industry leader in enforcing measures to combat trafficking and involuntary prostitution.

A freshman at Georgetown University, Mr. Gage was among the presenters at a breakout session, titled “Combating Human Trafficking,” that attracted a notable group of his peers. “The person two seats away from me was a Marriott,” he said. “And when I told her about trafficking, right away she was like, ‘Uh, yeah, I want to do that.’ ”

Of course. Who wouldn’t be against human trafficking? Or limiting the proliferation of guns, either? But I think one can see with those two examples just how limited the scope of our patrician betters’ interest in the public good really is. Whether undertaken through the prism of their own self-interest or a youthful idealism born of privilege, it represents causes, not any real challenge to the status quo.

But what should we make of the latest audacious entry into the political arena? Sean Parker, Napster co-founder and Facebook billionaire, announced that he’s jumping into politics with both feet. He’s not signing on to any specific cause or even a vague political philosophy. In fact, it’s almost impossible to figure out what it’s about.

One of the nice things about being a billionaire is that even if you have no idea what you believe or any sense of how the political system works in theory or in practice, you can meet with the actual players and have them explain it to you. That’s what Parker has been doing, meeting with politicians of such disparate ideologies as Rand Paul, Bill de Blasio and Charlie Crist. I’m sure they all told him to put them on speed dial and to call day or night if he had any questions.

His plan, if one can call it that, makes the naive young heirs to the great fortunes look like grizzled old political veterans by comparison:

Unlike other politically-inclined billionaires, such as the conservative Koch brothers and liberal environmentalist Tom Steyer, Parker hopes to avoid a purely partisan role as he ventures more deeply into politics.

Having donated almost exclusively to Democrats up to this point, Parker made a trip to Washington in December for the purpose of meeting quietly with Republican officeholders and strategists around town. He plans to donate to both sides starting this year, associates say, for the first time committing big sums to aid Republicans he views as credible deal-makers in a bitterly divided Congress.

He’s not even a Mugwump. He’s just a mess. Apparently, he thinks he can “make Washington work” by financing a group of deal makers in both parties who will knock some heads together and get the job done. What, exactly, it is they are supposed to get done remains a mystery. Indeed, from the sound of it, it doesn’t really matter.

I have an idea for Parker. There’s a group of “activists” out there who are right up his alley. He could just buy their outfit outright and rebrand it to his liking. It’s called “No Labels,” and they’ve been flogging this idea of bipartisan nothingness for a very long time. For some unknown reason it just hasn’t taken hold with the public. But if anyone can market this dog, it’s the man who made hundreds of millions for the singular advice to take “the” out of “The Facebook.”

According to the article, a battalion of opportunistic political consultants from across the ideological spectrum is already on the payroll and is going to make a whole lot of money from this quixotic venture, however it goes. So no matter what, I suppose he’s at least trickling some of his wealth down to the lower orders. In the current political environment run by radical right-wing billionaires, Mugwumps and fools, that may be the best we can hope for.

Heather Digby Parton is a writer also well-known as “Digby.” Her political and cultural observations can be found at www.digbysblog.blogspot.com.

 

http://www.salon.com/2014/04/22/young_rich_and_politically_ignorant_the_next_generation_of_libertarian_billionaires/?source=newsletter

Portrait of a modern-day plutocrat: Larry Ellison, CEO of Oracle


By Julien Kiemle
22 April 2014

Earlier this month, the data firm Equilar published its list of the highest-paid CEOs for 2013. The report paints a picture of vast inequality, with extraordinary sums being paid to a tiny layer of the population. Pay for the top-earning 100 CEOs in the US increased by 9 percent last year, to $13.9 million apiece.

Topping the list was Larry Ellison, the CEO of Oracle Corporation, a man who exemplifies the social character of the ruling class and its manner of wealth accumulation.

Ellison’s total compensation for 2013 was $78.4 million, almost all of it in stock options. Over the eight years that Equilar has tracked executive compensation, Ellison’s cumulative pay was $582 million, almost $83 million more than that of the runner-up, Tim Cook of Apple. His pay in 2013 was more than double that of the runner-up for that year, Walt Disney Company CEO Bob Iger, who was paid $34.3 million.

Ellison’s pay was actually down $18 million from its high in 2012, perhaps a reflection of the slowing performance of Oracle’s stock. Ellison’s wealth consists largely of real estate, and his fortunes have been amassed primarily through the medium of the stock market—a practice that has become pervasive among the ruling class since the 1980s, and vastly accelerated by the policies of the Federal Reserve.

Indeed, Ellison is one of the intended beneficiaries of the Obama administration’s policy since the 2008 crash, which has consisted of making available an unlimited stream of cash to the financial system. The stock market has soared as a result, even as pay for the vast majority of the population has stagnated or declined, and unemployment remains at catastrophic levels.

Founded by Ellison and Bob Miner, Oracle Corp. is a developer of business software such as supply chain management and enterprise resource planning systems. One of the famed Silicon Valley startups, the company is now a tech giant, with revenues second only to Microsoft’s in the world of software development.

Ellison runs his company as something of a despot, even over its shareholders. Though Ellison himself owns only about a quarter of the company’s stock, shareholder votes to roll back his pay package in two consecutive years were treated as nothing more than nonbinding suggestions, and promptly ignored.

Nominal management of the company has catapulted Ellison up the ranks of the super-rich. He is now the fifth-wealthiest person in the world, with a personal net worth of about $50 billion. Ellison is a personification of the obscenity of contemporary capital accumulation, the object of a fawning media and of his fellow aristocrats. A New York Times feature on the CEO recently proclaimed, “It is good to be the king. It is even better to be Larry Ellison.” In this they have surely not exaggerated.

Oracle’s CEO is well-known for his egomaniacal and costly pet projects. His recent sponsorship of the America’s Cup race in San Francisco at a cost of $100 million is a small expense compared with some of his previous undertakings, such as spending $250 million to buy up a third of Malibu.

In 2004 he commissioned a gargantuan custom “superyacht” for $377 million, and his 2012 purchase of the sixth-largest island in the Hawaiian chain, Lanai, was paid for with between $500 million and $600 million—cash. He continues to make changes and “develop” the island as a “model for environmentally sound living.”

Frugal in his own way, Ellison managed to cheat San Mateo County out of $3 million of property taxes by arguing that one of his mansions, an imitation of a Japanese Shogun estate, was “functionally obsolete” and worth hundreds of millions less than he paid for it. As a result, the public school system lost some $1.4 million in tax dollars.

Like many billionaires of his type, he gives freely to both political parties, including to some of the most influential figures at the national level, such as Republican House Majority Whip Kevin McCarthy and former Senator, now Secretary of State in the Obama administration, John Kerry.

Breaking down Ellison’s 2013 pay on the assumption of a 40-hour workweek means that the CEO makes more in an hour ($37,692.31) than a typical worker is likely to make in a year—while paying the much lower capital gains tax rate on almost all of it.
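That hourly figure is easy to verify. Here is a minimal Python sketch of the arithmetic; the 52-week, 2,080-hour working year is our assumption, since the article does not spell out its divisor:

    # Back-of-the-envelope check of the hourly figure quoted above.
    # Assumes a 52-week year at 40 hours per week (2,080 working hours).
    total_pay = 78_400_000        # Ellison's 2013 compensation, in dollars
    hours_per_year = 52 * 40      # 2,080 hours

    hourly_rate = total_pay / hours_per_year
    print(f"${hourly_rate:,.2f} per hour")  # prints $37,692.31 per hour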

The tragedy of this spectacle hardly needs to be pointed out, when 12.5 percent of the globe’s population sits on the cusp of starvation. In the United States, Ellison’s home, a third of Americans experienced “poverty” in its narrow official definition between 2009 and 2011.

In a period when the most basic forms of assistance to the working class—food stamps, unemployment insurance, education, health care, nurturing of culture—are being systematically dismantled to free up resources for the accumulation of capital by the aristocracy of finance, the likes of Oracle Corp.’s “king” are the incarnation of reaction all down the line.

In the early years of Oracle, possessing a fortune several orders of magnitude smaller than today, Ellison invited the company’s co-founder Bob Miner to come along with him for a joyride on a hired fighter jet. Miner wrote back, perhaps more prophetically than he intended, “You obviously have far more money than you should. It’s things like this that caused the French Revolution.”

What goes for Ellison goes for the social layer of which he is a particular expression. Drunk on wealth, their relationship to society as a whole is fundamentally parasitic. They sit atop a social powder keg—not unlike the aristocracy of the ancien régime. And similar causes produce similar effects.

Google: the unelected superpower

Google has cosied up to governments around the world so effectively that its chairman is a White House advisor

Researchers at Princeton and Northwestern universities have pored over 1,800 US policies and concluded that America is an oligarchy. Instead of looking out for the majority of the country’s citizens, the US government is ruled by the interests of the rich and the powerful, they found. No great surprises there, then.

But the government is not the only American power whose motivations need to be rigorously examined. Some 2,400 miles away from Washington, in Silicon Valley, Google is aggressively gaining power with little to keep it in check.

It has cosied up to governments around the world so effectively that its chairman, Eric Schmidt, is a White House advisor. In Britain, its executives meet with ministers more than almost any other corporation.

Google can’t be blamed for this: one of its jobs is to lobby for laws that benefit its shareholders, but it is up to governments to push back. As things stand, Google – and to a lesser extent, Facebook – are in danger of becoming the architects of the law.

Meanwhile, these companies are becoming ever more sophisticated about the amount of information they access about users. Google scans our emails. It knows where we are. It anticipates what we want before we even know it. Sure, there are privacy settings and all that, but avoiding Google feels nigh on impossible if you want to live in the 21st century. It doesn’t stop there either. If Google Glass is widely adopted, it will be able to clock everything we see, while the advance of Google Wallet could position the company at the heart of much of the world’s spending.

One source at the technology giant put it well when she referred to the company as an “unelected superpower”. I think this is a fair summary. So far, we are fortunate that that dictatorship is a relatively benign one. The company’s unofficial motto is “don’t be evil”, and while people may disagree on what evil means, broadly speaking, its founders are pretty good guys. But Larry Page and Sergey Brin will not be around forever. Nor should we rely on any entity that powerful to regulate its own behaviour.

The government in America, and its counterparts around the world, should stop kowtowing to Google and instead work in concert to keep this and any other emerging corporate superpowers firmly in check.

“Cubed”: How the American office worker wound up in a box

From the Organization Man to open-plan design, a new history of the way the office has shaped the modern world

 

"Cubed": How the American office worker wound up in a box

Over the past week, as I’ve been carrying around a copy of Nikil Saval’s “Cubed: A Secret History of the Workplace,” I’ve gotten some quizzical looks. “It’s a history of the office,” I’d explain, whereupon a good number of people would respond, “Well, that sounds boring.”

It isn’t. In fact, “Cubed” is anything but, despite an occasional tendency to read like a grad school thesis. The fact that anyone would expect it to be uninteresting is striking, though, and marks an unexpected limit to the narcissistic curiosity of the average literate American. The office, after all, is where most contemporary labor takes place and is a crucible of our culture. We’ve almost all worked in one. Of course it’s a subject that merits a thoroughly researched and analytical history, because the history of the office is also the history of us.

Saval’s book glides smoothly between his two primary subjects: the physical structure of offices and the social institution of white-collar work over the past 150 years or so. “Cubed” encompasses everything from the rise of the skyscraper to the entrance of women into the professional workplace to the mid-20th-century angst over grey-flannel-suit conformity to the dorm-like “fun” workplaces of Silicon Valley. His stance is skeptical, a welcome approach given that most writings on the contemporary workplace are rife with dubious claims to revolutionary innovation — office design or management gimmicks that bestselling authors indiscriminately pounce on like magpies seizing glittering bits of trash.

Some of the most fascinating parts of “Cubed” come in the book’s early chapters, in which Saval traces the roots of the modern office to the humble 19th-century countinghouse. Such firms — typically involved in the importation or transport of goods — were often housed in a single room, with one or more of the partners working a few feet away from a couple of clerks, who copied and filed documents, as well as a bookkeeper. A novice clerk might earn less than skilled manual laborers, but he had the potential to make significantly more, and he could boast the intangible but nevertheless significant status of working with his mind rather than his hands.



Even more formative to the identity of white-collar workers (so named for the detachable collars, changed daily in lieu of more regular washings of the actual shirt, that served as a badge of the class) was the proximity to the boss himself and the very real possibility of advancement to the role of partner. Who better to marry the boss’ daughter and take over when he retired? Well, one of the other clerks is who, and in this foundational rivalry much of the character of white-collar work was set. “Unlike their brothers in the factory, who had begun to see organizing on the shop floors as a way to counter the foul moods and arbitrary whims of their bosses,” Saval writes, “clerks saw themselves as potential bosses.” Blue-collar workers, by contrast, knew they weren’t going to graduate to running the company one day.

The current American ethic of self-help and individualism owes at least as much to the countinghouse as it does to the misty ideal of the Jeffersonian yeoman farmer. An ambitious clerk identified and sought to ingratiate himself with the partners, not his peers, who were, after all, his competitors. He was on his own, and had only himself to blame if he failed. A passion for self-education and self-improvement, via books and night schools and lecture series, took root and flourished. So did a culture of what Saval calls “unctuous male bonding,” the ancestor of the 20th century’s golf outings and three-martini lunches, customs that would make it that much harder for outsiders like women and ethnic minorities to rise in the ranks — once they managed to get into the ranks in the first place.

The meritocratic dream became a lot more elusive in the 1920s, when the population employed in white-collar work surpassed the number of blue-collar workers for the first time. Saval sees this stage in the evolution of the office as epitomized in the related boom in skyscrapers. The towering buildings, enabled by new technologies like elevators and steel-frame construction, were regarded as the concrete manifestation of American boldness and ambition; certainly modern architects, reaching for new heights of self-aggrandizement, encouraged that view. They rarely acknowledged that a skyscraper was, at heart, a hive of standardized cells in which human beings were slotted like interchangeable parts. Most of the workers inhabiting those cells, increasingly female, could not hope to climb to the executive suites.

Office workers had always made observers uncomfortable; the clerk, with his “minute leg, chalky face and hollow chest,” was one of the few members of the American multitude that Walt Whitman scorned to contain. After World War II, that unease grew into a nationwide obsession, bumping several works of not-so-pop sociology — “The Lonely Crowd,” “The Man in the Gray Flannel Suit,” “The Organization Man,” etc. — onto the bestseller lists. In turn, the challenge of managing this new breed of “knowledge workers” became the subject of intensive rumination and theorizing. Saval lists Douglas McGregor’s 1960 guide, “The Human Side of Enterprise,” as the seminal work in a field that would spawn such millionaire gurus as Tom Peters and Peter Drucker.

Office design — typically regimented rows of identical desks — was deemed in need of an overhaul as well, and perhaps the most rueful story Saval recounts is that of Robert Propst, hired to head the research wing of the Herman Miller company in 1958. Propst made an intensive study of white-collar work, and in 1964 Herman Miller unveiled his Action Office, a collection of pieces designed to support work on “thinking projects.” Saval praises this incarnation of the Action Office as the first instance in which “the aesthetics of design and progressive ideas about human needs were truly united,” but it didn’t entirely catch on until the unveiling of Action Office II, which incorporated wooden frames covered with fabric that served as short, modular, temporary walls.

Oh, what a difference 30 degrees makes! Propst’s original Action Office II arranged these walls at 120-degree angles, an obtuse configuration that looked and felt open and dynamic. One of Propst’s colleagues told Saval of the dismal day that “someone figured out you don’t need the 120-degree [angle] and it went click.” Ninety degrees used up the available space much more efficiently, enabling employers to cram in more workstations. The American office worker had been cubed.

The later chapters of “Cubed” speak to more recent trends, like the all-inclusive, company-town-like facilities of tech firms like Google and wacky experiments in which no one is allowed to have a permanent desk at all. Saval visits the Los Angeles office of the ad agency Chiat/Day, which is organized around an artificial main street, features a conference table made of surfboards and resembles nothing so much as a theme park. Using architecture and amenities to persuade employees that their work is also play is a gambit to keep them on the premises and producing for as much of the day (and night) as possible. It all seems a bit desperate. In my own experience, if people 1) are paid sufficiently; 2) like the other people they work with; and 3) find the work they do a meaningful use of their energy and skills, then they don’t really care if they work in cubicles or on picnic benches. Item No. 3 is a bitch, though, the toughest criterion for most contemporary corporations to meet. Maybe that’s what they ought to be worrying about instead.

 

Laura Miller is a senior writer for Salon. She is the author of “The Magician’s Book: A Skeptic’s Adventures in Narnia” and has a Web site, magiciansbook.com.

 

http://www.salon.com/2014/04/20/cubed_how_the_american_office_worker_wound_up_in_a_box/?source=newsletter

The Rise of the Digital Proletariat

“In open systems, discrimination and barriers can become invisibilized,” says author and activist Astra Taylor. (Deborah DeGraffenreid)

Astra Taylor reminds us that the Internet cannot magically produce revolution.

BY Sarah Jaffe


The conversation about the impact of technology tends to be binary: Either it will save us, or it will destroy us. The Internet is an opportunity for revolution; our old society is being “disrupted”; tech-savvy college dropouts are rendering the staid elite obsolete. Or else our jobs are being lost to automation and computers; drones wipe out families on their wedding day; newly minted millionaires flush with tech dollars are gentrifying San Francisco at lightning speed.

Neither story is completely true, of course. In her new book, The People’s Platform: Taking Back Power and Culture in the Digital Age, out now from Metropolitan Books, Astra Taylor takes on both the techno-utopians and the techno-skeptics, reminding us that the Internet was created by the society we live in and thus is more likely to reflect its problems than transcend them. She delves into questions of labor, culture and, especially, money, reminding us who profits from our supposedly free products. She builds a strong case that in order to understand the problems and potentials of technology, we have to look critically at the market-based society that produced it.

Old power dynamics don’t just fade away, she points out—they have to be destroyed. That will require political action, struggle, and a vision of how we want the Internet (and the rest of our society) to be. I spoke with Taylor about culture, creativity, the possibility of nationalizing Facebook and more.

Many people know you as a filmmaker or as an activist with Occupy and Strike Debt. How do you see this book fitting in with the other work you’ve done?

Initially I saw it as a real departure, and now that it’s done, I recognize the continuity. I felt that the voices of culture makers were left out of the debate about the consequences of Internet technology. There are lots of grandiose statements being made about social change and organizing and about how social media tools are going to make it even easier for us to aggregate and transform the world. I felt there was a role I could play rooted in my experiences of being a culture maker and an activist. It was important for somebody grounded in those areas to make a sustained effort to be part of the conversation. I was really troubled that people on all sides of the political spectrum were using Silicon Valley rhetoric to describe our new media landscape. Using terms like “open” and “transparent” and saying things were “democratizing” without really analyzing those terms. A big part of the book was just trying to think through the language we’re using and to look at the ideology underpinning the terminology that’s now so commonplace.

You make the point in the book that the Internet and the offline world aren’t two separate worlds. Can you talk about that a bit more?

It’s amazing that these arguments even need to be made. That you need to point out that these technologies cannot just magically overcome the structures and material conditions that shape regular life.

It harkens back to previous waves of technological optimism. People have always invested a lot of hope in their tools. I talk about the way that we often imbue our machines with the power to liberate us. There was lots of hope that machines would be doing all of our labor and that we would have, as a society, much more free time, and that we would have this economy of abundance because machines would be so dramatically improved over time. The reason those predictions never came to pass is that machines are embedded in a social context and the rewards are siphoned off by the elite.

The rise of the Internet really fits that pattern. We can see that there is this massive shifting of wealth [to corporations]. These gigantic digital companies are emerging that can track and profit from not just our online interactions, but increasingly things that we’re doing away from the keyboard. As we move towards the “Internet of things,” more and more of the devices around us are going to have IP addresses and be leaking data. For companies already garnering enormous power, these are avenues to increase their wealth.

The rhetoric a few years ago was that these companies are going to vanquish the old media dinosaurs. If you read the tech books from a few years ago, it’s just like “Disney and these companies are so horrible. Google is going to overthrow them and create a participatory culture.” But Google is going to be way more invasive than Mickey Mouse ever was.

Google’s buying drone companies.

Google’s in your car, Google’s in your thermostat, it’s in your email box. But then there’s the psychological element. There was this hope that you could be anyone you wanted to be online. That you could pick an avatar and be totally liberated from your offline self. That was a real animating fantasy. That, too, was really misleading. Minority groups and women are often forced back into their real bodies, so to speak. They’re not given equal access to the supposedly open space of the Internet.

This is one of the conversations that I think your book is incredibly relevant to right now. Even supposedly progressive spaces are still dominated by white people, mostly men, and there’s a real pushback against women and people of color who are using social media.

It’s been amazing how much outrage can get heaped on one person who’s making critical observations about an institution with such disproportionate power and reach.

The new media elites end up looking a whole lot like the old ones. The other conversation about race and gender and the Internet recently has been about these new media websites that are launched with a lot of fanfare, that have been funded in many cases by Silicon Valley venture capital, that are selling themselves as new and rebellious and exciting and a challenge to the old media—and yet the faces of them are still white men.

The economic rewards flow through the usual suspects. Larry Lessig has done a lot of interesting work around copyright. But he basically wrote that we need to cheer on the Facebooks of the world because they’re new and not the old media dinosaurs. He has this line about “Stanford is vanquishing Harvard.” We need something so much more profound than that.

This is why I really take on the concept of “openness.” Because open is not equal. In open systems, discrimination and barriers can become invisibilized. It’s harder to get your mind around how inequitable things actually are. I myself follow a diverse group of people and feel like Twitter is full of people of color or radicals. But that’s because I’m getting a very distorted view of the overall picture.

I think it’s helpful to look at the handful of examples of these supposedly open systems in action. Like Wikipedia, which everyone can contribute to. Nonetheless, only like 15 percent of the editors are women. Even in the organizations that are held up as exemplars of digital democracy, there’s still such structural inequality. By the time you get to the level of these new media ventures that you’re talking about, it’s completely predictable.

We really need to think through these issues on a social level. I tried to steer the debate away from our addiction to our devices or to crappy content on the Internet, and really take a structural view. It’s challenging because ultimately it comes down to money and power and who has it and how do you wrest it away and how do you funnel some of it to build structures that will support other types of voices. That’s far more difficult than waiting around for some new technology to come around and do it for you.

You write about this tension between professional work from the amateurs who are working for free and the way the idea of doing work for the love of it has crept in everywhere. Except people are working longer hours than ever and they’re making less money than ever, and who has time to come home at the end of your two minimum wage jobs and make art?

It would be nice to come out and say follow your heart, do everything for the love of it, and things’ll work out. Artists are told not to think about money. They’re actively encouraged to deny the economic sphere. What that does though is it obscures the way privilege operates—the way that having a trust fund can sure be handy if you want to be a full time sculptor or digital video maker.

I think it’s important that we tackle these issues. That’s where I look at these beautiful predictions about the way these labor-saving devices would free us all and the idea that the fruits of technological advancement would be evenly shared. It’s really interesting how today’s leading tech pundits don’t pretend that [the sharing is] going to be even at all. Our social imagination is so diminished.

There’s something really off about celebrating amateurism in an economy where people are un- and under-employed, and where young people are graduating with an average of $30,000 of student debt. It doesn’t acknowledge the way that this figure of the artist—[as] the person who loves their work so much that they’ll do it for nothing—is increasingly central to this precarious labor force.

I quote this example of people at an Apple store asking for a raise and the response was “When you’re working for Apple, money shouldn’t be a consideration.” You’re supposed to just love your work so much you’ll exploit yourself. That’s what interning is. That’s what writing for free is when you’re hoping to get a foot in the door as a journalist. There are major social implications if that’s the road we go down. It exacerbates inequality, because who can afford to do this kind of work?

Of course, unpaid internships are really prevalent in creative fields.

Ultimately, it’s a corporate subsidy. People are sometimes not just working for free but then also going into debt for college credit to do it. In a way, all of the unpaid labor online is also a corporate subsidy. I agree that calling our participation online “labor” is problematic because it’s not clear exactly how we’re being exploited, but the point is the value being extracted. We need to talk about that value extraction and the way that people’s free participation feeds into it.

Of course we enjoy so much of what we do online. People enjoy creating art and culture and doing journalism too. The idea that work should only be well-compensated and secure if it makes you miserable ultimately leads to a world where the people who feel like they should make a lot of money are the guys on Wall Street working 80 hours a week. It’s a bleak, bleak view.

In many ways the problem with social media is it does break down this barrier between home and work. You point this out in the book—it’s everywhere, you can’t avoid it, especially if you are an independent creative person who has to constantly promote your own work, or whose job requires it. There’s now the Wages for Facebook conversation—people are starting to talk about the way we are creating value for these companies.

It really challenges the notion that we’re all on these social media platforms purely by choice, because there’s a real obligatory dimension to so much of this. Look also at the way we talk to young people. “Do you want a college recruiter to see that on your Facebook profile?” What we’re really demanding is that they create a Facebook profile that appeals to college recruiters, that they manage a self that will help them get ahead.

I was at a recent talk about automation and the “end of jobs,” and one researcher said that the jobs that would be hardest to automate away would be ones that required creativity or social intelligence—skills that have been incredibly devalued in today’s economy, only in part because of technology.

Those skills are being pushed out of the economy because they’re supposed to be things you just choose to do because they’re pleasurable. There is a paradox there. Certain types of jobs, the ones that can be not just deskilled but done better by machines, will be automated away, and meanwhile all the creative jobs that can’t be automated away are actually considered almost superfluous to the economy.

The thing about the jobs conversation is that it’s a political question and a policy question as well as a technological question. There can be lots of different types of jobs in the world if we invest in them. So much of the question of what kinds of jobs we’re going to have in the future actually comes down to the social decisions that we’re making. The technological aspect has always been overhyped.

You do bring up ideas like a basic income and shorter working hours as ways to allow people to have time and money for culture creation.

The question is, how do you get there? You’d have to have a political movement, you’d have to challenge power. They’re not just going to throw the poor people who’ve had their jobs automated away a bone and suddenly provide a basic income. People would really have to organize and fight for it. It’s that fight, that element of antagonism and struggle that isn’t faced when we just think tools are evolving rapidly and we’ll catch up with them.

The more romantic predictions about rising prosperity and the inevitable increase in free time were made against the backdrop of the post-war consensus of the 1940s, ‘50s and ‘60s. There was a social safety net, there were structures in place that redistributed wealth, and so people made predictions colored by that social fabric, that if there were advancements in our tools that they would be shared by people. It just shows the way that the political reality shapes what we can collectively imagine.

Finally, you make the case for state-subsidized media as well as regulations—for ensuring that people have the ability to make culture as well as consume it. You note that major web companies like Google and Facebook operate like public utilities, and that nationalizing them would be a really hard sell, and yet if these things are being founded with government subsidies and our work, they are in a sense already ours.

The invisible subsidy is the thing that we really have to keep in mind. People say, “Where’s the money going to come from?” We’re already spending it. So much innovation is the consequence of state investment. Touchscreens, the microchip, the Internet itself, GPS, all of these things would not exist if the government had not invested in them, and the good thing about state investment is it takes a much longer view than short-term private-market investment. It can have tremendous, socially valuable breakthroughs. But all the credit for these innovations and the financial rewards is going to private companies, not back to us, the people, whose tax dollars actually paid for them.

You raise a moral question: If we’re paying for these things already, then shouldn’t they in some sense be ours? I think the answer is yes. There are some leverage points in the sense that these companies like to talk about themselves as though they actually are public utilities. There’s this public-spiritedness in their rhetoric but it doesn’t go deep enough—it doesn’t go into the way they’re actually run. That’s the gap we need to bridge. Despite Silicon Valley’s hostility to the government and the state, and the idea that the Internet is sort of this magic place where regulation should not touch, the government’s already there. We just need it to be benefiting people, not private corporations.

Sarah Jaffe is a staff writer at In These Times and the co-host of Dissent magazine’s Belabored podcast. Her writings on labor, social movements, gender, media, and student debt have been published in The Atlantic, The Nation, The American Prospect, AlterNet, and many other publications, and she is a regular commentator for radio and television. You can follow her on Twitter @sarahljaffe.

The Bay is burning! Google Glass, techno-rage and the battle for San Francisco’s soul

The advent of Google Glass has started an incendiary new chapter in tech’s culture wars. Here’s what’s at stake
Sergey Brin (Credit: Reuters/Stephen Lam/Ilya Andriyanov via Shutterstock/Salon)

In San Francisco, the tech culture wars continue to rage. On April 15, Google opened up purchases of its Google Glass headgear to the general public for 24 hours. The sale was marked by mockery, theft and the continuing fallout from an incident a few days earlier, when a Business Insider reporter covering an anti-eviction protest had his Glass snatched and smashed.

That same day, protesters organized by San Francisco’s most powerful union marched to Twitter’s headquarters — a major San Francisco gentrification battleground — and presented the company with a symbolic tax bill, designed to recoup the “millions” that some San Franciscans believe the city squandered by bribing Twitter with a huge tax break to stay in the city.

We learned two things on April 15. First, Google isn’t about to give up on its plans to make Glass the second coming of the iPhone, even if it’s clear that a significant number of people consider Google Glass to be a despicable symbol of the surveillance society and a pricey calling card of the techno-elite. Second, judging by the march on Twitter, the tide of anti-tech protest sentiment has yet to crest in the San Francisco Bay Area. The two points turn out to be inseparable. Scratch an anti-tech protester and you are unlikely to find a fan of Google Glass.

What’s it all mean? Earlier this week, after I promoted an article on Twitter that attempted to explore reasons for anti-Glass hatred, I received a one-word tweet in response: “Neoluddism.”

The Luddites of the early 19th century are famous for smashing weaving machinery in a fruitless attempt to resist the reshaping of society and the economy by the Industrial Revolution. They took their name from Ned Ludd, a possibly apocryphal character who is said to have smashed two stocking frames in a fit of rage — thus inspiring a rebellion. While I can’t be certain, I suspect that my correspondent was deploying the term in the sense most familiar to pop culture — the Luddite as barbarian goon, futilely standing against the relentless march of progress.



But the story isn’t quite that simple. Yes, the Luddite movement may have been smashed by the forces of the state and the newly ascendant industrialist bourgeoisie. Yes, the Luddites may never have had the remotest chance of maintaining their pre-industrial way of life in the face of the steam engine. But there is a version of history in which the Luddites were far from unthinking goons. Instead, they were acute critics of their changing times, grasping the first glimpse of the increasingly potent ways in which capital was learning to exploit labor. In this view, the Luddites were actually the avant-garde for the formation of working-class consciousness, and paved the way for the rise of organized labor and trade unions. It’s no accident that Ned Ludd hailed from Nottingham, right up against Sherwood Forest.

Economic inequality and technologically induced dislocation? Ned Ludd, that infamous wrecker of weaving machinery, would recognize a clear echo of his own time in present-day San Francisco. But there’s more to see here than just the challenge of a new technological revolution. Just as the Luddites, despite their failure, spurred the creation of working-class consciousness, the current Bay Area tech protests have had a pronounced political effect. While the tactics range from savvy, well-organized protest marches to juvenile acts of violence, the impact is clear. The attention of political leaders and the media has been engaged. Everyone is paying attention.

 

 

* * *

If you live in San Francisco, you may have seen them around town: Decals on bar windows that state “Google Glass is barred on these premises.” They are the work of an outfit called StopTheCyborgs.org, a group of scientists and engineers who have articulated a critique of Google Glass that steers cagily away from the half-baked nonsense of Counterforce.

I contacted StopTheCyborgs by email and asked them how they responded to being called “neoluddites.”

“If ‘neoluddism’ means blindly being anti-technology then we refute the charge,” said Jack Winters, who described himself as a Scala and Java developer. “If ‘neoluddism’ means not being blindly pro-technology then guilty as charged.”

“We are technologically sophisticated enough to realize that technology is politics and code is law,” continued Winters. “Technology isn’t some external force of nature. It is created and used by people. It has an effect on power relations. It can be good, it can be bad. We can choose what kind of society we want rather than passively accepting that ‘the future’ is whatever data-mining corporations want.”

“Basically anyone who views critics of particular technologies as ‘luddites’ fundamentally misunderstands what technology is. There is no such thing as ‘technology.’ Rather there are specific technologies, produced by specific economic and political actors, and deployed in specific economic and social contexts. You can be anti-nukes without being anti-antibiotics. You can be pro-surveillance of powerful institutions without being pro-surveillance of individual people. You can work on machine vision for medical applications while campaigning against the use of the same technology for automatically identifying and tracking people. How? Because you take a moral view of the likely consequences of a technology in a particular context.” [Emphasis added.]

The argument made by StopTheCyborgs resonates with one of the core observations that revisionist historians have made about the original Luddites: They were not indiscriminate in their assaults on technology. (At least not at first.) They chose to destroy machines owned by employers who were acting in ways they believed were particularly economically harmful, while leaving other machines undamaged. To translate that to a present-day stance: It is not hypocritical for protesters to argue that Glass embodies surveillance in a way that iPhones don’t, nor is it hypocritical to critique technology’s impact on inequality via Twitter or Facebook. Every mode of technology needs to be evaluated on its own merits. Some start-up entrepreneurs might legitimately be using technology to achieve a social good. Some tech tycoons might be genuinely committed to a higher standard of life for all San Franciscans. Some might just be tools. So Jack Winters of StopTheCyborgs is correct: The deployment of different technologies has different consequences. These consequences require a social and political response.

This is not to say that ripping Google Glass from the face of a young reporter, or otherwise demonizing individuals just because they happen to be employed by a particular company, is comparable to Ned Ludd’s destruction of two stocking frames. But Glass is just as embedded in the larger transformations we are going through as the spinning jenny was in the Industrial Revolution. By taking it seriously, we are giving “the second machine age” the respect it deserves.

The question is: Is Google?

* * *

I tried to find out from Google how many units of Google Glass had been sold during the one-day special promotion. I received a statement that read, “We were getting through our stock faster than we expected, so we decided to shut the store down. While you can still access the site, Glass will be marked as sold out.”

I followed up by asking how Google was coping with the fact that its signature device had become a symbol of tech-economy driven gentrification.

“It’s early days and we are thinking very carefully about how we design Glass because new technology always raises new issues,” said a Google spokesperson. “Our Glass Explorers come from all walks of life. They are firefighters, gardeners, athletes, moms, dads and doctors. No one should be targeted simply because of the technology they choose. We find that when people actually try Glass firsthand, they understand the philosophy that underpins it: Glass lets you look up and engage with the world around you, rather than looking down and being distracted by your technology.”

You can hear an echo here of Ned Ludd in the statement that “new technology raises new issues.” But the rest is just marketing zombie chatter, about as useless in its own way as some of the more overheated and unhinged rhetoric from the more extreme dissident wings of Bay Area protest. When a group styling itself “Counterforce” shows up at the home of a Google executive, demands $3 billion to build “anarchist colonies” and declares, as Adrianne Jeffries documented in The Verge, that their goal is “to destroy the capitalist system … [and] … create a new world without an economy,” well, good luck with that. We are a long way from “the precipice of a complete anarcho-primitivist rebellion against the technocracy.”

One thing seems reasonably clear: Moms and firefighters might be wearing Google Glass, but if Ned Ludd were around today, he’d probably be looking for different accessories.

Apparently you can’t be empathetic, or help the homeless, without a GoPro

Today in bad ideas: Strapping video cameras to homeless people to capture “extreme living”

GoPro cameras are branded as recording devices for extreme sports, but a San Francisco-based entrepreneur had a different idea of what to do with the camera: Strap it to a homeless man and capture “extreme living.”

The project is called Homeless GoPro, and it documents the first-person perspective of homeless people on the streets of San Francisco. The website explains:

“With a donated HERO3+ Silver Edition from GoPro and a small team of committed volunteers in San Francisco, Homeless GoPro explores how a camera normally associated with extreme sports and other ‘hardcore’ activities can showcase courage, challenge, and humanity of a different sort - extreme living.”

The intentions of the founder, Kevin Adler, seem altruistic. His uncle was homeless for 30 years, and after visiting his uncle’s gravesite, Adler decided to start the organization to help others who are homeless.

The first volunteer to film his life is a man named Adam, who has been homeless for 30 years, six of those in San Francisco. There are several edited videos of him on the organization’s site.

In one of the videos, titled “Needs,” Adam says, “I notice every day that people are losing their compassion and empathy — not just for homeless people — but for society in general. I feel like technology has changed so much — where people are emailing and don’t talk face to face anymore.”

Without knowing it, Adam has critiqued the entire project, which is attempting to use technology (a GoPro) to garner empathy and compassion. It is a sad reminder that people can ignore the homeless population in person on a day-to-day basis, yet need a video to build empathy. Viewers may feel a twinge of guilt as they sit removed from the situation, watching a screen.

According to San Francisco’s Department of Human Services’ biennial count, there were 6,436 homeless people living in San Francisco (county and city). “Of the 6,436 homeless counted,” a press release stated, “more than half (3,401) were on the streets without shelter; the remaining 3,035 were residing in shelters, transitional housing, resource centers, residential treatment, jail or hospitals.” The homeless population is subject to hunger, illness, violence, extreme weather conditions, fear and other physical and emotional ailments.



Empathy — and the experience of “walking a mile in somebody’s shoes” — are important elements of social change, and these documentary-style videos do give Adam a medium and platform to be a voice for the homeless population. (One hopes that the organization also helped Adam in other ways — shelter, food, a place to stay on his birthday — and isn’t just using him as a human tool in its project.) But something about the project still seems off.

It is, in part, the product placement. GoPro donated a $300 camera for the cause, which sounds great until you remember that it is a billion-dollar company owned by billionaire Nick Woodman. If GoPro wants to do something to help the Bay Area homeless population, there are better ways to go about it than donating a camera.

As ValleyWag‘s Sam Biddle put it, “Stop thinking we can innovate our way out of one of civilization’s oldest ailments. Poverty, homelessness, and inequality are bigger than any app …”


http://www.salon.com/2014/04/17/today_in_bad_ideas_strapping_video_cameras_to_homeless_people_to_capture_extreme_living/?source=newsletter

The 2,000-Year History of GPS Tracking

Tue Apr. 15, 2014 3:00 AM PDT
Egyptian geographer Claudius Ptolemy and Hiawatha Bray’s “You Are Here”

Boston Globe technology writer Hiawatha Bray recalls the moment that inspired him to write his new book, You Are Here: From the Compass to GPS, the History and Future of How We Find Ourselves. “I got a phone around 2003 or so,” he says. “And when you turned the phone on—it was a Verizon dumb phone, it wasn’t anything fancy—it said, ‘GPS’. And I said, ‘GPS? There’s GPS in my phone?’” He asked around and discovered that yes, there was GPS in his phone, due to a 1994 FCC ruling. At the time, cellphone usage was increasing rapidly, but 911 and other emergency responders could only accurately track the location of landline callers. So the FCC decided that cellphone providers like Verizon must be able to give emergency responders a more accurate location of cellphone users calling 911. After discovering this, “It hit me,” Bray says. “We were about to enter a world in which…everybody had a cellphone, and that would also mean that we would know where everybody was. Somebody ought to write about that!”

So he began researching the transformative events that led to our new ability to navigate (almost) anywhere, and along the way discovered how military-led GPS and government-led mapping technologies helped create new digital industries. The result of his curiosity is You Are Here, an entertaining, detailed history of how we evolved from primitive navigation tools to our current state of instant digital mapping—and, of course, governments’ subsequent ability to track us. The book was finished prior to the recent disappearance of Malaysia Airlines flight 370, but Bray says gaps in navigation and communication like that are now “few and far between.”

Here are 13 pivotal moments in the history of GPS tracking and digital mapping that Bray points out in You Are Here:

1st century: The Chinese begin writing about mysterious ladles made of lodestone. The ladle handles always point south when used during future-telling rituals. In the following centuries, lodestone’s magnetic abilities lead to the development of the first compasses.

Image: ladle

Model of a Han Dynasty south-indicating ladle Wikimedia Commons

2nd century: Ptolemy’s Geography is published and sets the standard for maps that use latitude and longitude.

Image: Ptolemy map

Ptolemy’s 2nd-century world map (redrawn in the 15th century) Wikimedia Commons

1473: Abraham Zacuto begins working on solar declination tables. They take him five years, but once finished, the tables allow sailors to determine their latitude on any ocean.

Image: declination tables

The Great Composition by Abraham Zacuto. (A 17th-century copy of the manuscript originally written by Zacuto in 1491.) Courtesy of The Library of The Jewish Theological Seminary
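The way such tables get used at sea is simple arithmetic. Here is a minimal Python sketch, purely illustrative (the numbers are invented, and the formula assumes the textbook case of a noon sight with the sun bearing due south of a northern-hemisphere ship):

def latitude_from_noon_sun(observed_altitude_deg, declination_deg):
    """Classic noon-sight reduction: latitude equals the sun's zenith
    distance plus its tabulated declination, valid when the sun bears
    due south at local noon (observer north of the subsolar point)."""
    zenith_distance = 90.0 - observed_altitude_deg  # angle from straight overhead
    return zenith_distance + declination_deg

# Example: a noon sun measured 60 degrees above the horizon, with the table
# giving a declination of 10 degrees north, puts the ship at 40 degrees north.
print(latitude_from_noon_sun(60.0, 10.0))  # 40.0

The sighting and the addition are the easy part; the declination itself drifts day by day through the year, and tabulating it accurately is what took Zacuto five years.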

1887: German physicist Heinrich Hertz creates electromagnetic waves, proof that electricity, magnetism, and light are related. His discovery inspires other inventors to experiment with radio and wireless transmissions.

Image: Hertz

The Hertz resonator John Jenkins. Sparkmuseum.com

1895: Italian inventor Guglielmo Marconi, one of those inventors inspired by Hertz’s experiment, attaches his radio transmitter antennae to the earth and sends telegraph messages miles away. Bray notes that there were many people before Marconi who had developed means of wireless communication. “Saying that Marconi invented the radio is like saying that Columbus discovered America,” he writes. But sending messages over long distances was Marconi’s great breakthrough.

Image: Marconi

Inventor Guglielmo Marconi in 1901, operating an apparatus similar to the one he used to transmit the first wireless signal across the Atlantic Wikimedia Commons

1958: Approximately six months after the Soviets launched Sputnik, Frank McLure, the research director at Johns Hopkins Applied Physics Laboratory, calls physicists William Guier and George Weiffenbach into his office. Guier and Weiffenbach had used radio receivers to listen to Sputnik’s consistent electronic beeping and calculate the Soviet satellite’s location; McLure wants to know if the process could work in reverse, allowing a receiver on the ground to use a satellite’s signal to pinpoint its own position on Earth. The foundation for GPS tracking is born.
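To see why McLure’s reversal is plausible, it helps to sketch the Doppler effect Guier and Weiffenbach were exploiting. The Python sketch below is illustrative only, not their actual calculation; the beacon frequency, orbital speed and straight-line geometry are all rough assumptions. A satellite broadcasting a fixed tone sounds slightly high-pitched while approaching a listener and slightly low while receding, and the instant the shift crosses zero marks the moment of closest approach:

C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_tx, sat_pos, sat_vel, rx_pos):
    """Received-minus-transmitted frequency (Hz) for a moving transmitter
    and a stationary receiver; positions in meters, velocities in m/s."""
    los = [r - s for r, s in zip(rx_pos, sat_pos)]   # satellite -> receiver
    dist = sum(x * x for x in los) ** 0.5
    los = [x / dist for x in los]                    # unit line-of-sight vector
    closing_speed = sum(v * u for v, u in zip(sat_vel, los))  # >0 when approaching
    return f_tx * closing_speed / C                  # first-order Doppler shift

# Sputnik's beacon sat near 20 MHz; a pass at roughly 8 km/s shifts it by a
# few hundred Hz, sweeping from positive to negative at closest approach.
f0 = 20_005_000.0
for t in (-60, 0, 60):                               # seconds from closest approach
    sat = (t * 8000.0, 500_000.0, 0.0)               # idealized straight-line pass
    vel = (8000.0, 0.0, 0.0)
    print(t, round(doppler_shift(f0, sat, vel, (0.0, 0.0, 0.0)), 1))

Run the same pass against several guessed listener positions and the predicted Doppler curve changes shape; the guess whose curve matches the recording is where you are. Matching from a known listener to an unknown satellite is what the two physicists had already done, and swapping those roles is exactly the reversal McLure was asking about.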

1969: A pair of Bell Labs scientists named William Boyle and George Smith create a silicon chip that records light and converts it into digital data. It is called a charge-coupled device, or CCD, and serves as the basis for the digital photography used in spy and mapping satellites.

1976: The top-secret, school-bus-size KH-11 satellite is launched. It uses Boyle and Smith’s CCD technology to take the first digital spy photographs. Prior to this digital technology, actual film was used for making spy photographs. It was a risky and dangerous venture for pilots like Francis Gary Powers, who was shot down while flying a U-2 spy plane and taking film photographs over the Soviet Union in 1960.

Image: KH-11 image

KH-11 satellite photo showing construction of a Kiev-class aircraft carrier Wikimedia Commons

1983: Korean Air Lines flight 007 is shot down after leaving Anchorage, Alaska, and veering into Soviet airspace. All 269 passengers are killed, including Georgia Democratic Rep. Larry McDonald. Two weeks after the attack, President Ronald Reagan directs the military’s GPS technology to be made available for civilian use so that similar tragedies would not be repeated. Bray notes, however, that GPS technology had always been intended to be made public eventually. Here’s Reagan’s address to the nation following the attack:

1989: The US Census Bureau releases (PDF) TIGER (Topologically Integrated Geographic Encoding and Referencing) into the public domain. The digital map data allows any individual or company to create virtual maps.
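As a minimal sketch of what that public-domain release enables, here is a hypothetical example using the third-party geopandas library and a TIGER/Line state-boundaries shapefile downloaded from census.gov (the filename is illustrative):

import geopandas as gpd

# Load a TIGER/Line shapefile; NAME and STUSPS are attribute columns the
# Census Bureau ships with its state-boundary files.
states = gpd.read_file("tl_2023_us_state.shp")
print(states[["NAME", "STUSPS"]].head())

# Draw a simple outline map; no license fee, because the data is public domain.
states.plot(edgecolor="black", figsize=(12, 8))

MapQuest, a few entries below, was built on exactly this public-domain data at much larger scale.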

1994: The FCC declares that wireless carriers must find ways for emergency services to locate mobile 911 callers. Cellphone companies choose to use their cellphone towers to comply. However, entrepreneurs begin to see the potential for GPS-integrated phones, as well. Bray highlights SnapTrack, a company that figures out early on how to squeeze GPS systems into phones—and is purchased by Qualcomm in 2000 for $1 billion.

1996: GeoSystems launches an internet-based mapping service called MapQuest, which uses the Census Bureau’s public-domain mapping data. It attracts hundreds of thousands of users and is purchased by AOL four years later for $1.1 billion.

2004: Google buys Australian mapping startup Where 2 Technologies and American satellite photography company Keyhole for undisclosed amounts. The next year, Google launches Google Maps, which is now the most-used mobile app in the world.

2012: The Supreme Court ruling in United States v. Jones (PDF) restricts police usage of GPS to track suspected criminals. Bray tells the story of Antoine Jones, who was convicted of dealing cocaine after police placed a GPS device on his wife’s Jeep to track his movements. The court’s decision in his case is unanimous: The GPS device had been placed without a valid search warrant. Despite the unanimous decision, just five justices sign off on the majority opinion; the others wanted further privacy protections in such cases—a mixed decision that leaves future battles over privacy open to interpretation.


http://www.motherjones.com/mixed-media/2014/04/you-are-here-book-hiawatha-bray-gps-navigation