How Google and the Big Tech Companies Are Helping Maintain America’s Empire


The military, intelligence agencies, and defense contractors are deeply intertwined with Silicon Valley.

Silicon Valley has been in the media spotlight for its role in gentrifying San Francisco and driving up rents, for helping the NSA spy on American citizens, and for its lack of racial and gender diversity. Despite all that, Silicon Valley still has a reputation for benevolence, innocence, and progressivism; hence Google’s motto, “Don’t be evil.” A recent Wall Street Journal/NBC News poll found that, even after the Snowden leaks, 53% of those surveyed had high confidence in the tech industry. The tech industry is not seen as evil in the way that, say, Wall Street or Big Oil is.

One aspect of Silicon Valley that would damage this reputation has not been scrutinized enough—its involvement in American militarism. Silicon Valley’s ties to the National Security State extend beyond the NSA’s PRISM program. Through numerous partnerships and contracts with the U.S. military, intelligence and law enforcement agencies, Silicon Valley is part of the American military-industrial complex. Google sells its technologies to the U.S. military, FBI, CIA, NSA, DEA, NGA, and other intelligence and law enforcement agencies, has managers with backgrounds in military and intelligence work, and partners with defense contractors like Lockheed Martin and Northrop Grumman. Amazon designed a cloud computing system that will be used by the CIA and every other intelligence agency. The CIA-funded tech company Palantir sells its data-mining and analysis software to the U.S. military, CIA, LAPD, NYPD, and other security agencies. These technologies have several war-zone and intelligence-gathering applications.

First, a little background to explain how the military has been involved with Silicon Valley since its inception as a technology center. Silicon Valley’s roots date back to World War II, according to a presentation by researcher and entrepreneur Steve Blank. During the war, the U.S. government funded a secret lab at Harvard University to research how to disrupt Germany’s radar-guided electronic air defense system. The solution: drop aluminum foil in front of German radars to jam them. This birthed modern electronic warfare and signals intelligence. The head of that lab was Stanford engineering professor Fred Terman who, after World War II, took 11 staffers from that lab to create Stanford’s Electronic Research Lab (ERL), which received funding from the military. Stanford also had an Applied Electronics Lab (AEL) that did classified research on jammers and electronic intelligence for the military.

In fact, much of AEL’s research aided the U.S. war in Vietnam. This made the lab a target for student antiwar protesters who nonviolently occupied the lab in April 1969 and demanded an end to classified research at Stanford. After nearly a year of teach-ins, protests, and violent clashes with the police, Stanford effectively eliminated war-related classified research at the university.

The ERL did research in and designed microwave tubes and electronic receivers and jammers. This helped the U.S. military and intelligence agencies spy on the Soviet Union and jam its air defense systems. Local tube companies and contractors developed the technologies based on that research. Some researchers from ERL also founded microwave companies in the area. This created a boom of microwave and electronics startups that ultimately formed the Silicon Valley known today.

Don’t be evil, Google

Last year, the first Snowden documents revealed that Google, Facebook, Yahoo!, and other major tech companies had provided the NSA access to their users’ data through the PRISM program. All the major tech companies denied knowledge of PRISM and publicly struck an adversarial posture toward government surveillance. However, Al Jazeera America’s Jason Leopold obtained, via FOIA request, two sets of email communications between former NSA Director Gen. Keith Alexander and Google executives Sergey Brin and Eric Schmidt. The communications, according to Leopold, suggest “a far cozier working relationship between some tech firms and the U.S. government than was implied by Silicon Valley brass” and that “not all cooperation was under pressure.” In the emails, Alexander and the Google executives discussed information sharing for national security purposes.

But PRISM is the tip of the iceberg. Several tech companies are deeply in bed with the U.S. military, intelligence agencies, and defense contractors. One very notable example is Google. Google markets and sells its technology to the U.S. military and several intelligence and law enforcement agencies, such as the FBI, CIA, NSA, DEA, and NGA.

Google has a contract with the National Geospatial-Intelligence Agency (NGA) that allows the agency to use Google Earth Builder. The NGA provides geospatial intelligence, such as satellite imagery and mapping, to the military and other intelligence agencies like the NSA; NGA geospatial intelligence helped the military and CIA locate and kill Osama bin Laden. Google’s Official Enterprise Blog announced that “Google’s work with NGA marks one of the first major government geospatial cloud initiatives, which will enable NGA to use Google Earth Builder to host its geospatial data and information. This allows NGA to customize Google Earth & Maps to provide maps and globes to support U.S. government activities, including: U.S. national security; homeland security; environmental impact and monitoring; and humanitarian assistance, disaster response and preparedness efforts.”

Google Earth’s technology “got its start in the intelligence community, in a CIA-backed firm called Keyhole,” which Google purchased in 2004, according to the Washington Post. PandoDaily reporter Yasha Levine, who has extensively reported on Google’s ties to the military and intelligence community, points out that Keyhole’s “main product was an application called EarthViewer, which allowed users to fly and move around a virtual globe as if they were in a video game.”

In 2003, a year before Google bought Keyhole, the company was on the verge of bankruptcy, until it was saved by In-Q-Tel, a CIA-funded venture capital firm. The CIA worked with other intelligence agencies to fit Keyhole’s systems to its needs. According to the CIA Museum page, “The finished product transformed the way intelligence officers interacted with geographic information and earth imagery. Users could now easily combine complicated sets of data and imagery into clear, realistic visual representations. Users could ‘fly’ from space to street level seamlessly while interactively exploring layers of information including roads, schools, businesses, and demographics.”

How much In-Q-Tel invested in Keyhole is classified. However, Levine writes that “the bulk of the funds didn’t come from the CIA’s intelligence budget — as they normally do with In-Q-Tel — but from the NGA, which provided the money on behalf of the entire ‘Intelligence Community.’ As a result, equity in Keyhole was held by two major intelligence agencies.” Shortly after In-Q-Tel bought Keyhole, the NGA (then known as the National Imagery and Mapping Agency, or NIMA) announced that it was immediately putting Keyhole’s technology to use supporting U.S. troops in Iraq during the 2003-2011 war. The next year, Google purchased Keyhole and used its technology to develop Google Earth.

Four years after Google purchased Keyhole, in 2008, Google and the NGA purchased GeoEye-1, the world’s highest-resolution satellite, from the company GeoEye. The NGA paid for half of the satellite’s $502 million development and committed to purchasing its imagery. Because of a government restriction, Google gets lower-resolution images but still retains exclusive access to the satellite’s photos. GeoEye later merged into DigitalGlobe in 2013.

Google’s relationship to the National Security State extends beyond contracts with the military and intelligence agencies. Many managers in Google’s public sector division come from the U.S. military and intelligence community, according to one of Levine’s reports.

Michele R. Weslander-Quaid is one example. She became Google’s Innovation Evangelist and Chief Technology Officer of the company’s public sector division in 2011. Before joining Google, Weslander-Quaid worked throughout the military and intelligence world after 9/11, holding positions at the National Geospatial-Intelligence Agency, the Office of the Director of National Intelligence, the National Reconnaissance Office, and later the Office of the Secretary of Defense. Levine noted that Weslander-Quaid also “toured combat zones in both Iraq and Afghanistan in order to see the tech needs of the military first-hand.”

Throughout her years working in the intelligence community, Weslander-Quaid “shook things up by dropping archaic software and hardware and convincing teams to collaborate via web tools” and “treated each agency like a startup,” according to a 2014 Entrepreneur Magazine profile. She was a major advocate for web tools and cloud-based software and was responsible for implementing them at the agencies where she worked. At Google, Weslander-Quaid meets “with agency directors to map technological paths they want to follow, and helps Google employees understand what’s needed to work with public-sector clients.” As she told Entrepreneur, “A big part of my job is to translate between Silicon Valley speak and government dialect” and “act as a bridge between the two cultures.”

Another example is Shannon Sullivan, head of defense and intelligence at Google. Before working at Google, Sullivan served in various U.S. Air Force intelligence positions, first as a senior military advisor and then in the Air Force’s C4ISR Acquisition and Test; Space Operations, Foreign Military Sales unit. (C4ISR stands for “Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance.”) Sullivan left the Air Force to work as Defense Director for BAE Systems, a British-based arms and defense company, and then as Army and Air Force COCOMs Director at Oracle. His last project at Google was “setting up a Google Apps ‘transformational’ test program to supply 50,000 soldiers in the US Army and DoD with a customized Google App Universe,” according to Levine.

Google not only has a revolving door with the Pentagon and intelligence community, it also partners with defense and intelligence contractors. Levine writes that “in recent years, Google has increasingly taken the role of subcontractor: selling its wares to military and intelligence agencies by partnering with established military contractors.”

The company’s partners include two of the biggest American defense contractors: Lockheed Martin, an aerospace, defense, and information security company, and Northrop Grumman, an aerospace and defense technology company. Both Lockheed and Northrop produce aircraft, missiles and defense systems, naval and radar systems, unmanned systems, satellites, information technology, and other defense-related technologies. In 2011, Lockheed Martin made $36.3 billion in arms sales, while Northrop Grumman made $21.4 billion. Lockheed has a major office in Sunnyvale, California, right in the middle of Silicon Valley. Lockheed was also involved in interrogating prisoners in Iraq and at Guantanamo, through its purchase of Sytex Corporation and the information technology unit of Affiliated Computer Services (ACS), both of which directly interrogated detainees.

Google worked with Lockheed to design geospatial technologies. In 2007, describing the company as “Google’s partner,” the Washington Post reported that Lockheed “demonstrated a Google Earth product that it helped design for the National Geospatial-Intelligence Agency’s work in Iraq. These included displays of key regions of the country and outlined Sunni and Shiite neighborhoods in Baghdad, as well as U.S. and Iraqi military bases in the city. Neither Lockheed nor Google would say how the geospatial agency uses the data.” Meanwhile, Google has a $1-million contract with Northrop to install a Google Earth plug-in.

Both Lockheed and Northrop manufacture and sell unmanned systems, also known as drones. Lockheed’s drones include the Stalker, which can stay airborne for 48 hours; the Desert Hawk III, a small reconnaissance drone used by British troops in Iraq and Afghanistan; and the RQ-170 Sentinel, a high-altitude stealth reconnaissance drone used by the U.S. Air Force and CIA. RQ-170s have been used in Afghanistan and for the raid that killed Osama bin Laden. One American RQ-170 infamously crashed in Iran while on a surveillance mission over the country in late 2011.

Northrop Grumman built the RQ-4 Global Hawk, a high-altitude surveillance drone used by the Air Force and Navy. Northrop is also building a new stealth drone for the Air Force called the RQ-180, which may be operational by 2015. In 2012, Northrop sold $1.2 billion worth of drones to South Korea.

Google is also cashing in on the drone market. It recently purchased drone manufacturer Titan Aerospace, which makes high-altitude, solar-powered drones that can “stay in the air for years without needing to land,” reported the Wire. Facebook entered into talks to buy the company a month before Google made the purchase.

Last December, Google purchased Boston Dynamics, a major engineering and robotics company that receives funding from the military for its projects. According to the Guardian, “Funding for the majority of the most advanced Boston Dynamics robots comes from military sources, including the US Defence Advanced Research Projects Agency (DARPA) and the US army, navy and marine corps.” Some of these DARPA-funded projects include BigDog, the Legged Squad Support System (LS3), Cheetah, WildCat, and Atlas, all of which are autonomous, walking robots. Atlas is humanoid, while BigDog, LS3, Cheetah, and WildCat are animal-like quadrupeds. Boston Dynamics was one of eight robotics companies Google purchased in 2013; the others were Industrial Perception, Redwood Robotics, Meka, Schaft, Holomni, Bot & Dolly, and Autofuss. Google has been tight-lipped about the specifics of its plans for the robotics companies, but some sources told the New York Times that Google’s robotics efforts are aimed not at consumers but at manufacturing, such as automating supply chains.

Google’s “Enterprise Government” page also lists the military and intelligence contractors Science Applications International Corporation (SAIC) and Blackbird Technologies among the companies it partners with. In particular, Blackbird is a military contractor that supplies locators for “the covert ‘tagging, tracking and locating’ of suspected enemies,” according to Wired. Its customers include the U.S. Navy and U.S. Special Operations Command (SOCOM), which oversees the U.S. military’s special operations forces units, such as the Navy SEALs, Delta Force, Army Rangers, and Green Berets. Blackbird has even sent employees as armed operatives on secret missions with special operations forces. The company’s vice president is Cofer Black, a former CIA operative who ran the agency’s Counterterrorist Center before 9/11.

Palantir and the military

Many other tech companies are working with military and intelligence agencies. Amazon recently developed a $600 million cloud computing system for the CIA that will also serve all 17 intelligence agencies. Both Amazon and the CIA have said little to nothing about the system’s capabilities.

Palantir, which is based in Palo Alto, California, produces and sells data-mining and analysis software. Its customers include the U.S. Marine Corps, U.S. Special Operations Command, CIA, NSA, FBI, Defense Intelligence Agency, Department of Homeland Security, National Counterterrorism Center, LAPD, and NYPD. In California, the Northern California Regional Intelligence Center (NCRIC), one of 72 federally run fusion centers built across the nation since 9/11, uses Palantir software to collect and analyze license plate photos.

While Google sells its wares to whoever will buy them in order to make a profit, Palantir, as a company, isn’t solely dedicated to profit-maximizing: counterterrorism has been part of the company’s mission since it began. The company was founded in 2004 by Alex Karp, who is the company’s chief executive, and billionaire PayPal co-founder Peter Thiel. In 2003, Thiel came up with the idea of developing software to fight terrorism based on PayPal’s fraud-recognition software. The CIA’s In-Q-Tel helped jumpstart the company by investing $2 million; the rest of the company’s $30 million in start-up costs was funded by Thiel and his venture capital fund.

Palantir’s software has “a user-friendly search tool that can scan multiple data sources at once, something previous search tools couldn’t do,” according to a 2009 Wall Street Journal profile. The software fills gaps in intelligence “by using a ‘tagging’ technique similar to that used by the search functions on most Web sites. Palantir tags, or categorizes, every bit of data separately, whether it be a first name, a last name or a phone number.” Analysts can quickly categorize information as it comes in. The software’s ability to scan and categorize multiple sources of incoming data helps analysts connect the dots among large and different pools of information — signals intelligence, human intelligence, geospatial intelligence, and much more. All this data is collected and analyzed in Palantir’s system. This makes it useful for war-related, intelligence, and law enforcement purposes. That is why so many military, police, and intelligence agencies want Palantir’s software.
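To make the “tagging” idea concrete, here is a minimal sketch in Python of how records from separate sources might be tagged field by field and placed in a single index, so that one query scans them all at once. The records, field names, and values below are invented for illustration; this is a conceptual sketch, not Palantir’s actual design.

    # Hypothetical illustration: tag every field of every record, whatever its
    # source, and build an inverted index so a single search spans all sources.
    from collections import defaultdict

    records = [
        {"source": "phone_log",    "first_name": "John", "phone": "555-0101"},
        {"source": "field_report", "last_name": "Doe",   "phone": "555-0101"},
        {"source": "travel_db",    "first_name": "John", "last_name": "Doe"},
    ]

    index = defaultdict(list)
    for rec in records:
        for field, value in rec.items():
            if field != "source":
                index[(field, value)].append(rec)  # each tagged bit points home

    # One lookup now "connects the dots" across all three sources.
    for rec in index[("phone", "555-0101")]:
        print(rec["source"], "->", rec)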

U.S. troops in Afghanistan who used Palantir’s software, particularly the Marines and SOCOM, found it very helpful for their missions. Commanders liked Palantir’s ability to point them to insurgents who “build and bury homemade bombs, the biggest killer of U.S. troops in Afghanistan,” the Washington Times reported. A Government Accountability Office report said Palantir’s software “gained a reputation for being intuitive and easy to use, while also providing effective tools to link and visualize data.” Special operations forces found Palantir to be “a highly effective system for conducting intelligence information analysis and supporting operations” that “provided flexibility to support mobile, disconnected users out on patrols or conducting missions.” Many within the military establishment are pushing for other branches, such as the Army, to adopt Palantir’s software in order to improve intelligence-sharing.

Palantir’s friends include people from the highest echelons of the National Security State. Former CIA Director George Tenet and former Secretary of State Condoleezza Rice are advisers to Palantir, while former CIA Director Gen. David Petraeus “considers himself a friend of Palantir CEO Alex Karp,” according to Forbes. Tenet told Forbes, “I wish I had Palantir when I was director. I wish we had the tool of its power because it not only slices and dices today, but it gives you an enormous knowledge management tool to make connections for analysts that go back five, six, eight, 10 years. It gives you a shot at your data that I don’t think any product that we had at the time did.”

High-tech militarism

Silicon Valley’s technology has numerous battlefield applications, something the U.S. military has noticed. Since the global war on terror began, the military has had a growing need for high-tech intelligence-gathering and other equipment. “A key challenge facing the military services is providing users with the capabilities to analyze the huge amount of intelligence data being collected,” the GAO report said. The proliferation of drones, counterinsurgency operations, sophisticated intelligence-surveillance-reconnaissance (ISR) systems, and new technologies and sensors has changed how intelligence is used in counterinsurgency campaigns in Iraq and Afghanistan and in counterterrorism operations in Pakistan, Somalia, Yemen, and other countries.

According to the report, “The need to integrate the large amount of available intelligence data, including the ability to synthesize information from different types of intelligence sources (e.g., HUMINT, SIGINT, GEOINT, and open source), has become increasingly important in addressing, for example, improvised explosive device threats and tracking the activities of certain components of the local population.” This is where Palantir’s software comes in handy. It does what the military needs — data-mining and intelligence analysis. That is why it is used by SOCOM and other arms of the National Security State.

Irregular wars against insurgents and terrorist groups present two problems: finding the enemy and killing them. This is because such groups know how to mix in with, and are usually part of, the local population. Robotic weapons, such as drones, present “an asymmetric solution to an asymmetric problem,” according to a Foster-Miller executive quoted in P.W. Singer’s book Wired for War. Drones can hover over a territory for long periods of time and launch a missile at a target on command without putting American troops in harm’s way, making them very attractive weapons.

Additionally, the U.S. military and intelligence agencies are increasingly relying on signals intelligence to solve this problem. Signals intelligence monitors electronic signals, such as phone calls and conversations, emails, radio or radar signals, and electronic communications. Intelligence analysts or troops on the ground will collect and analyze the electronic communications, along with geospatial intelligence, of adversaries to track their location, map human behavior, and carry out lethal operations.

Robert Steele, a former Marine and CIA case officer and a current open source intelligence advocate, explained the utility of signals intelligence. “Signals intelligence has always relied primarily on seeing the dots and connecting the dots, not on knowing what the dots are saying. When combined with a history of the dots, and particularly the dots coming together in meetings, or a black (anonymous) cell phone residing next to a white (known) cellphone, such that the black acquires the white identity by extension, it becomes possible to ‘map’ human activity in relation to weapons caches, mosques, meetings, etcetera,” he said in an email interview. Steele added that the “only advantage” of signals intelligence “is that it is very very expensive and leaves a lot of money on the table for pork and overhead.”
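The co-location pattern Steele describes can be sketched in a few lines of Python. Everything below is invented toy data; the point is only to show how an anonymous handset repeatedly sighted at the same place and time as a known one “acquires” that identity, and how thin the evidence behind such a link can be.

    # Toy sketch of co-location inference: if a "black" (anonymous) phone keeps
    # turning up at the same tower and hour as a "white" (known) phone, the
    # black phone is provisionally assigned the white phone's identity.
    from collections import Counter

    # (phone_id, cell_tower, hour) sightings -- entirely hypothetical.
    sightings = [
        ("white_1", "tower_A", 9),  ("black_7", "tower_A", 9),
        ("white_1", "tower_B", 13), ("black_7", "tower_B", 13),
        ("white_1", "tower_C", 20), ("black_7", "tower_C", 20),
        ("black_9", "tower_A", 9),  # a single chance co-sighting
    ]

    known = {"white_1": "known subject"}
    co_seen = Counter()
    for phone, tower, hour in sightings:
        for other, tower2, hour2 in sightings:
            if phone in known and other not in known and (tower, hour) == (tower2, hour2):
                co_seen[(other, phone)] += 1

    # Three co-sightings is an arbitrary threshold; black_9's single overlap
    # falls below it, but a looser threshold would sweep it in too.
    for (anon, white), n in co_seen.items():
        if n >= 3:
            print(f"{anon} linked to {known[white]} after {n} co-sightings")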

In Iraq and Afghanistan, for example, Joint Special Operations Command (JSOC) commandos combined images from surveillance drones with the tracking of mobile phone numbers to analyze insurgent networks. Commandos then used this analysis to locate and capture or kill their intended targets during raids. Oftentimes, however, this led to getting the wrong person. Steele added that human and open source intelligence are “vastly superior to signals intelligence 95% of the time” but “are underfunded precisely because they are not expensive and require face to face contact with foreigners, something the US Government is incompetent at, and Silicon Valley could care less.”

Capt. Michael Kearns, a retired U.S. and Australian Air Force intelligence officer and former SERE instructor with experience working in Silicon Valley, explained how digital information makes it easier for intelligence agencies to collect data. In an email, he told AlterNet, “Back in the day when the world was analog, every signal was one signal. Some signals contained a broad band of information contained within, however, there were no ‘data packets’ embedded within the electromagnetic spectrum. Therefore, collecting a signal, or a phone conversation, was largely the task of capturing / decoding / processing some specifically targeted, singular source. Today, welcome to the digital era. Data ‘packets’ flow as if like water, with pieces and parts of all things ‘upstream’ contained within. Therefore, the task today for a digital society is largely one of collecting everything, so as to fully unwrap and exploit the totality of the captured data in an almost exploratory manner. And therein lies the apparent inherently unconstitutional-ness of wholesale collection of digital data…it’s almost like ‘pre-crime.'”

One modern use of signals intelligence is in the United States’ extrajudicial killing program, a major component of the global war on terror. The extrajudicial killing program began during the Bush administration as a means to kill suspected terrorists around the world without any due process. However, as Bush focused on the large-scale occupations of Iraq and Afghanistan, the extrajudicial killing program received less emphasis.

The Obama administration continued the war on terror but largely shifted away from large-scale occupations, emphasizing CIA/JSOC drone strikes, airstrikes, cruise missile attacks, proxies, and raids by special operations forces against suspected terrorists and other groups. Obama continued and expanded Bush’s assassination program, relying on drones and special operations forces to do the job. According to the Bureau of Investigative Journalism, U.S. drone strikes and other covert operations have killed between nearly 3,000 and more than 4,800 people, including 500 to over 1,000 civilians, in Pakistan, Yemen, and Somalia. During Obama’s five years in office, over 2,400 people were killed by U.S. drone strikes. Most of those killed by drone strikes have been civilians or low-level fighters; in Pakistan, only 2 percent were high-level militants. Communities living under drone strikes are regularly terrorized and traumatized by them.

Targeting for drone strikes is based on metadata analysis and on geolocating the cell phone SIM card of a suspected terrorist, according to a report by the Intercept. This intelligence is provided by the NSA to the CIA or JSOC, which then carries out the drone strike. However, it is very common for people in countries like Yemen or Pakistan to hold multiple SIM cards and to hand their cell phones to family and friends, and for groups like the Taliban to randomly hand out SIM cards among their fighters to confuse trackers.

Since this methodology targets a SIM card linked to a suspect rather than an actual person, innocent civilians are regularly killed unintentionally. To ensure the assassination program will continue, the National Counterterrorism Center developed the “disposition matrix,” a database that continuously adds the names, locations, and associates of suspected terrorists to kill-or-capture lists.

The Defense Department’s 2015 budget proposal requests $495.6 billion, down $0.4 billion from last year, and decreases the Army to around 440,000 to 450,000 troops from the post-9/11 peak of 570,000. But it protects money — $5.1 billion — for cyberwarfare and special operations forces, giving SOCOM $7.7 billion, a 10 percent increase from last year, and 69,700 personnel. Thus, these sorts of operations will likely continue.

As the United States emphasizes cyberwarfare, special operations, drone strikes, electronic-based forms of intelligence, and other tactics of irregular warfare to wage perpetual war, sophisticated technology will be needed. Silicon Valley is the National Security State’s go-to industry for this purpose.

Adam Hudson is a journalist, writer, and photographer.

http://www.alternet.org/news-amp-politics/how-google-and-big-tech-companies-are-helping-maintain-americas-empire?akid=12149.265072.iCZIs-&rd=1&src=newsletter1016284&t=6&paging=off&current_page=1#bookmark

Not Content To Ruin Just San Francisco, Rich Techies Are Gentrifying Burning Man Too

Artist Dadara‘s Facebook like altar from Burning Man 2013. Photo: Bexx Brown-Spinelli/Flickr

This will come as news only to people who have not attended Burning Man in the last couple of years, but the New York Times has just caught on to the fact that Silicon Valley millionaires (and billionaires) have been attending the desert festival in greater numbers and quickly ruining it with their displays of wealth. While we used to call Coachella “Burning Man Lite for Angelenos,” Burning Man itself is quickly becoming Coachella on Crack for rich tech folk who want to get naked and do bong hits with Larry Page in Elon Musk’s decked-out RV.

Burners won’t just be sharing the playa with Larry and Sergey, Zuck, Grover Norquist, and at least one Winklevoss twin this year. There will also be a legion of new millionaires, most of them probably Burning Man virgins, who will be living in the lap of luxury and occasionally dropping in on your parties to ask for molly.

Per the Times piece:

“We used to have R.V.s and precooked meals,” said a man who attends Burning Man with a group of Silicon Valley entrepreneurs. (He asked not to be named so as not to jeopardize those relationships.) “Now, we have the craziest chefs in the world and people who build yurts for us that have beds and air-conditioning.” He added with a sense of amazement, “Yes, air-conditioning in the middle of the desert!”

His camp includes about 100 people from the Valley and Hollywood start-ups, as well as several venture capital firms. And while dues for most non-tech camps run about $300 a person, he said his camp’s fees this year were $25,000 a person. A few people, mostly female models flown in from New York, get to go free, but when all is told, the weekend accommodations will collectively cost the partygoers over $2 million.

“Anyone who has been going to Burning Man for the last five years is now seeing things on a level of expense or flash that didn’t exist before,” said Brian Doherty, author of the book “This Is Burning Man.” “It does have this feeling that, ‘Oh, look, the rich people have moved into my neighborhood.’ It’s gentrifying.”

The blockaded camps of the tech gentrifiers have tended to be in the outer rings of Black Rock City, as was previously reported in 2011 when a guest of Elon Musk’s spoke to the Wall Street Journal. “We’re out of the thick of it,” he said, “so we’re not offending the more elaborate or involved set ups.”

But as Silicon Valley assumes more and more of a presence on the playa, what’s to stop them from claiming better and better real estate, closer to where the action is?

You won’t see any evidence of this on Facebook, though. All of this happens without the tech world’s usual passion for documentation, since they do abide by at least that one tenet of Burning Man culture that frowns on photography. And at least, as of 2014, they seem to understand that their displays of wealth aren’t all that welcome, and should probably be kept on the down-low.

But seriously? Models flown in from New York? Gross.

[NYT]

 

http://sfist.com/2014/08/21/not_content_to_ruin_just_san_franci.php

Facebook, email and the neuroscience of always being distracted

I used to be able to read for hours without digital interruption. Now? That’s just funny. I want my focus back!

"War and Peace" tortured me: Facebook, email and the neuroscience of always being distracted
This essay is adapted from “The End of Absence”

I’m enough of a distraction addict that a low-level ambient guilt about not getting my real work done hovers around me for most of the day. And this distractible quality in me pervades every part of my life. The distractions—What am I making for dinner?, Who was that woman in “Fargo”?, or, quite commonly, What else should I be reading?—are invariably things that can wait. What, I wonder, would I be capable of doing if I weren’t constantly worrying about what I ought to be doing?

And who is this frumpy thirty-something man who has tried to read “War and Peace” five times, never making it past the garden gate? I took the tome down from the shelf this morning and frowned again at those sad little dog-ears near the fifty-page mark.

Are the luxuries of time on which deep reading is reliant available to us anymore? Even the attention we deign to give to our distractions, those frissons, is narrowing.

It’s important to note this slippage. As a child, I would read for hours in bed without the possibility of a single digital interruption. Even the phone (which was anchored by wires to the kitchen wall downstairs) was generally mute after dinner. Our two hours of permitted television would come to an end, and I would seek out the solitary refuge of a novel. And deep reading (as opposed to reading a Tumblr feed) was a true refuge. What I liked best about that absorbing act was the fact that books became a world unto themselves, one that I (an otherwise powerless kid) had some control over. There was a childish pleasure in holding the mysterious object in my hands; in preparing for the story’s finale by monitoring what Austen called a “tell-tale compression of the pages”; in proceeding through some perfect sequence of plot points that bested by far the awkward happenstance of real life.

The physical book, held, knowable, became a small mental apartment I could have dominion over, something that was alive because of my attention and then lived in me.

But now . . . that thankful retreat, where my child-self could become so lost, seems unavailable to me. Today there is no room in my house, no block in my city, where I am unreachable.

Eventually, if we start giving them a chance, moments of absence reappear, and we can pick them up if we like. One appeared this morning, when my partner flew to Paris. He’ll be gone for two weeks. I’ll miss him, but this is also my big break.



I’ve taken “War and Peace” back down off the shelf. It’s sitting beside my computer as I write these lines—accusatory as some attention-starved pet.

You and me, old friend. You, me, and two weeks. I open the book, I shut the book, and I open the book again. The ink swirls up at me. This is hard. Why is this so hard?

* * *

Dr. Douglas Gentile, a friendly professor at Iowa State University, recently commiserated with me about my pathetic attention span. “It’s me, too, of course,” he said. “When I try to write a paper, I can’t keep from checking my e-mail every five minutes. Even though I know it’s actually making me less productive.” This failing is especially worrying for Gentile because he happens to be one of the world’s leading authorities on the effects of media on the brains of the young. “I know, I know! I know all the research on multitasking. I can tell you absolutely that everyone who thinks they’re good at multitasking is wrong. We know that in fact it’s those who think they’re good at multitasking who are the least productive when they multitask.”

The brain itself is not, whatever we may like to believe, a multitasking device. And that is where our problem begins. Your brain does a certain amount of parallel processing in order to synthesize auditory and visual information into a single understanding of the world around you, but the brain’s attention is itself only a spotlight, capable of shining on one thing at a time. So the very word multitask is a misnomer. There is rapid-shifting minitasking, there is lame-spasms-of-effort-tasking, but there is, alas, no such thing as multitasking. “When we think we’re multitasking,” says Gentile, “we’re actually multiswitching.”

We can hardly blame ourselves for being enraptured by the promise of multitasking, though. Computers—like televisions before them—tap into a very basic brain function called an “orienting response.” Orienting responses served us well in the wilderness of our species’ early years. When the light changes in your peripheral vision, you must look at it because that could be the shadow of something that’s about to eat you. If a twig snaps behind you, ditto. Having evolved in an environment rife with danger and uncertainty, we are hardwired to always default to fast-paced shifts in focus. Orienting responses are the brain’s ever-armed alarm system and cannot be ignored.

Gentile believes it’s time for a renaissance in our understanding of mental health. To begin with, just as we can’t accept our body’s cravings for chocolate cake at face value, neither can we any longer afford to indulge the automatic desires our brains harbor for distraction.

* * *

It’s not merely difficult at first. It’s torture. I slump into the book, reread sentences, entire paragraphs. I get through two pages and then stop to check my e-mail—and down the rabbit hole I go. After all, one does not read “War and Peace” so much as suffer through it. It doesn’t help that the world at large, being so divorced from such pursuits, is often aggressive toward those who drop away into single-subject attention wells. People don’t like it when you read “War and Peace.” It’s too long, too boring, not worth the effort. And you’re elitist for trying.

In order to finish the thing in the two weeks I have allotted myself, I must read one hundred pages each day without fail. If something distracts me from my day’s reading—a friend in the hospital, a magazine assignment, sunshine—I must read two hundred pages on the following day. I’ve read at this pace before, in my university days, but that was years ago and I’ve been steadily down-training my brain ever since.

* * *

Another week has passed—my “War and Peace” struggle continues. I’ve realized now that the subject of my distraction is far more likely to be something I need to look at than something I need to do. There have always been activities—dishes, gardening, sex, shopping—that derail whatever purpose we’ve assigned to ourselves on a given day. What’s different now is the addition of so much content that we passively consume.

Only this morning I watched a boy break down crying on “X Factor,” then regain his courage and belt out a half-decent rendition of Beyoncé’s “Listen”; next I looked up the original Beyoncé video and played it twice while reading the first few paragraphs of a story about the humanity of child soldiers; then I switched to a Nina Simone playlist prepared for me by Songza, which played while I flipped through a slide show of American soldiers seeing their dogs for the first time in years; and so on, ad nauseam. Until I shook myself out of this funk and tried to remember what I’d sat down to work on in the first place.

* * *

If I’m to break from our culture of distraction, I’m going to need practical advice, not just depressing statistics. To that end, I switch gears and decide to stop talking to scientists for a while; I need to talk to someone who deals with attention and productivity in the so-called real world, someone with a big smile and tailored suits, such as organizational guru Peter Bregman. He runs a global consulting firm that gets CEOs to unleash the potential of their workers, and he’s also the author of the acclaimed business book “18 Minutes,” which counsels readers to take a minute out of every work hour (plus five minutes at the start and end of the day) to do nothing but set an intention.

Bregman told me he sets his watch to beep every hour as a reminder that it’s time to right his course again. Aside from the intention setting, Bregman counsels no more than three e-mail check-ins a day. This notion of batch processing was anathema to someone like me, used to checking my in-box so constantly, particularly when my work feels stuck. “It’s incredibly inefficient to switch back and forth,” said Bregman, echoing every scientist I’d spoken to on multitasking. “Besides, e-mail is, actually, just about the least efficient mode of conversation you can have. And what we know about multitasking is that, frankly, you can’t. You just derail.”

“I just always feel I’m missing something important,” I said. “And that’s precisely why we lose hours every day, that fear.” Bregman argues that it’s people who can get ahead of that fear who end up excelling in the business world that he spends his own days in. “I think everyone is more distractible today than we used to be. It’s a very hard thing to fix. And as people become more distracted, we know they’re actually doing less, getting less done. Your efforts just leak out. And those who aren’t—aren’t leaking—are going to be the most successful.”

I hate that I leak. But there’s a religious certainty required in order to devote yourself to one thing while cutting off the rest of the world. We don’t know that the inbox is emergency-free, we don’t know that the work we’re doing is the work we ought to be doing. But we can’t move forward in a sane way without having some faith in the moment we’ve committed to. “You need to decide that things don’t matter as much as you might think they matter,” Bregman suggested as I told him about my flitting ways. And that made me think there might be a connection between the responsibility-free days of my youth and that earlier self’s ability to concentrate. My young self had nowhere else to be, no permanent anxiety nagging at his conscience. Could I return to that sense of ease? Could I simply be where I was and not seek out a shifting plurality to fill up my time?

* * *

It happened softly and without my really noticing.

As I wore a deeper groove into the cushions of my sofa, so the book I was holding wore a groove into my (equally soft) mind. Moments of total absence began to take hold more often; I remembered what it was like to be lost entirely in a well-spun narrative. There was the scene where Anna Mikhailovna begs so pitifully for a little money, hoping to send her son to war properly dressed. And there were, increasingly, more like it. More moments where the world around me dropped away and I was properly absorbed. A “causeless springtime feeling of joy” overtakes Prince Andrei; a tearful Pierre sees in a comet his last shimmering hope; Emperor Napoleon takes his troops into the heart of Russia, oblivious to the coming winter that will destroy them all…

It takes a week or so for withdrawal symptoms to work through a heroin addict’s body. While I wouldn’t pretend to compare severity here, doubtless we need patience, too, when we deprive ourselves of the manic digital distractions we’ve grown addicted to.

That’s how it was with my Tolstoy and me. The periods without distraction grew longer, I settled into the sofa and couldn’t hear the phone, couldn’t hear the ghost-buzz of something else to do. I’m teaching myself to slip away from the world again.

* * *

Yesterday I fell asleep on the sofa with a few dozen pages of “War and Peace” to go. I could hear my cell phone buzzing from its perch on top of the piano. I saw the glowing green eye of my Cyclops modem as it broadcast potential distraction all around. But on I went past the turgid military campaigns and past the fretting of Russian princesses, until sleep finally claimed me and my head, exhausted, dreamed of nothing at all. This morning I finished the thing at last. The clean edges of its thirteen hundred pages have been ruffled down into a paper cabbage, the cover is pilled from the time I dropped it in the bath. Holding the thing aloft, trophy style, I notice the book is slightly larger than it was before I read it.

It’s only after the book is laid down, and I’ve quietly showered and shaved, that I realize I haven’t checked my e-mail today. The thought of that duty comes down on me like an anvil.

Instead, I lie back on the sofa and think some more about my favorite reader, Milton, and about his own anxieties around reading. By the mid-1650s, he had suffered that larger removal from the crowds: he had lost his vision entirely and could not read at all, at least not with his own eyes. From within this new solitude, he worried that he could no longer meet his potential. One sonnet, written shortly after the loss of his vision, begins:

When I consider how my light is spent,

Ere half my days, in this dark world and wide,

And that one Talent which is death to hide

Lodged with me useless . . .

Yet from that position, in the greatest of caves, he began producing his greatest work. The epic “Paradise Lost,” a totemic feat of concentration, was dictated to aides, including his three daughters.

Milton already knew, after all, the great value in removing himself from the rush of the world, so perhaps those anxieties around his blindness never had a hope of dominating his mind. I, on the other hand, and all my peers, must make a constant study of concentration itself. I slot my ragged “War and Peace” back on the shelf. It left its marks on me the same way I left my marks on it (I feel awake as a man dragged across barnacles on the bottom of some ocean). I think: This is where I was most alive, most happy. How did I go from loving that absence to being tortured by it? How can I learn to love that absence again?

This essay is adapted from “The End of Absence” by Michael Harris, published by Current / Penguin Random House.

 

http://www.salon.com/2014/08/17/war_and_peace_tortured_me_facebook_email_and_the_neuroscience_of_always_being_distracted/?source=newsletter

Eight (No, Nine!) Problems With Big Data


BIG data is suddenly everywhere. Everyone seems to be collecting it, analyzing it, making money from it and celebrating (or fearing) its powers. Whether we’re talking about analyzing zillions of Google search queries to predict flu outbreaks, or zillions of phone records to detect signs of terrorist activity, or zillions of airline stats to find the best time to buy plane tickets, big data is on the case. By combining the power of modern computing with the plentiful data of the digital era, it promises to solve virtually any problem — crime, public health, the evolution of grammar, the perils of dating — just by crunching the numbers.

Or so its champions allege. “In the next two decades,” the journalist Patrick Tucker writes in the latest big data manifesto, “The Naked Future,” “we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference.” Statistical correlations have never sounded so good.

Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. “Jeopardy!” champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.

The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.
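The pattern is easy to reproduce: any two series that merely trend in the same direction over a handful of years will show a near-perfect correlation, whatever the causal story. Here is a minimal sketch in Python, using invented numbers that simply decline year over year (not the real murder-rate or browser statistics):

    # Two made-up series that both decline from 2006 to 2011, standing in for
    # the murder rate and Internet Explorer's market share. Illustrative only.
    from statistics import correlation  # available in Python 3.10+

    murder_rate = [5.8, 5.7, 5.4, 5.0, 4.8, 4.7]   # hypothetical values
    ie_share    = [80,  75,  68,  60,  52,  45]    # hypothetical values

    # Prints a coefficient close to 1: a strong, and utterly meaningless,
    # correlation produced by nothing more than two shared downward trends.
    print(correlation(murder_rate, ie_share))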

Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.

Third, many tools that are based on big data can be easily gamed. For example, big data programs for grading student essays often rely on measures like sentence length and word sophistication, which are found to correlate well with the scores given by human graders. But once students figure out how such a program works, they start writing long sentences and using obscure words, rather than learning how to actually formulate and write clear, coherent text. Even Google’s celebrated search engine, rightly seen as a big data success story, is not immune to “Google bombing” and “spamdexing,” wily techniques for artificially elevating website search placement.

Fourth, even when the results of a big data analysis aren’t intentionally gamed, they often turn out to be less robust than they initially seem. Consider Google Flu Trends, once the poster child for big data. In 2009, Google reported — to considerable fanfare — that by analyzing flu-related search queries, it had been able to detect the spread of the flu as accurately as, and more quickly than, the Centers for Disease Control and Prevention. A few years later, though, Google Flu Trends began to falter; for the last two years it has made more bad predictions than good ones.

As a recent article in the journal Science explained, one major contributing cause of the failures of Google Flu Trends may have been that the Google search engine itself constantly changes, such that patterns in data collected at one time do not necessarily apply to data collected at another time. As the statistician Kaiser Fung has noted, collections of big data that rely on web hits often merge data that was collected in different ways and with different purposes — sometimes to ill effect. It can be risky to draw conclusions from data sets of this kind.

A fifth concern might be called the echo-chamber effect, which also stems from the fact that much of big data comes from the web. Whenever the source of information for a big data analysis is itself a product of big data, opportunities for vicious cycles abound. Consider translation programs like Google Translate, which draw on many pairs of parallel texts from different languages — for example, the same Wikipedia entry in two different languages — to discern the patterns of translation between those languages. This is a perfectly reasonable strategy, except for the fact that with some of the less common languages, many of the Wikipedia articles themselves may have been written using Google Translate. In those cases, any initial errors in Google Translate infect Wikipedia, which is fed back into Google Translate, reinforcing the error.

A sixth worry is the risk of too many correlations. If you look 100 times for correlations between two variables, you risk finding, purely by chance, about five bogus correlations that appear statistically significant — even though there is no actual meaningful connection between the variables. Absent careful supervision, the magnitudes of big data can greatly amplify such errors.
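That “about five in a hundred” figure falls straight out of the conventional 5 percent significance threshold, and a short simulation makes it vivid. The sketch below (which assumes SciPy is installed for the p-value computation) tests 100 pairs of pure random noise and counts how many clear the p < 0.05 bar by luck alone:

    # Simulate the multiple-comparisons trap: correlate 100 pairs of variables
    # that are nothing but random noise, and count the "significant" results.
    import random
    from scipy.stats import pearsonr  # assumes SciPy is available

    random.seed(42)
    bogus = 0
    for _ in range(100):
        x = [random.gauss(0, 1) for _ in range(30)]
        y = [random.gauss(0, 1) for _ in range(30)]
        r, p = pearsonr(x, y)
        if p < 0.05:           # the conventional significance threshold
            bogus += 1

    # Typically prints a number near 5: purely chance "discoveries."
    print(bogus, "bogus correlations out of 100 tests")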

Seventh, big data is prone to giving scientific-sounding solutions to hopelessly imprecise questions. In the past few months, for instance, there have been two separate attempts to rank people in terms of their “historical importance” or “cultural contributions,” based on data drawn from Wikipedia. One is the book “Who’s Bigger? Where Historical Figures Really Rank,” by the computer scientist Steven Skiena and the engineer Charles Ward. The other is an M.I.T. Media Lab project called Pantheon.

Both efforts get many things right — Jesus, Lincoln and Shakespeare were surely important people — but both also make some egregious errors. “Who’s Bigger?” claims that Francis Scott Key was the 19th most important poet in history; Pantheon has claimed that Nostradamus was the 20th most important writer in history, well ahead of Jane Austen (78th) and George Eliot (380th). Worse, both projects suggest a misleading degree of scientific precision with evaluations that are inherently vague, or even meaningless. Big data can reduce anything to a single number, but you shouldn’t be fooled by the appearance of exactitude.

FINALLY, big data is at its best when analyzing things that are extremely common, but often falls short when analyzing things that are less common. For instance, programs that use big data to deal with text, such as search engines and translation programs, often rely heavily on something called trigrams: sequences of three words in a row (like “in a row”). Reliable statistical information can be compiled about common trigrams, precisely because they appear frequently. But no existing body of data will ever be large enough to include all the trigrams that people might use, because of the continuing inventiveness of language.
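For the curious, extracting trigrams takes only a few lines of code. A minimal Python sketch, reusing the definition above as its sample data:

    # Extract word trigrams: every sequence of three words in a row.
    def trigrams(text):
        words = text.lower().split()
        return [tuple(words[i:i + 3]) for i in range(len(words) - 2)]

    sample = "sequences of three words in a row"
    print(trigrams(sample))
    # [('sequences', 'of', 'three'), ('of', 'three', 'words'),
    #  ('three', 'words', 'in'), ('words', 'in', 'a'), ('in', 'a', 'row')]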

To select an example more or less at random, a book review that the actor Rob Lowe recently wrote for this newspaper contained nine trigrams such as “dumbed-down escapist fare” that had never before appeared anywhere in all the petabytes of text indexed by Google. To witness the limitations that big data can have with novelty, Google-translate “dumbed-down escapist fare” into German and then back into English: out comes the incoherent “scaled-flight fare.” That is a long way from what Mr. Lowe intended — and from big data’s aspirations for translation.

Wait, we almost forgot one last problem: the hype. Champions of big data promote it as a revolutionary advance. But even the examples that people give of the successes of big data, like Google Flu Trends, though useful, are small potatoes in the larger scheme of things. They are far less important than the great innovations of the 19th and 20th centuries, like antibiotics, automobiles and the airplane.

Big data is here to stay, as it should be. But let’s be realistic: It’s an important resource for anyone analyzing data, not a silver bullet.

Face Time: Eternal Youth Has Become a Growth Industry in Silicon Valley

Tuesday, Aug 12 2014

The students of Timothy Draper’s University of Heroes shuffle into a conference room, khaki shorts swishing against their knees, flip-flops clacking against the carpeted floor. One by one they take their seats and crack open their laptops, training their eyes on Facebook home pages or psychedelic screen savers. An air conditioner whirs somewhere in the rafters. A man in chinos stands before them.

The man is Steve Westly, former state controller, prominent venture capitalist, 57-year-old baron of Silicon Valley. He smiles at the group with all the sheepishness of a student preparing for show-and-tell. He promises to be brief.

“People your age are changing the world,” Westly tells the students, providing his own list of great historical innovators: Napoleon, Jesus, Zuckerberg, Larry, Sergey. “It’s almost never people my age,” he adds.

Students at Draper University — a private, residential tech boot camp launched by venture capitalist Timothy Draper, in what was formerly San Mateo’s Benjamin Franklin Hotel — have already embraced Westly’s words as a credo. They inhabit a world where success and greatness seem to hover within arm’s reach. A small handful of those who complete the six-week, $9,500 residential program might get a chance to join Draper’s business incubator; an even smaller handful might eventually get desks at an accelerator run by Draper’s son, Adam. It’s a different kind of meritocracy than Westly braved, pursuing an MBA at Stanford in the early ’80s. At Draper University, heroism is merchandised, rather than earned. A 20-year-old with bright eyes and deep pockets (or a parent who can front the tuition) has no reason to think he won’t be the next big thing.

This is the dogma that glues Silicon Valley together. Young employees are plucked out of high school, college-aged interns trade their frat houses and dorm rooms for luxurious corporate housing. Twenty-seven-year-old CEOs inspire their workers with snappy jingles about moving fast and breaking things. Entrepreneurs pitch their business plans in slangy, tech-oriented patois.

Gone are the days of the “company man” who spends 30 years ascending the ranks in a single corporation. Having an Ivy League pedigree and a Brooks Brothers suit is no longer as important.

“Let’s face it: The days of the ‘gold watch’ are over,” 25-year-old writer David Burstein says. “The average millennial is expected to have several jobs by the time he turns 38.”

Yet if constant change is the new normal, then older workers have a much harder time keeping up. The Steve Westlys of the world are fading into management positions. Older engineers are staying on the back-end, working on system administration or architecture, rather than serving as the driving force of a company.

“If you lost your job, it might be hard to find something similar,” a former Google contractor says, noting that an older engineer might have to settle for something with a lower salary, or even switch fields. The contractor says he knows a man who graduated from Western New England University in the 1970s with a degree in the somewhat archaic field of time-motion engineering. That engineer wound up working at Walmart.

Those who do worm their way into the Valley workforce often have a rough adjustment. The former contractor, who is in his 40s, says he was often the oldest person commuting from San Francisco to Mountain View on a Google bus. And he adhered to a different schedule: Wake up at 4:50 a.m., get out the door by 6:20, catch the first coach home at 4:30 p.m. to be home for a family supper. He was one of the few people who didn’t take advantage of the free campus gyms or gourmet cafeteria dinners or on-site showers. He couldn’t hew to a live-at-work lifestyle.

And compared to other middle-aged workers, he had it easy.

In a lawsuit filed in San Francisco Superior Court in July, former Twitter employee Peter H. Taylor claims he was canned because of his age, despite performing his duties in “an exemplary manner.” Taylor, who was 57 at the time of his termination in September of last year, says his supervisor made at least one derogatory remark about his age, and that the company refused to accommodate his disabilities following a bout with kidney stones. He says he was ultimately replaced by several employees in their 20s and 30s. A Twitter spokesman says the lawsuit is without merit and that the company will “vigorously” defend itself.

The case is not without precedent. Computer scientist Brian Reid lobbed a similar complaint against Google in 2004, claiming co-workers called him an “old man” and an “old fuddy-duddy,” and routinely told him he was not a “cultural fit” for the company. Reid was 54 at the time he filed the complaint; he settled for an undisclosed amount of money.

What is surprising, perhaps, is that a 57-year-old man was employed at Twitter at all. “Look, Twitter has no 50-year-old employees,” the former Google contractor says, smirking. “By the time these [Silicon Valley] engineers are in their 40s, they’re old — they have houses, boats, stock options, mistresses. They drive to work in Chevy Volts.”

There’s definitely a swath of Valley nouveau riche who reap millions in their 20s and 30s, and who are able to cash out and retire by age 40. But that’s a minority of the population. The reality, for most people, is that most startups fail, most corporations downsize, and most workforces churn. Switching jobs every two or three years might be the norm, but it’s a lot easier to do when you’re 25 than when you’re 39. At that point, you’re essentially a senior citizen, San Francisco Botox surgeon Seth Matarasso says.

“I have a friend who lived in Chicago and came back to Silicon Valley at age 38,” Matarasso recalls. “And he said, ‘I feel like a grandfather — in Chicago I just feel my age.’”

Retirement isn’t an option for the average middle-aged worker, and even the elites — people like Westly, who were once themselves wunderkinds — find themselves in an awkward position when they hit their 50s, pandering to audiences that may have no sense of what came before. The diehards still work well past their Valley expiration date, but then survival becomes a job unto itself. Sometimes it means taking lower-pay contract work, or answering to a much younger supervisor, or seeking workplace protection in court.

CONTINUED: http://www.sfweekly.com/sanfrancisco/silicon-valley-bottom-age-discrimination/Content?oid=3079530

Tech Industry Believes it Invented San Francisco, Burning Man, and Sex

Posted on Tue, Aug 12, 2014 at 7:30 AM

Inspired by Bay Area tech industry - FLICKR/CROWCOMBE AL

According to reports, the Silicon Valley-based tech industry has convinced itself that it invented everything it enjoys, including Democracy, rule of law, San Francisco, Burning Man, and sex.

“Techies are really innovative, so it’s only natural that they would hack human sexuality by coming up with a pleasurable use for what was previously just a reproductive process,” Google employee Miles Davidson said. “You’re welcome.”

Brent Sternberg, a Facebook engineer who started attending Mission Control sex parties a year ago, said that blow jobs simply wouldn’t have been possible without social media. “How could you have ever told someone that you like it?” he asked. “It would never work.”

Futurist Ray Kurzweil, Google’s Director of Engineering, said he believes that the tech giant is on track to invent S&M by 2025. “It will be incredibly pleasurable,” he said, “unless it hurts too much. Until we develop it, there’s just no way to know.”

Apple vice president of design Louis Harris is especially proud of the tech industry for inventing Burning Man, a 27-year-old annual arts event, in 2008.

“Prior to the tech industry, no one had really considered creating experimental communities, or going camping,” Harris said. “But then thousands of tech workers disrupted the desert and invented DJs.”

Not everything has gone well since then, Harris admitted. “The problem with Burning Man is that since the tech industry invented it, it’s gotten so popular that all these artists are showing up, and they don’t know anything about the culture.”

That’s also a problem with what many see as the tech industry’s crowning achievement: the city of San Francisco.

“We really knocked that one out of the park,” said Twitter Vice President Larry Johnson. “When we got here there wasn’t a single unaffordable building, there were musicians in lofts, and the place was just filled with women. But we’ve really turned that around.”

LinkedIn Senior Data Analyst Rod Suchet agreed. “San Francisco is famous the whole world over as a city of art, and art was originally an App for the iStore. It’s famous for its restaurants, and restaurants were originally developed so that Google’s cafeteria could telecommute. Honestly, was there even a music scene in San Francisco before Pandora digitized it? Did this town even have an economy before we started to displace it?”

As of press time, Suchet had meant to Google the answer but had been distracted by a cat video. Cats, for those not in the know, were invented by YouTube in 2006.

Benjamin Wachs is a literary chameleon. 

http://www.sfweekly.com/thesnitch/2014/08/12/tech-industry-believes-it-invented-san-francisco-burning-man-and-sex

Google is on a slippery slope between privacy and spying.


Google Is Acting Like an Arm of the Surveillance State


Convicted in 1994 of sexually assaulting a young boy, John Henry Skillern of Texas once again finds himself incarcerated and awaiting trial, this time for possession and production of child pornography. Skillern’s arrest comes courtesy of Google. Few, I expect, will shed tears for Skillern with respect to his alleged sexual predations. Nonetheless his case once more brings Google into the privacy spotlight, this time as an arm of “law enforcement.”

Google makes no secret of the fact that it “analyzes content” in emails sent and received by users of its Gmail service, mostly for purposes of targeting advertising to users most likely to click through and buy things. That’s how Google makes money — tracking users of its “free” services, watching what they do, selling those users’ eyeballs to paying customers.

It’s also understood by most that Google will, as its privacy policy states, “share personal information … [to] meet any applicable law, regulation, legal process or enforceable governmental request.” If the cops come a-knocking with a warrant or some asserted equivalent, Google cooperates with search and seizure of your stored information and records of your actions.

But Google goes further than that. Its Gmail program policies unequivocally state that, among other things, “Google has a zero-tolerance policy against child sexual abuse imagery. If we become aware of such content, we will report it to the appropriate authorities and may take disciplinary action, including termination, against the Google Accounts of those involved.”

As a market anarchist, my visceral response to the Skillern case is “fair cop – it’s in the terms of service he agreed to when he signed up for a Gmail account.”

But there’s a pretty large gap between “we’ll let the government look at your stuff if they insist” and “we’ll keep an eye out for stuff that the government might want to see.” The latter, with respect to privacy, represents the top of a very slippery slope.

How slippery? Well, consider Google’s interests in “geolocation” (knowing where you are) and in “the Internet of Things” (connecting everything from your toaster to your thermostat to your car to the Internet, with Google as middleman).

It’s not out of the question that someday as you drive down the road, Google will track you and automatically message the local police department if it notices you’re driving 38 miles per hour in a 35-mph speed zone.

Think that can’t happen? Think again. In many locales, tickets (demanding payment of fines) are already automatically mailed to alleged red-light scofflaws caught by cameras. No need to even send out an actual cop with pad and pen. It’s a profit center for government — and for companies that set up and operate the camera systems. In case you haven’t noticed, Google really likes information-based profit centers.

And keep in mind that you are a criminal. Yes, really. At least if you live in the United States. Per Harvey Silverglate’s book Three Felonies a Day, the average American breaks at least three federal laws in every 24-hour period. Want to bet against the probability that evidence of those “crimes” can be detected in your email archive?

To a large degree the Internet has killed our old conceptions of what privacy means and to what extent we can expect it. Personally I’m down with that — I’m more than willing to let Google pry into my personal stuff to better target the ads it shows me, in exchange for its “free” services. On the other hand I’d like some limits. And I think that markets are capable of setting those limits.

Three market-limiting mechanisms that come to mind are “end-to-end” encryption, services for obfuscating geographic location, and locating servers in countries with more respect for privacy and less fear of “big dog” governments like the United States. If Google can’t or won’t provide those, someone else will (actually a number of someones already are).

The standard political mechanism for reining in bad actors like Google would be legislation forbidding Internet service companies to “look for and report” anything to government absent a warrant issued on probable cause to believe a crime has been committed. But such political mechanisms don’t work. As Edward Snowden’s exposure of the US National Security Agency’s illegal spying operations demonstrates, government ignores laws it doesn’t like.

Instead of seeking political solutions, I suggest a fourth market solution: Abolition of the state. The problem is not so much what Google tracks or what it might want to act on. Those are all a matter of agreement between Google and its users. The bigger problem is who Google might report you TO.

Thomas L. Knapp is Senior News Analyst at the Center for a Stateless Society (c4ss.org).

http://www.alternet.org/civil-liberties/google-acting-arm-surveillance-state?paging=off&current_page=1#bookmark

The rise of data and the death of politics

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?


Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
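To make that feedback loop concrete, here is a minimal sketch in Python of a filter that learns from every “report spam” click. It is a toy with invented messages, loosely in the naive Bayes style, and not a description of Gmail’s actual pipeline, which is far more elaborate and not public.

```python
import math
from collections import defaultdict

class FeedbackSpamFilter:
    """Toy filter that improves as users flag messages.

    A sketch of the principle described above (user feedback as the
    training signal), not Google's actual spam system.
    """

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.total_words = {"spam": 0, "ham": 0}

    def learn(self, text, label):
        # Each "report spam" / "not spam" click becomes a training example.
        for word in text.lower().split():
            self.word_counts[label][word] += 1
            self.total_words[label] += 1

    def spam_score(self, text):
        # Sum of per-word log-odds with add-one smoothing; a positive
        # score means the message resembles what users flagged as spam.
        score = 0.0
        for word in text.lower().split():
            p_spam = (self.word_counts["spam"][word] + 1) / (self.total_words["spam"] + 2)
            p_ham = (self.word_counts["ham"][word] + 1) / (self.total_words["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score

filt = FeedbackSpamFilter()
filt.learn("cheap pills buy now", "spam")     # user clicked "report spam"
filt.learn("lunch at noon tomorrow?", "ham")  # user clicked "not spam"
print(filt.spam_score("buy cheap pills"))     # positive: looks like flagged spam
```

The design point is that no one writes the rules by hand; they emerge, and keep shifting, as users vote.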

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
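A toy simulation makes “ultrastability” easier to grasp. The Python sketch below is my own illustration, not a model of Ashby’s actual four-unit hardware: a system that cannot anticipate a disturbance simply re-draws its internal wiring at random whenever its essential variable leaves the permitted bounds, and keeps doing so until stability returns.

```python
import random

def ultrastable(disturbance, bounds=(-1.0, 1.0), max_rewirings=1000):
    """Toy 'ultrastability' in Ashby's sense: the system never models the
    disturbance; it just randomly rewires itself whenever its essential
    variable leaves the permitted bounds. Purely illustrative."""
    weight = random.uniform(-1, 1)  # the internal 'wiring' being adjusted
    state = 0.0
    rewirings = 0
    for t in range(10000):
        state = weight * state + disturbance(t)
        if not (bounds[0] <= state <= bounds[1]):
            weight = random.uniform(-1, 1)  # blind, random rewiring
            state = 0.0                     # start the episode over
            rewirings += 1
            if rewirings > max_rewirings:
                raise RuntimeError("no stable configuration found")
    return weight, rewirings

# A step disturbance the designer never anticipated kicks in halfway through.
w, n = ultrastable(lambda t: 0.3 if t > 5000 else 0.1)
print(f"settled on weight {w:.2f} after {n} random rewirings")
```

As with the spam filter, nothing about the specific disturbance is programmed in; only the bounds and the rule for when to rewire are.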

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
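The logic of such a tool is simple enough to sketch. Everything in the snippet below is hypothetical (the field names and the 20 percent tolerance are invented), but it captures the shape of the check the redditometro reportedly performs: flag anyone whose recorded spending outruns their declared income.

```python
def flag_discrepancies(taxpayers, tolerance=1.20):
    """Flag anyone whose recorded spending exceeds declared income by
    more than the tolerance. Field names and threshold are invented."""
    return [
        person["id"]
        for person in taxpayers
        if person["recorded_spending"] > person["declared_income"] * tolerance
    ]

sample = [
    {"id": "A", "declared_income": 30000, "recorded_spending": 29000},
    {"id": "B", "declared_income": 30000, "recorded_spending": 55000},
]
print(flag_discrepancies(sample))  # ['B']: spends far more than declared
```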

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.


Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer
For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.


Airbnb: part of the reputation-driven economy.
Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption‑obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make it available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second, we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

 

Let’s nationalize Amazon and Google

Publicly funded technology built Big Tech

They’re huge and ruthless and define our lives. They’re close to monopolies. Let’s make them public utilities

Jeff Bezos (Credit: AP/Reed Saxon/Pakhnyushcha via Shutterstock/Salon)

They’re huge, they’re ruthless, and they touch every aspect of our daily lives. Corporations like Amazon and Google keep expanding their reach and their power. Despite a history of abuses, so far the Justice Department has declined to take antitrust actions against them. But there’s another solution.

Is it time to manage and regulate these companies as public utilities?

That argument’s already been made about broadband access. In her book “Captive Audience,” law professor Susan Crawford argues that “high-speed wired Internet access is as basic to innovation, economic growth, social communication, and the country’s competitiveness as electricity was a century ago.”

Broadband as a public utility? If not for corporate corruption of our political process, that would seem like an obvious solution. Instead, our nation’s broadband access is among the slowest and costliest in the developed world.

But why stop there? Policymakers have traditionally considered three elements when evaluating the need for a public utility: production, transmission, and distribution. Broadband is transmission. What about production and distribution?

The Big Tech mega-corporations have developed what Al Gore calls the “Stalker Economy,” manipulating and monitoring as they go. But consider: They were created with publicly funded technologies, and prospered as the result of indulgent policies and lax oversight. They’ve achieved monopoly or near-monopoly status, are spying on us to an extent that’s unprecedented in human history, and have the potential to alter each and every one of our economic, political, social and cultural transactions.

In fact, they’re already doing it.

Public utilities? It’s a thought experiment worth conducting.

Big Tech was created with publicly developed technology.

No matter how they spin it, these corporations were not created in garages or by inventive entrepreneurs. The core technology behind them is the Internet, a publicly funded platform for which they pay no user fee. In fact, they do everything they can to avoid paying their taxes.



Big Tech’s use of public technology means that it operates in a technological “commons,” which it exploits solely for its own gain, without regard for the public interest. Meanwhile the United States government devotes considerable taxpayer resources to protecting these companies – from patent infringement, cyberterrorism and other external threats.

Big Tech’s services have become a necessity in modern society.

Businesses would be unable to participate in modern society without access to the services companies like Amazon, Google and Facebook provide. These services have become public marketplaces.

For individuals, these entities have become the public square where social interactions take place, as well as the marketplace where they purchase goods.

They’re at or near monopoly status – and moving fast.

Google has 80 percent of the search market in the United States, and an even larger share of key overseas markets. Google’s browsers have now surpassed Microsoft’s in usage across all devices. It has monopoly-like influence over online news, as William Baker noted in the Nation. Its YouTube subsidiary dominates the U.S. online-video market, with nearly double the views of its closest competitor. (Roughly 83 percent of the Americans who watched a video online in April went to YouTube.)

Even Microsoft’s Steve Ballmer argued that Google is a “monopoly” whose activities were “worthy of discussion with competition authority.” He should know.

As a social platform, Facebook has no real competitors. Amazon’s book business dominates the market. E-books are now 30 percent of the total book market, and Amazon’s Kindle e-books account for 65 percent of e-book sales. That works out to nearly one book in five being an Amazon product (65 percent of a 30 percent share is roughly 19.5 percent of the total market) – and that’s not counting Amazon’s sales of physical books. It has become such a behemoth that it is able to command discounts of more than 50 percent from major publishers like Random House.

They abuse their power.

The bluntness with which Big Tech firms abuse their monopoly power is striking. Google has said that it will soon begin blocking YouTube videos from popular artists like Radiohead and Adele unless independent record labels sign deals with its upcoming music streaming service (at what are presumably disadvantageous rates). Amazon’s war on publishers like Hachette is another sign of Big Tech arrogance.

But what is equally striking about these moves is the corporations’ disregard for basic customer service. Because YouTube’s dominance of the video market is so large, Google is confident that even frustrated music fans have nowhere to go. Amazon is so confident of its dominance that it retaliated against Hachette by removing order buttons when a Hachette book came up (which users must have found maddening) and lying about the availability of Hachette books when customers attempted to order them. It also altered its search process for recommendations to freeze out Hachette books and direct users to non-Hachette authors.

Amazon even suggested its customers use other vendors if they’re unhappy, a move that my Salon colleague Andrew Leonard described as “nothing short of amazing – and troubling.”

David Streitfeld of the New York Times asked, “When does discouragement become misrepresentation?” One logical answer: When you tell customers a product isn’t available, even though it is, or rig your sales mechanism to prevent customers from choosing the item they want.

And now Amazon’s using some of the same tactics against Warner Home Video.

They got there with our help.

As we’ve already noted, Internet companies are using taxpayer-funded technology to make billions of dollars from the taxpayers – without paying a licensing fee. As we reported earlier, Amazon was the beneficiary of tax exemptions that allowed it to reach its current monopolistic size.

Google and the other technology companies have also benefited from tax policies and other forms of government indulgence. Contrary to popular misconception, Big Tech corporations aren’t solely the products of ingenuity and grit. Each has received, and continues to receive, a lot of government largess.

The real “commodity” is us.

Most of Big Tech’s revenues come from the use of our personal information in its advertising business. Social media entries, Web-surfing patterns, purchases, even our private and personal communications add value to these corporations. They don’t make money by selling us a product. We are the product, and we are sold to third parties for profit.

Public utilities are often created when the resource being consumed isn’t a “commodity” in the traditional sense. “We” aren’t an ordinary resource. Like air and water, the value of our information is something that should be publicly shared – or, at a minimum, publicly managed.

Our privacy is dying … or already dead.

“We know where you are,” says Google CEO Eric Schmidt. “We know where you’ve been. We can more or less know what you’re thinking about.”

Facebook tracks your visits to the website of any corporate Facebook “partner,” stores that information, and uses it to track and manipulate the ads you see. Its mobile app also has a new, “creepy” feature that turns on your phone’s microphone, analyzes what you’re listening to or watching, and is capable of posting updates to your status like “Listening to Albert King” or “Watching ‘Orphan Black.’”

Google tracks your search activity, which has a number of disturbing implications. (DuckDuckGo, a competing search engine that does not track searches, offers an illustrated guide to its competitors’ practices.) If you use Google’s Chrome browser, it tracks your website visits too (unless you’re in Incognito mode).

Yasha Levine, who is tracking corporate data spying in his “Surveillance Valley” series, notes that “True end-to-end encryption would make our data inaccessible to Google, and grind its intel extraction apparatus to a screeching halt.” As the ACLU’s Christopher Soghoian points out: “It’s very, very difficult to deploy privacy protective policies with the current business model of ad supported services.”

As Levine notes, the widely publicized revelation that Big Data companies track rape victims was just the tip of the iceberg. They also track “anorexia, substance abuse, AIDS and HIV … Bedwetting (Enuresis), Binge Eating Disorder, Depression, Fetal Alcohol Syndrome, Genital Herpes, Genital Warts, Gonorrhea, Homelessness, Infertility, Syphilis … the list goes on and on and on and on.”

Given its recent hardball tactics, here’s a little-known development that should concern more people: Amazon also hosts 37 percent of the nation’s cloud computing services, which means it has access to the inner workings of the software that runs all sorts of businesses – including ones that handle your personal data.

For all its protestations, Microsoft is no different when it comes to privacy. The camera and microphone on its Xbox One devices were initially designed to be left on at all times, and it refused to change that policy until purchasers protested.

Privacy, like water or energy, is a public resource. As the Snowden revelations have taught us, all such resources are at constant risk of government abuse.  The Supreme Court just banned warrantless searches of smartphones – by law enforcement. Will we be granted similar protections from Big Tech corporations?

Freedom of information is at risk.

Google tracks your activity and customizes search results, a process that can filter or distort your perception of the world around you. What’s more, this “personalized search results” feature leads you back to information sources you’ve used before, which potentially narrows your ability to discover new perspectives or resources. Over time this creates an increasingly narrow view of the world.
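The mechanism is easy to caricature in a few lines. In the hypothetical Python sketch below, a small score boost for sources a user has clicked before is enough to push the familiar ahead of the merely relevant; run the loop long enough and the bubble tightens. The scoring rule is invented, since Google’s actual ranking signals are proprietary.

```python
def personalised_rank(results, click_history, boost=0.5):
    """Toy re-ranking: sources the user clicked before get a boost.
    The formula is invented; real ranking signals are proprietary."""
    def score(result):
        return result["relevance"] + boost * click_history.get(result["domain"], 0)
    return sorted(results, key=score, reverse=True)

results = [
    {"domain": "new-perspective.org", "relevance": 0.9},
    {"domain": "usual-source.com", "relevance": 0.7},
]
history = {"usual-source.com": 3}  # clicked three times in the past

print([r["domain"] for r in personalised_rank(results, history)])
# The familiar source now outranks the more relevant unfamiliar one.
```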

What’s more, Google’s shopping tools have begun using “paid inclusion,” a pay-for-play search feature it once condemned as “evil.” Its response is to say it prefers not to call this practice “paid inclusion,” even though its practices appear to meet the Federal Trade Commission’s definition of the term.

As for Amazon, it has even manipulated its recommendation searches in order to retaliate against other businesses, as we’ll see in the next section.

The free market could become even less free.

Could Big Tech and its data be used to set user-specific pricing, based on what is known about an individual’s willingness to pay more for the same product? Benjamin Schiller of Brandeis University wrote a working paper last year that showed how Netflix could do exactly that. Grocery stores and other retailers are already implementing technology that offers different pricing to different shoppers based on their data profile.
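A hedged sketch shows how little machinery such price discrimination needs. The profile fields and markups below are invented for illustration; Schiller’s paper estimates willingness to pay econometrically from browsing data rather than with crude flags like these.

```python
def personalised_price(base_price, profile):
    """Quote a price tied to a crude estimate of willingness to pay.
    All fields and markups are invented for illustration."""
    multiplier = 1.0
    if profile.get("premium_device"):      # browsing from an expensive device
        multiplier += 0.10
    if profile.get("affluent_zip"):        # registered in a high-income area
        multiplier += 0.15
    if profile.get("comparison_shopper"):  # known to check rival prices
        multiplier -= 0.10
    return round(base_price * multiplier, 2)

print(personalised_price(9.99, {"premium_device": True, "affluent_zip": True}))
# Same product, a higher quote for the shopper profiled as affluent.
```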

For its part, Amazon is introducing a phone that will also tag the items around you, as well as the music and programs you hear, for you to purchase – from Amazon, of course. Who will be purchasing the data those phones collect about you?

They could hijack the future.

The power and knowledge they have accumulated is frightening. But the Big Tech corporations are just getting started. Google has photographically mapped the entire world. It intends to put the world’s books into a privately owned online library. It’s launching balloons around the globe that will bring Internet access to remote areas – on its terms. It’s attempting to create artificial intelligence and extend the human lifespan.

Amazon hopes to deliver its products by drone within the next few years, an idea that would seem preposterous if not for the company's undeniable lobbying clout. Each of these Big Tech corporations has the ability to filter – and alter – our very perceptions of the world around us. And each has already shown a willingness to abuse that power for its own ends.

These aren’t just the portraits of futuristic corporations that have become drunk on unchecked power. It’s a sign that things are likely to get worse – perhaps a lot worse – unless something is done. The solution may lie with an old concept. It may be time to declare Big Tech a public utility.


Richard (RJ) Eskow is a writer and policy analyst. He is a Senior Fellow with the Campaign for America’s Future and is host and managing editor of The Zero Hour on We Act Radio.

http://www.salon.com/2014/07/08/lets_nationalize_amazon_and_google_publicly_funded_technology_built_big_tech/?source=newsletter

Net neutrality is dying, Uber is waging a war on regulations, and Amazon grows stronger by the day

Why 2014 could be the year we lose the Internet


Halfway through 2014, the influence of technology and Silicon Valley on culture, politics and the economy is arguably bigger than ever, and certainly more hotly debated. Here are Salon's choices for the five biggest stories of the year.

1) Net neutrality is on the ropes.

So far, 2014 has been nothing but grim for the principle known as "net neutrality," the idea that the suppliers of Internet bandwidth should not give preferential access (so-called fast lanes) to the providers of Internet services who are willing and able to pay for it. In January, the D.C. Circuit Court of Appeals struck down the FCC's existing rules enforcing a weak form of net neutrality. Less than a month later, Comcast, the nation's largest cable company and broadband Internet service provider, announced its plan to buy Time Warner Cable, inadvertently giving us a compelling explanation of why net neutrality is so important: a single company with a dominant position in broadband will simply have too much power, something that could have enormous implications for our culture.

The situation continued to degenerate from there. Tom Wheeler, President Obama's new pick to run the FCC and a former top cable industry lobbyist, unveiled a new net neutrality plan that was immediately slammed as toothless. In May, AT&T announced plans to acquire DirecTV. Consolidation proceeds apace, and our government appears incapable of managing the consequences.

2) Uber takes over.

After completing its most recent round of financing, Uber is now valued at $18.2 billion. Along with Airbnb, the Silicon Valley start-up has become a standard-bearer for the Valley's cherished allegiance to "disruption." The established taxi industry is under sustained assault, and Uber has made it clear that its ultimate ambitions go far beyond simply connecting people with rides: it has designs on becoming the premier logistics platform for getting anything to anyone. What Google is to search, Uber wants to be for moving objects from Point A to Point B. And Google, of course, has a significant financial stake in Uber.

Uber’s path has been bumpy. The company is fighting regulatory battles with municipalities across the world, and its own drivers are increasingly angry at fare cuts, and making sporadic attempts to organize. But the smart money sees Uber as one of the major players of the near future. The “sharing” economy is here to stay.

3) The year of the stream.

Apple bought Beats by Dre. Amazon launched its own streaming music service. Google is planning a new paid streaming offering. Spotify claims 10 million paying subscribers, and Pandora boasts 75 million listeners every month.

We may end up remembering 2014 as the year that streaming established itself as the dominant way people consume music. The numbers are stark. Streaming is surging, while paid downloads are in free fall.

For consumers, all-you-can-eat services like Spotify are generally marvelous. But it remains astonishing that a full 20 years after the Internet threw the music industry into turmoil, it is still completely unclear how artists and songwriters will make a decent living in an era when music is essentially free.

We also face unanswered questions about what kinds of music will get made in an environment where every listen is tracked and every tweet or Facebook "like" is observed. What will Big Data mean for music?

4) Amazon shows its true colors.

What a busy six months for Jeff Bezos! Amazon introduced its own set-top box for TV watching and its own smartphone for insta-shopping anywhere, any time, and began abusing its near-monopoly power to win better terms from publishing companies.

For years, consumer adoration of Amazon’s convenience and low prices fueled the company’s rise. It’s hard, at the midpoint of 2014, to avoid the conclusion that we’ve created a monster. This year, Amazon started getting sustained bad press at the very highest levels. And you know what? Jeff Bezos deserves it.

5) The tech culture wars boil over.

In the first six months of 2014, the San Francisco Bay Area witnessed emotional public hearings about Google shuttle buses, direct action by radicals against technology company executives, bar fights centering on Google Glass wearers, and a steady rise in political heat focused on tech economy-driven gentrification.

As I wrote in April:

Just as the Luddites, despite their failure, spurred the creation of working-class consciousness, the current Bay Area tech protests have had a pronounced political effect. While the tactics range from savvy, well-organized protest marches to juvenile acts of violence, the impact is clear. The attention of political leaders and the media has been engaged. Everyone is watching.

Ultimately, maybe this will be the biggest story of 2014. This year, numerous voices started challenging the transformative claims of Silicon Valley hype and began grappling with the nitty-gritty details of how all this “disruption” is changing our economy and culture. Don’t expect the second half of 2014 to be any different.