How Google and the Big Tech Companies Are Helping Maintain America’s Empire


The military, intelligence agencies, and defense contractors are deeply intertwined with Silicon Valley.

Silicon Valley has been in the media spotlight for its role in gentrifying and raising rents in San Francisco, helping the NSA spy on American citizens, and its lack of racial and gender diversity. Despite that, Silicon Valley still has a reputation for benevolence, innocence and progressivism. Hence Google’s phrase, “Don’t be evil.” A recent Wall Street Journal/NBC News poll found that, even after the Snowden leaks, 53% of those surveyed had high confidence in the tech industry. The tech industry is not seen as being as evil as, say, Wall Street or Big Oil.

One aspect of Silicon Valley that would damage this reputation has not been scrutinized enough—its involvement in American militarism. Silicon Valley’s ties to the National Security State extend beyond the NSA’s PRISM program. Through numerous partnerships and contracts with the U.S. military, intelligence and law enforcement agencies, Silicon Valley is part of the American military-industrial complex. Google sells its technologies to the U.S. military, FBI, CIA, NSA, DEA, NGA, and other intelligence and law enforcement agencies, has managers with backgrounds in military and intelligence work, and partners with defense contractors like Lockheed Martin and Northrop Grumman. Amazon designed a cloud computing system that will be used by the CIA and every other intelligence agency. The CIA-funded tech company Palantir sells its data-mining and analysis software to the U.S. military, CIA, LAPD, NYPD, and other security agencies. These technologies have several war-zone and intelligence-gathering applications.

First, a little background to explain how the military has been involved with Silicon Valley since its inception as a technology center. Silicon Valley’s roots date back to World War II, according to a presentation by researcher and entrepreneur Steve Blank. During the war, the U.S. government funded a secret lab at Harvard University to research how to disrupt Germany’s radar-guided electronic air defense system. The solution — drop aluminum foil in front of German radars to jam them. This birthed modern electronic warfare and signals intelligence. The head of that lab was Stanford engineering professor Fred Terman, who, after World War II, took 11 staffers from that lab to create Stanford’s Electronic Research Lab (ERL), which received funding from the military. Stanford also had an Applied Electronics Lab (AEL) that did classified research on jammers and electronic intelligence for the military.

In fact, much of AEL’s research aided the U.S. war in Vietnam. This made the lab a target for student antiwar protesters who nonviolently occupied the lab in April 1969 and demanded an end to classified research at Stanford. After nearly a year of teach-ins, protests, and violent clashes with the police, Stanford effectively eliminated war-related classified research at the university.

The ERL did research in and designed microwave tubes and electronic receivers and jammers. This helped the U.S. military and intelligence agencies spy on the Soviet Union and jam its air defense systems. Local tube companies and contractors developed the technologies based on that research. Some researchers from ERL also founded microwave companies in the area. This created a boom of microwave and electronics startups that ultimately formed the Silicon Valley known today.

Don’t be evil, Google

Last year, the first Snowden documents revealed that Google, Facebook, Yahoo!, and other major tech companies provided the NSA access to their users’ data through the PRISM program. All the major tech companies denied knowledge of PRISM and put up an adversarial public front to government surveillance. However, Al Jazeera America’s Jason Leopold obtained, via FOIA request, two sets of email communications between former NSA Director Gen. Keith Alexander and Google executives Sergey Brin and Eric Schmidt. The communications, according to Leopold, suggest “a far cozier working relationship between some tech firms and the U.S. government than was implied by Silicon Valley brass” and that “not all cooperation was under pressure.” In the emails, Alexander and the Google executives discussed information sharing related to national security purposes.

But PRISM is the tip of the iceberg. Several tech companies are deeply in bed with the U.S. military, intelligence agencies, and defense contractors. One very notable example is Google. Google markets and sells its technology to the U.S. military and several intelligence and law enforcement agencies, such as the FBI, CIA, NSA, DEA, and NGA.

Google has a contract with the National Geospatial-Intelligence Agency (NGA) that allows the agency to use Google Earth Builder. The NGA provides geospatial intelligence, such as satellite imagery and mapping, to the military and other intelligence agencies like the NSA. In fact, NGA geospatial intelligence helped the military and CIA locate and kill Osama bin Laden. This contract allows the NGA to utilize Google’s mapping technology for geospatial intelligence purposes. Google’s Official Enterprise Blog announced that “Google’s work with NGA marks one of the first major government geospatial cloud initiatives, which will enable NGA to use Google Earth Builder to host its geospatial data and information. This allows NGA to customize Google Earth & Maps to provide maps and globes to support U.S. government activities, including: U.S. national security; homeland security; environmental impact and monitoring; and humanitarian assistance, disaster response and preparedness efforts.”

Google Earth’s technology “got its start in the intelligence community, in a CIA-backed firm called Keyhole,” which Google purchased in 2004, according to the Washington Post. PandoDaily reporter Yasha Levine, who has extensively reported on Google’s ties to the military and intelligence community, points out that Keyhole’s “main product was an application called EarthViewer, which allowed users to fly and move around a virtual globe as if they were in a video game.”

In 2003, a year before Google bought Keyhole, the company was on the verge of bankruptcy, until it was saved by In-Q-Tel, a CIA-funded venture capital firm. The CIA worked with other intelligence agencies to fit Keyhole’s systems to its needs. According to the CIA Museum page, “The finished product transformed the way intelligence officers interacted with geographic information and earth imagery. Users could now easily combine complicated sets of data and imagery into clear, realistic visual representations. Users could ‘fly’ from space to street level seamlessly while interactively exploring layers of information including roads, schools, businesses, and demographics.”

How much In-Q-Tel invested in Keyhole is classified. However, Levine writes that “the bulk of the funds didn’t come from the CIA’s intelligence budget — as they normally do with In-Q-Tel — but from the NGA, which provided the money on behalf of the entire ‘Intelligence Community.’ As a result, equity in Keyhole was held by two major intelligence agencies.” Shortly after In-Q-Tel invested in Keyhole, the NGA (then known as the National Imagery and Mapping Agency, or NIMA) announced that it was immediately putting Keyhole’s technology to use supporting U.S. troops in the Iraq War, which began in 2003. The next year, Google purchased Keyhole and used its technology to develop Google Earth.

Four years after Google purchased Keyhole, in 2008, Google and the NGA purchased GeoEye-1, the world’s highest-resolution satellite, from the company GeoEye. The NGA paid for half of the satellite’s $502 million development and committed to purchasing its imagery. Because of a government restriction, Google gets lower-resolution images but still retains exclusive access to the satellite’s photos. GeoEye later merged into DigitalGlobe in 2013.

Google’s relationship to the National Security State extends beyond contracts with the military and intelligence agencies. Many managers in Google’s public sector division come from the U.S. military and intelligence community, according to one of Levine’s reports.

Michele R. Weslander-Quaid is one example. She became Google’s Innovation Evangelist and Chief Technology Officer of the company’s public sector division in 2011. Before joining Google, Weslander-Quaid had worked throughout the post-9/11 military and intelligence world, holding positions at the National Geospatial-Intelligence Agency, the Office of the Director of National Intelligence, the National Reconnaissance Office and, later, the Office of the Secretary of Defense. Levine noted that Weslander-Quaid also “toured combat zones in both Iraq and Afghanistan in order to see the tech needs of the military first-hand.”

Throughout her years working in the intelligence community, Weslander-Quaid “shook things up by dropping archaic software and hardware and convincing teams to collaborate via web tools” and “treated each agency like a startup,” according to a 2014 Entrepreneur Magazine profile. She was a major advocate for web tools and cloud-based software and was responsible for implementing them at the agencies where she worked. At Google, Weslander-Quaid meets “with agency directors to map technological paths they want to follow, and helps Google employees understand what’s needed to work with public-sector clients.” Weslander-Quaid told Entrepreneur, “A big part of my job is to translate between Silicon Valley speak and government dialect” and “act as a bridge between the two cultures.”

Another is Shannon Sullivan, head of defense and intelligence at Google. Before working at Google, Sullivan served in the U.S. Air Force in various intelligence positions, first as a senior military advisor and then in the Air Force’s C4ISR Acquisition and Test; Space Operations; and Foreign Military Sales unit. C4ISR stands for “Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance.” Sullivan left his Air Force positions to work as Defense Director for BAE Systems, a British-based arms and defense company, and then as Army and Air Force COCOMs Director at Oracle. His last project at Google was “setting up a Google Apps ‘transformational’ test program to supply 50,000 soldiers in the US Army and DoD with a customized Google App Universe,” according to Levine.

Google not only has a revolving door with the Pentagon and intelligence community, it also partners with defense and intelligence contractors. Levine writes that “in recent years, Google has increasingly taken the role of subcontractor: selling its wares to military and intelligence agencies by partnering with established military contractors.”

The company’s partners include two of the biggest American defense contractors — Lockheed Martin, an aerospace, defense, and information security company, and Northrop Grumman, an aerospace and defense technology company. Both Lockheed and Northrop produce aircraft, missiles and defense systems, naval and radar systems, unmanned systems, satellites, information technology, and other defense-related technologies. In 2011, Lockheed Martin made $36.3 billion in arms sales, while Northrop Grumman made $21.4 billion. Lockheed has a major office in Sunnyvale, California, right in the middle of Silicon Valley. Lockheed was also involved in interrogating prisoners in Iraq and at Guantanamo, through its purchase of Sytex Corporation and the information technology unit of Affiliated Computer Services (ACS), both of which directly interrogated detainees.

Google worked with Lockheed to design geospatial technologies. In 2007, describing the company as “Google’s partner,” the Washington Post reported that Lockheed “demonstrated a Google Earth product that it helped design for the National Geospatial-Intelligence Agency’s work in Iraq. These included displays of key regions of the country and outlined Sunni and Shiite neighborhoods in Baghdad, as well as U.S. and Iraqi military bases in the city. Neither Lockheed nor Google would say how the geospatial agency uses the data.” Meanwhile, Google has a $1-million contract with Northrop to install a Google Earth plug-in.

Both Lockheed and Northrop manufacture and sell unmanned systems, also known as drones. Lockheed’s drones include the Stalker, which can stay airborne for 48 hours; Desert Hawk III, a small reconnaissance drone used by British troops in Iraq and Afghanistan; and the RQ-170 Sentinel, a high-altitude stealth reconnaissance drone used by the U.S. Air Force and CIA. RQ-170s have been used in Afghanistan and for the raid that killed Osama bin Laden. One American RQ-170 infamously crashed in Iran while on a surveillance mission over the country in late 2011.

Northrop Grumman built the RQ-4 Global Hawk, a high-altitude surveillance drone used by the Air Force and Navy. Northrop is also building a new stealth drone for the Air Force called the RQ-180, which may be operational by 2015. In 2012, Northrop sold $1.2 billion worth of drones to South Korea.

Google is also cashing in on the drone market. It recently purchased drone manufacturer Titan Aerospace, which makes high-altitude, solar-powered drones that can “stay in the air for years without needing to land,” reported the Wire. Facebook entered into talks to buy the company a month before Google made the purchase.

Last December, Google purchased Boston Dynamics, a major engineering and robotics company that receives funding from the military for its projects. According to the Guardian, “Funding for the majority of the most advanced Boston Dynamics robots comes from military sources, including the US Defence Advanced Research Projects Agency (DARPA) and the US army, navy and marine corps.” Some of these DARPA-funded projects include BigDog, Legged Squad Support System (LS3), Cheetah, WildCat, and Atlas, all of which are autonomous, walking robots. Atlas is humanoid, while BigDog, LS3, Cheetah, and WildCat are animal-like quadrupeds. In addition to Boston Dynamics, Google purchased seven other robotics companies in 2013: Industrial Perception, Redwood Robotics, Meka, Schaft, Holomni, Bot & Dolly, and Autofuss. Google has been tight-lipped about the specifics of its plans for the robotics companies. But some sources told the New York Times that Google’s robotics efforts are not aimed at consumers but rather at manufacturing, such as automating supply chains.

Google’s “Enterprise Government” page also lists military/intelligence contractors Science Applications International Corporation (SAIC) and Blackbird Technologies among the companies it partners with. In particular, Blackbird is a military contractor that supplies locators for “the covert ‘tagging, tracking and locating’ of suspected enemies,” according to Wired. Its customers include the U.S. Navy and U.S. Special Operations Command. SOCOM oversees the U.S. military’s special operations forces units, such as the Navy SEALs, Delta Force, Army Rangers, and Green Berets. Blackbird even sent some employees as armed operatives on secret missions with special operations forces. The company’s vice president is Cofer Black, a former CIA operative who ran the agency’s Counterterrorist Center before 9/11.

Palantir and the military

Many other tech companies are working with military and intelligence agencies. Amazon recently developed a $600 million cloud computing system for the CIA that will also service all 17 intelligence agencies. Both Amazon and the CIA have said little to nothing about the system’s capabilities.

Palantir, which is based in Palo Alto, California, produces and sells data-mining and analysis software. Its customers include the U.S. Marine Corps, U.S. Special Operations Command, CIA, NSA, FBI, Defense Intelligence Agency, Department of Homeland Security, National Counterterrorism Center, LAPD, and NYPD. In California, the Northern California Regional Intelligence Center (NCRIC), one of 72 federally run fusion centers built across the nation since 9/11, uses Palantir software to collect and analyze license plate photos.

While Google sells its wares to whoever will buy them in order to make a profit, Palantir, as a company, isn’t solely dedicated to profit maximization. Counterterrorism has been part of the company’s mission since it began. The company was founded in 2004 by investor Alex Karp, who is the company’s chief executive, and billionaire PayPal founder Peter Thiel. In 2003, Thiel came up with the idea to develop software to fight terrorism based on PayPal’s fraud recognition software. The CIA’s In-Q-Tel helped jumpstart the company by investing $2 million. The rest of the company’s $30 million start-up costs were funded by Thiel and his venture capital fund.

Palantir’s software has “a user-friendly search tool that can scan multiple data sources at once, something previous search tools couldn’t do,” according to a 2009 Wall Street Journal profile. The software fills gaps in intelligence “by using a ‘tagging’ technique similar to that used by the search functions on most Web sites. Palantir tags, or categorizes, every bit of data separately, whether it be a first name, a last name or a phone number.” Analysts can quickly categorize information as it comes in. The software’s ability to scan and categorize multiple sources of incoming data helps analysts connect the dots among large and different pools of information — signals intelligence, human intelligence, geospatial intelligence, and much more. All this data is collected and analyzed in Palantir’s system. This makes it useful for war-related, intelligence, and law enforcement purposes. That is why so many military, police, and intelligence agencies want Palantir’s software.
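To make the tagging-and-linking idea concrete, here is a minimal sketch in Python. It is my own illustration of the general technique the Journal describes, not Palantir’s actual code or data model, and the sources, field names, and values are invented: every field of every incoming record is stored under a (tag, value) key along with its source, so a single lookup surfaces related records held in different systems.

```python
# Hypothetical illustration of field-level "tagging" across data sources.
# Nothing here reflects Palantir's real implementation; all data is invented.
from collections import defaultdict

index = defaultdict(list)  # (tag, value) -> list of (source, record_id)

def ingest(source, record_id, record):
    """Tag every field of a record separately and add it to the shared index."""
    for tag, value in record.items():              # tag = field type, e.g. "phone"
        index[(tag, str(value).lower())].append((source, record_id))

ingest("source_a", 1, {"phone": "555-0101", "last_name": "Doe"})
ingest("source_b", 7, {"first_name": "John", "last_name": "Doe"})
ingest("source_c", 3, {"phone": "555-0101", "location": "Anytown"})

# One lookup now links records held by different source systems.
print(index[("phone", "555-0101")])   # [('source_a', 1), ('source_c', 3)]
print(index[("last_name", "doe")])    # [('source_a', 1), ('source_b', 7)]
```

A production system would layer entity resolution, access controls, and analyst-facing visualization on top of an index like this; the sketch only shows the cross-referencing step that the tagging makes possible.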

U.S. troops in Afghanistan who used Palantir’s software, particularly the Marines and SOCOM, found it very helpful for their missions. Commanders liked Palantir’s ability to point them toward insurgents who “build and bury homemade bombs, the biggest killer of U.S. troops in Afghanistan,” the Washington Times reported. A Government Accountability Office report said Palantir’s software “gained a reputation for being intuitive and easy to use, while also providing effective tools to link and visualize data.” Special operations forces found Palantir to be “a highly effective system for conducting intelligence information analysis and supporting operations” and found that it “provided flexibility to support mobile, disconnected users out on patrols or conducting missions.” Many within the military establishment are pushing to have other branches, such as the Army, adopt Palantir’s software in order to improve intelligence-sharing.

Palantir’s friends include people from the highest echelons of the National Security State. Former CIA Director George Tenet and former Secretary of State Condoleezza Rice are advisers to Palantir, while former CIA director Gen. David Petraeus “considers himself a friend of Palantir CEO Alex Karp”, according to Forbes. Tenet told Forbes, “I wish I had Palantir when I was director. I wish we had the tool of its power because it not only slice and dices today, but it gives you an enormous knowledge management tool to make connections for analysts that go back five, six, six, eight, 10 years. It gives you a shot at your data that I don’t think any product that we had at the time did.”

High-tech militarism

Silicon Valley’s technology has numerous battlefield applications, a fact the U.S. military has not overlooked. Since the global war on terror began, the military has had a growing need for high-tech intelligence-gathering and other equipment. “A key challenge facing the military services is providing users with the capabilities to analyze the huge amount of intelligence data being collected,” the GAO report said. The proliferation of drones, counter-insurgency operations, sophisticated intelligence-surveillance-reconnaissance (ISR) systems, and new technologies and sensors changed how intelligence is used in counterinsurgency campaigns in Iraq and Afghanistan and counterterrorism operations in Pakistan, Somalia, Yemen, and other countries.

According to the report, “The need to integrate the large amount of available intelligence data, including the ability to synthesize information from different types of intelligence sources (e.g., HUMINT, SIGINT, GEOINT, and open source), has become increasingly important in addressing, for example, improvised explosive device threats and tracking the activities of certain components of the local population.” This is where Palantir’s software comes in handy. It does what the military needs — data-mining and intelligence analysis. That is why it is used by SOCOM and other arms of the National Security State.

Irregular wars against insurgents and terrorist groups present two problems: finding the enemy and killing them. This is because such groups know how to mix in with, and are usually part of, the local population. Robotic weapons, such as drones, present “an asymmetric solution to an asymmetric problem,” according to a Foster-Miller executive quoted in P.W. Singer’s book Wired for War. Drones can hover over a territory for long periods of time and launch a missile at a target on command without putting American troops in harm’s way, making them very attractive weapons.

Additionally, the U.S. military and intelligence agencies are increasingly relying on signals intelligence to solve this problem. Signals intelligence monitors electronic signals, such as phone calls and conversations, emails, radio and radar signals, and other electronic communications. Intelligence analysts or troops on the ground collect and analyze the electronic communications, along with the geospatial intelligence, of adversaries to track their location, map human behavior, and carry out lethal operations.

Robert Steele, a former Marine, CIA case officer, and current open source intelligence advocate, explained the utility of signals intelligence. “Signals intelligence has always relied primarily on seeing the dots and connecting the dots, not on knowing what the dots are saying. When combined with a history of the dots, and particularly the dots coming together in meetings, or a black (anonymous) cell phone residing next to a white (known) cellphone, such that the black acquires the white identity by extension, it becomes possible to ‘map’ human activity in relation to weapons caches, mosques, meetings, etcetera,” he said in an email interview. Steele added the “only advantage” to signals intelligence “is that it is very very expensive and leaves a lot of money on the table for pork and overhead.”

In Iraq and Afghanistan, for example, Joint Special Operations Command (JSOC) commandos combined images from surveillance drones with the tracking of mobile phone numbers to analyze insurgent networks. Commandos then used this analysis to locate and capture or kill their intended targets during raids. Oftentimes, however, this led to getting the wrong person. Steele added that human and open source intelligence are “vastly superior to signals intelligence 95% of the time” but “are underfunded precisely because they are not expensive and require face to face contact with foreigners, something the US Government is incompetent at, and Silicon Valley could care less.”

Capt. Michael Kearns, a retired U.S. and Australian Air Force intelligence officer and former SERE instructor with experience working in Silicon Valley, explained how digital information makes it easier for intelligence agencies to collect data. In an email, he told AlterNet, “Back in the day when the world was analog, every signal was one signal. Some signals contained a broad band of information contained within, however, there were no ‘data packets’ embedded within the electromagnetic spectrum. Therefore, collecting a signal, or a phone conversation, was largely the task of capturing / decoding / processing some specifically targeted, singular source. Today, welcome to the digital era. Data ‘packets’ flow as if like water, with pieces and parts of all things ‘upstream’ contained within. Therefore, the task today for a digital society is largely one of collecting everything, so as to fully unwrap and exploit the totality of the captured data in an almost exploratory manner. And therein lies the apparent inherently unconstitutional-ness of wholesale collection of digital data…it’s almost like ‘pre-crime.'”

One modern use of signals intelligence is in the United States’ extrajudicial killing program, a major component of the global war on terror. The extrajudicial killing program began during the Bush administration as a means to kill suspected terrorists around the world without any due process. However, as Bush focused on the large-scale occupations of Iraq and Afghanistan, the extrajudicial killing program received less emphasis.

The Obama administration continued the war on terror but largely shifted away from large-scale occupations, emphasizing instead CIA/JSOC drone strikes, airstrikes, cruise missile attacks, proxies, and raids by special operations forces against suspected terrorists and other groups. Obama continued and expanded Bush’s assassination program, relying on drones and special operations forces to do the job. According to the Bureau of Investigative Journalism, U.S. drone strikes and other covert operations have killed between roughly 3,000 and more than 4,800 people, including 500 to over 1,000 civilians, in Pakistan, Yemen, and Somalia. During Obama’s five years in office, over 2,400 people were killed by U.S. drone strikes. Most of those killed by drone strikes are civilians or low-level fighters and, in Pakistan, only 2 percent were high-level militants. Communities living under drone strikes are regularly terrorized and traumatized by them.

Targeting for drone strikes is based on metadata analysis and geolocating the cell phone SIM card of a suspected terrorist, according to a report by the Intercept. This intelligence is provided by the NSA to the CIA or JSOC, which then carries out the drone strike. However, it is very common for people in countries like Yemen or Pakistan to hold multiple SIM cards or hand their cell phones to family and friends, and for groups like the Taliban to randomly hand out SIM cards among their fighters to confuse trackers.

Since this methodology targets a SIM card linked to a suspect rather than an actual person, innocent civilians are regularly killed unintentionally. To ensure the assassination program will continue, the National Counterterrorism Center developed the “disposition matrix,” a database that continuously adds the names, locations, and associates of suspected terrorists to kill-or-capture lists.

The Defense Department’s 2015 budget proposal requests $495.6 billion, down $0.4 billion from last year, and decreases the Army to around 440,000 to 450,000 troops from the post-9/11 peak of 570,000. But it protects money — $5.1 billion — for cyberwarfare and special operations forces, giving SOCOM $7.7 billion, a 10 percent increase from last year, and 69,700 personnel. Thus, these sorts of operations will likely continue.

As the United States emphasizes cyberwarfare, special operations, drone strikes, electronic-based forms of intelligence, and other tactics of irregular warfare to wage perpetual war, sophisticated technology will be needed. Silicon Valley is the National Security State’s go-to industry for this purpose.

Adam Hudson is a journalist, writer, and photographer.

http://www.alternet.org/news-amp-politics/how-google-and-big-tech-companies-are-helping-maintain-americas-empire?akid=12149.265072.iCZIs-&rd=1&src=newsletter1016284&t=6&paging=off&current_page=1#bookmark

Not Content To Ruin Just San Francisco, Rich Techies Are Gentrifying Burning Man Too

Artist Dadara‘s Facebook like altar from Burning Man 2013. Photo: Bexx Brown-Spinelli/Flickr

This will come as news only to people who have not attended Burning Man in the last couple of years, but the New York Times has just caught on to the fact that Silicon Valley millionaires (and billionaires) have been attending the desert festival in greater numbers and quickly ruining it with their displays of wealth. While we used to call Coachella “Burning Man Lite for Angelenos,” Burning Man itself is quickly becoming Coachella on Crack for rich tech folk who want to get naked and do bong hits with Larry Page in Elon Musk’s decked-out RV.

Burners won’t just be sharing the playa with Larry and Sergey, Zuck, Grover Norquist, and at least one Winklevoss twin this year. There will also be a legion of new millionaires, most of them probably Burning Man virgins, who will be living in the lap of luxury and occasionally dropping in on your parties to ask for molly.

Per the Times piece:

“We used to have R.V.s and precooked meals,” said a man who attends Burning Man with a group of Silicon Valley entrepreneurs. (He asked not to be named so as not to jeopardize those relationships.) “Now, we have the craziest chefs in the world and people who build yurts for us that have beds and air-conditioning.” He added with a sense of amazement, “Yes, air-conditioning in the middle of the desert!”

His camp includes about 100 people from the Valley and Hollywood start-ups, as well as several venture capital firms. And while dues for most non-tech camps run about $300 a person, he said his camp’s fees this year were $25,000 a person. A few people, mostly female models flown in from New York, get to go free, but when all is told, the weekend accommodations will collectively cost the partygoers over $2 million.

“Anyone who has been going to Burning Man for the last five years is now seeing things on a level of expense or flash that didn’t exist before,” said Brian Doherty, author of the book “This Is Burning Man.” “It does have this feeling that, ‘Oh, look, the rich people have moved into my neighborhood.’ It’s gentrifying.”

The blockaded camps of the tech gentrifiers have tended to be in the outer rings of Black Rock City, as was previously reported in 2011 when a guest of Elon Musk’s spoke to the Wall Street Journal. “We’re out of the thick of it,” he said, “so we’re not offending the more elaborate or involved set ups.”

But as Silicon Valley assumes more and more of a presence on the playa, what’s to stop them from claiming better and better real estate, closer to where the action is?

You won’t see any evidence of this on Facebook, though. All of this happens without the tech world’s usual passion for documentation, since they do abide by at least that one tenet of Burning Man culture that frowns on photography. And at least, as of 2014, they seem to understand that their displays of wealth aren’t all that welcome, and should probably be kept on the down-low.

But seriously? Models flown in from New York? Gross.

[NYT]

 

http://sfist.com/2014/08/21/not_content_to_ruin_just_san_franci.php

Eight (No, Nine!) Problems With Big Data


BIG data is suddenly everywhere. Everyone seems to be collecting it, analyzing it, making money from it and celebrating (or fearing) its powers. Whether we’re talking about analyzing zillions of Google search queries to predict flu outbreaks, or zillions of phone records to detect signs of terrorist activity, or zillions of airline stats to find the best time to buy plane tickets, big data is on the case. By combining the power of modern computing with the plentiful data of the digital era, it promises to solve virtually any problem — crime, public health, the evolution of grammar, the perils of dating — just by crunching the numbers.

Or so its champions allege. “In the next two decades,” the journalist Patrick Tucker writes in the latest big data manifesto, “The Naked Future,” “we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference.” Statistical correlations have never sounded so good.

Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. “Jeopardy!” champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.

The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.

Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.

Third, many tools that are based on big data can be easily gamed. For example, big data programs for grading student essays often rely on measures like sentence length and word sophistication, which are found to correlate well with the scores given by human graders. But once students figure out how such a program works, they start writing long sentences and using obscure words, rather than learning how to actually formulate and write clear, coherent text. Even Google’s celebrated search engine, rightly seen as a big data success story, is not immune to “Google bombing” and “spamdexing,” wily techniques for artificially elevating website search placement.
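The grading example can be made concrete with a toy scorer. The sketch below is my own illustration, not any vendor’s actual grading model: it rewards exactly the proxies mentioned above, average sentence length and word “sophistication” (approximated here by word length), so a padded, obscure essay outscores a clearer one without being any better.

```python
# Toy essay scorer built on the proxies described above: sentence length and word
# "sophistication" (approximated by average word length). Purely illustrative.
def score(essay):
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = essay.split()
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    return avg_sentence_len + 2 * avg_word_len  # arbitrary weights

clear = "The data were incomplete. We collected more samples and reran the test."
padded = ("Notwithstanding multitudinous perspicacious contemplations regarding "
          "heterogeneous phenomenological considerations, interminable deliberations "
          "perpetuated unresolvable ambiguity concerning the aforementioned particulars.")

print(score(clear), score(padded))  # the padded essay wins on the proxies
```

Once the metric is known, optimizing for the metric rather than for clear writing is trivial, which is the gaming problem in a nutshell.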

Fourth, even when the results of a big data analysis aren’t intentionally gamed, they often turn out to be less robust than they initially seem. Consider Google Flu Trends, once the poster child for big data. In 2009, Google reported — to considerable fanfare — that by analyzing flu-related search queries, it had been able to detect the spread of the flu as accurately as, and more quickly than, the Centers for Disease Control and Prevention. A few years later, though, Google Flu Trends began to falter; for the last two years it has made more bad predictions than good ones.

As a recent article in the journal Science explained, one major contributing cause of the failures of Google Flu Trends may have been that the Google search engine itself constantly changes, such that patterns in data collected at one time do not necessarily apply to data collected at another time. As the statistician Kaiser Fung has noted, collections of big data that rely on web hits often merge data that was collected in different ways and with different purposes — sometimes to ill effect. It can be risky to draw conclusions from data sets of this kind.

A fifth concern might be called the echo-chamber effect, which also stems from the fact that much of big data comes from the web. Whenever the source of information for a big data analysis is itself a product of big data, opportunities for vicious cycles abound. Consider translation programs like Google Translate, which draw on many pairs of parallel texts from different languages — for example, the same Wikipedia entry in two different languages — to discern the patterns of translation between those languages. This is a perfectly reasonable strategy, except for the fact that with some of the less common languages, many of the Wikipedia articles themselves may have been written using Google Translate. In those cases, any initial errors in Google Translate infect Wikipedia, which is fed back into Google Translate, reinforcing the error.

A sixth worry is the risk of too many correlations. If you look 100 times for correlations between two variables, you risk finding, purely by chance, about five bogus correlations that appear statistically significant — even though there is no actual meaningful connection between the variables. Absent careful supervision, the magnitudes of big data can greatly amplify such errors.
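The arithmetic behind that claim is easy to check with a short simulation (my own sketch, not from the article): generate many pairs of completely unrelated random variables and count how many correlations clear the conventional p < 0.05 bar purely by chance.

```python
# Simulation of the "too many correlations" problem: 100 pairs of unrelated
# variables, counting how many look "significant" at p < 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs, n_samples, alpha = 100, 30, 0.05

false_positives = 0
for _ in range(n_pairs):
    x = rng.normal(size=n_samples)   # two independent variables with
    y = rng.normal(size=n_samples)   # no real relationship between them
    r, p = stats.pearsonr(x, y)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_pairs} unrelated pairs look 'significant'")
# The expected count is alpha * n_pairs = 5, the article's back-of-envelope figure.
```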

Seventh, big data is prone to giving scientific-sounding solutions to hopelessly imprecise questions. In the past few months, for instance, there have been two separate attempts to rank people in terms of their “historical importance” or “cultural contributions,” based on data drawn from Wikipedia. One is the book “Who’s Bigger? Where Historical Figures Really Rank,” by the computer scientist Steven Skiena and the engineer Charles Ward. The other is an M.I.T. Media Lab project called Pantheon.

Both efforts get many things right — Jesus, Lincoln and Shakespeare were surely important people — but both also make some egregious errors. “Who’s Bigger?” claims that Francis Scott Key was the 19th most important poet in history; Pantheon has claimed that Nostradamus was the 20th most important writer in history, well ahead of Jane Austen (78th) and George Eliot (380th). Worse, both projects suggest a misleading degree of scientific precision with evaluations that are inherently vague, or even meaningless. Big data can reduce anything to a single number, but you shouldn’t be fooled by the appearance of exactitude.

FINALLY, big data is at its best when analyzing things that are extremely common, but often falls short when analyzing things that are less common. For instance, programs that use big data to deal with text, such as search engines and translation programs, often rely heavily on something called trigrams: sequences of three words in a row (like “in a row”). Reliable statistical information can be compiled about common trigrams, precisely because they appear frequently. But no existing body of data will ever be large enough to include all the trigrams that people might use, because of the continuing inventiveness of language.
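For readers unfamiliar with the term, a trigram is just a sliding window of three consecutive words, and extracting and counting them takes only a few lines of code. The sketch below is my own minimal example, not drawn from any particular search engine or translation system.

```python
# Minimal trigram extraction and counting; "in a row" from the article is itself a trigram.
from collections import Counter

def trigrams(text):
    words = text.lower().split()
    return [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]

sample = "to be or not to be or not to be"
counts = Counter(trigrams(sample))
print(counts.most_common(2))  # [('to be or', 2), ('be or not', 2)]
```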

To select an example more or less at random, a book review that the actor Rob Lowe recently wrote for this newspaper contained nine trigrams such as “dumbed-down escapist fare” that had never before appeared anywhere in all the petabytes of text indexed by Google. To witness the limitations that big data can have with novelty, Google-translate “dumbed-down escapist fare” into German and then back into English: out comes the incoherent “scaled-flight fare.” That is a long way from what Mr. Lowe intended — and from big data’s aspirations for translation.

Wait, we almost forgot one last problem: the hype. Champions of big data promote it as a revolutionary advance. But even the examples that people give of the successes of big data, like Google Flu Trends, though useful, are small potatoes in the larger scheme of things. They are far less important than the great innovations of the 19th and 20th centuries, like antibiotics, automobiles and the airplane.

Big data is here to stay, as it should be. But let’s be realistic: It’s an important resource for anyone analyzing data, not a silver bullet.

From Concorde to the iPhone, state intervention drives technological innovation

History tells us that state involvement is the best route to prosperity. Our politicians need to think big and accept the risk of failure

Concorde: who would approve the proposal today? Photograph: Xavier Lhospice/Reuters

If you wanted to radically alter the economy, making a country such as Britain as dynamic as China or Brazil, what would the state have to do? Intervene, obviously, but how?

That has become a hard question to answer since the onset of free-market economics. Much of the old apparatus of state control has been dismantled. Plus, the political culture in which planners, engineers and technical innovators inhabited the same offices has been shattered.

If a modern-day civil servant wanted to place the proposal to build, say, Concorde on a minister’s desk, there wouldn’t even be an obvious ministry to go to. It was the UK’s Ministry of Supply that set up the supersonic airliner project: it commissioned a prototype aircraft, immediately, at its first meeting.

When state innovation projects worked, they did so because their owners were politicians, who were allowed to think big and who accepted the risk of failure. In the free-market model, the private sector is given the task of driving innovation. But though the individual results look spectacular – from info-tech, genetics and materials science to neuro-medicine – we have not experienced the same “lift-off” as in previous industrial revolutions, where all the innovations synergise, producing high dynamism and rising wealth for all.

Instead, in the developed world, amid rapid tech innovation, we sway between low growth and stagnation. If the information revolution creates the possibility of a “third capitalism” – as different from the industrial era as it was from the age of Sir Francis Drake – then it is, so far, a possibility unrealised.

And a growing number of economists believe it will remain so unless we rethink the role of the state. The Sussex University professor Mariana Mazzucato, whose calls for an “entrepreneurial state” were greeted with incomprehension four years ago, recently put together a conference attended by ministers, central bankers and serious investors. The buzzwords were: think big, and do “mission-oriented finance”.

Mazzucato points out the state played a role in financing nearly every key technology in an iPhone, from GPS to the touch screen. She says that, even now, the lion’s share of funding for climate change technologies comes from state investment banks and public utilities, with just 6% coming from private capital. The problem is, the modern state sees this as accidental and residual. It avoids major projects, and their associated risks, seeing its role as mainly to act where the market “fails” – as with the near evaporation of venture capital funding for technology startups in the UK.

Mazzucato, in a paper with LSE professor Carlota Perez, points out the danger of leaving tech to the private sector. In an economy bloated with printed money and cheap credit, if capital can’t find real-world, high-growth, high-profit opportunities to invest in, it will pool into the finance system, creating one bubble after another.

Seen from this angle, the financial crisis looks less like the product of bad practices in the City, and more like a structural crisis. At all previous takeoff points, capital in the finance system flowed out into the real economy, where a paradigm had been established making it easy for businesspeople to invest in tried-and-tested models, with predictable and growing demand.

Solving this problem is not just critical for economics. The clearest unmet need on Earth is for technologies to combat climate change. It’s impossible for markets to direct climate science, or climate technology, because there is no ready-made framework that will make the innovations being tried profitable. It should be a no-brainer that the modern-day equivalent of Concorde or the Apollo projects – classic “mission-oriented” state projects – should be green technology.

History shows innovation happens best when the state shapes it. During the second world war, the US decreed that companies could only profit from making and selling their military technologies – any attempt to derive immediate profit from monopolised intellectual property stood against the public good. Once they knew the American state was trying to achieve an anti-aircraft fire control system first, and a number-crunching static computer later, the greatest innovators alive set to work on making a gun predict the ideas in a fighter pilot’s head. Mainframes – and other technologies – followed, and reaped high profits for the corporations that pioneered them. But it was the state that forced the take-off point to happen.

Thinking big about the economic role of the state no longer means planned allocation of goods, or state provision: it means the state setting pathways for technology and rewarding those who follow them. That means that politicians and civil servants will have to do more than overcome their opposition to picking winners. It means changing the tax system to reward long-term investments in real activities, forcing the finance system to lend to value-creating businesses – and limiting the ability of corporates to use accumulated cash simply to buy their own shares, eating value out of the real economy.

Five years on from Lehman Brothers, we’re still seeing stagnation in terms of busted banks and country debts: if Mazzucato and her collaborators are right, the real problem is bigger – it concerns the economic culture of the state.

Every takeoff point in history has seen the state reorganise the market and promote prosperity. The boom-bust cycles of the past 15 years show we’re at a turning point, but in large parts of the developed world we have states that don’t think they should exist.

If the proponents of modern laissez-faire economics could point to a single economy where technology and markets had worked, alone, to create a vibrant, confident, high-growth economic model, their arguments would be stronger. But it’s China, Singapore, South Korea and Brazil where the success stories happened. The state was at the centre of every one.

Paul Mason is economics editor of Channel 4 News. Follow him @paulmasonnews

 

 

http://www.theguardian.com/commentisfree/2014/jul/27/concorde-iphone-history-state-intervention-technological-innovation

Face Time: Eternal Youth Has Become a Growth Industry in Silicon Valley

Tuesday, Aug 12 2014

The students of Timothy Draper’s University of Heroes shuffle into a conference room, khaki shorts swishing against their knees, flip-flops clacking against the carpeted floor. One by one they take their seats and crack open their laptops, training their eyes on Facebook home pages or psychedelic screen savers. An air conditioner whirs somewhere in the rafters. A man in chinos stands before them.

The man is Steve Westly, former state controller, prominent venture capitalist, 57-year-old baron of Silicon Valley. He smiles at the group with all the sheepishness of a student preparing for show-and-tell. He promises to be brief.

“People your age are changing the world,” Westly tells the students, providing his own list of great historical innovators: Napoleon, Jesus, Zuckerberg, Larry, Sergey. “It’s almost never people my age,” he adds.

Students at Draper University — a private, residential tech boot camp launched by venture capitalist Timothy Draper, in what was formerly San Mateo’s Benjamin Franklin Hotel — have already embraced Westly’s words as a credo. They inhabit a world where success and greatness seem to hover within arm’s reach. A small handful of those who complete the six-week, $9,500 residential program might get a chance to join Draper’s business incubator; an even smaller handful might eventually get desks at an accelerator run by Draper’s son, Adam. It’s a different kind of meritocracy than Westly braved, pursuing an MBA at Stanford in the early ’80s. At Draper University, heroism is merchandised, rather than earned. A 20-year-old with bright eyes and deep pockets (or a parent who can front the tuition) has no reason to think he won’t be the next big thing.

This is the dogma that glues Silicon Valley together. Young employees are plucked out of high school, college-aged interns trade their frat houses and dorm rooms for luxurious corporate housing. Twenty-seven-year-old CEOs inspire their workers with snappy jingles about moving fast and breaking things. Entrepreneurs pitch their business plans in slangy, tech-oriented patois.

Gone are the days of the “company man” who spends 30 years ascending the ranks in a single corporation. Having an Ivy League pedigree and a Brooks Brothers suit is no longer as important.

“Let’s face it: The days of the ‘gold watch’ are over,” 25-year-old writer David Burstein says. “The average millennial is expected to have several jobs by the time he turns 38.”

Yet if constant change is the new normal, then older workers have a much harder time keeping up. The Steve Westlys of the world are fading into management positions. Older engineers are staying on the back-end, working on system administration or architecture, rather than serving as the driving force of a company.

“If you lost your job, it might be hard to find something similar,” a former Google contractor says, noting that an older engineer might have to settle for something with a lower salary, or even switch fields. The contractor says he knows a man who graduated from Western New England University in the 1970s with a degree in the somewhat archaic field of time-motion engineering. That engineer wound up working at Walmart.

Those who do worm their way into the Valley workforce often have a rough adjustment. The former contractor, who is in his 40s, says he was often the oldest person commuting from San Francisco to Mountain View on a Google bus. And he adhered to a different schedule: Wake up at 4:50 a.m., get out the door by 6:20, catch the first coach home at 4:30 p.m. to be home for a family supper. He was one of the few people who didn’t take advantage of the free campus gyms or gourmet cafeteria dinners or on-site showers. He couldn’t hew to a live-at-work lifestyle.

And compared to other middle-aged workers, he had it easy.

In a lawsuit filed in San Francisco Superior Court in July, former Twitter employee Peter H. Taylor claims he was canned because of his age, despite performing his duties in “an exemplary manner.” Taylor, who was 57 at the time of his termination in September of last year, says his supervisor made at least one derogatory remark about his age, and that the company refused to accommodate his disabilities following a bout with kidney stones. He says he was ultimately replaced by several employees in their 20s and 30s. A Twitter spokesman says the lawsuit is without merit and that the company will “vigorously” defend itself.

The case is not without precedent. Computer scientist Brian Reid lobbed a similar complaint against Google in 2004, claiming co-workers called him an “old man” and an “old fuddy-duddy,” and routinely told him he was not a “cultural fit” for the company. Reid was 54 at the time he filed the complaint; he settled for an undisclosed amount of money.

What is surprising, perhaps, is that a 57-year-old man was employed at Twitter at all. “Look, Twitter has no 50-year-old employees,” the former Google contractor says, smirking. “By the time these [Silicon Valley] engineers are in their 40s, they’re old — they have houses, boats, stock options, mistresses. They drive to work in Chevy Volts.”

There’s definitely a swath of Valley nouveau riche who reap millions in their 20s and 30s, and who are able to cash out and retire by age 40. But that’s a minority of the population. The reality, for most people, is that most startups fail, most corporations downsize, and most workforces churn. Switching jobs every two or three years might be the norm, but it’s a lot easier to do when you’re 25 than when you’re 39. At that point, you’re essentially a senior citizen, San Francisco botox surgeon Seth Matarasso says.

“I have a friend who lived in Chicago and came back to Silicon Valley at age 38,” Matarasso recalls. “And he said, ‘I feel like a grandfather — in Chicago I just feel my age.’”

Retirement isn’t an option for the average middle-aged worker, and even the elites — people like Westly, who were once themselves wunderkinds — find themselves in an awkward position when they hit their 50s, pandering to audiences that may have no sense of what came before. The diehards still work well past their Valley expiration date, but then survival becomes a job unto itself. Sometimes it means taking lower-pay contract work, or answering to a much younger supervisor, or seeking workplace protection in court.

CONTINUED: http://www.sfweekly.com/sanfrancisco/silicon-valley-bottom-age-discrimination/Content?oid=3079530

Walter Isaacson: “Innovation” doesn’t mean anything anymore

The man who brought America inside the minds of Einstein, Franklin and Jobs takes issue with modern-day tech hype

Walter Isaacson (Credit: Reuters/Fred Prouser)

If anybody in America understands genius, it’s Walter Isaacson.

The bestselling biographer has chronicled the lives of everyone from Benjamin Franklin and Albert Einstein to Henry Kissinger and (most recently) Steve Jobs. In the process, he has garnered a reputation as a writer deeply attuned to the idiosyncratic — and sometimes megalomaniacal — personalities and predilections of singularly brilliant men. But genius alone, as he would probably be the first to point out, actually isn’t enough to change the world.

We live in a time when technology companies — from Google and Apple to the burgeoning start-up community that’s taken Silicon Valley by storm — have staked a place at the center of the American culture. And the idea of innovation, how an idea translates from mind stuff into tangible reality, has consequently become shrouded in a mythology about genius and grit — brainiacs with a golden idea holed up in a dingy garage, working in obscurity before taking the world by storm — that is emotionally appealing but short on nuance. The truth of the matter, as Isaacson has pointed out, is that what makes for a genuine, world-changing innovation is much more complicated than a towering IQ. In reality, execution is everything.

Isaacson’s new book, “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution,” slated for release in October, explores how disruptive change really comes to fruition. Salon spoke with him earlier this summer about the nature of innovation, and how it’s often misunderstood. This interview has been edited for length and clarity.

First let’s start with something big: What do you think is the greatest issue facing our time?

Right now, I think it’s inequality of opportunity. I think ever since the days of Benjamin Franklin the basic American philosophy – the basic American creed – has been that if you work hard and play by the rules you can be secure. And we’re losing that these days because of unequal education, unequal opportunities and inequalities in wealth.

And I would say that the most important element of that is creating better educational opportunities for all. I think that it used to be … well. I think that every kid should have a decent opportunity for a great education and we don’t have that at the moment.



I wanted to talk to you about how you define innovation because you have written about so many innovators. And the terms “innovation” and “innovator” are sort of the terms of our time. Everything is described in this way.

I think that the word “innovation” has become a buzzword and it’s been drained of much of its meaning because we overuse it.

For the past 12 years I’ve been working on a book about the people who actually invented the computer and the Internet. I put it aside to work on the Steve Jobs biography, but I went back to it, because I wanted to show how real innovators actually get something done.

So instead of trying to write a philosophy or simple rules of innovation, I wanted to do it biographically to show how people you may never have heard of, who invented the computer and Internet, actually came up with their ideas and executed them, because I think when we talk about innovation in the abstract, it loses its meaning.

Innovation comes from collaboration, it comes from teamwork, and it comes from being able to take visionary ideas and actually execute them with good engineering. And there’s no simple buzzword definition of innovation; I think it’s useful, as somebody who loves history, to just focus on real people and how they invented things, and that includes the computer, the Internet, but also the transistor, the microchip, search engines, and the World Wide Web. And there were real people who worked in teams and were able to execute on their ideas and I wanted to tell their stories, based on a dozen years of reporting what they did, instead of some abstract theory of innovation.

I began working on this book 12 years ago, mainly focused on the Internet, but after writing about Steve Jobs and interviewing people who’d been involved with the personal computer, I decided to make it a book about the intersection of digital networks and the personal computer, and how people in that field actually executed on their ideas. I’m not interested in how-to manuals about the philosophy of innovation. I’m interested in real people and how they actually succeeded or failed.

Like Ada Lovelace, who I’ve read you feature prominently in your new book.

Yeah, Ada was the person who connected art to technology in the 1840s. Her father was the poet Lord Byron, her close friend was Charles Babbage, who invented the analytical engine, and she was able to understand how you could program a mechanical calculator to do more than just numbers. That it could weave patterns just like the punch cards that helped mechanical looms weave fabric, and that’s the first example in my next book of how real people went about connecting the arts and technology to new innovative things.

I start with her, but it goes all the way through the history of people we’ve never paid enough attention to because the people who invented the computer and the Internet were not lone inventors like an Edison or a Bell, in their labs saying, “Eureka!” They were teams of people who worked collaboratively, and so I think sometimes we underestimate … or sometimes we don’t fully appreciate the importance of collaborative creativity. So my book is not a theoretical book, but it’s just a history of the collaborations and teamwork that led to the computer, the Internet, the transistor, the microchip, Wikipedia, Google and other innovations.

In writing this book over the past 12 years, and in your other books on Albert Einstein and Steve Jobs and Ben Franklin, have you noticed any patterns or any similarities? Anything you start to pick up and think, “Oh well that’s very similar between these two, maybe that makes a good innovator”? You’ve mentioned the teams and the collaboration …

Yeah, there’s not one formula, which is fortunate for those of us that write biographies, otherwise you wouldn’t need a lot of biographies. Albert Einstein was much more of a loner, whereas Ben Franklin’s genius was bringing people together into teams. Steve Jobs’ genius was applying creativity and beauty to technology. But the one thing they had in common is they were all imaginative. They all questioned the conventional way of doing things. And as Einstein once said, imagination is more important than knowledge. And that’s sort of been a theme of all of my books.

 

 

http://www.salon.com/2014/08/05/walter_isaacson_innovation_doesnt_mean_anything_anymore/?source=newsletter

A Silicon Valley scheme to “disrupt” America’s education system would hurt the people who need it the most

The plot to destroy education: Why technology could ruin American classrooms — by trying to fix them


How does Silicon Valley feel about college? Here’s a taste: Seven words in a tweet provoked by a conversation about education started by Silicon Valley venture capitalist Marc Andreessen.

Arrogance? Check. Supreme confidence? Check. Oblivious to the value actually provided by a college education? Check.

The $400 billion a year that Americans pay for education after high school is being wasted on an archaic brick-and-mortar irrelevance. We can do better! 

But how? The question becomes more pertinent every day — and it’s one that Silicon Valley would dearly like to answer.

The robots are coming for our jobs, relentlessly working their way up the value chain. Anything that can be automated will be automated. The obvious — and perhaps the only — answer to this threat is a vastly improved educational system. We’ve got to leverage our human intelligence to stay ahead of robotic A.I.! And right now, everyone agrees, the system is not meeting the challenge. The cost of a traditional four-year college education has far outpaced inflation. Student loan debt is a national tragedy. Actually achieving a college degree still bequeaths better job prospects than the alternative, but for many students, the cost-benefit ratio is completely out of whack.

No problem, says the tech industry. Like a snake eating its own tail, Silicon Valley has the perfect solution for the social inequities caused by technologically induced “disruption.” More disruption!

Universities are a hopelessly obsolete way to go about getting an education when we’ve got the Internet, the argument goes. Just as Airbnb is disemboweling the hotel industry and Uber is annihilating the taxi industry, companies such as Coursera and Udacity will leverage technology and access to venture capital in order to crush the incumbent education industry, supposedly offering high-quality educational opportunities for a fraction of the cost of a four-year college.



There is an elegant logic to this argument. We’ll use the Internet to stay ahead of the Internet. Awesome tools are at our disposal. In MOOCs — “Massive Open Online Courses” — hundreds of thousands of students will imbibe the wisdom of Ivy League “superprofessors” via pre-recorded lectures piped down to their smartphones. No need even for overworked graduate student teaching assistants. Intelligent software will take care of the grading. (That’s right — we’ll use robots to meet the robot threat!) The market, in other words, will provide the solution to the problem that the market has caused. It’s a wonderful libertarian dream.

But there’s a flaw in the logic. Early returns on MOOCs have confirmed what just about any teacher could have told you before Silicon Valley started believing it could “fix” education: Real human interaction and engagement are hugely important to delivering a quality education. Most crucially, hands-on interaction with teachers is vital for the students in most desperate need of an education — those with the least financial resources and the most challenging backgrounds.

Of course, it costs money to provide greater human interaction. You need bodies — ideally, bodies with some mastery of the subject material. But when you raise costs, you destroy the primary attraction of Silicon Valley’s “disruptive” model. The big tech success stories are all about avoiding the costs faced by the incumbents. Airbnb owns no hotels. Uber owns no taxis. The selling point of Coursera and Udacity is that they need to own no universities.

But education is different than running a hotel. There’s a reason why governments have historically considered providing education a public good. When you start throwing bodies into the fray to teach people who can’t afford a traditional private education, you end up disastrously chipping away at the profits that the venture capitalists backing Coursera and Udacity demand.

And that’s a tail that the snake can’t swallow.

* * *

The New York Times famously dubbed 2012 “The Year of the MOOC.” Coursera and Udacity (both started by Stanford professors) and an MIT-Harvard collaboration called EdX exploded into the popular imagination. But the hype ebbed almost as quickly as it had flowed. In 2013, after a disastrous pilot experiment in which Udacity and San Jose State collaborated to deliver three courses, MOOCs were promptly declared dead — with the harshest schadenfreude coming from academics who saw the rush to MOOCs as an educational travesty.

At the end of 2013, the New York Times had changed its tune: “After Setbacks, Online Courses are Rethought.”

But MOOC supporters have never wavered. In May, Clayton Christensen, the high priest of “disruption” theory, scoffed at the unbelievers: “[T]heir potential to disrupt — on price, technology, even pedagogy — in a long-stagnant industry,” wrote Christensen, “is only just beginning to be seen.”

At the end of June, the Economist followed suit with a package of stories touting the inevitable “creative destruction” threatened by MOOCs: “[A] revolution has begun thanks to three forces: rising costs, changing demand and disruptive technology. The result will be the reinvention of the university …” It’s 2012 all over again!

Sure, there have been speed bumps along the way. But as Christensen explained, the same is true for any would-be disruptive start-up. Failures are bound to happen. What makes Silicon Valley so special is its ability to learn from mistakes, tweak its biz model and try something new. It’s called “iteration.”

There is, of course, great merit to the iterative process. And it would be foolish to claim that new technology won’t have an impact on the educational process. If there’s one thing that the Internet and smartphones are insanely good at, it is providing access to information. A teenager with a phone in Uganda has opportunities for learning that most of the world never had through the entire course of human history. That’s great.

But there’s a crucial difference between “access to information” and “education” that explains why the university isn’t about to become obsolete, and why we can’t depend — as Marc Andreessen tells us — on the magic elixir of innovation plus the free market to solve our education quandary.

Nothing better illustrates this point than a closer look at the Udacity-San Jose State collaboration.

* * *

When Gov. Jerry Brown announced the collaboration between Udacity, founded by Stanford computer science professor Sebastian Thrun, and San Jose State, a publicly funded university in the heart of Silicon Valley, in January 2013, the match seemed perfect. Where else would you want to test out the future of education? The plan was to focus on three courses: elementary statistics, remedial math and college algebra. The target student demographic was notoriously ill-served by the university system: “Students were drawn from a lower-income high school and the underperforming ranks of SJSU’s student body,” reported Fast Company.

The results of the pilot, conducted in the spring of 2013, were a disaster, reported Fast Company:

Among those pupils who took remedial math during the pilot program, just 25 percent passed. And when the online class was compared with the in-person variety, the numbers were even more discouraging. A student taking college algebra in person was 52 percent more likely to pass than one taking a Udacity class, making the $150 price tag–roughly one-third the normal in-state tuition–seem like something less than a bargain.

A second attempt during the summer achieved better results, but with a much less disadvantaged student body; and, even more crucially, with considerably greater resources put into human interaction and oversight. For example, San Jose State reported that the summer courses were improved by “checking in with students more often.”

But the prime takeaway was stark. Inside Higher Education reported that a research report conducted by San Jose State on the experiment concluded that “it may be difficult for the university to deliver online education in this format to the students who need it most.”

In an iterative world, San Jose State and Udacity would have learned from their mistakes. The next version of their collaboration would have incorporated the increased human resources necessary to make it work, to be sure that students didn’t fall through the cracks. But the lesson that Udacity learned from the collaboration turned out to be something different: There isn’t going to be much profit to be made attempting to apply the principles of MOOCs to students from a disadvantaged background.

Thrun set off a firestorm of commentary when he told Fast Company’s Max Chafkin this:

“These were students from difficult neighborhoods, without good access to computers, and with all kinds of challenges in their lives,” he says. “It’s a group for which this medium is not a good fit….”

“I’d aspired to give people a profound education–to teach them something substantial… But the data was at odds with this idea.”

Henceforth, Udacity would “pivot” to focusing on vocational training funded by direct corporate support.

Thrun later claimed that his comments were misinterpreted by Fast Company. And in his May Op-Ed Christensen argued that Udacity’s pivot was a boon!

Udacity, for its part, should be applauded for not burning through all of its money in pursuit of the wrong strategy. The company realized — and publicly acknowledged — that its future lay on a different path than it had originally anticipated. Indeed, Udacity’s pivot may have even prevented a MOOC bubble from bursting.

Educating the disadvantaged via MOOCs is the wrong strategy? That’s not a pivot — it’s an abject surrender.

The Economist, meanwhile, brushed off the San Jose State episode by noting that “online learning has its pitfalls.” But the Economist also published a revealing observation: “In some ways MOOCs will reinforce inequality … among students (the talented will be much more comfortable than the weaker outside the structured university environment) …”

But isn’t that exactly the problem? No one can deny that the access to information facilitated by the Internet is a fantastic thing for talented students — and particularly so for those with secure economic backgrounds and fast Internet connections. But such people are most likely to succeed in a world full of smart robots anyway. The challenge posed by technological transformation and disruption is that the jobs that are being automated away first are the ones that are most suited to the less talented or advantaged. In other words, the population that MOOCs are least suited to serving is the population that technology is putting in the most vulnerable position.

Innovation and the free market aren’t going to fix this problem, for the very simple reason that there is no money in it. There’s no profit to be mined in educating people who not only can’t pay for an education, but also require greater human resources to be educated.

This is why we have public education in the first place.

“College is a public good,” says Jonathan Rees, a professor at Colorado State University who has been critical of MOOCs. “It’s what industrialized democratic society should be providing for students.”

Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.

Are computers taking our jobs?

Our new robot overlords: The terrifying uncertainty of our high-tech future
This article was originally published by Scientific American.

Last fall economist Carl Benedikt Frey and information engineer Michael A. Osborne, both at the University of Oxford, published a study estimating the probability that 702 occupations would soon be computerized out of existence. Their findings were startling. Advances in data mining, machine vision, artificial intelligence and other technologies could, they argued, put 47 percent of American jobs at high risk of being automated in the years ahead. Loan officers, tax preparers, cashiers, locomotive engineers, paralegals, roofers, taxi drivers and even animal breeders are all in danger of going the way of the switchboard operator.

Whether or not you buy Frey and Osborne’s analysis, it is undeniable that something strange is happening in the U.S. labor market. Since the end of the Great Recession, job creation has not kept up with population growth. Corporate profits have doubled since 2000, yet median household income (adjusted for inflation) dropped from $55,986 to $51,017. At the same time, after-tax corporate profits as a share of gross domestic product increased from around 5 to 11 percent, while compensation of employees as a share of GDP dropped from around 47 to 43 percent. Somehow businesses are making more profit with fewer workers.

Erik Brynjolfsson and Andrew McAfee, both business researchers at the Massachusetts Institute of Technology, call this divergence the “great decoupling.” In their view, presented in their recent book “The Second Machine Age,” it is a historic shift.

The conventional economic wisdom has long been that as long as productivity is increasing, all is well. Technological innovations foster higher productivity, which leads to higher incomes and greater well-being for all. And for most of the 20th century productivity and incomes did rise in parallel. But in recent decades the two began to diverge. Productivity kept increasing while incomes—which is to say, the welfare of individual workers—stagnated or dropped.

Brynjolfsson and McAfee argue that technological advances are destroying jobs, particularly low-skill jobs, faster than they are creating them. They cite research showing that so-called routine jobs (bank teller, machine operator, dressmaker) began to fade in the 1980s, when computers first made their presence known, but that the rate has accelerated: between 2001 and 2011, 11 percent of routine jobs disappeared.



Plenty of economists disagree, but it is hard to referee this debate, in part because of a lack of data. Our understanding of the relation between technological advances and employment is limited by outdated metrics. At a roundtable discussion on technology and work convened this year by the European Union, the ILR School at Cornell University and the Conference Board (a business research association), a roomful of economists and financiers repeatedly emphasized how many basic economic variables are measured either poorly or not at all. Is productivity declining? Or are we simply measuring it wrong? Experts differ. What kinds of workers are being sidelined, and why? Could they get new jobs with the right retraining? Again, we do not know.

In 2013 Brynjolfsson told Scientific American that the first step in reckoning with the impact of automation on employment is to diagnose it correctly—“to understand why the economy is changing and why people aren’t doing as well as they used to.” If productivity is no longer a good proxy for a vigorous economy, then we need a new way to measure economic health. In a 2009 report economists Joseph Stiglitz of Columbia University, Amartya Sen of Harvard University and Jean-Paul Fitoussi of the Paris Institute of Political Studies made a similar case, writing that “the time is ripe for our measurement system to shift emphasis from measuring economic production to measuring people’s well-being.” An ILR School report last year called for statistical agencies to capture more and better data on job market churn—data that could help us learn which job losses stem from automation.

Without such data, we will never properly understand how technology is changing the nature of work in the 21st century—and what, if anything, should be done about it. As one participant in this year’s roundtable put it, “Even if this is just another industrial revolution, people underestimate how wrenching that is. If it is, what are the changes to the rules of labor markets and businesses that should be made this time? We made a lot last time. What is the elimination of child labor this time? What is the eight-hour workday this time?”

 

More musicians are taking aim at the rates paid by Spotify and Pandora, and warning whole genres are in danger

It’s not just David Byrne and Radiohead: Spotify, Pandora and how streaming music kills jazz and classical


After years in which tech-company hype has drowned out most other voices, the frustration of musicians with the digital music world has begun to get a hearing. We know now that many rockers don’t like it. Less discussed so far is the trouble jazz and classical musicians — and their fans — have with music streaming, which is being hailed as the “savior” of the music business.

But between low royalties, opaque payout rates, declining record sales and suspicion that the major labels have cut deals with the streamers that leave musicians out of the equation, anger from the music business’s artier edges is slowly growing. It’s further proof of the lie of the “long tail.” The shift to digital is also helping to isolate these already marginalized genres: It has a decisive effect on what listeners can find, and on whether or not an artist can earn a living from his work. (Music streaming, in all genres, is up 42 percent for the first half of this year, according to Nielsen SoundScan, against the first half of 2013. Over the same period, CD sales fell 19.6 percent, and downloads, the industry’s previous savior, were down 11.6 percent.)

Only a very few classical artists have been outspoken on the issue so far: San Francisco-based Zoe Keating — a tech-savvy, DIY Amanda Palmer of the cello — has blown the whistle on the tiny amounts the streaming services pay musicians. Though she’s exactly the kind of artist who should be cashing in on streaming, since she releases her own music, tours relentlessly, and has developed a strong following since her days with rock band Rasputina, only 8 percent of her last year’s earnings from recorded music came from streaming. The iTunes store, which pays out in small amounts since most purchases are for 99 cent songs, paid her about six times what she earned from streaming. (More than 400,000 Spotify streams earned her $1,764; almost 2 million YouTube views generated $1,248.)
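
Those parenthetical figures are worth translating into per-play rates. Here is a minimal back-of-the-envelope sketch in Python, using only the approximate counts and dollar amounts quoted above; since the stream counts are the article’s rounded figures, the rates are ballpark estimates, not official payout schedules.

    # Rough per-play payouts implied by the figures quoted above for Zoe Keating.
    # Counts are the article's approximations ("more than 400,000 Spotify streams",
    # "almost 2 million YouTube views"), so these are ballpark numbers only.

    payouts = {
        "Spotify": (1764.00, 400_000),    # dollars earned, approximate streams
        "YouTube": (1248.00, 2_000_000),  # dollars earned, approximate views
    }

    for service, (dollars, plays) in payouts.items():
        rate = dollars / plays
        print(f"{service}: ~${rate:.4f} per play "
              f"(roughly {1 / rate:,.0f} plays to earn one dollar)")

On those numbers, a Spotify stream is worth well under half a cent and a YouTube view well under a tenth of a cent, which is the scale of the problem the rest of this piece describes.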

For jazz and classical players without Keating’s entrepreneurial energy or larger cult following, the numbers are even bleaker. “It feels awful,” says Christina Courtin, a Juilliard-trained violinist who plays in classical groups and has put out albums on the Nonesuch and Hundred Pockets labels. “I don’t count on that as a way to make money — I don’t see how it makes sense for a musician. It’s pretty dark — no one’s selling as much as they were even five years ago.”



Some artists remember a very different world. “I used to sell CDs of my music,” says Richard Danielpour, a celebrated American composer who has written an opera with Toni Morrison and once had an exclusive recording contract with Sony Classical. “And now we get nothing.”

It’s not just streaming, but the larger digital era that’s burying record stores, radio and recordings – and it’s hitting jazz and classical musicians especially hard. For some young musicians launching their careers, the “exposure” they get on Pandora or YouTube brings them employment or a fan base somewhere down the line. But many wait in vain. And like their counterparts in the pop world, musicians typically cannot opt out of streaming and the rest of the new world.

“One of the big reasons musicians kept control of their publishing was for the possibility that at least we would be paid when those songs were played in media outlets,” says jazz pianist Jason Moran, currently the jazz advisor for the Kennedy Center. “Back in the day, Fats Waller and tons of other artists were robbed of their publishing. This is the new version of it, but on a much wider scale.”

*

In some ways, the trouble in these genres resembles the problems experienced by any non-superstar musicians. Royalties on streaming services, for instance, are notoriously low. “All of my colleagues — composers and arrangers — are seeing huge cuts in their earnings,” says Paul Chihara, a veteran composer who until recently headed UCLA’s film-music program. “In effect, we’re not getting royalties. It’s almost amusing some of the royalty checks I get.” One of the last checks he got was for $29. “And it bounced.”

The pain is especially acute for indie musicians. While some jazz and classical labels are owned by one of the three majors — Blue Note and Deutsche Grammophon, for example, are now part of the Universal Music Group — the vast majority of musicians record for independent labels. And the indies have been largely left out of the sweet deals struck with the streamers. Most of those deals are opaque; the informed speculation says that these arrangements are not good for musicians, especially those not on the few remaining majors.

“Musicians in niche categories need to be fearful of the agreements that labels are signing with streaming services,” says music historian Ted Gioia, who has also recorded as a jazz pianist. Some of these deals, he suspects, allow the streamers to pay nothing at all to some artists, including most who record jazz and classical music. “The record labels could make a case that they don’t need to share royalties with artists whose sales don’t cross a certain threshold. If you’re Lady Gaga or Justin Bieber, you have no problem. But otherwise, you would get no royalties. The nature of these deals is that the rich get richer and the poor get poorer.”

Labels that own substantial back catalog — old Pink Floyd and Eagles albums, and earlier music that no longer requires royalty payments to musicians — have likely cut much better deals than labels that primarily put out new music, especially those in non-pop genres. Says Gioia: “I suspect we’d find agreements where the labels say [to the streamers], ‘You can have our whole catalog for $5 million, plus you pay us a fraction of a penny for any song that streams more than a million times.’” You don’t have to be a conspiracy theorist to think this way: The major labels have a number of weaselly little tricks like this one, sometimes called “digital breakage,” in which musicians get nothing.

Moran compares the appearance of Spotify on the scene to the arrival of Wal-Mart in an American small town: The new model undercuts the existing ones, and helps put smaller, independent stores out of business.

Indie labels are equally vulnerable. Pi Recordings is a jazz label that puts out recordings by the cream of the avant-garde, including Henry Threadgill, Marc Ribot and Rudresh Mahanthappa. It’s been described as one of the rare success stories in a dark time. But Yulun Wang, who co-runs the label, is not sure how they can stand up against the streaming onslaught.

“You have the guy who buys 20 jazz records a year — $300 a year,” Wang says. “He might buy one or two of our albums. If I convert that guy to Spotify, he’s now getting all-you-can-eat for $120. And the proportion that comes to me is literally pennies. That’s when it’s over. That will force labels like ours to change the way we do things significantly.”

The digital enthusiasts say that labels need to “adjust” to the new world – by taking a piece of musicians’ touring, or cutting “360 deals” in which they get part of every strand of an artist’s revenue stream. But for jazz artists, touring outside New York and a few other cities does not yield much. “If I take 15 percent of someone making $30,000, it’s just less money in their pocket.” At a certain point, the artist can no longer pay the rent. “That’s when it’s game over.”

*

But it’s not just a problem of scale. There are distinctive qualities to jazz and classical music that make them a difficult fit for the digital world as it now exists, and that punish musicians and curious fans alike. To Jean Cook, a new-music violinist, onetime Mekon, and director of programs for the Future of Music Coalition, it further marginalizes these already peripheral styles, creating what she calls “invisible genres.”

It doesn’t matter if it’s Spotify, Pandora, iTunes, or Beats Music, she says. “Any music service that’s serving pop and classical music will not serve classical music well.” The problem is the nature of classical music, and jazz as well, and the way they differ from pop music. They all make different use of metadata – a term most people associate with Edward Snowden’s NSA revelations, but one with a profound importance to streaming services. Put most simply: Classical music and jazz are such a mismatch for existing streaming services, it’s almost impossible to find stuff. Cook realized this when she got a recommendation from a music lover, and found herself falling down an online labyrinth trying to find it.

Here’s a good place to start: Say you’re looking for a bedrock recording, the Beethoven Piano Concertos, with titan Maurizio Pollini on piano. Who is the “artist” for this one? Is it the Berlin Philharmonic, or Claudio Abbado, who conducts them? Is it Pollini? Or is it Beethoven himself? If you can see the entire record jacket, you can see who the recording includes. Otherwise, you could find yourself guessing.

Or, if you want music written by the Russian late Romantic, do you want Rachmaninoff, or Rachmaninov? Chances are, your service will have one but not the other. And what do you call the movements of a symphony or chamber piece? By their Roman numeral? Or by names like andante or scherzo?

“These services are built to serve the largest segments of the marketplace — pop, country and hip hop,” says Cook. None of these have this kind of complicated structure.

Jazz offers similar difficulties, she says. Say you want to find recordings by pianist Bill Evans. You can find a bunch of them — but nothing linking him to “Kind of Blue,” perhaps the most important (and, in vinyl and CD form, certainly the bestselling) recording he was ever a part of. Evans shaped that album profoundly. You won’t find John Coltrane — another key voice on that session — there either, since it’s a Miles Davis record.

“Listing sidemen is something that is just not built into the architecture,” says Cook. It’s not a small problem. “I can’t think of a single example of a jazz musician who was not a sideman at one point in their career. We’re talking about a significant portion of jazz history that can’t get out.” It also makes you wonder — what are the chances that sidemen, or their heirs, get paid when things are streamed? And what do potential music consumers do when they can’t find what they’re looking for?
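
Cook’s point about artist fields and sidemen is, at bottom, a data-modeling problem, and it is easy to see in miniature. Below is a minimal sketch, in Python, of the kind of richer track record a classical or jazz catalog needs; the field names and sample entries are illustrative only, not any actual service’s schema.

    # A sketch of the richer metadata a classical or jazz track needs, versus the
    # single "artist" field that pop-oriented services are built around.
    # Field names and sample data are illustrative, not any real service's schema.

    from dataclasses import dataclass, field

    @dataclass
    class Track:
        title: str
        composer: str | None = None      # Beethoven, not whoever performs him
        work: str | None = None          # parent work or album
        movement: str | None = None      # "andante", "scherzo", or a Roman numeral
        conductor: str | None = None
        ensemble: str | None = None
        soloists: list = field(default_factory=list)
        sidemen: list = field(default_factory=list)  # the Bill Evans problem

    emperor = Track(
        title="Piano Concerto No. 5 in E-flat major",
        composer="Ludwig van Beethoven",
        movement="II. Adagio un poco mosso",
        conductor="Claudio Abbado",
        ensemble="Berlin Philharmonic",
        soloists=["Maurizio Pollini"],
    )

    so_what = Track(
        title="So What",
        work="Kind of Blue",
        soloists=["Miles Davis"],
        sidemen=["Bill Evans", "John Coltrane"],
    )

    # A catalog indexed only on a single "artist" string cannot answer
    # "find everything Bill Evans played on"; with sideman credits it can.
    catalog = [emperor, so_what]
    print([t.title for t in catalog if "Bill Evans" in t.soloists + t.sidemen])

Nothing here is exotic. The point is simply that until fields like composer, conductor and sideman exist and are consistently filled in, a service cannot surface this music reliably, and cannot reliably pay the people on the session either.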

There used to be a solution to this. “Go back to the days of record stores,” says Gioia, “and customers could learn a lot from browsing the racks, or asking the serious music fans who worked there.” (Classical record stores, then and now, tended to have their recordings organized by composer rather than group.) The algorithms for specialized genres — classical, reggae, acoustic blues, Brazilian music —are hopeless, he says.

“These days, you have to know exactly what you’re looking for. If you want something by Beyonce or Miley Cyrus, it’s not hard. If you’re interested in niche music, you can be in the position of not knowing what’s out there. I still find myself missing important releases by musicians I care about. Streaming provides access to millions of hours of music, but it’s easy to get lost in it.”

If dedicated fans like Cook and Gioia have these problems, what will happen to the casual or new fans that every genre needs in order to stay alive? They’ll simply drift away to the stuff that’s being beamed at them by advertisers around the clock.

*

Even some of those frightened and demoralized by the digital transition think things can be improved for jazz and classical music.

So far, Wang’s solution has been to drop out. It’s nearly impossible for artists to withdraw, but as a label head, he can pull all of Pi’s music off Spotify. After three or four months on the service, two years back, he received a royalty statement of about $25 for all of it, and decided it just wasn’t worth it.

“What we found when we got out of Spotify — after these dire warnings — was that our sales went up; they absolutely jumped.”

He’s very familiar with the pressure to give art away. “We were always told you need to get as many audiences as possible … With the exposure argument, you’re told, ‘You could become the next Lady Gaga!’ It’s like playing Lotto — buy dollar tickets, and you could hit it big. In jazz, keep buying dollar tickets so you can win a dollar fifty.”

Cook sees the poor fit of these genres to streaming services as part of a larger phenomenon: Their radio playlists don’t show up in Billboard, their ticket receipts and album sales are often not reported to SoundScan and PollStar, and their awards on the Grammys are rarely televised. “This affects the visibility of jazz and classical music, and the way they are viewed by the rest of the industry.”

Part of a solution involves getting the data straight. “There is no database that tells you who played on what recording, and who wrote each song. ASCAP has one piece of the puzzle; iTunes has another. If you’ve got a music service, you need this, because you need to know who to pay. You need to tell listeners who they’re listening to. And if it’s not consistent, it’s not searchable.”

She wonders how it happens, though, even with open-source software that makes it easier. “The classical community needs to say, ‘This is a good index,’ instead of the crap the record labels are sending you. It requires a coordinated effort by a lot of different parties.”

Composer Danielpour says that classical people should not give up on recording work and trying to get on the radio. “Even though radio is a mid-20th century medium, for classical music it’s still a powerful source of revenue,” especially in Europe, where royalties are typically better. He recently returned from a trip to St. Petersburg, Russia. “For European and Russian audiences, classical music is religion. For us in America, it’s entertainment.”

Gioia, a former businessman, is pragmatic and forward looking. “My view is that the only solution for this, that is equitable for everyone, is for the music labels, in partnership with the artists, to control their own streaming,” says Gioia. “They need to bypass Silicon Valley.

“They need to work together with a new model, to control distribution and not rely on Apple, Amazon and everyone else. The music industry has always hated technology — they hated radio when it came out — and have always dragged their feet. They need to embrace technology and do it better.”

 

Scott Timberg, a longtime arts reporter in Los Angeles who has contributed to the New York Times, runs the blog Culture Crash. His book, “Culture Crash: The Killing of the Creative Class” comes out in January. Follow him on Twitter at @TheMisreadCity

http://www.salon.com/2014/07/20/its_not_just_david_byrne_and_radiohead_spotify_pandora_and_how_streaming_music_kills_jazz_and_classical/?source=newsletter

Here are the states where you are most likely to be wiretapped

According to the Administrative Office of the U.S. Courts’ Wiretap Report, here’s where wiretapping occurs the most

 


In terms of wiretapping — with a warrant — it turns out some states use the tactic far more than others.

The Administrative Office of the U.S. Courts released its “Wiretap Report” for the year 2013, and it turns out that Nevada, California, Colorado and New York account for nearly half of all wiretap applications on portable devices in the United States. Add in New Jersey, Georgia and Florida and you have 80 percent of the country’s applications for wiretaps. A chart from Pew Research can be viewed here.

Overall, according to the report, wiretaps were up in 2013:

“The number of federal and state wiretaps reported in 2013 increased 5 percent from 2012. A total of 3,576 wiretaps were reported as authorized in 2013, with 1,476 authorized by federal judges and 2,100 authorized by state judges.”

The report also found that, in terms of federal applications, the Southern District of California was responsible for 8 percent of the applications approved by federal judges — the most of any single district in the country.

Nationwide, Pew Research reports:

“When we factor in population, Nevada leads the nation with 38 mobile wiretaps for every 500,000 people. Most Nevada wiretaps (187) were sought by officials in Clark County, home to Las Vegas; federal prosecutors in the state obtained authorization for 26 more, though only one was actually installed.”
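
Pew’s per-capita figure is easy to sanity-check from the counts it gives. Here is a small sketch, assuming a 2013 Nevada population of roughly 2.79 million (a Census-estimate ballpark that is not in the report itself):

    # Back-of-the-envelope check of Pew's Nevada figure quoted above.
    # Wiretap counts come from the report; the state population is an assumed
    # approximation (~2.79 million in 2013), used only for illustration.

    clark_county_wiretaps = 187
    federal_wiretaps = 26
    nevada_population = 2_790_000  # assumed 2013 estimate

    total = clark_county_wiretaps + federal_wiretaps
    rate_per_500k = total / nevada_population * 500_000
    print(f"~{rate_per_500k:.0f} mobile wiretaps per 500,000 residents")  # prints ~38

That lands on the 38-per-500,000 figure Pew reports, which is what makes Nevada the national outlier.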

The overwhelming majority of the wiretaps, nationwide — 90 percent, according to Pew Research — were requested to monitor drug-related criminal activity. Pew also reported that the wiretaps resulted in 3,744 arrests and 709 convictions.

Most of the wiretaps were for “portable devices” which included mobile phones and digital pagers, according to the report.



The states where no wiretaps were requested include Hawaii, Montana, North Dakota, South Dakota and Vermont.

Of course, the report only highlights wiretaps that require a warrant, and not those done without.

h/t Gizmodo, Pew Research, U.S. Courts

 

http://www.salon.com/2014/07/14/here_are_the_sates_where_you_are_most_likely_to_be_wiretapped/?source=newsletter