A “silent majority” of young people without college degrees and decent jobs are on a downwardly mobile slide.


A Majority of Millennials Don’t Have a College Degree—That’s Going to Cost Everybody


There’s a lot of hoopla in the media about how Millennials are the best-educated generation in history, blah, blah, blah. But according to a Pew survey, that’s a distortion of reality. In fact, two-thirds of Millennials between ages 25 and 32 don’t have a bachelor’s degree. And the gap between how this generation’s college graduates and non-graduates will fare is wider than for any generation in history. Reflecting a trend that has been gaining momentum in the rest of America, Millennials are rapidly being sorted into winners and losers. Most of them are losing. That’s going to cost this generation a lot — and the rest of society, too.

According to Pew, young college graduates are ahead of their less-educated peers on just about every measure of economic well-being and how they are faring in the course of their careers. Their parents’ and grandparents’ generations did not take as big a hit for not going to college, but for Millennials, the blow is severe. Without serious intervention, its effects will be permanent.

Young college grads working full-time are earning an eye-popping $17,500 more per year than those with only a high school diploma. To put this in perspective, in 1979 when the first Baby Boomers were the same age that Millennials are today, a high school graduate could earn around three-quarters (77 percent) of what his or her college-educated peer took in. But Millennials with only a high school diploma earn only 62 percent of what the college grads earn.

According to Pew, young people with a college degree are also more likely to have full-time jobs, much more likely to have a job of any kind, and more likely to believe that their job will lead to a fulfilling career. But 42 percent of those with a high school diploma or less see their work as “just a job to get by.” In stark contrast, only 14 percent of college grads have such a negative assessment of their jobs.

Granted, college is expensive. But nine out of 10 Millennials say it’s worth it — even those who have had to borrow to foot the bill. They seem to have absorbed the fact that in a precarious economy, a college diploma is the bare minimum for security and stability.

Why are those with less education doing so badly? The Great Recession is part of the answer. There has also been a trend in which jobs, when they return after a financial crisis, are worse than those that were lost. After the recession of the 1980s, for example, unionized workers never again found jobs as good as the ones they’d had before the downturn. The same thing has happened this time, only more dramatically. The jobs that are returning are often part-time, underpaid, lacking in benefits and short on opportunities to advance. It’s great to embark on a career as an engineer at Apple; not so great to work in an Apple retail store, where pay is low and the hope for a career is minimal. The Great Recession amplified a trend toward McJobs that had been gaining strength for decades, stoked by the decline of unions, deregulation, outsourcing, and poor corporate governance. These forces have tilted the balance of power so far away from employees that many young people now expect exploitation and poor conditions on the job simply as a matter of course, having no experience of how things could be any different.

All this is not to say that having a college degree gives you a free pass: This generation of college-educated adults is doing slightly worse on certain measures, like the percentage without jobs, than Gen Xers, Baby Boomers or members of the silent generation when they were in their mid-20s and early 30s. But today’s young people who don’t go to college are doing much worse than those in similar situations in the generations that came before.

Poverty is one of the biggest threats to Millennials without college degrees. Nearly a quarter (22 percent) of young people ages 25 to 32 without a college degree live in poverty today, whereas only 6 percent of the college-educated fall into this camp. When Baby Boomers were the same age as today’s Millennials, only 7 percent of those with only a high school diploma were living in poverty.

It’s true that more Millennials than members of past generations have college degrees, and it’s also true that the value of those diplomas has increased. Given those facts, you might think that the Millennial generation would be earning more than earlier generations of young adults did. You would be wrong — and that’s because it’s more costly than ever not to have a college education. The education have-nots are pulling down the average for the whole generation. The typical high school graduate’s earnings dropped by more than $3,000, from $31,384 in 1965 to $28,000 in 2013.

There are also more Millennials without even a high school diploma than in previous generations: Some have taken to calling Millennials “Generation Dropout.” A 2013 article in the Atlantic Monthly noted that, unlike in most other countries, America’s newest wave of workers is actually less educated than their parents, because fewer of them complete high school. A recent program on NPR called the 25- to 32-year-old cohort without college degrees and decent jobs the “Silent Majority.”

In 1965, young college graduates earned $7,499 more than those with a high school diploma. But the earnings gap by educational attainment has steadily widened since then, and today it has more than doubled to $17,500 among Millennials ages 25 to 32.
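As a quick sanity check on the “more than doubled” claim, here is a minimal Python sketch using the two Pew figures quoted above (assuming, as Pew’s reporting implies, that both figures are in comparable, inflation-adjusted dollars):

```python
# Pew's college-vs-high-school earnings premium for 25- to 32-year-olds,
# in inflation-adjusted dollars (assumption: the two figures are comparable).
gap_1965 = 7_499    # Silent Generation young adults
gap_2013 = 17_500   # Millennials

ratio = gap_2013 / gap_1965
print(f"The premium grew {ratio:.2f}x")   # roughly 2.33x
print(gap_2013 > 2 * gap_1965)            # True: "more than doubled"
```

The ratio comes out to about 2.33, so the gap has indeed more than doubled even after adjusting for inflation.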

All of this is alarming because it means that less-educated workers are going to have a really hard time. Millennials with a high school education or less are three times more likely to be jobless than members of the Silent Generation with the same education were at a comparable age.

When you look at the length of time the typical job seeker spends looking for work, less educated Millennials are again faring poorly. In 2013 the average unemployed college-educated Millennial had been looking for work for 27 weeks—more than double the time it took an unemployed college-educated 25- to 32-year-old in 1979 to find a job (12 weeks). And again, today’s young high school graduates do worse on this measure compared to the college-educated or their peers in earlier generations. Millennial high school graduates spend, on average, four weeks longer looking for work than college graduates (31 weeks vs. 27 weeks).

These young people are ending up in dire straits — stuck in debt, unable to set up their own households, and having to put off starting families (recent research shows that many women who face economic hard times in their 20s will never end up having kids). It’s not that they don’t want to grow up, it’s that they don’t have access to the things that make independence possible, like a good education, a good job, a strong social safety net, affordable childcare, and so on.

How much is this going to cost America as a nation? It’s too early to say for sure, but Millennial underemployment, which is directly linked to undereducation, is already costing $25 billion a year, largely because of the lost tax revenue. But what about the other costs? The increased rates of alcoholism and substance abuse? The broken relationships? The depression? The long list of physical ailments that go along with the stress of not being able to gain and keep a financial foothold?

Once upon a time, forward-thinking politicians recognized that young people who have the bad luck to launch into adulthood in the wake of an economic crisis not of their own making need real help: jobs programs, training, and decent working conditions that could improve not only their individual lives but the health of the whole society and economy. We have a blueprint for how to do this from the New Deal. It’s going to cost everyone if America leaves these young people to suffer this cruel fate.

Lynn Parramore is an AlterNet senior editor. She is cofounder of Recessionwire, founding editor of New Deal 2.0, and author of “Reading the Sphinx: Ancient Egypt in Nineteenth-Century Literary Culture.” She received her Ph.D. in English and cultural theory from NYU. She is the director of AlterNet’s New Economic Dialogue Project. Follow her on Twitter @LynnParramore.

http://www.alternet.org/education/surprise-majority-millennials-dont-have-college-degree-thats-going-cost-everybody?akid=12378.265072.6qEBLL&rd=1&src=newsletter1023736&t=7&paging=off&current_page=1#bookmark

Google makes us all dumber

…the neuroscience of search engines

As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions


In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world for every person alive to be given three times as much as was held in Alexandria’s entire collection — and nearly all of it is available to anyone with an Internet connection.

This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent.

Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Google’s head of search, Amit Singhal, asked if people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.”

Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear.

I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

The Internet can make us feel omniscient. But it’s the feeling of not knowing that inspires the desire to learn. The psychologist George Loewenstein gave us the simplest and most powerful definition of curiosity, describing it as the response to an “information gap.” When you know just enough to know that you don’t know everything, you experience the itch to know more. Loewenstein pointed out that a person who knows the capitals of three out of 50 American states is likely to think of herself as knowing something (“I know three state capitals”). But a person who has learned the names of 47 state capitals is likely to think of herself as not knowing three state capitals, and thus more likely to make the effort to learn those other three.



That word “effort” is important. It’s hardly surprising that we love the ease and fluency of the modern web: our brains are designed to avoid anything that seems like hard work. The psychologists Susan Fiske and Shelley Taylor coined the term “cognitive miser” to describe the stinginess with which the brain allocates limited attention, and its in-built propensity to seek mental short-cuts. The easier it is for us to acquire information, however, the less likely it is to stick. Difficulty and frustration — the very friction that Google aims to eliminate — ensure that our brain integrates new information more securely. Robert Bjork, of the University of California, uses the phrase “desirable difficulties” to describe the counterintuitive notion that we learn better when the learning is hard. Bjork recommends, for instance, spacing teaching sessions further apart so that students have to make more effort to recall what they learned last time.

A great question should launch a journey of exploration. Instant answers can leave us idling at base camp. When a question is given time to incubate, it can take us to places we hadn’t planned to visit. Left unanswered, it acts like a searchlight ranging across the landscape of different possibilities, the very consideration of which makes our thinking deeper and broader. Searching for an answer in a printed book is inefficient, and takes longer than in its digital counterpart. But while flicking through those pages your eye may alight on information that you didn’t even know you wanted to know.

The gap between question and answer is where creativity thrives and scientific progress is made. When we celebrate our greatest thinkers, we usually focus on their ingenious answers. But the thinkers themselves tend to see it the other way around. “Looking back,” said Charles Darwin, “I think it was more difficult to see what the problems were than to solve them.” The writer Anton Chekhov declared, “The role of the artist is to ask questions, not answer them.” The very definition of a bad work of art is one that insists on telling its audience the answers, and a scientist who believes she has all the answers is not a scientist.

According to the great physicist James Clerk Maxwell, “thoroughly conscious ignorance is the prelude to every real advance in science.” Good questions induce this state of conscious ignorance, focusing our attention on what we don’t know. The neuroscientist Stuart Firestein teaches a course on ignorance at Columbia University, because, he says, “science produces ignorance at a faster rate than it produces knowledge.” Raising a toast to Einstein, George Bernard Shaw remarked, “Science is always wrong. It never solves a problem without creating ten more.”

Humans are born consciously ignorant. Compared to other mammals, we are pushed out into the world prematurely, and stay dependent on elders for much longer. Endowed with so few answers at birth, children are driven to question everything. In 2007, Michelle Chouinard, a psychology professor at the University of California, analyzed recordings of four children interacting with their respective caregivers for two hours at a time, for a total of more than two hundred hours. She found that, on average, the children posed more than a hundred questions every hour.

Very small children use questions to elicit information — “What is this called?” But as they grow older, their questions become more probing. They start looking for explanations and insight, to ask “Why?” and “How?”. Extrapolating from Chouinard’s data, the Harvard professor Paul Harris estimates that between the ages of 3 and 5, children ask 40,000 such questions. The numbers are impressive, but what’s really amazing is the ability to ask such a question at all. Somehow, children instinctively know there is a vast amount they don’t know, and they need to dig beneath the world of appearances.

In a 1984 study by British researchers Barbara Tizard and Martin Hughes, four-year-old girls were recorded talking to their mothers at home. When the researchers analyzed the tapes, they found that some children asked more “How” and “Why” questions than others, and engaged in longer passages of “intellectual search” — a series of linked questions, each following from the other. (In one such conversation, four-year-old Rosy engaged her mother in a long exchange about why the window cleaner was given money.) The more confident questioners weren’t necessarily the children who got more answers from their parents, but the ones who got more questions. Parents who threw questions back to their children — “I don’t know, what do you think?” — raised children who asked more questions of them. Questioning, it turns out, is contagious.

Childish curiosity only gets us so far, however. To ask good questions, it helps if you have built your own library of answers. It’s been proposed that the Internet relieves us of the onerous burden of memorizing information. Why cram our heads with facts, like the date of the French Revolution, when they can be summoned up in a swipe and a couple of clicks? But knowledge doesn’t just fill the brain up; it makes it work better. To see what I mean, try memorizing the following string of fourteen digits in five seconds:

74830582894062

Hard, isn’t it? Virtually impossible. Now try memorizing this string of letters:

lucy in the sky with diamonds

This time, you barely needed a second. The contrast is so striking that it seems like a completely different problem, but fundamentally, it’s the same. The only difference is that one string of symbols triggers a set of associations with knowledge you have stored deep in your memory. Without thinking, you can group the letters into words, the words into a sentence you understand as grammatical — and the sentence is one you recognize as the title of a song by the Beatles. The knowledge you’ve gathered over years has made your brain’s central processing unit more powerful.
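The chunking effect described above can be made concrete with a toy Python calculation: with no stored associations, every digit is a separate item to hold in mind, while familiar words collapse into a handful of chunks (a rough illustration of the idea, not a model of memory):

```python
digits = "74830582894062"
phrase = "lucy in the sky with diamonds"

# Without prior associations, each digit is its own item to remember.
digit_items = len(digits)           # 14 separate symbols

# Stored knowledge groups letters into words (one chunk each), and the
# whole phrase may collapse further into a single chunk: a Beatles title.
word_items = len(phrase.split())    # 6 chunks

print(digit_items, word_items)      # 14 6
```

Fourteen arbitrary items versus six familiar ones (or, really, just one): that difference is entirely supplied by knowledge already stored in memory.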

This tells us something about the idea we should outsource our memories to the web: it’s a short-cut to stupidity. The less we know, the worse we are at processing new information, and the slower we are to arrive at pertinent inquiry. You’re unlikely to ask a truly penetrating question about the presidency of Richard Nixon if you have just had to look up who he is. According to researchers who study innovation, the average age at which scientists and inventors make breakthroughs is increasing over time. As knowledge accumulates across generations, it takes longer for individuals to acquire it, and thus longer to be in a position to ask the questions which, in Susan Sontag’s phrase, “destroy the answers”.

My argument isn’t with technology, but the way we use it. It’s not that the Internet is making us stupid or incurious. Only we can do that. It’s that we will only realize the potential of technology and humans working together when each is focused on its strengths — and that means we need to consciously cultivate effortful curiosity. Smart machines are taking over more and more of the tasks assumed to be the preserve of humans. But no machine, however sophisticated, can yet be said to be curious. The technology visionary Kevin Kelly succinctly defines the appropriate division of labor: “Machines are for answers; humans are for questions.”

The practice of asking perceptive, informed, curious questions is a cultural habit we should inculcate at every level of society. In school, students are generally expected to answer questions rather than ask them. But educational researchers have found that students learn better when they’re gently directed towards the lacunae in their knowledge, allowing their questions to bubble up through the gaps. Wikipedia and Google are best treated as starting points rather than destinations, and we should recognize that human interaction will always play a vital role in fueling the quest for knowledge. After all, Google never says, “I don’t know — what do you think?”

The Internet has the potential to be the greatest tool for intellectual exploration ever invented, but only if it is treated as a complement to our talent for inquiry rather than a replacement for it. In a world awash in ready-made answers, the ability to pose difficult, even unanswerable questions is more important than ever.

Picasso was half-right: computers are useless without truly curious humans.

Ian Leslie is the author of “Curious: The Desire To Know and Why Your Future Depends On It.” He writes on psychology, trends and politics for The Economist, The Guardian, Slate and Granta. He lives in London. Follow him on Twitter at @mrianleslie.

http://www.salon.com/2014/10/12/google_makes_us_all_dumber_the_neuroscience_of_search_engines/?source=newsletter

Core Secrets: NSA Saboteurs in China and Germany


The National Security Agency has had agents in China, Germany, and South Korea working on programs that use “physical subversion” to infiltrate and compromise networks and devices, according to documents obtained by The Intercept.

The documents, leaked by NSA whistleblower Edward Snowden, also indicate that the agency has used “under cover” operatives to gain access to sensitive data and systems in the global communications industry, and that these secret agents may have even dealt with American firms. The documents describe a range of clandestine field activities that are among the agency’s “core secrets” when it comes to computer network attacks, details of which are apparently shared with only a small number of officials outside the NSA.

“It’s something that many people have been wondering about for a long time,” said Chris Soghoian, principal technologist for the American Civil Liberties Union, after reviewing the documents. “I’ve had conversations with executives at tech companies about this precise thing. How do you know the NSA is not sending people into your data centers?”

Previous disclosures about the NSA’s corporate partnerships have focused largely on U.S. companies providing the agency with vast amounts of customer data, including phone records and email traffic. But documents published today by The Intercept suggest that, even as the agency uses secret operatives to penetrate companies, those companies have also cooperated in undermining the physical infrastructure of the internet more broadly than has been previously confirmed.

In addition to so-called “close access” operations, the NSA’s “core secrets” include the fact that the agency works with U.S. and foreign companies to weaken their encryption systems; the fact that the NSA spends “hundreds of millions of dollars” on technology to defeat commercial encryption; and the fact that the agency works with U.S. and foreign companies to penetrate computer networks, possibly without the knowledge of the host countries. Many of the NSA’s core secrets concern its relationships to domestic and foreign corporations.

Some of the documents in this article appear in a new documentary, CITIZENFOUR, which tells the story of the Snowden disclosures and is directed by Intercept co-founder Laura Poitras. The documents describe a panoply of programs classified with the rare designation of “Exceptionally Compartmented Information,” or ECI, which are only disclosed to a “very select” number of government officials.

Sentry Eagle

The agency’s core secrets are outlined in a 13-page “brief sheet” about Sentry Eagle, an umbrella term that the NSA used to encompass its most sensitive programs “to protect America’s cyberspace.”

“You are being indoctrinated on Sentry Eagle,” the 2004 document begins, before going on to list the most highly classified aspects of its various programs. It warns that the details of the Sentry Eagle programs are to be shared with only a “limited number” of people, and even then only with the approval of one of a handful of senior intelligence officials, including the NSA director.

“The facts contained in this program constitute a combination of the greatest number of highly sensitive facts related to NSA/CSS’s overall cryptologic mission,” the briefing document states. “Unauthorized disclosure…will cause exceptionally grave damage to U.S. national security. The loss of this information could critically compromise highly sensitive cryptologic U.S. and foreign relationships, multi-year past and future NSA investments, and the ability to exploit foreign adversary cyberspace while protecting U.S. cyberspace.”

The document does not provide any details on the identity or number of government officials who were supposed to know about these highly classified programs. Nor is it clear what sort of congressional or judicial oversight, if any, was applied to them. The NSA refused to comment beyond a statement saying, “It should come as no surprise that NSA conducts targeted operations to counter increasingly agile adversaries.” The agency cited Presidential Policy Directive 28, which it claimed “requires signals intelligence policies and practices to take into account the globalization of trade, investment and information flows, and the commitment to an open, interoperable, and secure global Internet.” The NSA, the statement concluded, “values these principles and honors them in the performance of its mission.”

Sentry Eagle includes six programs: Sentry Hawk (for activities involving computer network exploitation, or spying), Sentry Falcon (computer network defense), Sentry Osprey (cooperation with the CIA and other intelligence agencies), Sentry Raven (breaking encryption systems), Sentry Condor (computer network operations and attacks), and Sentry Owl (collaborations with private companies). Though the briefing document is marked as a draft from 2004, it refers to the various programs in language indicating that they were ongoing at the time, and later documents in the Snowden archive confirm that some of the activities were going on as recently as 2012.

TAREX

One of the most interesting components of the “core secrets” involves an array of clandestine activities in the real world by NSA agents working with their colleagues at the CIA, FBI, and Pentagon. The NSA is generally thought of as a spying agency that conducts its espionage from afar—via remote commands, cable taps, and malware implants that are overseen by analysts working at computer terminals. But the agency also participates in a variety of “human intelligence” programs that are grouped under the codename Sentry Osprey. According to the briefing document’s description of Sentry Osprey, the NSA “employs its own HUMINT assets (Target Exploitation—TAREX) to support SIGINT operations.”

According to a 2012 classification guide describing the program, TAREX “conducts worldwide clandestine Signals Intelligence (SIGINT) close-access operations and overt and clandestine Human Intelligence (HUMINT) operations.” The NSA directs and funds the operations and shares authority over them with the Army’s Intelligence and Security Command. The guide states that TAREX personnel are “integrated” into operations conducted by the CIA, FBI, and Defense Intelligence Agency. It adds that TAREX operations include “off net-enabling,” “supply chain-enabling,” and “hardware implant-enabling.”

According to another NSA document, off-net operations are “covert or clandestine field activities,” while supply-chain operations are “interdiction activities that focus on modifying equipment in a target’s supply chain.”

The NSA’s involvement in supply-chain interdiction was previously revealed in No Place to Hide, written by Intercept co-founder Glenn Greenwald. The book included a photograph of intercepted packages being opened by NSA agents, and an accompanying NSA document explained the packages were “redirected to a secret location” where the agents implanted surveillance beacons that secretly communicated with NSA computers. The document did not say how the packages were intercepted and did not suggest, as the new documents do, that interception and implants might be done by clandestine agents in the field.

The TAREX guide lists South Korea, Germany, and Beijing, China as sites where the NSA has deployed a “forward-based TAREX presence;” TAREX personnel also operate at domestic NSA centers in Hawaii, Texas, and Georgia. It also states that TAREX personnel are assigned to U.S. embassies and other “overseas locations,” but does not specify where. The document does not say what the “forward-based” personnel are doing, or how extensive TAREX operations are. But China, South Korea, and Germany are all home to large telecommunications equipment manufacturers, and China is known to be a key target of U.S. intelligence activities.

Although TAREX has existed for decades, until now there has been little information in the public domain about its current scope. A 2010 book by a former Defense Intelligence Agency officer, Lt. Col. Anthony Shaffer, described TAREX operations in Afghanistan as consisting of “small-unit, up-close, intelligence-gathering operatives. Usually two-to-three man units.”

“Under Cover” Agents

The most controversial revelation in Sentry Eagle might be a fleeting reference to the NSA infiltrating clandestine agents into “commercial entities.” The briefing document states that among Sentry Eagle’s most closely guarded components are “facts related to NSA personnel (under cover), operational meetings, specific operations, specific technology, specific locations and covert communications related to SIGINT enabling with specific commercial entities (A/B/C).”

It is not clear whether these “commercial entities” are American or foreign or both. Generally the placeholder “(A/B/C)” is used in the briefing document to refer to American companies, though on one occasion it refers to both American and foreign companies. Foreign companies are referred to with the placeholder “(M/N/O).” The NSA refused to provide any clarification to The Intercept.

The document makes no other reference to NSA agents working under cover. It is not clear whether they might be working as full-time employees at the “commercial entities,” or whether they are visiting commercial facilities under false pretenses. The CIA is known to use agents masquerading as businessmen, and it has used shell companies in the U.S. to disguise its activities.

There is a long history of overt NSA involvement with American companies, especially telecommunications and technology firms. Such firms often have employees with security clearances who openly communicate with intelligence agencies as part of their duties, so that the government receives information from the companies that it is legally entitled to receive, and so that the companies can be alerted to classified cyber threats. Often, such employees have previously worked at the NSA, FBI, or the military.

But the briefing document suggests another category of employees—ones who are secretly working for the NSA without anyone else being aware. This kind of double game, in which the NSA works with and against its corporate partners, already characterizes some of the agency’s work, in which information or concessions that it desires are surreptitiously acquired if corporations will not voluntarily comply. The reference to “under cover” agents jumped out at two security experts who reviewed the NSA documents for The Intercept.

“That one bullet point, it’s really strange,” said Matthew Green, a cryptographer at Johns Hopkins University. “I don’t know how to interpret it.” He added that the cryptography community in America would be surprised and upset if it were the case that “people are inside [an American] company covertly communicating with NSA and they are not known to the company or to their fellow employees.”

The ACLU’s Soghoian said technology executives are already deeply concerned about the prospect of clandestine agents on their payrolls gaining access to highly sensitive data, including encryption keys, which could make the NSA’s work “a lot easier.”

“As more and more communications become encrypted, the attraction for intelligence agencies of stealing an encryption key becomes irresistible,” he said. “It’s such a juicy target.”

Of course the NSA is just one intelligence agency that would stand to benefit from these operations. China’s intelligence establishment is believed to be just as interested in penetrating American companies as the NSA is believed to be interested in penetrating Chinese firms.

“The NSA is a risk [but] I worry a lot more about the Chinese,” said Matthew Prince, chief executive of CloudFlare, a server company. “The insider threat is a huge challenge.” Prince thinks it is unlikely the NSA would place secret agents inside his or other American firms, due to political and legal issues. “I would be surprised if that were the case within any U.S. organization without at least a senior executive like the CEO knowing it was happening,” he said. But he assumes the NSA or CIA are doing precisely that in foreign companies. “I would be more surprised if they didn’t,” he said.

Corporate Partners

The briefing sheet’s description of Sentry Owl indicates the NSA has previously unknown relationships with foreign companies. According to the document, the agency “works with specific foreign partners (X/Y/Z) and foreign commercial industry entities” to make devices and products “exploitable for SIGINT”—a reference to signals intelligence, which is the heart of the NSA’s effort to collect digital communications, such as emails, texts, photos, chats, and phone records. This language clarifies a vague reference to foreign companies that appears in the secret 2013 budget for the intelligence community, key parts of which were published last year from the Snowden archive.

The document does not name any foreign companies or products, and gives no indication of the number or scale of the agency’s ties to them. Previous disclosures from the Snowden archive have exposed the agency’s close relationships with foreign intelligence agencies, but there has been relatively little revealed about the agency gaining the help of foreign companies.

The description of Sentry Hawk, which involves attacks on computer networks, also indicates close ties with foreign as well as American companies. The document states that the NSA “works with U.S. and foreign commercial entities…in the conduct of CNE [Computer Network Exploitation].” Although previous stories from the Snowden archive revealed a wide range of NSA attacks on computer networks, it has been unclear whether those attacks were conducted with the help of “commercial entities”—especially foreign ones. The document does not provide the names of any of these entities or the types of operations.

Green, the cryptography professor, said “it’s a big deal” if the NSA is working with foreign companies on a greater scale than currently understood. Until now, he noted, disclosures about the agency’s corporate relationships have focused on American companies. Those revelations have harmed their credibility, nudging customers to foreign alternatives that were thought to be untouched by the NSA. If foreign companies are also cooperating with the NSA and modifying their products, the options for purchasing truly secure telecommunications hardware are more limited than previously thought.

The briefing sheet does not say whether foreign governments are aware that the NSA may be working with their own companies. If they are not aware, says William Binney, a former NSA crypto-mathematician turned whistleblower, it would mean the NSA is cutting deals behind the backs of friendly and perhaps not-so-friendly governments.

“The idea of having foreign corporations involved without any hint of any foreign government involved is significant,” he said. “It will be an alert to all governments to go check with their companies. Bring them into parliament and put them under oath.”

The description of Sentry Raven, which focuses on encryption, provides additional confirmation that American companies have helped the NSA by secretly weakening encryption products to make them vulnerable to the agency. The briefing sheet states the NSA “works with specific U.S. commercial entities…to modify U.S manufactured encryption systems to make them exploitable for SIGINT.” It doesn’t name the commercial entities or the encryption tools they modified, but it appears to encompass a type of activity that Reuters revealed last year—that the NSA paid $10 million to the security firm RSA to use a weak random number generator in one of its encryption programs.

The avalanche of NSA disclosures since the Snowden leaks began in 2013 has shattered whatever confidence technologists once had about their networks. When asked for comment on the latest documents, Prince, the CEO of CloudFlare, began his response by saying, “We’re hyper-paranoid about everything.”

Domestic Nukes: An Unprecedented Disaster Waiting to Happen

…with Eric Schlosser

October 9, 2014, 12:00 PM

Somehow and someway the United States managed to make it to the year 2014 without getting itself blown up. Despite white-knuckle tension and mass nuclear proliferation during the Cold War, no nuclear detonation has caused mass civilian casualties since 1945. According to investigative journalist Eric Schlosser, such good fortune is nothing more than blind luck. Schlosser is best known as the author of the best-selling books Fast Food Nation and Reefer Madness. His latest, Command and Control, analyzes nuclear weapons and the illusion of their safety. In his recent Big Think interview, Schlosser explains why the American public has no reason to feel safe about how the U.S. manages its nuclear arsenal:

Schlosser’s book, as well as this interview, focuses in particular on a frightening incident that occurred thirty-four years ago in the town of Damascus, Arkansas. Damascus was home to a massive silo housing a ten-story Titan II missile. Atop this missile was the most powerful nuclear warhead the United States ever built. Tread carefully, right?

On September 18, 1980, an airman conducting routine maintenance dropped a socket that fell 80 feet (24 m) down the silo before tearing a hole in the missile’s protective metal skin. This caused a major rocket fuel leak. Rocket fuel is highly flammable. It’s also highly explosive. And thousands of gallons of the stuff were suddenly spilling out into a silo containing an explosive device capable of leveling much of Arkansas.

The rocket exploded within 24 hours (killing one, injuring dozens), but since Arkansas is still on the map you can probably guess that the warhead was kept from detonating. Still, as Schlosser explains, hearing this story for the first time shocked him. How could we have come so close to such a disaster? When he began researching the Damascus incident further, Schlosser found that it wasn’t nearly as isolated an event as he initially suspected:

“The more I learned, the more amazed I was by how many other accidents there had been and how many times the United States came close to losing our own cities as a result of accidents with our own nuclear weapons. So that led me to interview bomber crew members, missile crew members, nuclear weapon designers, nuclear weapon repairmen and to do a lot of searches through the Freedom of Information Act for top secret documents about these nuclear accidents and about safety problems with our weapons.”

Schlosser’s research eventually led to the writing of Command and Control. His findings revealed that the U.S. government routinely lied about, covered up, and underreported accidents involving nuclear devices:

“There was this effort to keep away from the American people the truth about the dangers and the risks of our nuclear arsenal because there was a concern that if the American people really understood some of the risks they wouldn’t support our nuclear weapons policies.”

Now, in the 21st century, Schlosser wants citizens to carry more sway in the national discussion about these weapons. A nuclear detonation on domestic soil would wreak havoc on a scale dwarfing any known natural disaster. With stakes this high and the Cold War long over, is it not time to do away with all the government secrecy?

“All manmade things are fallible and they’re going to be fallible because we’re fallible. It’s impossible for human beings to create anything that’s perfect and that will never go wrong. So the question is how much risk are you willing to accept. And those decisions weren’t made by the American people debating well how much risk are we willing to accept. They were made by Pentagon policy makers acting largely in secret, a small number of people. Eventually they came to the conclusion that the risk of an accidental detonation from a nuclear weapon during an accident should be one in a million. And that’s what they decided was an acceptable risk. Now one in a million sounds like a very unlikely occurrence but one in a million things happen all the time. People who buy lottery tickets and win the lottery are defying odds much greater than one in a million.”

At this rate, Schlosser believes a nuclear disaster is bound to happen sometime. It may be years from now; it may be tomorrow. What is certain is that our sense of safety from nuclear threat is simply that — an illusion. And the greatest threat of a detonation on American soil comes not from Russia or some other outside entity. It comes from within. After all, we’re only ever one lost socket away from catastrophe.

“When nuclear weapons were first being invented this was such a new technology and such a new science they really had no idea what some of the safety implications would be. And one of the themes of my book is that this technology has always from the very beginning been on the verge of slipping out of control… And in the year 2014 there are still all kinds of uncertainties about our ability to control this technology and to be able to prevent catastrophic mistakes and accidents if something goes wrong.”

 

http://bigthink.com/think-tank/domestic-nukes-an-unprecedented-disaster-waiting-to-happen-with-eric-schlosser

DIGITAL MUSIC NEWS

Judge Slashes $48 Million Verdict Against

MP3Tunes Founder Michael Robertson

 

     A federal judge this week slashed record label EMI’s $48 million jury verdict against defunct music storage service MP3Tunes and its founder by about $33 million, ruling many of EMI’s claims were “just too big to succeed” and were backed by very little actual evidence. U.S. District Judge William H. Pauley III tossed out most of the jury’s findings of secondary infringement against MP3Tunes and founder Michael Robertson under the Digital Millennium Copyright Act. The judge also cut common law punitive damages from $7.5 million to $750,000, and additional elements of the ruling could reduce the total amount to just over $10 million.

Earlier this year a Manhattan jury found MP3Tunes and Robertson liable for copyright infringement and awarded $48.1 million in damages. EMI Group Ltd originally contended in its 2007 lawsuit that MP3Tunes and another website known as Sideload.com enabled the infringement of copyrights in sound recordings, musical compositions, and cover art. Since that suit was filed EMI was split up, with Vivendi SA’s Universal Music Group acquiring its recorded music business and a consortium led by Sony Corp purchasing its publishing arm in 2012.

In his ruling, Judge Pauley excoriated attorneys on both sides of the case. Slamming EMI’s lawyers, he wrote, “Despite this Court’s efforts to winnow the issues, the parties insisted on an 82-page verdict sheet on liability and a 331-page verdict sheet on damages that included dense Excel tables, necessitating at least one juror’s use of a magnifying glass. While the jury did its best, their assignment was beyond all reasonable scale.” Judge Pauley then turned his attention to Robertson, noting that he “created a business model designed to operate at the very periphery of copyright law.”

The plaintiffs now can either accept the decision or embark on a new trial on punitive damages, the judge said. He gave both sides until Oct. 17 to submit proposals for a final judgment. [Read more: Global Post, Hollywood Reporter]

Judge Hits Grooveshark In

Federal Copyright Infringement Case

 

     A federal judge in New York this week ruled that Grooveshark, an online music service long vilified by the major record companies, infringed on thousands of their copyrights. Judge Thomas P. Griesa of United States District Court in Manhattan said the digital music platform was liable for copyright infringement because its own employees and officers – including Samuel Tarantino, the chief executive, and Joshua Greenberg, the chief technology officer – uploaded a total of 5,977 of the labels’ songs without permission. Those uploads are not subject to the “safe harbor” provisions of the Digital Millennium Copyright Act.

The case stems from Grooveshark’s claim that the Digital Millennium Copyright Act protects websites that host third-party material (content posted by users and not company employees) if they comply with takedown notices issued by copyright holders. Grooveshark and its parent company, Escape Media Group, insisted in court documents and testimony that all of the music files on its servers had been uploaded by its users.

But Judge Griesa didn’t buy that argument, and on Monday said, “Each time Escape streamed one of plaintiffs’ recordings, it directly infringed upon plaintiffs’ exclusive performance rights.” He also found the company destroyed important evidence in the case, including lists of files that Mr. Greenberg and others uploaded to the service.

As reported by The New York Times, the next step of the case will be to set damages, and the possibility of a multimillion-dollar ruling against Grooveshark puts the service’s future in doubt. When asked for a comment about the summary judgment decision, John J. Rosenberg, a lawyer for Grooveshark, said, “The company respectfully disagrees with the court’s decision and is currently assessing its next steps, including the possibility of an appeal.” 

Judge Rules Expert Testimony In Apple’s

Alleged “Monopoly” Case Can Be Included

 

     Unbelievably, the class action suit that claims Apple Inc. is guilty of monopolistic practices because of an iTunes update continues to move through the court system. According to Courthouse News Service, a federal judge has ruled the Cupertino, CA-based tech giant cannot exclude a key expert for the plaintiffs who are accusing it of monopolizing digital music and music players between 2006 and 2009.

The lawsuit, filed in 2005, alleges Apple illegally acquired a digital music player monopoly with an iTunes update that made it impossible for iPods to play songs purchased from another online music store. As part of their case, the plaintiffs asked Stanford economist Roger Noll to testify that the update made it more costly for an iPod user to switch media players because it would be harder to collect music that could be played on all devices. Noll said the update also encouraged iPod owners to only buy music from iTunes.

The resulting monopoly allowed Apple to charge more for iPods, causing $305 million in damages to the class, Noll told the court. Apple had asked the judge to exclude Noll’s testimony in December 2013, but U.S. District Judge Yvonne Gonzalez Rogers last week denied that motion. She also denied a motion by Apple for summary judgment.

Digital Streaming Revenue Grew In First

Half While Overall Revenues Slipped 4.9%

 

     U.S. music revenues slipped to $3.2 billion in the first half of 2014, a 4.9% drop from the $3.35 billion the industry tallied in the first half of 2013. According to the latest figures released by the Recording Industry Association Of America (RIAA), digital music revenue declined about 0.5% to $2.203 billion, from $2.214 billion in the first half of 2013. Meanwhile, subscription revenue jumped 23.2%, to $371.4 million from $301.4 million, and ad-supported streaming jumped 56.5% to $164.7 million from $105.2 million. CD sales fell 19.1% to $715.6 million from $994.1 million, while the sale of vinyl product – an infinitesimal line item – jumped 41% to $6.5 million, from $4.8 million in the same period last year.

The RIAA says paid subscription services averaged 7.8 million U.S. subscribers in the first six months of the year, up from an average of 5.5 million subscribers in the first half of 2013. Download sales of albums and tracks fell 11.8% to nearly $1.3 billion from $1.47 billion. Distribution of performance royalties collected by SoundExchange grew 21.3% during the same period, from $266.5 million in the first half of 2013 to $323.4 million in H1 2014.

As noted by Billboard, the RIAA for the first time also provided an overall market volume for wholesale. Typically, the RIAA numbers add up the value of units for each album by that album’s list price, not the wholesale price that the labels receive when they ship the albums to retailers. But converting its data to wholesale values for downloads and the physical formats, RIAA estimates the U.S. music marketplace at $2.2 billion, down from $2.3 billion at mid-year 2013.

 

Spanish Broadcasting System, 7digital

Launch Digital Content Partnership

 

     Spanish Broadcasting System has entered into a partnership with 7digital to provide SBS’ LaMusica.com with secure content management technology and a royalty reporting system to support additional music products beyond the site’s current streaming content. LaMusica.com currently streams 14 of the broadcasting company’s Spanish-language radio stations, and also provides a variety of entertainment, news, and cultural offerings leveraged from SBS’ radio network, television, and live entertainment properties.

“We continue to invest in strengthening our LaMusica.com portal and extending the robust content offerings we provide to the nation’s Latino music fans,” SBS Chairman/CEO Raul Alarcón, Jr. said in a statement. “Our agreement with 7digital will provide us with additional tools to maximize the LaMusica.com experience, further building on our momentum as we seek to fully capitalize on our strong media brands and close ties to the vibrant Latino music community.”

“We are pleased to partner with fast-growing entertainment services such as LaMusica.com to enhance the infrastructure that is required to deliver comprehensive and seamless digital entertainment offerings,” Simon Cole, 7digital’s CEO, commented in the same statement. “SBS has an exceptional history in creating top-ranked media brands attracting large and loyal audiences in the nation’s biggest Hispanic media markets, and we look forward to playing a role in expanding LaMusica.com’s operating platform.”

 

Yes, eMusic Is Still Around…And

It’s Returning To Its Indie Roots

 

     For years eMusic – one of the first MP3 download services on the web – positioned itself as specializing in independent label content and, in fact, thrived (somewhat) as a music subscription service, whereby users paid a set fee each month to download a set number of tracks.

Over the years, however, the company gradually aligned itself with the major labels in order to survive, but iTunes and Amazon eventually cornered the mainstream download market, leaving eMusic to languish in the nether regions between the majors and the indies. In fact, most industry execs more or less forgot eMusic still existed, except when it popped up as a sponsor at various industry events.

So imagine the surprise of eMusic’s small but loyal user base this week when they received an email announcing the service was ending its partnerships with the majors and returning to its roots as a hub of indie label content. In fact, the email said that beginning today (Oct. 1, the start of the fourth quarter), eMusic “will be exiting the mainstream music business and exclusively offering independent music. The company’s goal is to build the most extensive catalogue of independent music in the world.” While Complete Music Update calls that an admirable goal, it does raise the question of whether it’s too little, too late, for two reasons: 1) Much of eMusic’s small user base has drifted to the subscription streaming services, and 2) The indie labels that 10 years ago would have applauded this move are now focused on trying to get a piece of that same streaming revenue.

 

A publication of Bunzel Media Resources © 2014

How an Apple mega-deal cost Los Angeles classrooms $1 billion

Rotten to the Core:

Bad business and worse ethics? A scandal is brewing in L.A. over a sketchy initiative to give every student an iPad

 


Technology companies may soon be getting muddied from a long-running scandal at the Los Angeles Unified School District (LAUSD), the nation’s second-largest system. A year after the cash-strapped district signed a $1 billion contract with Apple to purchase iPads for every student, the once-ballyhooed deal has blown up. Now the mess threatens to sully other vendors from Cambridge to Cupertino.

LAUSD superintendent John Deasy is under fire for his cozy connections to Apple. In an effort to deflect attention and perhaps to show that “everybody else is doing it,” he’s demanded the release of all correspondence between his board members and technology vendors. It promises to be some juicy reading. But at its core, the LAUSD fiasco illustrates just how much gold lies beneath even the dirtiest, most neglected public schoolyard.

As the U.S. starts implementing federal Common Core State Standards, teachers and administrators are being driven to adopt technology as never before. That has set off a scramble in Silicon Valley to grab as much of the $9 billion K-12 market as possible, and Apple, Google, Cisco and others are mud-wrestling to seize a part of it. Deasy and the LAUSD have given us ringside seats to this match, which shows just how low companies will go.

When the Apple deal was announced a year ago, it was touted as the largest ever distribution of computing devices to American students. The Los Angeles Times ran a story accompanied by a photograph of an African-American girl and her classmate, who looked absolutely giddy about their new gadgets. Readers responded to the photo’s idealistic promise — that every child in Los Angeles, no matter their race or socioeconomic background, would have access to the latest technology, and Deasy himself pledged “to provide youth in poverty with tools that heretofore only rich kids have had.” Laudable as it was, that sentiment assumed that technology would by itself save our underfunded schools and somehow balance our inequitable society.



When I heard about the deal, I felt a wave of déjà vu. I had sat in a PTA meeting at a public school listening to a similar, albeit much smaller, proposed deal.  An Apple vendor had approached administrators in a Santa Barbara County school, offering to sell us iPads. The pitch was that we could help propel our kids into the technological age so that they’d be better prepared for the world, and maybe land a nice-paying, high-tech job somewhere down the line. Clearly, a school contract would be great for Apple, giving it a captive group of impressionable 11-year-olds it could then mold into lifelong customers.

But parents had to raise a lot of money to seal this deal. “Is Apple giving us a discount?” asked a fellow PTA member. No, we were told. Apple doesn’t give discounts, not even to schools. In the end, we decided to raise funds for an athletics program and some art supplies instead.

To be fair, PTA moms and dads are no match at the bargaining table for the salespeople at major companies like Google and Hewlett-Packard. But the LAUSD, with its $6.8 billion budget, had the brains and muscle necessary to negotiate something valuable for its 655,000 students. That was the hope, at least.

Alas, problems began to appear almost immediately. First, some clever LAUSD students hacked the iPads and deleted security filters so they could roam the Internet freely and watch YouTube videos. Then, about $2 million in iPads and other devices went “missing.” Worse was the discovery that the pricey curriculum software, developed by Pearson Education Corp., wasn’t even complete. And the board looked foolish when it had to pay even more money to buy keyboards for iPads so that students could actually type out their reports.

Then, there was the deal itself. Whereas many companies extend discounts to schools and other nonprofits, Apple usually doesn’t, said George Michaels, executive director of Instructional Development at University of California at Santa Barbara. “Whatever discounts Apple gives are pretty meager.” The Chronicle of Philanthropy has noted Apple’s stingy reputation, and CEO Tim Cook has been trying to change the corporation’s miserly ways by giving $50 million to a local hospital and $50 million to an African nonprofit.

But the more we learned about the Apple “deal,” the more the LAUSD board seemed outmaneuvered. The district had bought iPad 4s, which have since been discontinued, but Apple had locked the district into paying high prices for the old models. LAUSD had not checked with its teachers or students to see what they needed or wanted, and instead had forced its end users to make the iPads work. Apple surely knew that kids needed keyboards to write reports, but sold them just part of what they needed.

Compared with similar contracts signed by other districts, Apple’s deal for Los Angeles students looked crafty, at best. Perris Union High School District in Riverside County, for example, bought Samsung Chromebooks for only $344 per student. And their laptop devices have keyboards and multiple input ports for printers and thumb drives. The smaller Township High School District 214 in Illinois bought old iPad 2s without the pre-loaded, one-size-fits-all curriculum software. Its price: $429 per student.

But LAUSD paid Apple a jaw-dropping $768 per student, and LAUSD parents were not happy. As Manel Saddique wrote on a social media site: “Btw, thanks for charging a public school district more than the regular consumer price per unit, Apple. Keep it classy…”

By spring there was so much criticism about the purchase that the Los Angeles Times filed a request under the California Public Records Act to obtain all emails and records tied to the contract. What emerged was the image of a smoky backroom deal.

Then-Deputy Superintendent Jaime Aquino had once worked at Pearson, the curriculum developer, and knew the players. It turned out that Aquino and Deasy had started talking with Apple and Pearson two years before the contract was approved, and a full year before it was put out to bid. The idea behind a public bidding process is that every vendor is supposed to have the same opportunity to win a job, depending on their products, delivery terms and price. But emails show that Deasy was intent on embracing just one type of device: Apple’s.

Aquino went so far as to appear in a promotional video for iPads months before the contracts were awarded. Dressed in a suit and tie, the school official smiled for the camera as he talked about how Apple’s product would lead to “huge leaps in what’s possible for students” and would “phenomenally . . . change the landscape of education.” If other companies thought they had a shot at nabbing the massive contract from the influential district, this video must have disabused them of that idea.

At one point, Aquino was actually coaching software developer Pearson on what to do: “[M]ake sure that your bid is the lower one,” he wrote. Meanwhile, Deasy was emailing Pearson CEO Marjorie Scardino, and effusively recounting his visit with Apple’s CEO. “I wanted to let you know I had an excellent meeting with Tim at Apple last Friday … The meeting went very well and he was fully committed to being a partner … He was very excited.”

If you step back from the smarmy exchanges, a bigger picture emerges. Yes, LAUSD is grossly mismanaged and maybe even dysfunctional. But corporations like Apple don’t look so good, either. Google, Microsoft, Facebook, Apple, Hewlett Packard — the companies that are cashing in on our classroom crisis are the same ones that helped defund the infrastructure that once made public schools so good. Sheltering billions of dollars from federal taxes may be great for the top 10 percent of Americans, who own 90 percent of the stock in these corporations. But it’s a catastrophe for the teachers, schools and universities that helped develop their technology and gave the companies some of their brightest minds. In the case of LAUSD, Apple comes across as cavalier about the problem it’s helped create for low-income students, and seems more concerned with maximizing its take from the district.

But the worst thing about this scandal is what it’s done to the public trust. The funds for this billion-dollar boondoggle were taken from voter-approved school construction and modernization bonds — bonds that voters thought would be used for physical improvements. At a time when LAUSD schools, like so many across the country, are in desperate need of physical repairs, from corroded gas lines to broken play structures, the Apple deal has cast a shadow over school bonds. Read the popular “Repairs Not iPads” page on Facebook, where parents complain about the lack of air conditioning, librarians and even toilet paper in school bathrooms. Sadly, replacing old fixtures and cheap trailers with new plumbing and classrooms doesn’t carry the same cachet for ambitious school boards as, say, buying half a million electronic tablets. As one mom wrote: “Deasy has done major long-term damage because not one person will ever vote for any future bond measures supporting public schools.”

Now, the Apple deal is off, although millions of dollars have already been spent. An investigation into the bidding process is underway and there are cries to place Deasy in “teacher jail,” a district policy that keeps teachers at home while they’re under investigation. And LAUSD students, who are overwhelmingly Hispanic and African-American, have once again been given the short end of the stick. They were promised the sort of “tools that heretofore only rich kids have had,” and will probably not see them for several years, if ever. The soured Apple deal just adds to the sense of injustice that many of these students already see in the grown-up world.

Deasy contends that he did nothing wrong. In a few weeks, the public official will get his job performance review. In the meantime, he’s called for the release of all emails and documents written between board members and other Silicon Valley and corporate education vendors. The heat in downtown Los Angeles is spreading to Northern California and beyond, posing a huge political problem for not just Deasy but for Cook and other high-tech captains.

But at the bottom of this rush to place technology in every classroom is the nagging feeling that the goal in buying expensive devices is not to improve teachers’ abilities, or to lighten their load. It’s not to create more meaningful learning experiences for students or to lift them out of poverty or neglect. It’s to facilitate more test-making and profit-taking for private industry, and quick, too, before there’s nothing left.

 

Why Facebook, Google, and the NSA Want Computers That Learn Like Humans

Deep learning could transform artificial intelligence. It could also get pretty creepy.

Illustration: Quickhoney

In June 2012, a Google supercomputer made an artificial-intelligence breakthrough: It learned that the internet loves cats. But here’s the remarkable part: It had never been told what a cat looks like. Researchers working on the Google Brain project in the company’s X lab fed 10 million random, unlabeled images from YouTube into their massive network and instructed it to recognize the basic elements of a picture and how they fit together. Left to their own devices, the Brain’s 16,000 central processing units noticed that a lot of the images shared similar characteristics that it eventually recognized as a “cat.” While the Brain’s self-taught knack for kitty spotting was nowhere near as good as a human’s, it was nonetheless a major advance in the exploding field of deep learning.
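For readers curious about the mechanics, the core idea of that experiment, finding categories in data that carries no labels, can be sketched with a much simpler unsupervised method, k-means clustering. This is a toy illustration in plain Python with hypothetical data, not the neural-network approach Google actually used, which operates at vastly larger scale:

```python
# Toy illustration of unsupervised learning via k-means clustering
# (hypothetical 2-D data). No point is ever labeled; the algorithm
# discovers the two groups on its own, loosely analogous to the
# Google Brain discovering a "cat" category in unlabeled frames.

def kmeans(points, centers, iters=10):
    """Repeatedly assign each point to its nearest center, then
    move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(
                range(len(centers)),
                key=lambda i: (p[0] - centers[i][0]) ** 2
                            + (p[1] - centers[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Recompute each center; keep the old one if its cluster is empty.
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Two unlabeled blobs of points; the algorithm is never told which is which.
points = [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0),   # blob near (1, 1)
          (8.9, 9.1), (9.0, 8.8), (9.1, 9.0)]   # blob near (9, 9)
centers, clusters = kmeans(points, centers=[(0.0, 0.0), (5.0, 5.0)])
```

After a few iterations the two centers settle near (1, 1) and (9, 9), recovering the hidden groups from raw coordinates alone; deep-learning systems do something conceptually similar across millions of images and thousands of learned features.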

The dream of a machine that can think and learn like a person has long been the holy grail of computer scientists, sci-fi fans, and futurists alike. Deep learning—algorithms inspired by the human brain and its ability to soak up massive amounts of information and make complex predictions—might be the closest thing yet. Right now, the technology is in its infancy: Much like a baby, the Google Brain taught itself how to recognize cats, but it’s got a long way to go before it can figure out that you’re sad because your tabby died. But it’s just a matter of time. Its potential to revolutionize everything from social networking to surveillance has sent tech companies and defense and intelligence agencies on a deep-learning spending spree.

What really puts deep learning on the cutting edge of artificial intelligence (AI) is that its algorithms can analyze things like human behavior and then make sophisticated predictions. What if a social-networking site could figure out what you’re wearing from your photos and then suggest a new dress? What if your insurance company could diagnose you as diabetic without consulting your doctor? What if a security camera could tell if the person next to you on the subway is carrying a bomb?

And unlike older data-crunching models, deep learning doesn’t slow down as you cram in more info. Just the opposite—it gets even smarter. “Deep learning works better and better as you feed it more data,” explains Andrew Ng, who oversaw the cat experiment as the founder of Google’s deep-learning team. (Ng has since joined the Chinese tech giant Baidu as the head of its Silicon Valley AI team.)

And so the race to build a better virtual brain is on. Microsoft plans to challenge the Google Brain with its own system called Adam. Wired reported that Apple is applying deep learning to build a “neural-net-boosted Siri.” Netflix hopes the technology will improve its movie recommendations. Google, Yahoo, Twitter, and Pinterest have snapped up deep-learning companies; Google has used the technology to read every house number in France in less than an hour. “There’s a big rush because we think there’s going to be a bit of a quantum leap,” says Yann LeCun, a deep-learning pioneer and the head of Facebook’s new AI lab.

Last December, Facebook CEO Mark Zuckerberg appeared, bodyguards in tow, at the Neural Information Processing Systems conference in Lake Tahoe, where insiders discussed how to make computers learn like humans. He has said that his company seeks to “use new approaches in AI to help make sense of all the content that people share.” Facebook researchers have used deep learning to identify individual faces from a giant database called “Labeled Faces in the Wild” with more than 97 percent accuracy. Another project, dubbed PANDA (Pose Aligned Networks for Deep Attribute Modeling), can accurately discern gender, hairstyles, clothing styles, and facial expressions from photos. LeCun says that these types of tools could improve the site’s ability to tag photos, target ads, and determine how people will react to content.

Yet considering recent news that Facebook secretly studied 700,000 users’ emotions by tweaking their feeds or that the National Security Agency harvests 55,000 facial images a day, it’s not hard to imagine how these attempts to better “know” you might veer into creepier territory.

Not surprisingly, deep learning’s potential for analyzing human faces, emotions, and behavior has attracted the attention of national-security types. The Defense Advanced Research Projects Agency has worked with researchers at New York University on a deep-learning program that sought, according to a spokesman, “to distinguish human forms from other objects in battlefield or other military environments.”

Chris Bregler, an NYU computer science professor, is working with the Defense Department to enable surveillance cameras to detect suspicious activity from body language, gestures, and even cultural cues. (Bregler, who grew up near Heidelberg, compares it to his ability to spot German tourists in Manhattan.) His prototype can also determine whether someone is carrying a concealed weapon; in theory, it could analyze a woman’s gait to reveal she is hiding explosives by pretending to be pregnant. He’s also working on an unnamed project funded by “an intelligence agency”—he’s not permitted to say more than that.

And the NSA is sponsoring deep-learning research on language recognition at Johns Hopkins University. Asked whether the agency seeks to use deep learning to track or identify humans, spokeswoman Vanee’ Vines only says that the agency “has a broad interest in deriving knowledge from data.”

Mark Zuckerberg has said that Facebook seeks to “use new approaches in AI to help make sense of all the content that people share.” AP Photo/Ben Margot

Deep learning also has the potential to revolutionize Big Data-driven industries like banking and insurance. Graham Taylor, an assistant professor at the University of Guelph in Ontario, has applied deep-learning models to look beyond credit scores to determine customers’ future value to companies. He acknowledges that these types of applications could upend the way businesses treat their customers: “What if a restaurant was able to predict the amount of your bill, or the probability of you ever returning? What if that affected your wait time? I think there will be many surprises as predictive models become more pervasive.”
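The kind of prediction Taylor describes can be sketched with even the simplest statistical model. The example below is entirely hypothetical (it is not Taylor's work, and the features, data, and coefficients are all invented): a plain logistic regression learns, from made-up records of bill size and visit counts, to estimate the probability a customer returns.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
bill = rng.uniform(10, 100, n)       # last bill, in dollars (synthetic)
visits = rng.integers(1, 20, n)      # past visits (synthetic)
# Invented ground truth: frequent, low-spending customers return more often.
true_logit = 0.3 * visits - 0.03 * bill
returned = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Standardize features so plain gradient descent converges quickly.
feats = np.column_stack([(bill - bill.mean()) / bill.std(),
                         (visits - visits.mean()) / visits.std()])
X = np.column_stack([np.ones(n), feats])   # prepend an intercept column

w = np.zeros(3)
for _ in range(500):                        # gradient descent on log-loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - returned) / n

p = 1 / (1 + np.exp(-X @ w))
accuracy = np.mean((p > 0.5) == returned)
print(f"train accuracy: {accuracy:.2f}, bill weight: {w[1]:.2f}")
```

The fitted weights recover the invented pattern (big bills push the return probability down, frequent visits push it up), which is exactly the sort of inference that becomes unsettling when the inputs are your real purchase history rather than synthetic numbers.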

Privacy experts worry that deep learning could also be used in industries like banking and insurance to discriminate or effectively redline consumers for certain behaviors. Sergey Feldman, a consultant and data scientist with the brand personalization company RichRelevance, imagines a “deep-learning nightmare scenario” in which insurance companies buy your personal information from data brokers and then infer with near-total accuracy that, say, you’re an overweight smoker in the early stages of heart disease. Your monthly premium might suddenly double, and you wouldn’t know why. This would be illegal, but, Feldman says, “don’t expect Congress to protect you against all possible data invasions.”

And what if the computer is wrong? If a deep-learning program predicts that you’re a fraud risk and blacklists you, “there’s no way to contest that determination,” says Chris Calabrese, legislative counsel for privacy issues at the American Civil Liberties Union.

Bregler agrees that there might be privacy issues associated with deep learning, but notes that he tries to mitigate those concerns by consulting with a privacy advocate. Google has reportedly established an ethics committee to address AI issues; a spokesman says its deep-learning research is not primarily about analyzing personal or user-specific data—for now. While LeCun says that Facebook eventually could analyze users’ data to inform targeted advertising, he insists the company won’t share personally identifiable data with advertisers.

“The problem of privacy invasion through computers did not suddenly appear because of AI or deep learning. It’s been around for a long time,” LeCun says. “Deep learning doesn’t change the equation in that sense; it just makes it more immediate.” Big companies like Facebook “thrive on the trust users have in them,” he says, so consumers shouldn’t worry about their personal data being fed into virtual brains. Yet, as he notes, “in the wrong hands, deep learning is just like any new technology.”

Deep learning, which also has been used to model everything from drug side effects to energy demand, could “make our lives much easier,” says Yoshua Bengio, head of the Machine Learning Laboratory at the University of Montreal. For now, it’s still relatively difficult for companies and governments to efficiently sift through all our emails, texts, and photos. But deep learning, he warns, “gives a lot of power to these organizations.”