Journalist Matt Richtel’s ‘Deadly Wandering’ tells a harrowing story of technology’s dangers

By Wallace Baine, Santa Cruz Sentinel

On an early Friday morning in September 2006, a young man named Reggie Shaw climbed into his Chevy Tahoe for his long commute to work in Logan, Utah. Somewhere on a highway east of Logan, with the sky just beginning to lighten, Reggie veered over the yellow line and sideswiped a Saturn coming from the opposite direction. The Saturn spun out and was “T-boned” by a Ford pick-up, killing the two men riding in the Saturn.

From that tragic event comes the story at the center of Matt Richtel’s new book “A Deadly Wandering: A Tale of Tragedy and Redemption in the Age of Attention” (Wm. Morrow).

Reggie Shaw, it was later determined, was texting on his flip phone at the time of the accident, which he initially denied. What followed was the seminal legal case that defined the debate about texting and driving.

Richtel, a reporter at the New York Times, won the Pulitzer Prize in 2010 for his reporting on the risks of distracted driving. In his book, he lays out the narrative of the Shaw case, what happened to Reggie and to the families of the victims, and how the events of that morning led lawmakers to look for a proper legal response to what can be a deadly habit.

At the same time, “Deadly Wandering” probes the neuroscience of distraction and the deep-seated neuro-chemical appeal of our ubiquitous hand-held devices.

“I didn’t want to write a book just about texting and driving,” said Richtel, who comes to Bookshop Santa Cruz to discuss his book Nov. 5. “What we’re talking about here goes well beyond what happens in the car. Why are we checking our devices all the time? Why can’t we stand idly in line at the grocery store, or at a stoplight, or with our homework, or with the spouse that’s sitting right across the table, without feeling that itch to look at our device?”

Chapters on what science is learning about how smartphone and tablet technology are changing our brains are interspersed with the longer story of Reggie Shaw, who later went to jail.

“This is not a screed against technology,” said Richtel of his new book. “It’s a wake-up call to be informed about the neuro-chemical power of these things, in the same way we want to be informed about anything that has lots of power over our lives.”

Research suggests that checking in on your smartphone may release a dose of dopamine, the neurotransmitter that regulates the pleasure centers of the brain. “Ninety-six percent of people say that you shouldn’t text and drive, and yet, 30 percent do it anyway,” said Richtel. “The only other disconnect I can find that is that stark is with cigarettes. Every smoker says it’s bad for you, yet they keep doing it. Why do these devices have such a lure over us?”

Today, Shaw is a crusader against texting while driving. “Deadly Wandering” is an often harrowing chronicle of how Shaw got to the point where he could admit his wrongdoing and atone for causing the death of two fathers and husbands.

“The Reggie story is so compelling because we can connect to him easily,” said Richtel. “The battle that happened after his deadly wreck is a metaphor for our own internal battle about how to pay attention, particularly on the roads.”

This is not, however, a morality tale. Instead of talking about the problem of texting while driving as an issue of responsibility and willpower, Richtel asserts that our powerful and appealing technological devices are changing our behaviors on a neurological level.

“People are getting in their cars every single day, people who are not malicious, who are not bad people, and yet they’re winding up in these deadly wrecks. Driving feels boring a lot of the time. And with every passing moment, we are becoming less tolerant of boredom than we’ve ever been. This thing is constantly beckoning us.”

http://www.mercurynews.com/entertainment/ci_26823138/journalist-matt-richtels-deadly-wandering-tells-harrowing-story?source=rss

 

Assange: Google Is Not What It Seems

When Google Met WikiLeaks

In June 2011, Julian Assange received an unusual visitor: the chairman of Google, Eric Schmidt, arrived from America at Ellingham Hall, the country house in Norfolk, England where Assange was living under house arrest.

For several hours the besieged leader of the world’s most famous insurgent publishing organization and the billionaire head of the world’s largest information empire locked horns. The two men debated the political problems faced by society, and the technological solutions engendered by the global network—from the Arab Spring to Bitcoin.

They outlined radically opposing perspectives: for Assange, the liberating power of the Internet is based on its freedom and statelessness. For Schmidt, emancipation is at one with U.S. foreign policy objectives and is driven by connecting non-Western countries to Western companies and markets. These differences embodied a tug-of-war over the Internet’s future that has only gathered force subsequently.

In this extract from When Google Met WikiLeaks, Assange describes his encounter with Schmidt and how he came to conclude that it was far from an innocent exchange of views.

Eric Schmidt is an influential figure, even among the parade of powerful characters with whom I have had to cross paths since I founded WikiLeaks. In mid-May 2011 I was under house arrest in rural Norfolk, England, about three hours’ drive northeast of London. The crackdown against our work was in full swing and every wasted moment seemed like an eternity. It was hard to get my attention.

But when my colleague Joseph Farrell told me the executive chairman of Google wanted to make an appointment with me, I was listening.

In some ways the higher echelons of Google seemed more distant and obscure to me than the halls of Washington. We had been locking horns with senior U.S. officials for years by that point. The mystique had worn off. But the power centers growing up in Silicon Valley were still opaque and I was suddenly conscious of an opportunity to understand and influence what was becoming the most influential company on earth. Schmidt had taken over as CEO of Google in 2001 and built it into an empire.

I was intrigued that the mountain would come to Muhammad. But it was not until well after Schmidt and his companions had been and gone that I came to understand who had really visited me.

The stated reason for the visit was a book. Schmidt was penning a treatise with Jared Cohen, the director of Google Ideas, an outfit that describes itself as Google’s in-house “think/do tank.”

I knew little else about Cohen at the time. In fact, Cohen had moved to Google from the U.S. State Department in 2010. He had been a fast-talking “Generation Y” ideas man at State under two U.S. administrations, a courtier from the world of policy think tanks and institutes, poached in his early twenties.

He became a senior advisor for Secretaries of State Rice and Clinton. At State, on the Policy Planning Staff, Cohen was soon christened “Condi’s party-starter,” channeling buzzwords from Silicon Valley into U.S. policy circles and producing delightful rhetorical concoctions such as “Public Diplomacy 2.0.” On his Council on Foreign Relations adjunct staff page he listed his expertise as “terrorism; radicalization; impact of connection technologies on 21st century statecraft; Iran.”

It was Cohen who, while he was still at the Department of State, was said to have emailed Twitter CEO Jack Dorsey to delay scheduled maintenance in order to assist the aborted 2009 uprising in Iran. His documented love affair with Google began the same year when he befriended Eric Schmidt as they together surveyed the post-occupation wreckage of Baghdad. Just months later, Schmidt re-created Cohen’s natural habitat within Google itself by engineering a “think/do tank” based in New York and appointing Cohen as its head. Google Ideas was born.

Later that year the two co-wrote a policy piece for the Council on Foreign Relations’ journal Foreign Affairs, praising the reformative potential of Silicon Valley technologies as an instrument of U.S. foreign policy. Describing what they called “coalitions of the connected,” Schmidt and Cohen claimed that:

Democratic states that have built coalitions of their militaries have the capacity to do the same with their connection technologies.…

They offer a new way to exercise the duty to protect citizens around the world [emphasis added].

Schmidt and Cohen said they wanted to interview me. I agreed. A date was set for June.

Executive Chairman of Google Eric Schmidt and Jared Cohen, director of Google Ideas Olivia Harris/Reuters

* * *

By the time June came around there was already a lot to talk about. That summer WikiLeaks was still grinding through the release of U.S. diplomatic cables, publishing thousands of them every week. When, seven months earlier, we had first started releasing the cables, Hillary Clinton had denounced the publication as “an attack on the international community” that would “tear at the fabric” of government.

It was into this ferment that Google projected itself that June, touching down at a London airport and making the long drive up into East Anglia to Norfolk and Beccles.

Schmidt arrived first, accompanied by his then partner, Lisa Shields. When he introduced her as a vice president of the Council on Foreign Relations—a U.S. foreign-policy think tank with close ties to the State Department—I thought little more of it. Shields herself was straight out of Camelot, having been spotted by John Kennedy Jr.’s side back in the early 1990s.

They sat with me and we exchanged pleasantries. They said they had forgotten their Dictaphone, so we used mine. We made an agreement that I would forward them the recording and in exchange they would forward me the transcript, to be corrected for accuracy and clarity. We began. Schmidt plunged in at the deep end, straightaway quizzing me on the organizational and technological underpinnings of WikiLeaks.

* * *

Some time later Jared Cohen arrived. With him was Scott Malcomson, introduced as the book’s editor. Three months after the meeting Malcomson would enter the State Department as the lead speechwriter and principal advisor to Susan Rice (then U.S. ambassador to the United Nations, now national security advisor).

At this point, the delegation was one part Google, three parts U.S. foreign-policy establishment, but I was still none the wiser. Handshakes out of the way, we got down to business.

Schmidt was a good foil. A late-fiftysomething, squint-eyed behind owlish spectacles, managerially dressed—Schmidt’s dour appearance concealed a machinelike analyticity. His questions often skipped to the heart of the matter, betraying a powerful nonverbal structural intelligence.

It was the same intellect that had abstracted software-engineering principles to scale Google into a megacorp, ensuring that the corporate infrastructure always met the rate of growth. This was a person who understood how to build and maintain systems: systems of information and systems of people. My world was new to him, but it was also a world of unfolding human processes, scale and information flows.

For a man of systematic intelligence, Schmidt’s politics—such as I could hear from our discussion—were surprisingly conventional, even banal. He grasped structural relationships quickly, but struggled to verbalize many of them, often shoehorning geopolitical subtleties into Silicon Valley marketese or the ossified State Department micro-language of his companions. He was at his best when he was speaking (perhaps without realizing it) as an engineer, breaking down complexities into their orthogonal components.

I found Cohen a good listener, but a less interesting thinker, possessed of that relentless conviviality that routinely afflicts career generalists and Rhodes Scholars. As you would expect from his foreign-policy background, Cohen had a knowledge of international flash points and conflicts and moved rapidly between them, detailing different scenarios to test my assertions. But it sometimes felt as if he was riffing on orthodoxies in a way that was designed to impress his former colleagues in official Washington.

Malcomson, older, was more pensive, his input thoughtful and generous. Shields was quiet for much of the conversation, taking notes, humoring the bigger egos around the table while she got on with the real work.

As the interviewee, I was expected to do most of the talking. I sought to guide them into my worldview. To their credit, I consider the interview perhaps the best I have given. I was out of my comfort zone and I liked it.

We ate and then took a walk in the grounds, all the while on the record. I asked Eric Schmidt to leak U.S. government information requests to WikiLeaks, and he refused, suddenly nervous, citing the illegality of disclosing Patriot Act requests. And then, as the evening came on, it was done and they were gone, back to the unreal, remote halls of information empire, and I was left to get back to my work.

That was the end of it, or so I thought.

CONTINUED:   http://www.newsweek.com/assange-google-not-what-it-seems-279447?piano_d=1

Leaked documents expose secret contracts between NSA and tech companies

By Thomas Gaist
20 October 2014

Internal National Security Agency documents published by the Intercept earlier this month provide powerful evidence of active collaboration by the large technology corporations with the US government’s worldwide surveillance operations. The documents give a glimpse of efforts by the American state—the scale and complexity of which are astonishing—to penetrate, surveil and manipulate information systems around the world.

Reportedly leaked by whistleblower Edward Snowden, the documents catalogue a dizzying array of clandestine intelligence and surveillance operations run by the NSA, CIA and other US and allied security bureaucracies, including infiltration of undercover agents into corporate entities, offensive cyber-warfare and computer network exploitation (CNE), theoretical and practical aspects of encryption cracking, and supply chain interdiction operations that “focus on modifying equipment in a target’s supply chain.”

The trove of documents, made available in their original form by the Intercept, consists largely of classification rubrics that organize NSA secrets according to a color-coded scale ranging from green (lowest priority secrets), through blue and red, to black (highest priority secrets).

The secret facts organized in the leaked classification guides supply overwhelming evidence that the NSA and Central Security Service (a 25,000-strong agency founded in 1972 as a permanent liaison between the NSA and US military intelligence) rely on cooperative and in some cases contractual relations with US firms to facilitate their global wiretapping and data stockpiling activities.

Blue level facts listed in the documents include:

* “Fact that NSA/CSS works with US industry in the conduct of its cryptologic missions”

* “Fact that NSA/CSS works with US industry as technical advisors regarding cryptologic products”

Red level facts include:

* “Fact that NSA/CSS conducts SIGINT enabling programs and related operations with US industry”

* “Fact that NSA/CSS have FISA operations with US commercial industry elements”

Black level facts include:

* “Fact that NSA/CSS works with and has contractual relationships with specific named US commercial entities to conduct SIGINT [signals intelligence] enabling programs and operations”

* “Fact that NSA/CSS works with specific named US commercial entities to make them exploitable for SIGINT”

* “Facts related to NSA personnel (under cover), operational meetings, specific operations, specific technology, specific locations and covert communications related to SIGINT enabling with specific commercial entities”

* “Facts related to NSA/CSS working with US commercial entities on the acquisition of communications (content and metadata) provided by the US service provider to worldwide customers; communications transiting the US; or access to international communications mediums provided by the US entity”

* “Fact that NSA/CSS injects ‘implants’ into the hardware and software of US companies to enable data siphoning”

Particularly damning are facts reported by a leaked classification schema detailing operation “Exceptionally Controlled Information (ECI) WHIPGENIE,” described in the document’s introduction as covering NSA “Special Source Operations relationships with US Corporate Partners.”

According to the ECI WHIPGENIE document, unnamed “corporate partners” facilitate NSA mass surveillance as part of undisclosed “contractual relations,” through which “NSA and Corporate Partners are involved in SIGINT ‘cooperative efforts.’”

Among the classified TOP SECRET items listed in the ECI WHIPGENIE document is the fact that “NSA and an unnamed Corporate Partner are involved in a ‘cooperative effort’ against cable collection, including domestic wire access collection.”

As part of WHIPGENIE, the document further states, the FBI facilitates NSA partnerships with industry that are both “compelled and cooperative” in nature. In other words, the NSA carries out domestic wiretapping and “cable collection” operations with the cooperation of at least one US corporation.

These revelations are especially significant in light of persistent claims by the major tech and communications corporations that their involvement in the surveillance operations is strictly involuntary in nature.

Last year, a leaked NSA PowerPoint presentation titled “Corporate Partner Access” showed that the volume of data transferred to the agency by Yahoo, Google, and Microsoft during a single 5-week period was sufficient to generate more than 2,000 intelligence reports. The companies all defended their actions by claiming they were forced to furnish data by the government.

Other documents contained in the trove detail the NSA’s development of sophisticated offensive cyber-warfare capabilities targeting the information systems of foreign corporations and governments. These programs highlight the threat of outbreaks of electronic warfare between competing capitalist elites, which could provide the spark for full-fledged shooting wars.

One document, titled “Computer Network Exploitation Classification Guide,” states that NSA, CSS and the NSA’s in-house hacker unit, the so-called Tailored Access Operations (TAO), engage in “remote subversion” as well as “off-net field operations to develop, deploy, exploit or maintain intrusive access.”

Another classification guide, titled “NSA / CSS Target Exploitation Program,” covers target exploitation operations (TAREX), which are said to “provide unique collection of telecommunication and cryptologic-related information and material in direct support of NSA / CSS.”

TAREX also involves “physical subversion,” “close access-enabling exploitation,” and “supply chain enabling,” the document shows, through which the surveillance agencies intervene directly to modify and sabotage the information systems of rival states.

TAREX operations are supported by outposts located in Beijing, China, South Korea, Germany, Washington DC, Hawaii, Texas and Georgia, and TAREX personnel are “integrated into the HUMINT [human intelligence] operations at CIA, DIA/DoD, and/or FBI,” according to the document.

On top of the electronic surveillance, infiltration and cyber-warfare operations themselves, the intelligence establishment has launched a slate of secondary operations designed to protect the secrecy of its various initiatives, as shown in another leaked document, titled “Exceptionally Controlled Information Listing.”

These include:

* AMBULANT, APERIODIC, AUNTIE—“Protect information related to sensitive SIGINT Enabling relationships”

* BOXWOOD—“Protects a sensitive sole source of lucrative communications intelligence emanating from a target”

* CHILLY—“Protects details of NSA association with and active participation in planning and execution of sensitive Integrated Joint Special Technical Operations (IJSTO) offensive Information Warfare strategies”

* EVADEYIELD—“Protects NSA’s capability to exploit voice or telephonic conversations from an extremely sensitive source”

* FORBIDDEN—“Protects information pertaining to joint operations conducted by NSA, GCHQ, CSE, CIA, and FBI against foreign intelligence agents”

* FORBORNE—“Protects the fact that the National Security Agency, GCHQ, and CSE can exploit ciphers used by hostile intelligence services”

* OPALESCE —“Protects Close Access SIGINT collection operations, which require a specialized sensor, positioned in close physical proximity to the target or facility”

* PENDLETON—“Protects NSA’s investment in manpower and resources to acquire our current bottom line capabilities to exploit SIGINT targets by attacking public key cryptography as well as investment in technology”

* PIEDMONT—“Provides protection to NSA’s bottom line capabilities to exploit SIGINT targets by attacking the hard mathematical problems underlying public key cryptography as well as any future technologies as may be developed”

* And others…

The number and character of the NSA’s “protection” programs gives an indication of the scope of its activities.

The latest round of leaked NSA documents underscores the absurdity of proposals aimed at “reforming” and “reining in” the mass surveillance programs, which, propelled by the explosive growth of social inequality and the rise of a criminal financial oligarchy, have enjoyed a tropical flourishing since the 1970s, acquiring an extravagant scale and diversity.

 

http://www.wsws.org/en/articles/2014/10/20/nsad-o20.html

Google makes us all dumber

…the neuroscience of search engines

As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions


In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world for every person alive to be given three times as much as was held in Alexandria’s entire collection —and nearly all of it is available to anyone with an internet connection.

This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent.

Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Google’s head of search, Amit Singhal, asked if people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.”

Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear.

I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

The Internet can make us feel omniscient. But it’s the feeling of not knowing which inspires the desire to learn. The psychologist George Loewenstein gave us the simplest and most powerful definition of curiosity, describing it as the response to an “information gap.” When you know just enough to know that you don’t know everything, you experience the itch to know more. Loewenstein pointed out that a person who knows the capitals of three out of 50 American states is likely to think of herself as knowing something (“I know three state capitals”). But a person who has learned the names of 47 state capitals is likely to think of herself as not knowing three state capitals, and thus more likely to make the effort to learn those other three.



That word “effort” is important. It’s hardly surprising that we love the ease and fluency of the modern web: our brains are designed to avoid anything that seems like hard work. The psychologists Susan Fiske and Shelley Taylor coined the term “cognitive miser” to describe the stinginess with which the brain allocates limited attention, and its in-built propensity to seek mental short-cuts. The easier it is for us to acquire information, however, the less likely it is to stick. Difficulty and frustration — the very friction that Google aims to eliminate — ensure that our brain integrates new information more securely. Robert Bjork, of the University of California, uses the phrase “desirable difficulties” to describe the counterintuitive notion that we learn better when the learning is hard. Bjork recommends, for instance, spacing teaching sessions further apart so that students have to make more effort to recall what they learned last time.

A great question should launch a journey of exploration. Instant answers can leave us idling at base camp. When a question is given time to incubate, it can take us to places we hadn’t planned to visit. Left unanswered, it acts like a searchlight ranging across the landscape of different possibilities, the very consideration of which makes our thinking deeper and broader. Searching for an answer in a printed book is inefficient, and takes longer than in its digital counterpart. But while flicking through those pages your eye may alight on information that you didn’t even know you wanted to know.

The gap between question and answer is where creativity thrives and scientific progress is made. When we celebrate our greatest thinkers, we usually focus on their ingenious answers. But the thinkers themselves tend to see it the other way around. “Looking back,” said Charles Darwin, “I think it was more difficult to see what the problems were than to solve them.” The writer Anton Chekhov declared, “The role of the artist is to ask questions, not answer them.” The very definition of a bad work of art is one that insists on telling its audience the answers, and a scientist who believes she has all the answers is not a scientist.

According to the great physicist James Clerk Maxwell, “thoroughly conscious ignorance is the prelude to every real advance in science.” Good questions induce this state of conscious ignorance, focusing our attention on what we don’t know. The neuroscientist Stuart Firestein teaches a course on ignorance at Columbia University, because, he says, “science produces ignorance at a faster rate than it produces knowledge.” Raising a toast to Einstein, George Bernard Shaw remarked, “Science is always wrong. It never solves a problem without creating ten more.”

Humans are born consciously ignorant. Compared to other mammals, we are pushed out into the world prematurely, and stay dependent on elders for much longer. Endowed with so few answers at birth, children are driven to question everything. In 2007, Michelle Chouinard, a psychology professor at the University of California, analyzed recordings of four children interacting with their respective caregivers for two hours at a time, for a total of more than two hundred hours. She found that, on average, the children posed more than a hundred questions every hour.

Very small children use questions to elicit information — “What is this called?” But as they grow older, their questions become more probing. They start looking for explanations and insight, to ask “Why?” and “How?”. Extrapolating from Chouinard’s data, the Harvard professor Paul Harris estimates that between the ages of 3 and 5, children ask 40,000 such questions. The numbers are impressive, but what’s really amazing is the ability to ask such a question at all. Somehow, children instinctively know there is a vast amount they don’t know, and they need to dig beneath the world of appearances.

In a 1984 study by British researchers Barbara Tizard and Martin Hughes, four-year-old girls were recorded talking to their mothers at home. When the researchers analyzed the tapes, they found that some children asked more “How” and “Why” questions than others, and engaged in longer passages of “intellectual search” — a series of linked questions, each following from the other. (In one such conversation, four-year-old Rosy engaged her mother in a long exchange about why the window cleaner was given money.) The more confident questioners weren’t necessarily the children who got more answers from their parents, but the ones who got more questions. Parents who threw questions back to their children — “I don’t know, what do you think?” — raised children who asked more questions of them. Questioning, it turns out, is contagious.

Childish curiosity only gets us so far, however. To ask good questions, it helps if you have built your own library of answers. It’s been proposed that the Internet relieves us of the onerous burden of memorizing information. Why cram our heads with facts, like the date of the French revolution, when they can be summoned up in a swipe and a couple of clicks? But knowledge doesn’t just fill the brain up; it makes it work better. To see what I mean, try memorizing the following string of fourteen digits in five seconds:

74830582894062

Hard, isn’t it? Virtually impossible. Now try memorizing this string of letters:

lucy in the sky with diamonds

This time, you barely needed a second. The contrast is so striking that it seems like a completely different problem, but fundamentally, it’s the same. The only difference is that one string of symbols triggers a set of associations with knowledge you have stored deep in your memory. Without thinking, you can group the letters into words, the words into a sentence you understand as grammatical — and the sentence is one you recognize as the title of a song by the Beatles. The knowledge you’ve gathered over years has made your brain’s central processing unit more powerful.

This tells us something about the idea we should outsource our memories to the web: it’s a short-cut to stupidity. The less we know, the worse we are at processing new information, and the slower we are to arrive at pertinent inquiry. You’re unlikely to ask a truly penetrating question about the presidency of Richard Nixon if you have just had to look up who he is. According to researchers who study innovation, the average age at which scientists and inventors make breakthroughs is increasing over time. As knowledge accumulates across generations, it takes longer for individuals to acquire it, and thus longer to be in a position to ask the questions which, in Susan Sontag’s phrase, “destroy the answers”.

My argument isn’t with technology, but the way we use it. It’s not that the Internet is making us stupid or incurious. Only we can do that. It’s that we will only realize the potential of technology and humans working together when each is focused on its strengths — and that means we need to consciously cultivate effortful curiosity. Smart machines are taking over more and more of the tasks assumed to be the preserve of humans. But no machine, however sophisticated, can yet be said to be curious. The technology visionary Kevin Kelly succinctly defines the appropriate division of labor: “Machines are for answers; humans are for questions.”

The practice of asking perceptive, informed, curious questions is a cultural habit we should inculcate at every level of society. In school, students are generally expected to answer questions rather than ask them. But educational researchers have found that students learn better when they’re gently directed towards the lacunae in their knowledge, allowing their questions to bubble up through the gaps. Wikipedia and Google are best treated as starting points rather than destinations, and we should recognize that human interaction will always play a vital role in fueling the quest for knowledge. After all, Google never says, “I don’t know — what do you think?”

The Internet has the potential to be the greatest tool for intellectual exploration ever invented, but only if it is treated as a complement to our talent for inquiry rather than a replacement for it. In a world awash in ready-made answers, the ability to pose difficult, even unanswerable questions is more important than ever.

Picasso was half-right: computers are useless without truly curious humans.

Ian Leslie is the author of “Curious: The Desire To Know and Why Your Future Depends On It.” He writes on psychology, trends and politics for The Economist, The Guardian, Slate and Granta. He lives in London. Follow him on Twitter at @mrianleslie.

http://www.salon.com/2014/10/12/google_makes_us_all_dumber_the_neuroscience_of_search_engines/?source=newsletter

Rotten to the Core: How an Apple mega-deal cost Los Angeles classrooms $1 billion

Bad business and worse ethics? A scandal is brewing in L.A. over a sketchy initiative to give every student an iPad

Technology companies may soon be getting muddied by a long-running scandal at the Los Angeles Unified School District (LAUSD), the nation’s second-largest system. A year after the cash-strapped district signed a $1 billion contract with Apple to purchase iPads for every student, the once-ballyhooed deal has blown up. Now the mess threatens to sully other vendors from Cambridge to Cupertino.

LAUSD superintendent John Deasy is under fire for his cozy connections to Apple. In an effort to deflect attention and perhaps to show that “everybody else is doing it,” he’s demanded the release of all correspondence between his board members and technology vendors. It promises to be some juicy reading. But at its core, the LAUSD fiasco illustrates just how much gold lies beneath even the dirtiest, most neglected public schoolyard.

As the U.S. starts implementing federal Common Core State Standards, teachers and administrators are being driven to adopt technology as never before. That has set off a scramble in Silicon Valley to grab as much of the $9 billion K-12 market as possible, and Apple, Google, Cisco and others are mud-wrestling to seize a part of it. Deasy and the LAUSD have given us ringside seats to this match, which shows just how low companies will go.

When the Apple deal was announced a year ago, it was touted as the largest ever distribution of computing devices to American students. The Los Angeles Times ran a story accompanied by a photograph of an African-American girl and her classmate, who looked absolutely giddy about their new gadgets. Readers responded to the photo’s idealistic promise — that every child in Los Angeles, no matter their race or socioeconomic background, would have access to the latest technology, and Deasy himself pledged “to provide youth in poverty with tools that heretofore only rich kids have had.” Laudable as it was, that sentiment assumed that technology would by itself save our underfunded schools and somehow balance our inequitable society.



When I heard about the deal, I felt a wave of déjà vu. I had sat in a PTA meeting at a public school listening to a similar, albeit much smaller, proposed deal.  An Apple vendor had approached administrators in a Santa Barbara County school, offering to sell us iPads. The pitch was that we could help propel our kids into the technological age so that they’d be better prepared for the world, and maybe land a nice-paying, high-tech job somewhere down the line. Clearly, a school contract would be great for Apple, giving it a captive group of impressionable 11-year-olds it could then mold into lifelong customers.

But parents had to raise a lot of money to seal this deal. “Is Apple giving us a discount?” asked a fellow PTA member. No, we were told. Apple doesn’t give discounts, not even to schools. In the end, we decided to raise funds for an athletics program and some art supplies instead.

To be fair, PTA moms and dads are no match at the bargaining table for the salespeople at major companies like Google and Hewlett-Packard. But the LAUSD, with its $6.8 billion budget, had the brains and muscle necessary to negotiate something valuable for its 655,000 students. That was the hope, at least.

Alas, problems began to appear almost immediately. First, some clever LAUSD students hacked the iPads and deleted security filters so they could roam the Internet freely and watch YouTube videos. Then, about $2 million in iPads and other devices went “missing.” Worse was the discovery that the pricey curriculum software, developed by Pearson Education Corp., wasn’t even complete. And the board looked foolish when it had to pay even more money to buy keyboards for iPads so that students could actually type out their reports.

Then, there was the deal itself. Whereas many companies extend discounts to schools and other nonprofits, Apple usually doesn’t, said George Michaels, executive director of Instructional Development at University of California at Santa Barbara. “Whatever discounts Apple gives are pretty meager.” The Chronicle of Philanthropy has noted Apple’s stingy reputation, and CEO Tim Cook has been trying to change the corporation’s miserly ways by giving $50 million to a local hospital and $50 million to an African nonprofit.

But the more we learned about the Apple “deal,” the more the LAUSD board seemed outmaneuvered. The district had bought iPad 4s, which have since been discontinued, but Apple had locked the district into paying high prices for the old models. LAUSD had not checked with its teachers or students to see what they needed or wanted, and instead had forced its end users to make the iPads work. Apple surely knew that kids needed keypads to write reports, but sold them just part of what they needed.

Compared with similar contracts signed by other districts, Apple’s deal for Los Angeles students looked crafty, at best. Perris Union High School District in Riverside County, for example, bought Samsung Chromebooks for only $344 per student. And their laptop devices have keyboards and multiple input ports for printers and thumb drives. The smaller Township High School District 214 in Illinois bought old iPad 2s without the pre-loaded, one-size-fits-all curriculum software. Its price: $429 per student.

But LAUSD paid Apple a jaw-dropping $768 per student, and LAUSD parents were not happy. As Manel Saddique wrote on a social media site: “Btw, thanks for charging a public school district more than the regular consumer price per unit, Apple. Keep it classy…”

By spring there was so much criticism about the purchase that the Los Angeles Times filed a request under the California Public Records Act to obtain all emails and records tied to the contract. What emerged was the image of a smoky backroom deal.

Then-Deputy Superintendent Jaime Aquino had once worked at Pearson, the curriculum developer, and knew the players. It turned out that Aquino and Deasy had started talking with Apple and Pearson two years before the contract was approved, and a full year before it was put out to bid. The idea behind a public bidding process is that every vendor is supposed to have the same opportunity to win a job, depending on their products, delivery terms and price. But emails show that Deasy was intent on embracing just one type of device: Apple’s.

Aquino went so far as to appear in a promotional video for iPads months before the contracts were awarded. Dressed in a suit and tie, the school official smiled for the camera as he talked about how Apple’s product would lead to “huge leaps in what’s possible for students” and would “phenomenally . . . change the landscape of education.” If other companies thought they had a shot at nabbing the massive contract from the influential district, this video must have disabused them of that idea.

At one point, Aquino was actually coaching software developer Pearson on what to do: “[M]ake sure that your bid is the lower one,” he wrote. Meanwhile, Deasy was emailing Pearson CEO Marjorie Scardino, and effusively recounting his visit with Apple’s CEO. “I wanted to let you know I had an excellent meeting with Tim at Apple last Friday … The meeting went very well and he was fully committed to being a partner … He was very excited.”

If you step back from the smarmy exchanges, a bigger picture emerges. Yes, LAUSD is grossly mismanaged and maybe even dysfunctional. But corporations like Apple don’t look so good, either. Google, Microsoft, Facebook, Apple, Hewlett Packard — the companies that are cashing in on our classroom crisis are the same ones that helped defund the infrastructure that once made public schools so good. Sheltering billions of dollars from federal taxes may be great for the top 10 percent of Americans, who own 90 percent of the stock in these corporations. But it’s a catastrophe for the teachers, schools and universities that helped develop their technology and gave the companies some of their brightest minds. In the case of LAUSD, Apple comes across as cavalier about the problem it’s helped create for low-income students, and seems more concerned with maximizing its take from the district.

But the worst thing about this scandal is what it’s done to the public trust. The funds for this billion-dollar boondoggle were taken from voter-approved school construction and modernization bonds — bonds that voters thought would be used for physical improvements. At a time when LAUSD schools, like so many across the country, are in desperate need of physical repairs, from corroded gas lines to broken play structures, the Apple deal has cast a shadow over school bonds. Read the popular “Repairs Not iPads” page on Facebook and parents’ complaints about the lack of air conditioning, librarians and even toilet paper in school bathrooms. Sadly, replacing old fixtures and cheap trailers with new plumbing and classrooms doesn’t carry the kind of cachet for ambitious school boards as does, say, buying half-a-million electronic tablets. As one mom wrote: “Deasy has done major long-term damage because not one person will ever vote for any future bond measures supporting public schools.”

Now, the Apple deal is off, although millions of dollars have already been spent. An investigation into the bidding process is underway and there are cries to place Deasy in “teacher jail,” a district policy that keeps teachers at home while they’re under investigation. And LAUSD students, who are overwhelmingly Hispanic and African-American, have once again been given the short end of the stick. They were promised the sort of “tools that heretofore only rich kids have had,” and will probably not see them for several years, if ever. The soured Apple deal just adds to the sense of injustice that many of these students already see in the grown-up world.

Deasy contends that he did nothing wrong. In a few weeks, the public official will get his job performance review. In the meantime, he’s called for the release of all emails and documents written between board members and other Silicon Valley and corporate education vendors. The heat in downtown Los Angeles is spreading to Northern California and beyond, posing a huge political problem for not just Deasy but for Cook and other high-tech captains.

But at the bottom of this rush to place technology in every classroom is the nagging feeling that the goal in buying expensive devices is not to improve teachers’ abilities, or to lighten their load. It’s not to create more meaningful learning experiences for students or to lift them out of poverty or neglect. It’s to facilitate more test-making and profit-taking for private industry, and quick, too, before there’s nothing left.

 

Why Facebook, Google, and the NSA Want Computers That Learn Like Humans

Deep learning could transform artificial intelligence. It could also get pretty creepy.


In June 2012, a Google supercomputer made an artificial-intelligence breakthrough: It learned that the internet loves cats. But here’s the remarkable part: It had never been told what a cat looks like. Researchers working on the Google Brain project in the company’s X lab fed 10 million random, unlabeled images from YouTube into their massive network and instructed it to recognize the basic elements of a picture and how they fit together. Left to their own devices, the Brain’s 16,000 central processing units noticed that a lot of the images shared similar characteristics, which the network eventually recognized as a “cat.” While the Brain’s self-taught knack for kitty spotting was nowhere near as good as a human’s, it was nonetheless a major advance in the exploding field of deep learning.
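
The cat result is an instance of unsupervised feature learning: the network is shown only raw, unlabeled data and is trained to reconstruct (or predict) its own input, and recognizable features emerge as a by-product. Google has not published the Brain code described above, so the sketch below is only a generic illustration of that idea in Python with PyTorch, with random noise standing in for the YouTube frames.

```python
# Minimal sketch of unsupervised feature learning with a tiny autoencoder.
# This is a generic illustration, not the Google Brain system: the "images"
# here are random noise standing in for unlabeled video frames.
import torch
import torch.nn as nn

patch_dim, hidden_dim = 64, 16                      # e.g. flattened 8x8 patches
unlabeled_patches = torch.randn(1024, patch_dim)    # placeholder for real, unlabeled images

autoencoder = nn.Sequential(
    nn.Linear(patch_dim, hidden_dim), nn.ReLU(),    # encoder: learned features
    nn.Linear(hidden_dim, patch_dim),               # decoder: reconstruction
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    reconstruction = autoencoder(unlabeled_patches)
    loss = loss_fn(reconstruction, unlabeled_patches)   # no labels anywhere
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Trained on real images at vastly larger scale (more data, more layers, more
# processors), the encoder's hidden units come to respond to recurring visual
# patterns, which is roughly how a "cat" detector can emerge unasked.
```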

The dream of a machine that can think and learn like a person has long been the holy grail of computer scientists, sci-fi fans, and futurists alike. Deep learning—algorithms inspired by the human brain and its ability to soak up massive amounts of information and make complex predictions—might be the closest thing yet. Right now, the technology is in its infancy: Much like a baby, the Google Brain taught itself how to recognize cats, but it’s got a long way to go before it can figure out that you’re sad because your tabby died. But it’s just a matter of time. Its potential to revolutionize everything from social networking to surveillance has sent tech companies and defense and intelligence agencies on a deep-learning spending spree.

What really puts deep learning on the cutting edge of artificial intelligence (AI) is that its algorithms can analyze things like human behavior and then make sophisticated predictions. What if a social-networking site could figure out what you’re wearing from your photos and then suggest a new dress? What if your insurance company could diagnose you as diabetic without consulting your doctor? What if a security camera could tell if the person next to you on the subway is carrying a bomb?

And unlike older data-crunching models, deep learning doesn’t slow down as you cram in more info. Just the opposite—it gets even smarter. “Deep learning works better and better as you feed it more data,” explains Andrew Ng, who oversaw the cat experiment as the founder of Google’s deep-learning team. (Ng has since joined the Chinese tech giant Baidu as the head of its Silicon Valley AI team.)

And so the race to build a better virtual brain is on. Microsoft plans to challenge the Google Brain with its own system called Adam. Wired reported that Apple is applying deep learning to build a “neural-net-boosted Siri.” Netflix hopes the technology will improve its movie recommendations. Google, Yahoo, Twitter, and Pinterest have snapped up deep-learning companies; Google has used the technology to read every house number in France in less than an hour. “There’s a big rush because we think there’s going to be a bit of a quantum leap,” says Yann LeCun, a deep-learning pioneer and the head of Facebook’s new AI lab.

Last December, Facebook CEO Mark Zuckerberg appeared, bodyguards in tow, at the Neural Information Processing Systems conference in Lake Tahoe, where insiders discussed how to make computers learn like humans. He has said that his company seeks to “use new approaches in AI to help make sense of all the content that people share.” Facebook researchers have used deep learning to identify individual faces from a giant database called “Labeled Faces in the Wild” with more than 97 percent accuracy. Another project, dubbed PANDA (Pose Aligned Networks for Deep Attribute Modeling), can accurately discern gender, hairstyles, clothing styles, and facial expressions from photos. LeCun says that these types of tools could improve the site’s ability to tag photos, target ads, and determine how people will react to content.

Yet considering recent news that Facebook secretly studied 700,000 users’ emotions by tweaking their feeds or that the National Security Agency harvests 55,000 facial images a day, it’s not hard to imagine how these attempts to better “know” you might veer into creepier territory.

Not surprisingly, deep learning’s potential for analyzing human faces, emotions, and behavior has attracted the attention of national-security types. The Defense Advanced Research Projects Agency has worked with researchers at New York University on a deep-learning program that sought, according to a spokesman, “to distinguish human forms from other objects in battlefield or other military environments.”

Chris Bregler, an NYU computer science professor, is working with the Defense Department to enable surveillance cameras to detect suspicious activity from body language, gestures, and even cultural cues. (Bregler, who grew up near Heidelberg, compares it to his ability to spot German tourists in Manhattan.) His prototype can also determine whether someone is carrying a concealed weapon; in theory, it could analyze a woman’s gait to reveal she is hiding explosives by pretending to be pregnant. He’s also working on an unnamed project funded by “an intelligence agency”—he’s not permitted to say more than that.

And the NSA is sponsoring deep-learning research on language recognition at Johns Hopkins University. Asked whether the agency seeks to use deep learning to track or identify humans, spokeswoman Vanee’ Vines only says that the agency “has a broad interest in deriving knowledge from data.”


Deep learning also has the potential to revolutionize Big Data-driven industries like banking and insurance. Graham Taylor, an assistant professor at the University of Guelph in Ontario, has applied deep-learning models to look beyond credit scores to determine customers’ future value to companies. He acknowledges that these types of applications could upend the way businesses treat their customers: “What if a restaurant was able to predict the amount of your bill, or the probability of you ever returning? What if that affected your wait time? I think there will be many surprises as predictive models become more pervasive.”

Privacy experts worry that deep learning could also be used in industries like banking and insurance to discriminate or effectively redline consumers for certain behaviors. Sergey Feldman, a consultant and data scientist with the brand personalization company RichRelevance, imagines a “deep-learning nightmare scenario” in which insurance companies buy your personal information from data brokers and then infer with near-total accuracy that, say, you’re an overweight smoker in the early stages of heart disease. Your monthly premium might suddenly double, and you wouldn’t know why. This would be illegal, but, Feldman says, “don’t expect Congress to protect you against all possible data invasions.”

And what if the computer is wrong? If a deep-learning program predicts that you’re a fraud risk and blacklists you, “there’s no way to contest that determination,” says Chris Calabrese, legislative counsel for privacy issues at the American Civil Liberties Union.

Bregler agrees that there might be privacy issues associated with deep learning, but notes that he tries to mitigate those concerns by consulting with a privacy advocate. Google has reportedly established an ethics committee to address AI issues; a spokesman says its deep-learning research is not primarily about analyzing personal or user-specific data—for now. While LeCun says that Facebook eventually could analyze users’ data to inform targeted advertising, he insists the company won’t share personally identifiable data with advertisers.

“The problem of privacy invasion through computers did not suddenly appear because of AI or deep learning. It’s been around for a long time,” LeCun says. “Deep learning doesn’t change the equation in that sense, it just makes it more immediate.” Big companies like Facebook “thrive on the trust users have in them,” so consumers shouldn’t worry about their personal data being fed into virtual brains. Yet, as he notes, “in the wrong hands, deep learning is just like any new technology.”

Deep learning, which also has been used to model everything from drug side effects to energy demand, could “make our lives much easier,” says Yoshua Bengio, head of the Machine Learning Laboratory at the University of Montreal. For now, it’s still relatively difficult for companies and governments to efficiently sift through all our emails, texts, and photos. But deep learning, he warns, “gives a lot of power to these organizations.”

Are Apple and Google Really on Your Side Against the NSA?


The tech giants’ tacit message: Open your wallet for the latest gadget and you’ll be safe from Big Brother.

In the past couple of days both Google[i] and Apple[ii] have announced that they’re enabling default encryption on their mobile devices so that only the user possessing a device’s password can access its local files. The public relations team at Apple makes the following claim:

“Apple cannot bypass your passcode and therefore cannot access this data… So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8”

The marketing drones at Google issued a similar talking point:

“For over three years Android has offered encryption, and keys are not stored off of the device, so they cannot be shared with law enforcement… As part of our next Android release, encryption will be enabled by default out of the box, so you won’t even have to think about turning it on.”
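
Neither statement spells out the mechanics, but the general idea behind passcode-bound encryption is that the key protecting local files is derived on the device from the user’s passcode (mixed with device-local secrets), so there is no usable key stored off the device to hand over. What follows is only a rough, illustrative sketch of that idea in Python, using the standard library’s PBKDF2 and the third-party cryptography package; it is not Apple’s or Google’s actual implementation, and real devices add hardware-fused keys, secure enclaves, and rate limiting.

```python
# Illustrative sketch of passcode-bound local encryption (not Apple's or
# Google's real scheme). Without the passcode, no decryption key exists
# anywhere, so there is nothing useful for the vendor to hand over.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passcode: str, device_salt: bytes) -> bytes:
    # PBKDF2 stretches a short passcode into a 256-bit AES key; real devices
    # also mix in a secret fused into the hardware.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_salt, 1_000_000)

def encrypt_blob(passcode: str, device_salt: bytes, plaintext: bytes) -> bytes:
    key = derive_key(passcode, device_salt)
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_blob(passcode: str, device_salt: bytes, blob: bytes) -> bytes:
    key = derive_key(passcode, device_salt)
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)   # wrong passcode -> exception

salt = os.urandom(16)                                     # stored on the device
blob = encrypt_blob("123456", salt, b"local photos, messages, ...")
assert decrypt_blob("123456", salt, blob) == b"local photos, messages, ..."
```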

Quite a sales pitch? Cleverly disguised as a news report, no less. Though it’s not stated outright, the tacit message is: open your wallet for the latest gadget and you’ll be safe from Big Brother. Sadly, to a large degree this perception of warrant protection is the product of Security Theater aimed at rubes and shareholders. The anti-surveillance narrative being dispensed neglects the myriad ways in which such device-level encryption can be overcome. A list of such techniques has been enumerated by John Young, the architect who maintains the Cryptome leak site[iii]. Young asks readers why they should trust hi-tech’s sales pitch and subsequently presents a series of barbed responses. For example:

Because they can’t covertly send your device updated software [malware] that would change all these promises, for a targeted individual, or on a mass basis?

Because this first release of their encryption software has no security bugs, so you will never need to upgrade it to retain your privacy?

Because the US export control bureaucracy would never try to stop Apple from selling secure mass market proprietary encryption products across the border?

Because the countries that wouldn’t let Blackberry sell phones that communicate securely with your own corporate servers, will of course let Apple sell whatever high security non-tappable devices it wants to?

Because they want to help the terrorists win?

Because it’s always better to wiretap people after you convince them that they are perfectly secure, so they’ll spill all their best secrets?

Another thing to keep in mind is that local device encryption is just that. Local. As Bruce Schneier points out, this tactic does little to protect user data that’s stored in the cloud[iv]. When push comes to shove executives will still be able to hand over anything that resides on corporate servers.

Marketing spokesmen are eager to create the impression that companies are siding with users in the struggle against mass surveillance (never mind the prolific corporate data mining[v]). Especially after business leaders denied participating in the NSA’s PRISM program. Yet the appearance of standing up to government surveillance is often a clever ploy to sell you stuff, a branding mechanism. It’s important to recognize that Internet companies, especially billion dollar hi-tech multinationals like Yahoo[vi] and Cisco[vii], exist to generate revenue and have clearly demonstrated the tendency to choose profits over human rights when it’s expedient.

End Notes

[i] Craig Timberg, “Newest Androids will join iPhones in offering default encryption, blocking police,” Washington Post, September 18, 2014.

[ii] Craig Timberg, “Apple will no longer unlock most iPhones, iPads for police, even with search warrants,” Washington Post, September 18, 2014.

[iii] John Young, “Apple Wiretap Disbelief,” Cryptome, September 19, 2014.

[iv] Bruce Schneier, “iOS 8 Security,” Schneier on Security, September 19, 2014.

[v] Bill Blunden, “The NSA’s Corporate Collaborators,” Counterpunch, May 9-11, 2014.

[vi] Bill Blunden, “Corporate Executives Couldn’t Care Less About Civil Liberties,” Counterpunch, September 15, 2014.

[vii] Cindy Cohn and Rainey Reitman, “Court Lets Cisco Systems Off the Hook for Helping China Detain, Torture Religious Minorities,” Electronic Frontier Foundation, September 19, 2014.

Bill Blunden is an independent investigator whose current areas of inquiry include information security, anti-forensics, and institutional analysis.

http://www.alternet.org/civil-liberties/are-apple-and-google-really-your-side-against-nsa?akid=12281.265072.xUwZXM&rd=1&src=newsletter1020319&t=19&paging=off&current_page=1#bookmark
