DIGITAL MUSIC NEWS

Record Label Coalition Asks Appeals Court To Uphold Verdict Against Vimeo

 

Emboldened by last month’s ruling that SiriusXM was financially liable for playing music recorded prior to 1972, a coalition of record labels is now asking the 2nd Circuit Court of Appeals to uphold a ruling issued earlier this year by U.S. District Court Judge Ronnie Abrams in New York. In that decision, Judge Abrams said online music service Vimeo was ineligible for the Digital Millennium Copyright Act’s safe harbor protections for any user-uploaded clips containing pre-1972 music. As reported by MediaPost, the labels this week filed papers asking the 2nd Circuit to uphold Abrams’ ruling, arguing that the “plain language” of the Copyright Act supports the idea that Congress intended for pre-1972 music to be treated differently from music recorded after that date.

Vimeo is appealing Abrams’ ruling, arguing that it has no practical way to distinguish pre-1972 recordings from newer ones. Moreover, the company says the Digital Millennium Copyright Act contains safe harbor provisions that give online companies immunity from infringement liability for material uploaded by users, as long as the companies meet certain requirements – including that they remove infringing material upon request of the copyright owner. However, the Copyright Act of 1976 – which overhauled U.S. copyright law – says it doesn’t annul any “common law” rights that existed before Feb. 15, 1972. Thus, the labels insist the provision preserving pre-1972 common-law rights means that the DMCA’s safe harbors don’t apply to older music.

Different judges have reached different conclusions about this issue, and a number of Silicon Valley companies, including Google, Facebook, and Twitter, have supported Vimeo in the dispute. Additionally, digital rights groups including the Electronic Frontier Foundation have filed papers supporting Vimeo. In a friend-of-the-court brief, the EFF said treating pre-1972 and post-1972 music differently would “create an impossible burden for service providers and would stifle innovation.” 

Apple Says iTunes Store Revenue Grew, But Overall Music Sales Dropped

 

In an 88-page annual report filed Monday with the Securities and Exchange Commission, Apple Inc. revealed the iTunes Store overall took in more revenue in its 2014 fiscal year – which ended September 27 – than it did the year before, despite the fact that music sales have fallen. “The iTunes Store generated a total of $10.2 billion in net sales during 2014 compared to $9.3 billion during 2013,” Apple said. “Growth in net sales from the iTunes Store was driven by increases in revenue from app sales, reflecting continued growth in the installed base of iOS devices and the expanded offerings of iOS Apps and related in-App purchases. This was partially offset by a decline in sales of digital music.”

As noted by CNET, Apple didn’t give specifics on how much digital music sales have declined, but the Wall Street Journal last week cited “people familiar with the matter” who said such sales have skidded 13% to 14% since January 1. In a statement that seems to support this estimate, Apple said, “The company’s digital content services have faced significant competition from other companies promoting their own digital music and content products and services, including those offering free peer-to-peer music and video services.” The Journal pinned the blame on growing competition from cheap music sources, such as free videos and $10-per-month unlimited music subscription plans.

A drop in digital music sales is having an impact on global music sales, as listeners to streaming services are buying fewer digital albums and tracks. Apple took steps earlier this year to counteract this shift by acquiring Beats Music, which the company hopes will help it regain prominence as the #1 source for digital music. Rebranding the Beats service and integrating it with iTunes should help do this.

Music Fans Will Come Back To Apple For These 3 Reasons

 

Apple is in danger of losing its music industry dominance. That’s the hypothesis of the analysts at Tech Cheat Sheet, who this week reiterated that the digital music downloads market has been in a steady decline over the past year, as more consumers shift from buying digital music files to subscribing to streaming music services. Citing Nielsen’s and Billboard‘s 2014 Mid-Year Music Industry Report, they noted that individual digital track sales and digital album sales fell by 13% and 11.6%, respectively, in the first six months of 2014 vs. the same period last year. At the same time, on-demand audio streaming grew by 50.1%.

Although Apple has been able to buck trends in the wider smartphone and PC markets, a recent report from The Wall Street Journal suggested the company has not been as fortunate in the digital music download market. Still, the company is not about to surrender its dominance of the retail music market without a fight, and the Cheat Sheet gurus offer three steps Apple is taking to regain its prominence in the digital music universe:

  1. Integrating Beats Music into iTunes: It appears Apple has plans to fully integrate the music streaming service into its iTunes Radio service; rebranding the newly acquired subscription-based music streaming platform with the iTunes label may help the company garner more users who are familiar with Apple’s iconic brand.
  2. Undercutting the competition: Apple is pushing record labels to give it a discount rate that would allow it to offer Beats Music subscriptions for only $5 per month, instead of the $10-per-month standard charged by most other competitors.
  3. Introducing a new digital music format: There are numerous indications that Apple is working on a digital music format that, according to U2 frontman Bono, “will prove so irresistibly exciting to music fans that it will tempt them again into buying music – whole albums as well as individual tracks.”

New Tidal Streaming Platform To Compete Against Spotify With “Lossless” Digital Music

 

In a gamble (of sorts) that pits quality against quantity, a new streaming music service known as Tidal launched in the U.S. and U.K. this week in an effort to compete directly with such online platforms as Spotify, Deezer, and Beats Music. Developed by Scandinavian technology company Aspiro, Tidal’s monthly subscription is twice the price of most of those rivals – $19.99 vs. $9.99 – with the firm hoping that its promise of “hifi-quality” music induces music fans to pay the extra cost. The service will stream tracks at “lossless” quality – FLAC/ALAC 44.1 kHz/16-bit files at 1411 kbps, to be specific – with distribution partnerships already signed with a range of hi-fi manufacturers that include Sonos, Denon, and Harman.
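
For readers who want to check the arithmetic, the quoted 1411 kbps is simply the raw bit rate of uncompressed CD-quality stereo PCM, which is what the FLAC/ALAC streams decode to (FLAC itself compresses below this figure in transit). A minimal illustrative calculation in Python:

    # Raw bit rate of CD-quality stereo PCM: 44.1 kHz sampling x 16 bits x 2 channels.
    # Lossless FLAC/ALAC streams decode to this rate, though the compressed
    # stream itself travels at a lower bit rate.
    sample_rate_hz = 44_100
    bits_per_sample = 16
    channels = 2

    bitrate_kbps = sample_rate_hz * bits_per_sample * channels / 1000
    print(f"{bitrate_kbps:.0f} kbps")  # prints "1411 kbps", matching the spec above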

As reported by The Guardian, Tidal is betting on more than just high-quality audio. It has 25 million individual tracks in its library, as well as 75,000 music videos and a team of editors writing features and interviews about established and emerging artists. “The music is just one part of the service,” Andy Chen, Tidal’s chief executive, said in a statement. “The expert editorial educates, entertains, and enriches the music experience while the music videos complement the music perfectly. We are sure that Tidal will quickly become the music streaming service of choice for all who appreciate high quality at every level.”

Unlike other subscription-based services, Tidal is not offering a separate “introductory” version supported by advertising. Parent company Aspiro has signed licensing deals with all three major labels, as well as independent labels and collection societies in the U.S. and U.K. According to company information, Tidal’s roots lie in WiMP, the Aspiro-owned service that is a rival to Spotify in Scandinavia, and which has its own double-price WiMP HiFi tier offering lossless-quality streams. At the end of June 2014, WiMP had 580,000 paying users, including 17,000 signed up to its HiFi version. 

Motley Fool: With Audio Cards, Twitter May Have “Done Music Right”

 

It’s been a tough week so far for Twitter, which reported on Monday that its usage numbers stalled in the third quarter and growth in new users slowed dramatically. This news didn’t keep the Motley Fool from noting that the social media giant finally may have gotten its music platform moving in the right direction, following last year’s #Music debacle, an app that was shut down less than 12 months after launch.

As the Fool reported, Twitter is giving music a second chance with its rollout of Audio Cards, which were developed to fix the numerous problems users had with #Music. Audio Cards let users share music simply by pressing a play button. Not only can they launch a song in a new window, but they can “dock” it so they can continue browsing their timeline. To do this, Twitter enlisted the help of SoundCloud, which is highly popular with social media users and has been an acquisition target in the past. With a new round of debt funding, Twitter may make another go at SoundCloud soon, Motley Fool says.

Twitter also is partnering with Apple, allowing any song on iTunes to stream through Twitter, and giving users the choice to easily purchase downloads through their timeline with just a few taps. Apple was quick to join up with Audio Cards as a way to help boost activity in its iTunes store, and the company is hoping its early adoption helps spur digital music sales.

New Microsoft “Music Deals” App Offers Select Albums At Heavy Discounts

 

Windows Phone, PC, and tablet users can look forward to scoring some new music on the cheap, as Microsoft has unveiled an app that offers albums for as little as $0.99. Silicon Republic reports the Microsoft Music Deals app offers 101 albums every Tuesday, with newer records available for $0.99 and older LPs costing $1.99. For instance, some of the albums available for next to nothing this week are Slipknot’s latest album .5: The Gray Chapter, Maroon 5’s V, Prince’s Purple Rain, and Fleetwood Mac’s Rumours.

Once an album is purchased via the app, it’s added to a user’s Xbox Music account, where it can be downloaded for listening. The app is only available within the U.S., with no announced plans for a worldwide rollout.

The news follows recent reports of Apple re-branding its Beats Music platform and integrating it with iTunes Radio, with new licensing arrangements that allow it to undercut the competition. (See story, above.) If this is the case and Apple is able to lower the cost of music streaming (and/or downloads), it would seem the two tech giants might become engaged in a digital music price war.

 

A publication of Bunzel Media Resources © 2014

 

Journalist Matt Richtel’s ‘Deadly Wandering’ tells a harrowing story of technology’s dangers

By Wallace Baine, Santa Cruz Sentinel

On an early Friday morning in September 2006, a young man named Reggie Shaw climbed into his Chevy Tahoe for his long commute to work in Logan, Utah. Somewhere on a highway east of Logan, with the sky just beginning to lighten, Reggie veered over the yellow line and sideswiped a Saturn coming from the opposite direction. The Saturn spun out and was “T-boned” by a Ford pick-up, killing the two men riding in the Saturn.

From that tragic event comes the story at the center of Matt Richtel’s new book “A Deadly Wandering: A Tale of Tragedy and Redemption in the Age of Attention” (Wm. Morrow).

Reggie Shaw, it was later determined, was texting on his flip phone at the time of the accident, which he initially denied. What followed was the seminal legal case that defined the debate about texting and driving.

Richtel, a reporter at the New York Times, won the Pulitzer Prize in 2010 for his reporting on the risks of distracted driving. In his book, he lays out the narrative of the Shaw case, what happened to Reggie and to the families of the victims, and how the events of that morning led lawmakers to look for a proper legal response to what can be a deadly habit.

At the same time, “Deadly Wandering” probes into the neuroscience of distraction, and the deep-seated neuro-chemical appeal of our ubiquitous hand-held devices.

“I didn’t want to write a book just about texting and driving,” said Richtel, who comes to Bookshop Santa Cruz to discuss his book Nov. 5. “What we’re talking about here goes well beyond what happens in the car. Why are we checking our devices all the time? Why can’t we stand idly in line at the grocery store, or at a stoplight, or with our homework, or with the spouse sitting right across the table, without feeling that itch to look at our device?”

Chapters on what science is learning about how smartphone and tablet technology are changing our brains are interspersed with the longer story of Reggie Shaw, who later went to jail.

“This is not a screed against technology,” said Richtel of his new book. “It’s a wake-up call to be informed about the neuro-chemical power of these things, in the same way we want to be informed about anything that has lots of power over our lives.”

Research suggests that checking in on your smartphone may release a dose of dopamine, the neurotransmitter that regulates the pleasure centers of the brain. “Ninety-six percent of people say that you shouldn’t text and drive, and yet, 30 percent do it anyway,” said Richtel. “The only other disconnect I can find that is that stark is with cigarettes. Every smoker says it’s bad for you, yet they keep doing it. Why do these devices have such a lure over us?”

Today, Shaw is a crusader against texting while driving. “Deadly Wandering” is an often harrowing chronicle of how Shaw got to the point where he could admit his wrongdoing and atone for causing the death of two fathers and husbands.

“The Reggie story is so compelling because we can connect to him easily,” said Richtel. “The battle that happened after his deadly wreck is a metaphor for our own internal battle about how to pay attention, particularly on the roads.”

This is not, however, a morality tale. Instead of talking about the problem of texting while driving as an issue of responsibility and willpower, Richtel asserts that our powerful and appealing technological devices are changing our behaviors on a neurological level.

“People are getting in their cars every single day, people who are not malicious, who are not bad people, and yet they’re winding up in these deadly wrecks. Driving feels boring a lot of the time. And with every passing moment, we are becoming less tolerant of boredom than we’ve ever been. This thing is constantly beckoning us.”

http://www.mercurynews.com/entertainment/ci_26823138/journalist-matt-richtels-deadly-wandering-tells-harrowing-story?source=rss

 

Assange: Google Is Not What It Seems

When Google Met WikiLeaks

In June 2011, Julian Assange received an unusual visitor: the chairman of Google, Eric Schmidt, arrived from America at Ellingham Hall, the country house in Norfolk, England, where Assange was living under house arrest.

For several hours the besieged leader of the world’s most famous insurgent publishing organization and the billionaire head of the world’s largest information empire locked horns. The two men debated the political problems faced by society, and the technological solutions engendered by the global network—from the Arab Spring to Bitcoin.

They outlined radically opposing perspectives: for Assange, the liberating power of the Internet is based on its freedom and statelessness. For Schmidt, emancipation is at one with U.S. foreign policy objectives and is driven by connecting non-Western countries to Western companies and markets. These differences embodied a tug-of-war over the Internet’s future that has only gathered force subsequently.

In this extract from When Google Met WikiLeaks, Assange describes his encounter with Schmidt and how he came to conclude that it was far from an innocent exchange of views.

Eric Schmidt is an influential figure, even among the parade of powerful characters with whom I have had to cross paths since I founded WikiLeaks. In mid-May 2011 I was under house arrest in rural Norfolk, England, about three hours’ drive northeast of London. The crackdown against our work was in full swing and every wasted moment seemed like an eternity. It was hard to get my attention.

But when my colleague Joseph Farrell told me the executive chairman of Google wanted to make an appointment with me, I was listening.

In some ways the higher echelons of Google seemed more distant and obscure to me than the halls of Washington. We had been locking horns with senior U.S. officials for years by that point. The mystique had worn off. But the power centers growing up in Silicon Valley were still opaque and I was suddenly conscious of an opportunity to understand and influence what was becoming the most influential company on earth. Schmidt had taken over as CEO of Google in 2001 and built it into an empire.

I was intrigued that the mountain would come to Muhammad. But it was not until well after Schmidt and his companions had been and gone that I came to understand who had really visited me.

The stated reason for the visit was a book. Schmidt was penning a treatise with Jared Cohen, the director of Google Ideas, an outfit that describes itself as Google’s in-house “think/do tank.”

I knew little else about Cohen at the time. In fact, Cohen had moved to Google from the U.S. State Department in 2010. He had been a fast-talking “Generation Y” ideas man at State under two U.S. administrations, a courtier from the world of policy think tanks and institutes, poached in his early twenties.

He became a senior advisor for Secretaries of State Rice and Clinton. At State, on the Policy Planning Staff, Cohen was soon christened “Condi’s party-starter,” channeling buzzwords from Silicon Valley into U.S. policy circles and producing delightful rhetorical concoctions such as “Public Diplomacy 2.0.” On his Council on Foreign Relations adjunct staff page he listed his expertise as “terrorism; radicalization; impact of connection technologies on 21st century statecraft; Iran.”

It was Cohen who, while he was still at the Department of State, was said to have emailed Twitter CEO Jack Dorsey to delay scheduled maintenance in order to assist the aborted 2009 uprising in Iran. His documented love affair with Google began the same year when he befriended Eric Schmidt as they together surveyed the post-occupation wreckage of Baghdad. Just months later, Schmidt re-created Cohen’s natural habitat within Google itself by engineering a “think/do tank” based in New York and appointing Cohen as its head. Google Ideas was born.

Later that year the two co-wrote a policy piece for the Council on Foreign Relations’ journal Foreign Affairs, praising the reformative potential of Silicon Valley technologies as an instrument of U.S. foreign policy. Describing what they called “coalitions of the connected,” Schmidt and Cohen claimed that:

Democratic states that have built coalitions of their militaries have the capacity to do the same with their connection technologies.…

They offer a new way to exercise the duty to protect citizens around the world [emphasis added].

Schmidt and Cohen said they wanted to interview me. I agreed. A date was set for June.

Photo: Executive Chairman of Google Eric Schmidt and Jared Cohen, director of Google Ideas. (Olivia Harris/Reuters)

* * *

By the time June came around there was already a lot to talk about. That summer WikiLeaks was still grinding through the release of U.S. diplomatic cables, publishing thousands of them every week. When, seven months earlier, we had first started releasing the cables, Hillary Clinton had denounced the publication as “an attack on the international community” that would “tear at the fabric” of government.

It was into this ferment that Google projected itself that June, touching down at a London airport and making the long drive up into East Anglia to Norfolk and Beccles.

Schmidt arrived first, accompanied by his then partner, Lisa Shields. When he introduced her as a vice president of the Council on Foreign Relations—a U.S. foreign-policy think tank with close ties to the State Department—I thought little more of it. Shields herself was straight out of Camelot, having been spotted at John Kennedy Jr.’s side back in the early 1990s.

They sat with me and we exchanged pleasantries. They said they had forgotten their Dictaphone, so we used mine. We made an agreement that I would forward them the recording and in exchange they would forward me the transcript, to be corrected for accuracy and clarity. We began. Schmidt plunged in at the deep end, straightaway quizzing me on the organizational and technological underpinnings of WikiLeaks.

* * *

Some time later Jared Cohen arrived. With him was Scott Malcomson, introduced as the book’s editor. Three months after the meeting Malcomson would enter the State Department as the lead speechwriter and principal advisor to Susan Rice (then U.S. ambassador to the United Nations, now national security advisor).

At this point, the delegation was one part Google, three parts U.S. foreign-policy establishment, but I was still none the wiser. Handshakes out of the way, we got down to business.

Schmidt was a good foil. A late-fiftysomething, squint-eyed behind owlish spectacles, managerially dressed—Schmidt’s dour appearance concealed a machinelike analyticity. His questions often skipped to the heart of the matter, betraying a powerful nonverbal structural intelligence.

It was the same intellect that had abstracted software-engineering principles to scale Google into a megacorp, ensuring that the corporate infrastructure always met the rate of growth. This was a person who understood how to build and maintain systems: systems of information and systems of people. My world was new to him, but it was also a world of unfolding human processes, scale and information flows.

For a man of systematic intelligence, Schmidt’s politics—such as I could hear from our discussion—were surprisingly conventional, even banal. He grasped structural relationships quickly, but struggled to verbalize many of them, often shoehorning geopolitical subtleties into Silicon Valley marketese or the ossified State Department micro-language of his companions. He was at his best when he was speaking (perhaps without realizing it) as an engineer, breaking down complexities into their orthogonal components.

I found Cohen a good listener, but a less interesting thinker, possessed of that relentless conviviality that routinely afflicts career generalists and Rhodes Scholars. As you would expect from his foreign-policy background, Cohen had a knowledge of international flash points and conflicts and moved rapidly between them, detailing different scenarios to test my assertions. But it sometimes felt as if he was riffing on orthodoxies in a way that was designed to impress his former colleagues in official Washington.

Malcomson, older, was more pensive, his input thoughtful and generous. Shields was quiet for much of the conversation, taking notes, humoring the bigger egos around the table while she got on with the real work.

As the interviewee, I was expected to do most of the talking. I sought to guide them into my worldview. To their credit, I consider the interview perhaps the best I have given. I was out of my comfort zone and I liked it.

We ate and then took a walk in the grounds, all the while on the record. I asked Eric Schmidt to leak U.S. government information requests to WikiLeaks, and he refused, suddenly nervous, citing the illegality of disclosing Patriot Act requests. And then, as the evening came on, it was done and they were gone, back to the unreal, remote halls of information empire, and I was left to get back to my work.

That was the end of it, or so I thought.

CONTINUED:   http://www.newsweek.com/assange-google-not-what-it-seems-279447?piano_d=1

The Impulse Society

How Our Growing Desperation for Instant Connection Is Ruining Us

Consumer culture does everything in its power to persuade us that adversity has no place in our lives.

The following is an excerpt from Paul Roberts’ new book, The Impulse Society: America in the Age of Instant Gratification (Bloomsbury, 2014). Reprinted here with permission.

The metaphor of the expanding fragile modern self is quite apt. To personalize is, in effect, to reject the world “as is,” and instead to insist on bending it to our preferences, as if mastery and dominance were our only mode. But humans aren’t meant only for mastery. We’re also meant to adapt to something larger. Our large brains are specialized for cooperation and compromise and negotiation—with other individuals, but also with the broader world, which, for most of history, did not cater to our preferences or likes. For all our ancestors’ tremendous skills at modifying and improving their environment, daily survival depended as much on their capacity to conform themselves and their expectations to the world as they found it. Indeed, it was only by enduring adversity and disappointment that we humans gained the strength and knowledge and perspective that are essential to sustainable mastery.

Virtually every traditional culture understood this and regarded adversity as inseparable from, and essential to, the formation of strong, self-sufficient individuals. Yet the modern conception of “character” now leaves little space for discomfort or real adversity. To the contrary, under the Impulse Society, consumer culture does everything in its considerable power to persuade us that adversity and difficulty and even awkwardness have no place in our lives (or belong only in discrete, self-enhancing moments, such as ropes courses or really hard ab workouts). Discomfort, difficulty, anxiety, suffering, depression, rejection, uncertainty, or ambiguity—in the Impulse Society, these aren’t opportunities to mature and toughen or become. Rather, they represent errors and inefficiencies, and thus opportunities to correct—nearly always with more consumption and self-expression.

So rather than having to wait a few days for a package, we have it overnighted. Or we pay for same-day service. Or we pine for the moment when Amazon launches drone delivery and can get us our package in thirty minutes.* And as the system gets faster at gratifying our desires, the possibility that we might actually be more satisfied by waiting and enduring a delay never arises. Just as nature abhors a vacuum, the efficient consumer market abhors delay and adversity, and by extension, it cannot abide the strength of character that delay and adversity and inefficiency generally might produce. To the efficient market, “character” and “virtue” are themselves inefficiencies—impediments to the volume-based, share-price-maximizing economy. Once some new increment of self-expressive, self-gratifying, self-promoting capability is made available, the unstated but overriding assumption of contemporary consumer culture is that this capability can and should be put to use. Which means we now allow the efficient market and the treadmills and the relentless cycles of capital and innovation to determine how, and how far, we will take our self-expression and, by extension, our selves— even when doing so leaves us in a weaker state.

Consider the way our social relationships, and the larger processes of community, are changing under the relentless pressure of our new efficiencies. We know how important community is for individual development. It’s in the context of community that we absorb the social rules and prerequisites for interaction and success. It’s here that we come to understand and, ideally, to internalize, the need for limits and self-control, for patience and persistence and long-term commitments; the pressure of community is one way society persuades us to control our myopia and selfishness. (Or as economists Sam Bowles and Herbert Gintis have put it, community is the vehicle through which “society’s ‘oughts’ become its members’ ‘wants.’ ”) But community’s function isn’t simply to say “no.” It’s in the context of our social relationships where we discover our capacities and strengths. It’s here that we gain our sense of worth as individuals, as citizens and as social producers—active participants who don’t merely consume social goods, but contribute something the community needs.

But community doesn’t simply teach us to be productive citizens. People with strong social connections generally have a much better time. We enjoy better physical and mental health, recover faster from sickness or injury, and are less likely to suffer eating or sleeping disorders. We report being happier and rank our quality of life as higher—and do so even when the community that we’re connected to isn’t particularly well off or educated. Indeed, social connectedness is actually more important than affluence: regular social activities such as volunteering, church attendance, entertaining friends, or joining a club provide us with the same boost to happiness as does a doubling of personal income. As Harvard’s Robert Putnam notes, “The single most common finding from a half century’s research on the correlates of life satisfaction, not only in the United States but around the world, is that happiness is best predicted by the breadth and depth of one’s social connections.”

Unfortunately, for all the importance of social connectedness, we haven’t done a terribly good job of preserving it under the Impulse Society. Under the steady pressure of commercial and technological efficiencies, many of the tight social structures of the past have been eliminated or replaced with entirely new social arrangements. True, many of these new arrangements are clearly superior—even in ostensibly free societies, traditional communities left little room for individual growth or experimentation or happiness. Yet our new arrangements, which invariably seek to give individuals substantially more control over how they connect, exact a price. More and more, social connection becomes just another form of consumption, one we expect to tailor to our personal preferences and schedules—almost as if community was no longer a necessity or an obligation, but a matter of personal style, something to engage as it suits our mood or preference. And while such freedom has its obvious attractions, it clearly has downsides. In gaining so much control over the process of social connection, we may be depriving ourselves of some of the robust give-and-take of traditional interaction that is essential to becoming a functional, fulfilled individual.

Consider our vaunted and increasing capacity to communicate and connect digitally. In theory, our smartphones and social media allow us the opportunity to be more social than at any time in history. And yet, because there are few natural limits to this format—we can, in effect, communicate incessantly, posting every conceivable life event, expressing every thought no matter how incompletely formed or inappropriate or mundane—we may be diluting the value of the connection.

Studies suggest, for example, that the efficiency with which we can respond to an online provocation routinely leads to escalations that can destroy offline relationships. “People seem aware that these kinds of crucial conversations should not take place on social media,” notes Joseph Grenny, whose firm, VitalSmarts, surveys online behavior. “Yet there seems to be a compulsion to resolve emotions right now and via the convenience of these channels.”

Even when our online communications are entirely friendly, the ease with which we can reach out often undermines the very connection we seek to create. Sherry Turkle, a sociologist and clinical psychologist who has spent decades researching digital interactions, argues that because it is now possible to be in virtually constant contact with others, we tend to communicate so excessively that even a momentary lapse can leave us feeling isolated or abandoned. Where people in the pre-digital age did not think it alarming to go hours or days or even weeks without hearing from someone, the digital mind can become uncomfortable and anxious without instant feedback. In her book Alone Together, Turkle describes a social world of collapsing time horizons. College students text their parents daily, and even hourly, over the smallest matters—and feel anxious if they can’t get a quick response. Lovers break up over the failure to reply instantly to a text; friendships sour when posts aren’t “liked” fast enough. Parents call 911 if Junior doesn’t respond immediately to a text or a phone call—a degree of panic that was simply unknown before constant digital contact. Here, too, is a world made increasingly insecure by its own capabilities and its own accelerating efficiencies.

This same efficiency-driven insecurity now lurks just below the surface in nearly all digital interactions. Whatever the relationship (romantic, familial, professional), the very nature of our technology inclines us to a constant state of emotional suspense. Thanks to the casual, abbreviated nature of digital communication, we converse in fragments of thoughts and feelings that can be completed only through more interaction—we are always waiting to know how the story ends. The result, Turkle says, is a communication style, and a relationship style, that allow us to “express emotions while they are being formed” and in which “feelings are not fully experienced until they are communicated.” In other words, what was once primarily an interior process—thoughts were formed and feelings experienced before we expressed them—has now become a process that is external and iterative and public. Identity itself comes to depend on iterative interaction—giving rise to what Turkle calls the “collaborative self.” Meanwhile, our skills as a private, self-contained person vanish. “What is not being cultivated here,” Turkle writes, “is the ability to be alone and reflect on one’s emotions in private.” For all the emphasis on independence and individual freedom under the Impulse Society, we may be losing the capacity to truly be on our own.

In a culture obsessed with individual self-interest, such an incapacity is surely one of the greatest ironies of the Impulse Society. Yet in many ways it was inevitable. Herded along by a consumer culture that is both solicitous and manipulative, one that proposes absolute individual liberty while enforcing absolute material dependence—we rely completely on the machine of the marketplace—it is all too easy to emerge with a self-image, and a sense of self, that are both wildly inflated and fundamentally weak and insecure. Unable to fully experience the satisfactions of genuine independence and individuality, we compensate with more personalized self-expression and gratification, which only push us further from the real relationships that might have helped us to a stable, fulfilling existence.

 

http://www.alternet.org/books/impulse-society-how-our-growing-desperation-instant-connection-ruining-us?akid=12390.265072.bjTHr8&rd=1&src=newsletter1024073&t=9&paging=off&current_page=1#bookmark

 

“We’ve Created a Generation of People Who Hate America”


Filmmaker Laura Poitras on Our Surveillance State

Back to that Hong Kong hotel room with Snowden.

Here’s a Ripley’s Believe It or Not! stat from our new age of national security. How many Americans have security clearances? The answer: 5.1 million, a figure that reflects the explosive growth of the national security state in the post-9/11 era. Imagine the kind of system needed just to vet that many people for access to our secret world (to the tune of billions of dollars). We’re talking here about the total population of Norway and significantly more people than you can find in Costa Rica, Ireland, or New Zealand. And yet it’s only about 1.6% of the American population, while on ever more matters, the unvetted 98.4% of us are meant to be left in the dark.

For our own safety, of course. That goes without saying.

All of this offers a new definition of democracy in which we, the people, are to know only what the national security state cares to tell us.  Under this system, ignorance is the necessary, legally enforced prerequisite for feeling protected.  In this sense, it is telling that the only crime for which those inside the national security state can be held accountable in post-9/11 Washington is not potential perjury before Congress, or the destruction of evidence of a crime, or torture, or kidnapping, or assassination, or the deaths of prisoners in an extralegal prison system, but whistleblowing; that is, telling the American people something about what their government is actually doing.  And that crime, and only that crime, has been prosecuted to the full extent of the law (and beyond) with a vigor unmatched in American history.  To offer a single example, the only American to go to jail for the CIA’s Bush-era torture program was John Kiriakou, a CIA whistleblower who revealed the name of an agent involved in the program to a reporter.

In these years, as power drained from Congress, an increasingly imperial White House has launched various wars (redefined by its lawyers as anything but), as well as a global assassination campaign in which the White House has its own “kill list” and the president himself decides on global hits.  Then, without regard for national sovereignty or the fact that someone is an American citizen (and upon the secret invocation of legal mumbo-jumbo), the drones are sent off to do the necessary killing.

And yet that doesn’t mean that we, the people, know nothing.  Against increasing odds, there has been some fine reporting in the mainstream media by the likes of James Risen and Barton Gellman on the security state’s post-legal activities and above all, despite the Obama administration’s regular use of the World War I era Espionage Act, whistleblowers have stepped forward from within the government to offer us sometimes staggering amounts of information about the system that has been set up in our name but without our knowledge.

Among them, one young man, whose name is now known worldwide, stands out.  In June of last year, thanks to journalist Glenn Greenwald and filmmaker Laura Poitras, Edward Snowden, a contractor for the NSA and previously the CIA, stepped into our lives from a hotel room in Hong Kong.  With a treasure trove of documents that are still being released, he changed the way just about all of us view our world.  He has been charged under the Espionage Act.  If indeed he was a “spy,” then the spying he did was for us, for the American people and for the world.  What he revealed to a stunned planet was a global surveillance state whose reach and ambitions were unique, a system based on a single premise: that privacy was no more and that no one was, in theory (and to a remarkable extent in practice), unsurveillable.

Its builders imagined only one exemption: themselves.  This was undoubtedly at least part of the reason why, when Snowden let us peek in on them, they reacted with such over-the-top venom.  Whatever they felt at a policy level, it’s clear that they also felt violated, something that, as far as we can tell, left them with no empathy whatsoever for the rest of us.  One thing that Snowden proved, however, was that the system they built was ready-made for blowback.

Sixteen months after his NSA documents began to be released by the Guardian and the Washington Post, I think it may be possible to speak of the Snowden Era.  And now, a remarkable new film, Citizenfour, which had its premiere at the New York Film Festival on October 10th and will open in select theaters nationwide on October 24th, offers us a window into just how it all happened.  It is already being mentioned as a possible Oscar winner.

Director Laura Poitras, like reporter Glenn Greenwald, is now known almost as widely as Snowden himself, for helping facilitate his entry into the world. Her new film, the last in a trilogy she’s completed (the previous two being My Country, My Country on the Iraq War and The Oath on Guantanamo), takes you back to June 2013 and locks you in that Hong Kong hotel room with Snowden, Greenwald, Ewen MacAskill of the Guardian, and Poitras herself for eight days that changed the world. It’s a riveting, surprisingly unclaustrophobic, and unforgettable experience.

Before that moment, we were quite literally in the dark.  After it, we have a better sense, at least, of the nature of the darkness that envelops us. Having seen her film in a packed house at the New York Film Festival, I sat down with Poitras in a tiny conference room at the Loews Regency Hotel in New York City to discuss just how our world has changed and her part in it.

Tom Engelhardt: Could you start by laying out briefly what you think we’ve learned from Edward Snowden about how our world really works?

Laura Poitras: The most striking thing Snowden has revealed is the depth of what the NSA and the Five Eyes countries [Australia, Canada, New Zealand, Great Britain, and the U.S.] are doing, their hunger for all data, for total bulk dragnet surveillance where they try to collect all communications and do it all sorts of different ways. Their ethos is “collect it all.” I worked on a story with Jim Risen of the New York Times about a document — a four-year plan for signals intelligence — in which they describe the era as being “the golden age of signals intelligence.”  For them, that’s what the Internet is: the basis for a golden age to spy on everyone.

This focus on bulk, dragnet, suspicionless surveillance of the planet is certainly what’s most staggering.  There were many programs that did that.  In addition, you have both the NSA and the GCHQ [British intelligence] doing things like targeting engineers at telecoms.  There was an article published at The Intercept that cited an NSA document Snowden provided, part of which was titled “I Hunt Sysadmins” [systems administrators].  They try to find the custodians of information, the people who are the gateway to customer data, and target them.  So there’s this passive collection of everything, and then things that they can’t get that way, they go after in other ways.

 I think one of the most shocking things is how little our elected officials knew about what the NSA was doing.  Congress is learning from the reporting and that’s staggering.  Snowden and [former NSA employee] William Binney, who’s also in the film as a whistleblower from a different generation, are technical people who understand the dangers.  We laypeople may have some understanding of these technologies, but they really grasp the dangers of how they can be used.  One of the most frightening things, I think, is the capacity for retroactive searching, so you can go back in time and trace who someone is in contact with and where they’ve been.  Certainly, when it comes to my profession as a journalist, that allows the government to trace what you’re reporting, who you’re talking to, and where you’ve been.  So no matter whether or not I have a commitment to protect my sources, the government may still have information that might allow them to identify whom I’m talking to.

TE: To ask the same question another way, what would the world be like without Edward Snowden?  After all, it seems to me that, in some sense, we are now in the Snowden era.

LP: I agree that Snowden has presented us with choices on how we want to move forward into the future.  We’re at a crossroads and we still don’t quite know which path we’re going to take.  Without Snowden, just about everyone would still be in the dark about the amount of information the government is collecting. I think that Snowden has changed consciousness about the dangers of surveillance.  We see lawyers who take their phones out of meetings now.  People are starting to understand that the devices we carry with us reveal our location, who we’re talking to, and all kinds of other information.  So you have a genuine shift of consciousness post the Snowden revelations.

TE: There’s clearly been no evidence of a shift in governmental consciousness, though.

LP: Those who are experts in the fields of surveillance, privacy, and technology say that there need to be two tracks: a policy track and a technology track.  The technology track is encryption.  It works and if you want privacy, then you should use it.  We’ve already seen shifts happening in some of the big companies — Google, Apple — that now understand how vulnerable their customer data is, and that if it’s vulnerable, then their business is, too, and so you see a beefing up of encryption technologies.  At the same time, no programs have been dismantled at the governmental level, despite international pressure.

TE: In Citizenfour, we spend what must be an hour essentially locked in a room in a Hong Kong hotel with Snowden, Glenn Greenwald, Ewen MacAskill, and you, and it’s riveting. Snowden is almost preternaturally prepossessing and self-possessed. I think of a novelist whose dream character just walks into his or her head. It must have been like that with you and Snowden. But what if he’d been a graying guy with the same documents and far less intelligent things to say about them? In other words, how exactly did who he was make your movie and remake our world?

LP: Those are two questions.  One is: What was my initial experience?  The other: How do I think it impacted the movie?  We’ve been editing it and showing it to small groups, and I had no doubt that he’s articulate and genuine on screen.  But to see him in a full room [at the New York Film Festival premiere on the night of October 10th], I’m like, wow!  He really commands the screen! And I experienced the film in a new way with a packed house.

TE: But how did you experience him the first time yourself?  I mean you didn’t know who you were going to meet, right?

LP: So I was in correspondence with an anonymous source for about five months and in the process of developing a dialogue you build ideas, of course, about who that person might be.  My idea was that he was in his late forties, early fifties.  I figured he must be Internet generation because he was super tech-savvy, but I thought that, given the level of access and information he was able to discuss, he had to be older.  And so my first experience was that I had to do a reboot of my expectations.  Like fantastic, great, he’s young and charismatic and I was like wow, this is so disorienting, I have to reboot.  In retrospect, I can see that it’s really powerful that somebody so smart, so young, and with so much to lose risked so much.

He was so at peace with the choice he had made and knowing that the consequences could mean the end of his life and that this was still the right decision. He believed in it, and whatever the consequences, he was willing to accept them. To meet somebody who has made those kinds of decisions is extraordinary. And to be able to document that and also how Glenn [Greenwald] stepped in and pushed for this reporting to happen in an aggressive way changed the narrative. Because Glenn and I come at it from an outsider’s perspective, the narrative unfolded in a way that nobody quite knew how to respond to. That’s why I think the government was initially on its heels. You know, it’s not every day that a whistleblower is actually willing to be identified.

TE: My guess is that Snowden has given us the feeling that we now grasp the nature of the global surveillance state that is watching us, but I always think to myself, well, he was just one guy coming out of one of 17 interlocked intelligence outfits. Given the remarkable way your film ends — the punch line, you might say — with another source or sources coming forward from somewhere inside that world to reveal, among other things, information about the enormous watchlist that you yourself are on, I’m curious: What do you think is still to be known?  I suspect that if whistleblowers were to emerge from the top five or six agencies, the CIA, the DIA, the National Geospatial Intelligence Agency, and so on, with similar documentation to Snowden’s, we would simply be staggered by the system that’s been created in our name.

LP: I can’t speculate on what we don’t know, but I think you’re right in terms of the scale and scope of things and the need for that information to be made public. I mean, just consider the CIA and its effort to suppress the Senate’s review of its torture program. Take in the fact that we live in a country that a) legalized torture and b) where no one was ever held to account for it, and now the government’s internal look at what happened is being suppressed by the CIA.  That’s a frightening landscape to be in.

In terms of sources coming forward, I really reject this idea of talking about one, two, three sources.  There are many sources that have informed the reporting we’ve done and I think that Americans owe them a debt of gratitude for taking the risk they do.  From a personal perspective, because I’m on a watchlist and went through years of trying to find out why, of having the government refuse to confirm or deny the very existence of such a list, it’s so meaningful to have its existence brought into the open so that the public knows there is a watchlist, and so that the courts can now address the legality of it.  I mean, the person who revealed this has done a huge public service and I’m personally thankful.

TE: You’re referring to the unknown leaker who’s mentioned visually and elliptically at the end of your movie and who revealed that the major watchlist you’re on has more than 1.2 million names on it. In that context, what’s it like to travel as Laura Poitras today? How do you embody the new national security state?

LP: In 2012, I was ready to edit and I chose to leave the U.S. because I didn’t feel I could protect my source footage when I crossed the U.S. border.  The decision was based on six years of being stopped and questioned every time I returned to the United States.  And I just did the math and realized that the risks were too high to edit in the U.S., so I started working in Berlin in 2012.  And then, in January 2013, I got the first email from Snowden.

TE: So you were protecting…

LP: …other footage.  I had been filming with NSA whistleblower William Binney, with Julian Assange, with Jacob Appelbaum of the Tor Project, people who have also been targeted by the U.S., and I felt that this material I had was not safe.  I was put on a watchlist in 2006.  I was detained and questioned at the border returning to the U.S. probably around 40 times.  If I counted domestic stops and every time I was stopped at European transit points, you’re probably getting closer to 80 to 100 times. It became a regular thing, being asked where I’d been and who I’d met with. I found myself caught up in a system you can’t ever seem to get out of, this Kafkaesque watchlist that the U.S. doesn’t even acknowledge.

TE: Were you stopped this time coming in?

LP: I was not. The detentions stopped in 2012 after a pretty extraordinary incident.

I was coming back in through Newark Airport and I was stopped.  I took out my notebook because I always take notes on what time I’m stopped and who the agents are and stuff like that.  This time, they threatened to handcuff me for taking notes.  They said, “Put the pen down!” They claimed my pen could be a weapon and hurt someone.

“Put the pen down! The pen is dangerous!” And I’m like, you’re not… you’ve got to be crazy. Several people yelled at me every time I moved my pen down to take notes as if it were a knife. After that, I decided this has gotten crazy, I’d better do something and I called Glenn. He wrote a piece about my experiences. In response to his article, they actually backed off.

TE: Snowden has told us a lot about the global surveillance structure that’s been built. We know a lot less about what they are doing with all this information. I’m struck by how poorly they’ve been able to use such information in, for example, their war on terror. I mean, they always seem to be a step behind in the Middle East — not just behind events but behind what I think someone using purely open source information could tell them. This I find startling. What sense do you have of what they’re doing with the reams, the yottabytes, of data they’re pulling in?

LP: Snowden and many other people, including Bill Binney, have said that this mentality — of trying to suck up everything they can — has left them drowning in information and so they miss what would be considered more obvious leads.  In the end, the system they’ve created doesn’t lead to what they describe as their goal, which is security, because they have too much information to process.

I don’t quite know how to fully understand it.  I think about this a lot because I made a film about the Iraq War and one about Guantanamo.  From my perspective, in response to the 9/11 attacks, the U.S. took a small, very radical group of terrorists and engaged in activities that have created two generations of anti-American sentiment motivated by things like Guantanamo and Abu Ghraib.  Instead of figuring out a way to respond to a small group of people, we’ve created generations of people who are really angry and hate us.  And then I think, if the goal is security, how do these two things align, because there are more people who hate the United States right now, more people intent on doing us harm?  So either the goal that they proclaim is not the goal or they’re just unable to come to terms with the fact that we’ve made huge mistakes in how we’ve responded.

TE: I’m struck by the fact that failure has, in its own way, been a launching pad for success. I mean, the building of an unparalleled intelligence apparatus and the greatest explosion of intelligence gathering in history came out of the 9/11 failure. Nobody was held accountable, nobody was punished, nobody was demoted or anything, and every similar failure, including the one on the White House lawn recently, simply leads to the bolstering of the system.

LP: So how do you understand that?

TE: I don’t think that these are people who are thinking: we need to fail to succeed. I’m not conspiratorial in that way, but I do think that, strangely, failure has built the system and I find that odd. More than that I don’t know.

LP: I don’t disagree. The fact that the CIA knew that two of the 9/11 hijackers were entering the United States and didn’t notify the FBI and that nobody lost their job is shocking.  Instead, we occupied Iraq, which had nothing to do with 9/11.  I mean, how did those choices get made?

A Majority of Millennials Don’t Have a College Degree—That’s Going to Cost Everybody

A “silent majority” of young people without college degrees and decent jobs are on a downwardly mobile slide.

There’s a lot of hoopla in the media about how Millennials are the best-educated generation in history, blah, blah, blah. But according to a Pew survey, that’s a distortion of reality. In fact, two-thirds of Millennials between ages 25 and 32 don’t have a bachelor’s degree. The education gap within this generation is wider than for any other in history in terms of how those with a college degree will fare compared to those without. Reflecting a trend that has been gaining momentum in the rest of America, Millennials are rapidly getting sorted into winners and losers. Most of them are losing. That’s going to cost this generation a lot — and the rest of society, too.

According to Pew, young college graduates are ahead of their less-educated peers on just about every measure of economic well-being and how they are faring in the course of their careers. Their parents and grandparents’ generations did not take as big of a hit by not going to college, but for Millennials, the blow is severe. Without serious intervention, its effects will be permanent.

Young college grads working full-time are earning an eye-popping $17,500 more per year than those with only a high school diploma. To put this in perspective, in 1979 when the first Baby Boomers were the same age that Millennials are today, a high school graduate could earn around three-quarters (77 percent) of what his or her college-educated peer took in. But Millennials with only a high school diploma earn only 62 percent of what the college grads earn.

According to Pew, young people with a college degree are also more likely to have full-time jobs, much more likely to have a job of any kind, and more likely to believe that their job will lead to a fulfilling career. By contrast, 42 percent of those with a high school diploma or less see their work as “just a job to get by,” while only 14 percent of college grads have such a negative assessment of their jobs.

Granted, college is expensive. But nine out of 10 Millennials say it’s worth it — even those who have had to borrow to foot the bill. They seem to have absorbed the fact that in a precarious economy, a college diploma is the bare minimum for security and stability.

Why are those with less education doing so badly? The Great Recession is part of the answer. There has also been a long-running pattern in which the jobs that return after a financial crisis are worse than those that were lost. After the recession of the 1980s, for example, unionized workers never again found jobs as good as the ones they’d had before the downturn. The same thing has happened this time, only more dramatically. The jobs that are returning are often part-time, underpaid, lacking in benefits and short on opportunities to advance. It’s great to embark on a career as an engineer at Apple; not so great to work in an Apple retail store, where pay is low and the hope for a career is minimal. The Great Recession amplified a trend of McJobs that had been gaining strength for decades, stoked by the decline in unions, deregulation, outsourcing, and poor corporate governance. These forces have tilted the balance of power away from employees to such a degree that many young people now expect exploitation and poor conditions on the job as a matter of course, with no experience of how things could be different.

All this is not to say that having a college degree gives you a free pass: this generation of college-educated adults is doing slightly worse on certain measures, like the share without jobs, than Gen Xers, Baby Boomers or members of the Silent Generation were when they were in their mid-20s and early 30s. But today’s young people who don’t go to college are doing much worse than their counterparts in the generations that came before.

Poverty is one of the biggest threats to Millennials without college degrees. Nearly a quarter (22 percent) of young people ages 25 to 32 without a college degree live in poverty today, whereas only 6 percent of the college-educated do. When Baby Boomers were the same age as today’s Millennials, only 7 percent of those with only a high school diploma were living in poverty.

It’s true that more Millennials than members of past generations have college degrees, and it’s also true that the value of those diplomas has increased. Given those facts, you might think that the Millennial generation would be earning more than earlier generations of young adults did. You would be wrong — and that’s because it’s more costly than ever not to have a college education. The education have-nots are pulling down the average for the whole generation. The typical high school graduate’s earnings dropped by more than $3,000, from $31,384 in 1965 to $28,000 in 2013.

There are also more Millennials without even a high school diploma than in previous generations: some have taken to calling Millennials “Generation Dropout.” A 2013 article in the Atlantic Monthly noted that, compared to their counterparts in other countries, the newest wave of American employees is actually less educated than their parents because fewer of them complete high school. A recent program on NPR called the 25- to 32-year-old cohort without college degrees and decent jobs the “Silent Majority.”

In 1965, young college graduates earned $7,499 more than those with a high school diploma. But the earnings gap by educational attainment has steadily widened since then, and today it has more than doubled to $17,500 among Millennials ages 25 to 32.

All of this is alarming because it means that less-educated workers are going to have a really hard time. Compared to the Silent Generation at the same age, Millennials with a high school education or less are three times more likely to be jobless.

The length of time the typical job seeker spends looking for work tells the same story. In 2013, the average unemployed college-educated Millennial had been looking for work for 27 weeks — more than double the 12 weeks it took an unemployed college-educated 25- to 32-year-old to find a job in 1979. And today’s young high school graduates fare even worse: they spend, on average, four weeks longer looking for work than college graduates (31 weeks vs. 27 weeks).

These young people are ending up in dire straits — stuck in debt, unable to set up their own households, and having to put off starting families (recent research shows that many women who face economic hard times in their 20s will never end up having kids). It’s not that they don’t want to grow up, it’s that they don’t have access to the things that make independence possible, like a good education, a good job, a strong social safety net, affordable childcare, and so on.

How much is this going to cost America as a nation? It’s too early to say for sure, but Millennial underemployment, which is directly linked to undereducation, is already costing $25 billion a year, largely in lost tax revenue. But what about the other costs? The increased rates of alcoholism and substance abuse? The broken relationships? The depression? The long list of physical ailments that go along with the stress of not being able to gain and keep a financial foothold?

Once upon a time, more forward-thinking politicians and policymakers recognized that young people who have the bad luck to launch into adulthood in the wake of an economic crisis not of their own making need real help: jobs programs, training and decent work conditions that could improve not only their individual lives but the health of the whole society and economy. We have a blueprint for how to do this from the New Deal. It’s going to cost everyone if America leaves these young people to suffer this cruel fate.

Lynn Parramore is an AlterNet senior editor. She is cofounder of Recessionwire, founding editor of New Deal 2.0, and author of “Reading the Sphinx: Ancient Egypt in Nineteenth-Century Literary Culture.” She received her Ph.D. in English and cultural theory from NYU. She is the director of AlterNet’s New Economic Dialogue Project. Follow her on Twitter @LynnParramore.

http://www.alternet.org/education/surprise-majority-millennials-dont-have-college-degree-thats-going-cost-everybody?akid=12378.265072.6qEBLL&rd=1&src=newsletter1023736&t=7&paging=off&current_page=1#bookmark

Google makes us all dumber

The neuroscience of search engines

As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions.


In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world to give every person alive three times as much as was held in Alexandria’s entire collection — and nearly all of it is available to anyone with an internet connection.

This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent.

Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Amit Singhal, Google’s head of search, when asked whether people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.”

Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear.

I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

The Internet can make us feel omniscient. But it’s the feeling of not knowing which inspires the desire to learn. The psychologist George Loewenstein gave us the simplest and most powerful definition of curiosity, describing it as the response to an “information gap.” When you know just enough to know that you don’t know everything, you experience the itch to know more. Loewenstein pointed out that a person who knows the capitals of three out of 50 American states is likely to think of herself as knowing something (“I know three state capitals”). But a person who has learned the names of 47 state capitals is likely to think of herself as not knowing three state capitals, and thus more likely to make the effort to learn those other three.

That word “effort” is important. It’s hardly surprising that we love the ease and fluency of the modern web: our brains are designed to avoid anything that seems like hard work. The psychologists Susan Fiske and Shelley Taylor coined the term “cognitive miser” to describe the stinginess with which the brain allocates limited attention, and its in-built propensity to seek mental short-cuts. The easier it is for us to acquire information, however, the less likely it is to stick. Difficulty and frustration — the very friction that Google aims to eliminate — ensure that our brain integrates new information more securely. Robert Bjork, of the University of California, uses the phrase “desirable difficulties” to describe the counterintuitive notion that we learn better when the learning is hard. Bjork recommends, for instance, spacing teaching sessions further apart so that students have to make more effort to recall what they learned last time.

A great question should launch a journey of exploration. Instant answers can leave us idling at base camp. When a question is given time to incubate, it can take us to places we hadn’t planned to visit. Left unanswered, it acts like a searchlight ranging across the landscape of different possibilities, the very consideration of which makes our thinking deeper and broader. Searching for an answer in a printed book is inefficient and takes longer than searching its digital counterpart. But while flicking through those pages your eye may alight on information that you didn’t even know you wanted to know.

The gap between question and answer is where creativity thrives and scientific progress is made. When we celebrate our greatest thinkers, we usually focus on their ingenious answers. But the thinkers themselves tend to see it the other way around. “Looking back,” said Charles Darwin, “I think it was more difficult to see what the problems were than to solve them.” The writer Anton Chekhov declared, “The role of the artist is to ask questions, not answer them.” The very definition of a bad work of art is one that insists on telling its audience the answers, and a scientist who believes she has all the answers is not a scientist.

According to the great physicist James Clerk Maxwell, “thoroughly conscious ignorance is the prelude to every real advance in science.” Good questions induce this state of conscious ignorance, focusing our attention on what we don’t know. The neuroscientist Stuart Firestein teaches a course on ignorance at Columbia University, because, he says, “science produces ignorance at a faster rate than it produces knowledge.” Raising a toast to Einstein, George Bernard Shaw remarked, “Science is always wrong. It never solves a problem without creating ten more.”

Humans are born consciously ignorant. Compared to other mammals, we are pushed out into the world prematurely, and stay dependent on elders for much longer. Endowed with so few answers at birth, children are driven to question everything. In 2007, Michelle Chouinard, a psychology professor at the University of California, analyzed recordings of four children interacting with their respective caregivers for two hours at a time, for a total of more than two hundred hours. She found that, on average, the children posed more than a hundred questions every hour.

Very small children use questions to elicit information — “What is this called?” But as they grow older, their questions become more probing. They start looking for explanations and insight, asking “Why?” and “How?” Extrapolating from Chouinard’s data, the Harvard professor Paul Harris estimates that between the ages of 3 and 5, children ask 40,000 such questions. The numbers are impressive, but what’s really amazing is the ability to ask such a question at all. Somehow, children instinctively know there is a vast amount they don’t know, and they need to dig beneath the world of appearances.

In a 1984 study by British researchers Barbara Tizard and Martin Hughes, four-year-old girls were recorded talking to their mothers at home. When the researchers analyzed the tapes, they found that some children asked more “How” and “Why” questions than others, and engaged in longer passages of “intellectual search” — a series of linked questions, each following from the other. (In one such conversation, four-year-old Rosy engaged her mother in a long exchange about why the window cleaner was given money.) The more confident questioners weren’t necessarily the children who got more answers from their parents, but the ones who got more questions. Parents who threw questions back to their children — “I don’t know, what do you think?” — raised children who asked more questions of them. Questioning, it turns out, is contagious.

Childish curiosity only gets us so far, however. To ask good questions, it helps if you have built your own library of answers. It’s been proposed that the Internet relieves us of the onerous burden of memorizing information. Why cram our heads with facts, like the date of the French Revolution, when they can be summoned up in a swipe and a couple of clicks? But knowledge doesn’t just fill the brain up; it makes it work better. To see what I mean, try memorizing the following string of fourteen digits in five seconds:

74830582894062

Hard, isn’t it? Virtually impossible. Now try memorizing this string of letters:

lucy in the sky with diamonds

This time, you barely needed a second. The contrast is so striking that it seems like a completely different problem, but fundamentally, it’s the same. The only difference is that one string of symbols triggers a set of associations with knowledge you have stored deep in your memory. Without thinking, you can group the letters into words, the words into a sentence you understand as grammatical — and the sentence is one you recognize as the title of a song by the Beatles. The knowledge you’ve gathered over years has made your brain’s central processing unit more powerful.

This tells us something about the idea that we should outsource our memories to the web: it’s a short-cut to stupidity. The less we know, the worse we are at processing new information, and the slower we are to arrive at pertinent questions. You’re unlikely to ask a truly penetrating question about the presidency of Richard Nixon if you have just had to look up who he is. According to researchers who study innovation, the average age at which scientists and inventors make breakthroughs is increasing over time. As knowledge accumulates across generations, it takes longer for individuals to acquire it, and thus longer to be in a position to ask the questions which, in Susan Sontag’s phrase, “destroy the answers.”

My argument isn’t with technology, but with the way we use it. It’s not that the Internet is making us stupid or incurious. Only we can do that. It’s that we will only realize the potential of technology and humans working together when each is focused on its strengths — and that means we need to consciously cultivate effortful curiosity. Smart machines are taking over more and more of the tasks assumed to be the preserve of humans. But no machine, however sophisticated, can yet be said to be curious. The technology visionary Kevin Kelly succinctly defines the appropriate division of labor: “Machines are for answers; humans are for questions.”

The practice of asking perceptive, informed, curious questions is a cultural habit we should inculcate at every level of society. In school, students are generally expected to answer questions rather than ask them. But educational researchers have found that students learn better when they’re gently directed towards the lacunae in their knowledge, allowing their questions to bubble up through the gaps. Wikipedia and Google are best treated as starting points rather than destinations, and we should recognize that human interaction will always play a vital role in fueling the quest for knowledge. After all, Google never says, “I don’t know — what do you think?”

The Internet has the potential to be the greatest tool for intellectual exploration ever invented, but only if it is treated as a complement to our talent for inquiry rather than a replacement for it. In a world awash in ready-made answers, the ability to pose difficult, even unanswerable questions is more important than ever.

Picasso was half-right: computers are useless without truly curious humans.

Ian Leslie is the author of “Curious: The Desire To Know and Why Your Future Depends On It.” He writes on psychology, trends and politics for The Economist, The Guardian, Slate and Granta. He lives in London. Follow him on Twitter at @mrianleslie.

http://www.salon.com/2014/10/12/google_makes_us_all_dumber_the_neuroscience_of_search_engines/?source=newsletter
