Technology is making us blind: The dangerous complacency of the iPhone era

The rise of smartphones and social media has ushered in a new age of techno-optimism. And that’s a big problem

The technology pages of news media can make for scary reading these days. From new evidence of government surveillance to the personal data collection capabilities of new devices, to the latest leaks of personal information, we hear almost daily of new threats to personal privacy. It’s difficult to overstate the implications of this: The separation of the private and public that’s the cornerstone of liberal thought, not to mention the American Constitution, is being rapidly eroded, with potentially profound consequences for our freedom.

As much as we may register a certain level of dismay at this, in practice, our reaction is often indifference. How many of us have taken to the streets in protest, started a petition, canvassed a politician, or even changed our relationship with our smartphone, tablet or smartwatch? The question is: Why are we so unconcerned?

We could say that it’s simply a matter of habit, that we have become so used to using devices in such a way that we cannot imagine using them any differently. Or we could, for example, invoke a tragic fate in which we simply have no option but to accept the erosion of our privacy because of our powerlessness against corporations and governments.

These are, however, retrospective justifications that miss the kernel of the truth. To reach this kernel, we have to excavate the substratum of culture to uncover the ideas that shape our relationship with technology. Only here can we see that the cause is a profound ideological shift in this relationship.

Over the last few hundred years, that relationship has been characterized by deep ambivalence. On the one hand, we have viewed technology as emancipatory, and even, as David Nye, James Carey and other scholars have argued, as divine. On the other hand, we have seen it as dehumanizing, alienating and potentially manipulative — a viewpoint shaped by historical figures as diverse as William Blake, Mark Twain, Mary Shelley, Charlie Chaplin, Friedrich Nietzsche, Ned Ludd, Samuel Beckett and Karl Marx. However, over the last 20 years or so, this latter perspective has largely been thrown out of the window.

There are many areas of culture that register this shift, but none does so as lucidly as science fiction film. Even when set in the future, science fiction projects onto the silver screen the ideas its present holds about technology. Indeed, the success of many of the best science fiction films is undoubtedly because they illustrate their time’s hopes and fears about technology so clearly.



Those of the late 20th century clearly suggest the prevalence in American culture of the old fearful view of technology. The 1980s, for example, saw the advent of personal computing, innovation in areas like genetic engineering and robotics, job losses brought about by industrial mechanization, and the creation of futuristic military technologies such as the Strategic Defense Initiative (aka Star Wars).

Lo and behold, the science fiction films of the time betray cultural anxieties about keeping up with the pace of change. Many explore the dehumanizing effects of technology, depicting worlds where humans have lost control. “The Terminator,” for example, conjoins fears of mechanization and computing. The human protagonists are powerless to kill Schwarzenegger’s cyborg directly; it ultimately meets its end via another piece of industrial technology (a hydraulic press). Another classic of the era, “Blade Runner,” is a complex thought experiment on the joining of technology and humans as hybrids. The antagonist, Roy, whom Harrison Ford’s Deckard is sent to kill, represents the horrific synthesis of unfettered human ambition and technological potency.

The 1990s was the age of mass computing and the rise of the internet. In response, new technological metaphors were created, with the 1980s’ imagery of hard, masculine technology replaced by the fluidity and dynamism of the network. In “Terminator 2,” Schwarzenegger’s industrial killing machine is obsolete, and no longer a threat to humans. Instead, the threat comes from the T-1000, whose speed and liquid metal form evoke a new world governed by the data stream.

The ’90s also witnessed increasing virtualization of everyday life — a trend reflected by Jean Baudrillard’s identification of the Gulf War as the first truly virtual war. Films explored the loss of the real that virtualization implied. “The Truman Show” and “The Matrix” both involve their protagonists being “awoken” from everyday life, which is shown to be artificial.

However, this view of technology as fearsome is seldom expressed in sci-fi films made since 2000, and it’s also difficult to identify many common themes, or many iconic genre examples. Is this simply because sci-fi as a genre has exhausted itself (as Ridley Scott has claimed)? Or is it symptomatic of something deeper in the culture?

The answer becomes clear when we consider two recent examples that do express fears of technology. “Transcendence” and “Her” join a long line of sci-fi films that portray artificial intelligence as out of control. Neither, however, was a huge commercial success. Was this because they were simply bad films? Not necessarily. While “Transcendence” was poorly received, “Her” was a thematically sophisticated exploration of love in a virtual age. The problem was that both missed the zeitgeist. No one really fears artificial intelligence anymore.

The notion that technology is to be feared relies upon three assumptions: First, that technology and humans are self-contained and separate from each other (the old dichotomy of man and machine). Second, that technology has its own nature — that it can determine human life. (As legendary media theorist Marshall McLuhan once put it, “we shape our tools and then our tools shape us.”) Third, that this nature can direct technology against humans.

However, the last 20 years have seen a dramatic erosion of all three assumptions. In particular, we no longer view technology as having any intrinsic meaning; the medium is no longer the message. Instead, its only meanings are those that we give it. For us, technology is a blank slate; it’s cultural matter waiting for us to give it form. Allied to this has been a new sense of intimacy with technology: a breaking down of the boundaries between it and us.

Perhaps the key driver of this has been technology’s centrality to the foremost pursuit of our times: the quest for the authentic self. This quest tasks us with finding ways of demonstrating to ourselves and others what makes us unique, special and individual. Technology has become a powerful way of doing this. We see it as a means of self-expression; it allows us to fully be ourselves.

The smartphone is the exemplar here. The cultural understanding of the smartphone was initially driven by BlackBerry, which positioned it as a corporate tool. Such meanings have long since lost resonance. Now, smartphone brands position their products as central to relationships, creative expression, play and all the other things that apparently make us authentic individuals. Apps are important: The customization of experience they allow helps to make our smartphones unique expressions of ourselves.

This association of technology with ideals of the authentic self is not confined to smartphones, however. Many artificial intelligence researchers no longer aim to create ultimate intelligence; instead, they aim to replicate the “authentic” qualities of humans by creating machines that can, for instance, write music or paint.

Only recently, at the launch of Apple’s smartwatch, Jonathan Ive claimed that “we’re at a compelling beginning, designing technology to be worn, to be truly personal,” signifying a new frontier in the quest to eliminate the boundaries between ourselves and technology.

Similarly, the Internet of Things promises to make us the center of our worlds like never before. This is a world in which we will know about medical issues before we have even felt the symptoms, be able to alter the temperature of our home from wherever we are, and be warned in advance when we are running out of milk. It is one in which, it’s claimed, technology will be so in tune with our needs that it will anticipate them before we even have them.

Thus, we now view technology not just as empowering but as self-actualizing as well. Because it’s positioned as key to our authentic selves, we are newly intimate with it. This sounds utopian. It seems as if technology is finally reaching its potential: It is no longer the threat to human freedom, but its driving force.

Undoubtedly, there may be great pleasure in this new utopia, but this does not make it any less ideological. As Slavoj Zizek points out, ideas can be both true and highly ideological insofar as they obscure relations of domination. Indeed, it is the wrapping of technology in the “jargon of authenticity” (to borrow a phrase from Theodor Adorno, another critical theorist) that makes this new “ideology of intimacy” so seductive.

In the past, it was easier to critique technology because the dichotomy of man and machine clearly kept it separate from us. As such, we were able to take it as an object of analysis; to hypothesize how innovations might affect our freedoms for better or worse. This becomes infinitely more difficult in a context that has conflated ourselves and technology. We struggle to achieve the distance needed to critique it.

The result is that we become blind to technology’s dark side — its potential to be misused in ways that encroach on our privacy. How can we see the privacy implications of our smartphones when we see them first as the key to the authentic self, or of the Google Car when it looks so cute, or of Google Glass when we believe it will allow us to transcend our bodies and gain a new mastery of the world?

It is, though, a question not just of blindness but also of will. The injunction to treat technology as an extension of our authentic selves encourages a kind of narcissistic love: We love technology because we love ourselves. In Freudian terms, the ideology of intimacy incites us to invest our love in the technological object through presenting it as key to the pursuit of our ego ideal. Thus, we do not want to really separate ourselves from technology because doing so would be experienced as a traumatic loss, an alienation from part of ourselves. Perhaps this is the true power that the ideology of intimacy holds over us.

Yet somehow we need to take a step back, to uncouple ourselves from the seductive devices around us. We need to end our blind devotion and rediscover critical distance. This way we can start to view technology as it is: as both the key to our freedoms, and also their greatest threat. If we don’t, we may discover too late that the new technological utopia is actually a poisoned chalice, with profound implications for our privacy.

 

http://www.salon.com/2014/11/29/technology_is_making_us_blind_the_dangerous_complacency_of_the_iphone_era/?source=newsletter

Your data is for sale — and not just on Facebook

Nobody is gathering more information more quickly than the providers of digital services. But do you trust them?
(Credit: Chookiat K via Shutterstock/Salon)

This is how the tech P.R. wars of the future will be waged: “Trust us, because we will take care of your precious information better than the other guy.”

On Aug. 21, Square, the mobile-payments start-up helmed by Twitter co-founder Jack Dorsey, announced the release of a new package of analytical tools available for free to any merchant that uses Square.

Small businesses, argued the press release, tend not to have the same access to advanced data crunching as larger operations. Square Analytics “levels the playing field” and “delivers sellers actionable data to increase sales and better serve their customers.” Want to know exactly how much a bad snowstorm affected your cupcake sales, or what kind of advanced coffee products your repeat customers crave the most on Tuesday mornings? Square Analytics has the answers!

A few hours after Square’s announcement, I received an email from a man who handles press relations for Shopkeep, a company that offers point-of-sale processing via the iPad, and has apparently been touting its own small business analytics support for years. Judging by the accusations made in the email, Shopkeep was none too pleased by the debut of Square’s new service.

“Square is more interested in collecting and selling data than it is in helping small businesses grow,” read the email. My correspondent further alleged that Square’s “terms and conditions” gave Square the right to do anything it wanted with the data it collected on retail transactions.

Picture this: I order coffee at a coffee shop that uses Square … Square, not the cafe, seizes the data on that transaction and emails me a receipt. The company can sell that data to the highest bidder — another coffee shop up the street or the closest Starbucks. Then I could get an email from that other coffee shop, not the one I’m a regular at, offering me a discount or some other incentive to come in.

Shopkeep, in contrast, would never do such a dastardly thing.

I contacted Square and asked spokesperson Aaron Zamost if the coffee shop scenario was realistic. Unsurprisingly, he dismissed it out of hand. “No, we do not intend to do this,” said Zamost. “We do not surface, nor do we have any plans to surface individualized transaction data to any sellers besides the one who made the sale. Our sellers trust us to be transparent with them and respectful of what they share with us. If we were to violate their trust, or behave as other companies have been known to, they would leave us.”



I have no evidence to prove or disprove the allegations made by Shopkeep or the defense offered by Square. The interesting point is that the accusation pokes at what is clearly a sore spot in Silicon Valley in 2014. In these post-Snowden days, how tech companies handle data is a volatile issue. In fact, it might be the biggest issue of them all, because Shopkeep and Square are hardly alone in their ability to amass valuable information. Every company that offers a service over your mobile device — whether processing a sale, hiring a car or locating a room to stay in — is in the data business. Everyone is a data broker. As Silicon Valley likes to say, in the 21st century data is the new oil. What rarely gets mentioned, however, is that the oil business, especially when it was just getting started, was very, very dirty.

* * *

Square has a cool product: A plastic card reader that plugs into the headphone jack of your phone and enables anyone with a bank account to start processing credit card transactions. Although Square has yet to turn a profit, and has weathered some bad press in recent months, the company does process $30 billion worth of transactions a year. That’s a lot of information available to crunch.

Of course, there are plenty of companies, starting with the credit card firms themselves, that are already slicing and dicing payment transaction info and offering analysis to whoever can pay for it. Square is just one more player in a very crowded field. But Square is nevertheless emblematic of an important trend — let’s call it the disruptive democratization of data brokering. Once upon a time, a handful of obscure, operating-behind-the-scenes firms dominated the data-brokering business. But now that everything’s digital, everyone with a digital business can be a data broker.

In an increasing number of cases, it appears that the ostensible service offered by the latest free app isn’t actually what the app-maker plans to make money from; it’s just the lure that brings in the good stuff — the monetizable data. Square may be a payments processing company first, but it is rapidly amassing huge amounts of data, which is in itself a valuable commodity, a point confirmed by Square executive Gokul Rajaram to Fortune Magazine earlier this year.

Similarly, Uber is ostensibly a car-hire company but is also poised to know more about our transportation habits than just about any other single player. Almost every app on your phone — even the flashlight app — is simultaneously performing a service for you and gathering data about you.

Increasingly, as the accusations about Square from a competitor demonstrate, we may end up deciding whom we choose for our services based on whether we trust them as responsible safekeepers of our data.

Until this year, most Americans had only the sketchiest knowledge of how huge the marketplace for our personal information is. In May the FTC released a report that looked at the nine biggest data brokers — companies that specialize in amassing huge dossiers on every living person in the Western world. The numbers are startling.

Data brokers collect and store a vast amount of data on almost every U.S. household and commercial transaction. Of the nine data brokers, one data broker’s database has information on 1.4 billion consumer transactions and over 700 billion aggregated data elements; another data broker’s database covers one trillion dollars in consumer transactions; and yet another data broker adds three billion new records each month to its databases.

The big data brokers build their databases by snarfling up every single source of information they can find or buy. Databases operated by federal, state and local governments are an obvious source, but the big data brokers also routinely scrape social media sites and blogs, and also buy commercial databases from a vast variety of enterprises, as well as from other data brokers.

Today, nobody is gathering more information more quickly than the providers of digital services. Surveillance Valley, indeed! Analytics companies know the constellation of apps on your phone, including your every click and swipe, down to the most granular level.

The rules regarding what can be done with this information are in their infancy. For now, we depend largely on what the companies say in their own terms and conditions. But we would be unwise to regard those as permanent, legally binding promises. They can change at any time — something that Facebook has demonstrated repeatedly. What Square says now, in other words, might not be what Square does in the future, especially if the company finds itself in dire need of cash.

When everyone is a data broker, having standardized rules governing what can be done with our information becomes a pressing social priority. Right now it’s just a big mess.

 

Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.

 

http://www.salon.com/2014/08/29/its_not_just_facebook_anymore_in_the_future_your_data_is_always_for_sale/?source=newsletter

Why 2014 could be the year we lose the Internet

Net neutrality is dying, Uber is waging a war on regulations, and Amazon grows stronger by the day
Jeff Bezos, Tim Cook (Credit: Reuters/Gus Ruelas/Robert Galbraith/Photo collage by Salon)

Halfway through 2014, and the influence of technology and Silicon Valley on culture, politics and the economy is arguably bigger than ever — and certainly more hotly debated. Here are Salon’s choices for the five biggest stories of the year.

1) Net neutrality is on the ropes.

So far, 2014 has been nothing but grim for the principle known as “net neutrality” — the idea that the suppliers of Internet bandwidth should not give preferential access (so-called fast lanes) to the providers of Internet services who are willing and able to pay for it. In January, the D.C. Court of Appeals struck down the FCC’s preliminary plan to enforce a weak form of net neutrality. Less than a month later, Comcast, the nation’s largest cable company and broadband Internet service provider, announced its plans to buy Time Warner Cable — and inadvertently gave us a compelling explanation for why net neutrality is so important. A single company with a dominant position in broadband will simply have too much power, something that could have enormous implications for our culture.

The situation continued to degenerate from there. Tom Wheeler, President Obama’s new pick to run the FCC and a former top cable industry lobbyist, unveiled a new plan for net neutrality that was immediately slammed as toothless. In May, AT&T announced plans to merge with DirecTV. Consolidation proceeds apace, and our government appears incapable of managing the consequences.

2) Uber takes over.

After completing its most recent round of financing, Uber is now valued at $18.2 billion. Along with Airbnb, the Silicon Valley start-up has become a standard bearer for the Valley’s cherished allegiance to “disruption.” The established taxi industry is under sustained assault, but Uber has made it clear that the company’s ultimate ambitions go far beyond simply connecting people with rides. Uber has designs on becoming the premier logistics connection platform for getting anything to anyone. What Google is to search, Uber wants to be for moving objects from Point A to Point B. And Google, of course, has a significant financial stake in Uber.



Uber’s path has been bumpy. The company is fighting regulatory battles with municipalities across the world, and its own drivers are increasingly angry at fare cuts, and making sporadic attempts to organize. But the smart money sees Uber as one of the major players of the near future. The “sharing” economy is here to stay.

3) The year of the stream.

Apple bought Beats by Dre. Amazon launched its own streaming music service. Google is planning a new paid streaming offering. Spotify claims 10 million paying customers, and Pandora boasts 75 million listeners every month.

We may end up remembering 2014 as the year that streaming established itself as the dominant way people consume music. The numbers are stark. Streaming is surging, while paid downloads are in free fall.

For consumers, all-you-can-eat services like Spotify are generally marvelous. But it remains astonishing that a full 20 years after the Internet threw the music industry into turmoil, it is still completely unclear how artists and songwriters will make a decent living in an era when music is essentially free.

We also face unanswered questions about the potential implications for what kinds of music get made in an environment where every listen is tracked and every tweet or Facebook like observed. What will Big Data mean for music?

4) Amazon shows its true colors.

What a busy six months for Jeff Bezos! Amazon introduced its own set-top box for TV watching, its own smartphone for insta-shopping, anywhere, any time, and started abusing its near monopoly power to win better terms with publishing companies.

For years, consumer adoration of Amazon’s convenience and low prices fueled the company’s rise. It’s hard, at the midpoint of 2014, to avoid the conclusion that we’ve created a monster. This year, Amazon started getting sustained bad press at the very highest levels. And you know what? Jeff Bezos deserves it.

5) The tech culture wars boil over.

In the first six months of 2014, the San Francisco Bay Area witnessed emotional public hearings about Google shuttle buses, direct action by radicals against technology company executives, bar fights centering on Google Glass wearers, and a steady rise in political heat focused on tech economy-driven gentrification.

As I wrote in April:

Just as the Luddites, despite their failure, spurred the creation of worker-class consciousness, the current Bay Area tech protests have had a pronounced political effect. While the tactics range from savvy, well-organized protest marches to juvenile acts of violence, the impact is clear. The attention of political leaders and the media has been engaged. Everyone is watching.

Ultimately, maybe this will be the biggest story of 2014. This year, numerous voices started challenging the transformative claims of Silicon Valley hype and began grappling with the nitty-gritty details of how all this “disruption” is changing our economy and culture. Don’t expect the second half of 2014 to be any different.

The lost promise of the Internet: Meet the man who almost invented cyberspace

In 1934, a little-known Belgian thinker published plans for a global network that could have changed everything
Paul Otlet with model of the World City, 1943 (Credit: Mundaneum, Mons, Belgium)

Tales of lost horizons always spark the imagination. From the mythical kingdom of Shangri-La to the burning of the library at Alexandria, human history is rife with longing for better worlds that shimmered briefly before slipping out of reach.

Some of us may be starting to feel that way about the Internet. As a handful of corporations continue to consolidate their grip over the network, the optimism of the early indie Web has given way to a much-chronicled backlash.

But what if it all had turned out differently?

In 1934, a little-known Belgian bibliographer named Paul Otlet published his plans for the Mundaneum, a global network that would allow anyone in the world to tap into a vast repository of published information with a device that could send and receive text, display photographs, transcribe speech and auto-translate between languages. Otlet even imagined social networking-like features that would allow anyone to “participate, applaud, give ovations, sing in the chorus.”

Once the Mundaneum took shape, he predicted, “anyone in his armchair will be able to contemplate creation, in whole or in certain of its parts.”

Unpublished drawing of the Mundaneum, 1930s © Mundaneum, Mons, Belgium.

Conceived in the pre-digital era, Otlet’s scheme relied on a crazy quilt of analog technologies like microfilm, telegraph lines, radio transmitters and typewritten index cards. Nonetheless, it anticipated the emergence of a hyperlinked information environment — more than half a century before Tim Berners-Lee released the first Web browser.

Despite Otlet’s remarkable foresight, he remains largely forgotten outside of rarefied academic circles. When the Nazis invaded Belgium in 1940, they destroyed much of his work, helping ensure his descent into historical obscurity (although the Mundaneum museum in Belgium is making great strides toward restoring his legacy). Most of his writing has never been translated into English.



In the years following World War II, a series of English and American computer scientists paved the way for the present-day Internet. Pioneering thinkers like Vannevar Bush, J.C.R. Licklider, Douglas Engelbart, Ted Nelson, Vinton G. Cerf and Robert E. Kahn shaped the contours of its famously flat, open architecture.

Tim Wu recently compared the open Internet to the early American frontier, “a place where anyone with passion or foolish optimism might speak his or her piece or open a business and see what happens.” But just as the unregulated frontier of the 19th century gave rise to the age of robber barons, so the Internet has seen a rapid consolidation of power in the hands of a few corporate winners.

The global tech oligopoly now exerts so much power over our lives that some pundits — like Wu and Danah Boyd — have even argued that some Internet companies should be regulated like public utilities. The Web’s inventor Tim Berners-Lee has sounded the alarm as well, calling for a great “re-decentralization” of his creation.

Otlet’s Mundaneum offers a tantalizing picture of what a different kind of network might have looked like. In contrast to today’s commercialized — and increasingly Balkanized — Internet, he envisioned the network as a public, transnational undertaking, managed by a consortium of governments, universities and international associations. Commercial enterprise played almost no part in it.

Otlet saw the Mundaneum as the central nervous system for a new world order rooted squarely in the public sector. Having played a small role in the formation of the League of Nations after World War I, he believed strongly that a global network should be administered by an international government entity working on behalf of humanity’s shared interests.

That network would do more than just provide access to information; it would serve as a platform for collaboration between governments that would, Otlet believed, help create the necessary conditions for world peace. He even proposed the construction of an ambitious new World City to house that government, with central offices and facilities for hosting international events (like the Olympics), an international university, a world museum and a headquarters of the Mundaneum that would involve a vast documentary operation staffed by a small army of “bibliologists” — a sort of cross between a blogger and a library cataloger — who would collect and curate the world’s information.

Otlet with model of the World City, 1943 © Mundaneum, Mons, Belgium.

Today, the role that Otlet envisioned for the Mundaneum falls largely to for-profit companies like Google, Yahoo, Facebook, Twitter and Amazon, which channel the vast majority of the world’s intellectual output and exert an enormous asymmetric advantage over their users, exploiting vast data stores and proprietary algorithms to make consumers dependent on them to navigate the digital world.

Billions of people may rely on Google’s search engine, but only a handful of well-paid engineers inside the Googleplex understand how it actually works. Similarly, the Facebook newsfeed may sate our appetite for quizzes and cat videos, but few of us will ever plumb the mysteries of its algorithm. And while Amazon may provide book readers with easy access to millions of titles, it is hardly a public library: The company’s primary goal will always be to drive sales, not to support scholarly research.

Otlet saw the network not as a tool for generating wealth, but as a largely commercial-free system to help spur intellectual and social progress. To that end, he wanted to make its contents as freely available and easily searchable as possible. He created an innovative classification scheme called the Universal Decimal Classification, a highly precise system for pinpointing particular topics and creating deep links between related subjects contained in documents, photographs, audio-visual files and other evolving media types.

Otlet coined a term to describe this process of stitching together different media types and technologies: “hyper-documentation” (a term he first used almost 30 years before Ted Nelson invented the term “hypertext” in 1963). He envisioned the entire scheme as a kind of open source catalog that would allow anyone to “be everywhere, see everything … and know everything.”

Overview of Otlet’s cataloging scheme, 1930s © Mundaneum, Mons, Belgium.

Instead of knowing everything, we now seem to know less and less. The sheer mass of data on the Internet — and the difficulty of archiving it in any coherent way — imposes a kind of collective amnesia that makes us ever more reliant on search engines and other filtering tools furnished mostly by the private sector, placing our trust in the dubious permanence of the so-called cloud.

Otlet’s ideas about organizing information may seem anachronistic today. In an age of billions of Web pages, the idea of cataloging all the world’s information seems like a fool’s dream. Moreover, the premise of a single, fixed cataloging scheme — predicated on an assumption that there is a single, knowable version of the Truth — undoubtedly rankles modern sensibilities of cultural relativism.

Otlet’s empirical, top-down approach was rooted squarely in the 19th century ideals of positivism and in the then-prevalent belief in the superiority of scientifically advanced Western culture. But if we can look past the Belle Epoque trappings of these ideas, we can find a deeper, more hopeful aspiration at work.

This is why Paul Otlet still matters. His ideas are more than a matter of historical curiosity; they offer a kind of Platonic ideal of what the network could be: not a channel for the fulfillment of worldly desires, but a vehicle for nobler pursuits such as scholarship, social progress and even spiritual liberation. Shangri-La indeed.

Alex Wright is the author of “Cataloging the World: Paul Otlet and the Birth of the Information Age.”

http://www.salon.com/2014/06/29/the_man_who_almost_invented_the_internet_and_the_lost_promise_of_the_world_wide_web/?source=newsletter

“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption — and paid a cruel price
Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary “The Internet’s Own Boy: The Story of Aaron Swartz,” which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.



That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the “Guerilla Open Access Manifesto” Swartz wrote in 2008, at age 21, led directly to his exaggerated federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been funded largely at public expense, but was now being treated as private property — and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard research fellow at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney for Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony indictments that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching anti-piracy bill that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, looks in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.

”The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

http://www.salon.com/2014/06/24/the_internets_own_boy_how_the_government_destroyed_aaron_swartz/?source=newsletter

Web of the dead: When Facebook profiles of the deceased outnumber the living

After you’re gone, what happens to your social media and data?

There’s been chatter — and even an overly hyped study — predicting the eventual demise of Facebook.

But what about the actual death of Facebook users? What happens when a social media presence lives beyond the grave? Where does the data go?

The folks over at WebpageFX looked into what they called “digital demise,” and made a handy infographic to fully explain what happens to your Web presence when you’ve passed.

It was estimated that 30 million Facebook users died in the first eight years of the social media site’s existence, according to the Huffington Post. Facebook even has settings to memorialize a deceased user’s page.

Facebook isn’t the only site with policies in place to handle a user’s passing. Pinterest, Google, LinkedIn and Twitter all handle death and data differently. For instance, to deactivate a Facebook profile you must provide proof that you are an immediate family member; for Twitter, however, you must produce the death certificate and your identification. All of the sites pinpointed by WebpageFX stated that your data belongs to you — some with legal or family exceptions.

Social media sites are, in general, a young Internet phenomenon — Facebook only turned 10 this year. So are a majority of their users. (And according to Mashable, Facebook still has a large number of teen adopters.) Currently, profiles of the living far outnumber those of the dead.



However, according to calculations done by XKCD, that will not always be the case. The comic presented two hypothetical scenarios. If Facebook loses its “cool” and market share, dead users will outnumber the living in 2065. If Facebook keeps up its growth, the site won’t become a digital graveyard until the mid-2100s.
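For a rough sense of how such a projection works, here is a minimal toy sketch in Python. It is not XKCD’s actual model; every number in it (cohort sizes, sign-up volumes, the mortality curve) is invented for illustration. It tracks user cohorts by age, lets mortality rise with age, and reports the first year in which accumulated dead profiles outnumber the living under each sign-up scenario.

```python
# Toy projection of when dead profiles might outnumber living users.
# All numbers are invented for illustration; this is not XKCD's actual model.

def mortality(age):
    """Crude annual probability of death that rises steeply with age."""
    return min(1.0, 0.0005 * 1.09 ** age)

def crossover_year(signup_decay, start_year=2014, horizon=2300):
    """Return the first year in which accumulated dead profiles exceed
    living users, given how quickly yearly sign-ups fade (decay < 1.0)."""
    # Start with roughly 1.3 billion users, most of them fairly young.
    living = {age: 1.3e9 / 30 for age in range(15, 45)}
    dead, signups = 3.0e7, 1.5e8
    for year in range(start_year, horizon):
        aged = {}
        for age, count in living.items():
            deaths = count * mortality(age)
            dead += deaths
            aged[age + 1] = aged.get(age + 1, 0) + count - deaths
        aged[20] = aged.get(20, 0) + signups  # new users join at about age 20
        living = aged
        signups *= signup_decay               # decay < 1.0 models fading growth
        if dead > sum(living.values()):
            return year
    return None

print(crossover_year(signup_decay=0.7))  # scenario: the site loses its "cool"
print(crossover_year(signup_decay=1.0))  # scenario: sign-ups keep coming
```

The shape of the result depends entirely on those made-up parameters, but the mechanic is the same as in any such estimate: if new sign-ups dry up, the existing cohort simply ages into its mortality curve and the crossover arrives within decades; if sign-ups persist, the living population keeps being replenished and the crossover is pushed much further out.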

Check out the fascinating infographic here.

h/t Mashable

http://www.salon.com/2014/06/24/web_of_the_dead_when_facebook_profiles_of_the_deceased_outnumber_the_living/?source=newsletter

Death of a libertarian fantasy: Why dreams of a digital utopia are rapidly fading away

Free-market enthusiasts have long predicted that technology would liberate humanity. It turns out they were wrong
Ron Paul (Credit: AP/Cliff Owen/objectifphoto via iStock/Salon)

There is no mystery why libertarians love the Internet and all the freedom-enhancing applications, from open source software to Bitcoin, that thrive in its nurturing embrace. The Internet routes around censorship. It enables peer-to-peer connections that scoff at arbitrary geographical boundaries. It provides infinite access to do-it-yourself information. It fuels dreams of liberation from totalitarian oppression. Give everyone a smartphone, and dictators will fall! (Hell, you can even download the code that will let you 3-D print a gun.)

Libertarian nirvana: It’s never more than a mouse-click away.

So, no mystery, sure. But there is a paradox. The same digital infrastructure that was supposed to enable freedom turns out to be remarkably effective at control as well. Privacy is an illusion, surveillance is everywhere, and increasingly, black-box big-data-devouring algorithms guide and shape our behavior and consumption. The instrument of our liberation turns out to be doing double-duty as the facilitator of a Panopticon. 3-D printer or no, you better believe somebody is watching you download your guns.

Facebook delivered a fresh reminder of this unfortunate evolution earlier this week. On Thursday, it announced, with much fanfare and plenty of admiring media coverage, that it was going to allow users to opt out of certain kinds of targeted ads. Stripped of any context, this would normally be considered a good thing. (Come to think of it, are there any two words, excluding “Ayn Rand,” that capture the essence of libertarianism better than “opt out”?)

Of course, the announcement about opting out was just a bait-and-switch designed to distract people from the fact that Facebook was actually vastly increasing the omniscience of its ongoing ad-targeting program. Even as it dangled the opt-out olive branch, Facebook also revealed that it would now start incorporating your entire browsing history, as well as information gathered by your smartphone apps, into its ad targeting database. (Previously, ads served by Facebook limited themselves to targeting users based on their activity on Facebook. Now, everything goes!)



The move was classic Facebook: A design change that — as Jeff Chester, executive director of the Center for Digital Democracy, told the Washington Post — constitutes “a dramatic expansion of its spying on users.”

Of course, even while Facebook is spying on us, we certainly could be using Facebook to organize against dictators, or to follow 3-D gun maestro Cody Wilson, or to topple annoyingly un-libertarian congressional House majority leaders.

It’s confusing, this situation we’re in, where the digital tools of liberation are simultaneously tools of manipulation. It would be foolish to say that there is no utility to our digital infrastructure. But we need to at least ask ourselves the question: Is it possible that, in some important ways, we are less free now than before the Internet entered our lives? Because it’s not just Facebook that is spying on us; it’s everyone.

* * *

A week or so ago, I received a tweet from someone who had apparently read a story in which I was critical of the “sharing” economy:

I’ll be honest — I’m not exactly sure what “gun-yielding regulator thugs” are. (Maybe he meant gun-wielding?) But I was intrigued by the combination of the right to constitutionally guaranteed “free association” with the right of companies like Airbnb and Uber to operate free of regulatory oversight. The “sharing” economy is often marketed as some kind of hippy-dippy post-capitalist paradise — full of sympathy and trust abounding — but it is also apparent that the popularity of these services taps into a deep reservoir of libertarian yearning. In the libertopia, we’ll dispense with government and even most corporations. All we’ll need will be convenient platforms that enable us to contract with each other for every essential service!

But what’s missing here is the realization that those ever-so-convenient platforms are actually far more intrusive and potentially oppressive than the incumbent regimes that they are displacing. Operating on a global scale, companies like Airbnb and Uber are amassing vast databases of information about what we do and where we go. They are even figuring out the kind of people that we are, through our social media profiles and the ratings and reputation systems that they deploy to enforce good behavior. They have our credit card numbers and real names and addresses. They’re inside our phones. The cab driver you paid with cash last year was an entirely anonymous transactor. Not so for the ride on Lyft or Uber. The sharing economy, it turns out, is an integral part of the surveillance economy. In our race to let Silicon Valley mediate every consumer experience, we are voluntarily imprisoning ourselves in the Panopticon.

The more data we generate, the more we open ourselves up to manipulation based on how that data is investigated and acted upon by algorithmic rules. Earlier this month, Slate published a fascinating article, titled “Data-Driven Discrimination: How algorithms can perpetuate poverty and inequality.”

It reads:

Unlike the mustache-twiddling racists of yore, conspiring to segregate and exploit particular groups, redlining in the Information Age can happen at the hand of well-meaning coders crafting exceedingly complex algorithms. One reason is that algorithms learn from one another and iterate into new forms, making them inscrutable even to the coders responsible for creating them; that makes it harder for concerned parties to find the smoking gun of wrongdoing.

A potential example of such information redlining:

A transportation agency may pledge to open public transit data to inspire the creation of applications like “Next Bus,” which simplify how we plan trips and save time. But poorer localities often lack the resources to produce or share transit data, meaning some neighborhoods become dead zones—places your smartphone won’t tell you to travel to or through.
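The mechanic behind that kind of dead zone is easy to sketch in a few lines of code. Here is a hypothetical toy example (the neighborhood names and feed data are made up): a trip planner that only routes through places with a published transit feed silently erases everywhere else, even when buses actually run there.

```python
# Hypothetical toy example: a trip planner that only "sees" neighborhoods
# with published transit feeds. Places without data drop out of every route.
feeds = {
    "downtown": ["route_10", "route_22"],
    "midtown": ["route_10"],
    # "eastside" has bus service too, but its agency never published a feed.
}

def reachable(neighborhoods):
    """Return only the neighborhoods the planner can route to."""
    return [n for n in neighborhoods if feeds.get(n)]

print(reachable(["downtown", "midtown", "eastside"]))
# ['downtown', 'midtown'] -- "eastside" becomes a dead zone, not because no
# bus goes there, but because no data about it exists.
```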

And that’s a well-meaning example of how an algorithm can go awry! What happens when algorithms are designed purposely to discriminate? The most troubling aspect of our new information infrastructure is that the opportunities to manipulate us via our data are greatly expanded in an age of digital intermediation. The recommendations we get from Netflix or Amazon, the ads we see on Facebook, the search results we generate on Google — they’re all connected and influenced by hard data on what we read and buy and watch and seek. Is this freedom? Or is it a more insidious set of constraints than we could ever possibly have imagined the first time we logged in and started exploring the online universe?