Net neutrality is dying, Uber is waging a war on regulations, and Amazon grows stronger by the day

Why 2014 could be the year we lose the Internet

Jeff Bezos, Tim Cook (Credit: Reuters/Gus Ruelas/Robert Galbraith/Photo collage by Salon)

Halfway through 2014, and the influence of technology and Silicon Valley on culture, politics and the economy is arguably bigger than ever — and certainly more hotly debated. Here are Salon’s choices for the five biggest stories of the year.

1) Net neutrality is on the ropes.

So far, 2014 has been nothing but grim for the principle known as “net neutrality” — the idea that the suppliers of Internet bandwidth should not give preferential access (so-called fast lanes) to the providers of Internet services who are willing and able to pay for it. In January, the D.C. Circuit Court of Appeals struck down the FCC’s existing rules enforcing a weak form of net neutrality. Less than a month later, Comcast, the nation’s largest cable company and broadband Internet service provider, announced its plans to buy Time Warner Cable — and inadvertently gave us a compelling explanation for why net neutrality is so important. A single company with a dominant position in broadband will simply have too much power, something that could have enormous implications for our culture.

The situation continued to degenerate from there. Tom Wheeler, President Obama’s new pick to run the FCC and a former top cable industry lobbyist, unveiled a new plan for net neutrality that was immediately slammed as toothless. In May, AT&T announced plans to merge with DirecTV. Consolidation proceeds apace, and our government appears incapable of managing the consequences.

2) Uber takes over.

After completing its most recent round of financing, Uber is now valued at $18.2 billion. Along with Airbnb, the Silicon Valley start-up has become a standard bearer for the Valley’s cherished allegiance to “disruption.” The established taxi industry is under sustained assault, but Uber has made it clear that the company’s ultimate ambitions go far beyond simply connecting people with rides. Uber has designs on becoming the premier logistics connection platform for getting anything to anyone. What Google is to search, Uber wants to be for moving objects from Point A to Point B. And Google, of course, has a significant financial stake in Uber.



Uber’s path has been bumpy. The company is fighting regulatory battles with municipalities across the world, and its own drivers are increasingly angry at fare cuts, and making sporadic attempts to organize. But the smart money sees Uber as one of the major players of the near future. The “sharing” economy is here to stay.

3) The year of the stream.

Apple bought Beats by Dre. Amazon launched its own streaming music service. Google is planning a new paid streaming offering. Spotify claims 10 million paying subscribers, and Pandora boasts 75 million listeners every month.

We may end up remembering 2014 as the year that streaming established itself as the dominant way people consume music. The numbers are stark. Streaming is surging, while paid downloads are in free fall.

For consumers, all-you-can-eat services like Spotify are generally marvelous. But it remains astonishing that a full 20 years after the Internet threw the music industry into turmoil, it is still completely unclear how artists and songwriters will make a decent living in an era when music is essentially free.

We also face unanswered questions about the potential implications for what kinds of music get made in an environment where every listen is tracked and every tweet or Facebook like observed. What will Big Data mean for music?

4) Amazon shows its true colors.

What a busy six months for Jeff Bezos! Amazon introduced its own set-top box for TV watching and its own smartphone for insta-shopping anywhere, any time, and started abusing its near-monopoly power to win better terms from publishing companies.

For years, consumer adoration of Amazon’s convenience and low prices fueled the company’s rise. It’s hard, at the midpoint of 2014, to avoid the conclusion that we’ve created a monster. This year, Amazon started getting sustained bad press at the very highest levels. And you know what? Jeff Bezos deserves it.

5) The tech culture wars boil over.

In the first six months of 2014, the San Francisco Bay Area witnessed emotional public hearings about Google shuttle buses, direct action by radicals against technology company executives, bar fights centering on Google Glass wearers, and a steady rise in political heat focused on tech economy-driven gentrification.

As I wrote in April:

Just as the Luddites, despite their failure, spurred the creation of working-class consciousness, the current Bay Area tech protests have had a pronounced political effect. While the tactics range from savvy, well-organized protest marches to juvenile acts of violence, the impact is clear. The attention of political leaders and the media has been engaged. Everyone is watching.

Ultimately, maybe this will be the biggest story of 2014. This year, numerous voices started challenging the transformative claims of Silicon Valley hype and began grappling with the nitty-gritty details of how all this “disruption” is changing our economy and culture. Don’t expect the second half of 2014 to be any different.

The lost promise of the Internet

Meet the man who almost invented cyberspace

In 1934, a little-known Belgian thinker published plans for a global network that could have changed everything

Paul Otlet with model of the World City, 1943 (Credit: Mundaneum, Mons, Belgium)

Tales of lost horizons always spark the imagination. From the mythical kingdom of Shangri-La to the burning of the library at Alexandria, human history is rife with longing for better worlds that shimmered briefly before slipping out of reach.

Some of us may be starting to feel that way about the Internet. As a handful of corporations continue to consolidate their grip over the network, the optimism of the early indie Web has given way to a much-chronicled backlash.

But what if it all had turned out differently?

In 1934, a little-known Belgian bibliographer named Paul Otlet published his plans for the Mundaneum, a global network that would allow anyone in the world to tap into a vast repository of published information with a device that could send and receive text, display photographs, transcribe speech and auto-translate between languages. Otlet even imagined social networking-like features that would allow anyone to “participate, applaud, give ovations, sing in the chorus.”

Once the Mundaneum took shape, he predicted, “anyone in his armchair will be able to contemplate creation, in whole or in certain of its parts.”


Unpublished drawing of the Mundaneum, 1930s © Mundaneum, Mons, Belgium.

Conceived in the pre-digital era, Otlet’s scheme relied on a crazy quilt of analog technologies like microfilm, telegraph lines, radio transmitters and typewritten index cards. Nonetheless, it anticipated the emergence of a hyperlinked information environment — more than half a century before Tim Berners-Lee released the first Web browser.

Despite Otlet’s remarkable foresight, he remains largely forgotten outside of rarefied academic circles. When the Nazis invaded Belgium in 1940, they destroyed much of his work, helping ensure his descent into historical obscurity (although the Mundaneum museum in Belgium is making great strides toward restoring his legacy). Most of his writing has never been translated into English.



In the years following World War II, a series of English and American computer scientists paved the way for the present-day Internet. Pioneering thinkers like Vannevar Bush, J.C.R. Licklider, Douglas Engelbart, Ted Nelson, Vinton G. Cerf and Robert E. Kahn shaped the contours of today’s network, with its famously flat, open architecture.

Tim Wu recently compared the open Internet to the early American frontier, “a place where anyone with passion or foolish optimism might speak his or her piece or open a business and see what happens.” But just as the unregulated frontier of the 19th century gave rise to the age of robber barons, so the Internet has seen a rapid consolidation of power in the hands of a few corporate winners.

The global tech oligopoly now exerts so much power over our lives that some pundits — like Wu and Danah Boyd — have even argued that some Internet companies should be regulated like public utilities. The Web’s inventor Tim Berners-Lee has sounded the alarm as well, calling for a great “re-decentralization” of his creation.

Otlet’s Mundaneum offers a tantalizing picture of what a different kind of network might have looked like. In contrast to today’s commercialized — and increasingly Balkanized — Internet, he envisioned the network as a public, transnational undertaking, managed by a consortium of governments, universities and international associations. Commercial enterprise played almost no part in it.

Otlet saw the Mundaneum as the central nervous system for a new world order rooted squarely in the public sector. Having played a small role in the formation of the League of Nations after World War I, he believed strongly that a global network should be administered by an international government entity working on behalf of humanity’s shared interests.

That network would do more than just provide access to information; it would serve as a platform for collaboration between governments that would, Otlet believed, help create the necessary conditions for world peace. He even proposed the construction of an ambitious new World City to house that government, with central offices, facilities for hosting international events (like the Olympics), an international university, a world museum and a headquarters for the Mundaneum itself: a vast documentary operation staffed by a small army of “bibliologists” — a sort of cross between a blogger and a library cataloger — who would collect and curate the world’s information.


Otlet with model of the World City, 1943 © Mundaneum, Mons, Belgium.

Today, the role that Otlet envisioned for the Mundaneum falls largely to for-profit companies like Google, Yahoo, Facebook, Twitter and Amazon, who channel the vast majority of the world’s intellectual output by exerting an enormous asymmetric advantage over their users, exploiting their vast data stores and proprietary algorithms to make consumers dependent on them to navigate the digital world.

Billions of people may rely on Google’s search engine, but only a handful of well-paid engineers inside the Googleplex understand how it actually works. Similarly, the Facebook newsfeed may sate our appetite for quizzes and cat videos, but few of us will ever plumb the mysteries of its algorithm. And while Amazon may provide book readers with easy access to millions of titles, it is hardly a public library: The company’s primary goal will always be to drive sales, not to support scholarly research.

Otlet saw the network not as a tool for generating wealth, but as a largely commercial-free system to help spur intellectual and social progress. To that end, he wanted to make its contents as freely available and easily searchable as possible. He created an innovative classification scheme called the Universal Decimal Classification, a highly precise system for pinpointing particular topics and creating deep links between related subjects contained in documents, photographs, audio-visual files and other evolving media types.

Otlet coined a term to describe this process of stitching together different media types and technologies: “hyper-documentation,” which he first used almost 30 years before Ted Nelson coined “hypertext” in 1963. He envisioned the entire scheme as a kind of open-source catalog that would allow anyone to “be everywhere, see everything … and know everything.”
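To make the idea of deep links between subjects slightly more concrete, here is a minimal sketch in Python of how a UDC-style compound number can relate two topics at once. The handful of class labels roughly follows the UDC’s broad main classes (34 for law, 32 for politics), but the example is illustrative only, not a reconstruction of Otlet’s actual tables.

```python
# Toy illustration of UDC-style "deep linking": the colon sign in a
# compound number relates two subject classes, so one document becomes
# findable from either topic. Labels are simplified; this is not a
# reconstruction of Otlet's actual tables.

UDC_CLASSES = {
    "02": "Librarianship",
    "32": "Politics",
    "34": "Law",
    "78": "Music",
}

def related_subjects(compound_number: str) -> list[str]:
    """Split a compound number like '34:32' into the subjects it links."""
    return [UDC_CLASSES.get(part, f"unknown class {part}")
            for part in compound_number.split(":")]

if __name__ == "__main__":
    # A document classed '34:32' sits at the intersection of Law and
    # Politics -- a primitive, pre-digital hyperlink between subjects.
    print(related_subjects("34:32"))   # ['Law', 'Politics']
```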


Overview of Otlet’s cataloging scheme, 1930s © Mundaneum, Mons, Belgium.

Instead of knowing everything, we now seem to know less and less. The sheer mass of data on the Internet — and the difficulty of archiving it in any coherent way — imposes a kind of collective amnesia that makes us ever more reliant on search engines and other filtering tools furnished mostly by the private sector, placing our trust in the dubious permanence of the so-called cloud.

Otlet’s ideas about organizing information may seem anachronistic today. In an age of billions of Web pages, the idea of cataloging all the world’s information seems like a fool’s dream. Moreover, the premise of a single, fixed cataloging scheme — predicated on an assumption that there is a single, knowable version of the Truth — undoubtedly rankles modern sensibilities of cultural relativism.

Otlet’s empirical, top-down approach was rooted squarely in the 19th-century ideals of positivism and in the then-prevalent belief in the superiority of scientifically advanced Western culture. But if we can look past the Belle Epoque trappings of these ideas, we can find a deeper, more hopeful aspiration at work.

This is why Paul Otlet still matters. His ideas are more than a matter of historical curiosity; they are a kind of Platonic ideal of what the network could be: not a channel for the fulfillment of worldly desires, but a vehicle for nobler pursuits — scholarship, social progress and even spiritual liberation. Shangri-La indeed.

Alex Wright is the author of “Cataloging the World: Paul Otlet and the Birth of the Information Age.”

http://www.salon.com/2014/06/29/the_man_who_almost_invented_the_internet_and_the_lost_promise_of_the_world_wide_web/?source=newsletter

“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption — and paid a cruel price

"The Internet's Own Boy": How the government destroyed Aaron Swartz
Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary “The Internet’s Own Boy: The Story of Aaron Swartz,” which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.



That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the “Guerilla Open Access Manifesto” Swartz wrote in 2008, at age 21, led directly to his exaggerated federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a research fellow at Harvard at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney in Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony counts that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching anti-piracy bill that would have expanded government oversight of the Internet and that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, looks in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.

”The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

http://www.salon.com/2014/06/24/the_internets_own_boy_how_the_government_destroyed_aaron_swartz/?source=newsletter

After you’re gone, what happens to your social media and data?

Web of the dead: When Facebook profiles of the deceased outnumber the living


There’s been chatter — and even an overly hyped study — predicting the eventual demise of Facebook.

But what about the actual death of Facebook users? What happens when a social media presence lives beyond the grave? Where does the data go?

The folks over at WebpageFX looked into what they called “digital demise,” and made a handy infographic to fully explain what happens to your Web presence when you’ve passed.

It was estimated that 30 million Facebook users died in the first eight years of the social media site’s existence, according to the Huffington Post. Facebook even has settings to memorialize a deceased user’s page.

Facebook isn’t the only site with policies in place to handle a user’s passing. Pinterest, Google, LinkedIn and Twitter all handle death and data differently. For instance, to deactivate a Facebook profile you must provide proof that you are an immediate family member; for Twitter, however, you must produce the death certificate and your identification. All of the sites pinpointed by WebpageFX stated that your data belongs to you — some with legal or family exceptions.

Social media sites are, in general, a young Internet phenomenon — Facebook only turned 10 this year. So are a majority of their users. (And according to Mashable, Facebook still has a large number of teen adopters.) Currently, profiles of the living far outnumber those of the dead.



However, according to calculations done by XKCD, that will not always be the case. The site presented two hypothetical scenarios: if Facebook loses its “cool” and market share, dead users will outnumber the living by 2065; if Facebook keeps up its growth, the site won’t become a digital graveyard until the mid-2100s.
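For readers curious what such a calculation involves, here is a rough Python sketch of the general idea: add up expected deaths among the user base year by year and watch for the point where the running total overtakes the living. Every number in it is an invented placeholder; this is not XKCD’s model or Facebook’s data.

```python
# Toy projection: in what year do accumulated dead profiles outnumber
# living users? All parameters below are invented placeholders; this is
# neither XKCD's model nor Facebook's data.

def crossover_year(living, growth_rate, mortality_rate,
                   start_year=2014, horizon=2300):
    """Return the first year cumulative dead profiles exceed living users,
    or None if it never happens before the horizon."""
    dead = 0.0
    for year in range(start_year, horizon):
        dead += living * mortality_rate   # deaths among this year's users
        living *= 1 + growth_rate         # user base grows or shrinks
        if dead > living:
            return year
    return None

if __name__ == "__main__":
    # Scenario 1: the site slowly shrinks; the dead catch up within decades.
    print(crossover_year(living=1.3e9, growth_rate=-0.02, mortality_rate=0.008))
    # Scenario 2: sustained growth pushes the crossover past the horizon.
    print(crossover_year(living=1.3e9, growth_rate=0.03, mortality_rate=0.008))
```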

Check out the fascinating infographic here.

h/t Mashable

http://www.salon.com/2014/06/24/web_of_the_dead_when_facebook_profiles_of_the_deceased_outnumber_the_living/?source=newsletter

Death of a libertarian fantasy: Why dreams of a digital utopia are rapidly fading away

Free-market enthusiasts have long predicted that technology would liberate humanity. It turns out they were wrong

Ron Paul (Credit: AP/Cliff Owen/objectifphoto via iStock/Salon)

There is no mystery why libertarians love the Internet and all the freedom-enhancing applications, from open source software to Bitcoin, that thrive in its nurturing embrace. The Internet routes around censorship. It enables peer-to-peer connections that scoff at arbitrary geographical boundaries. It provides infinite access to do-it-yourself information. It fuels dreams of liberation from totalitarian oppression. Give everyone a smartphone, and dictators will fall! (Hell, you can even download the code that will let you 3-D print a gun.)

Libertarian nirvana: It’s never more than a mouse-click away.

So, no mystery, sure. But there is a paradox. The same digital infrastructure that was supposed to enable freedom turns out to be remarkably effective at control as well. Privacy is an illusion, surveillance is everywhere, and increasingly, black-box big-data-devouring algorithms guide and shape our behavior and consumption. The instrument of our liberation turns out to be doing double-duty as the facilitator of a Panopticon. 3-D printer or no, you better believe somebody is watching you download your guns.

Facebook delivered a fresh reminder of this unfortunate evolution earlier this week. On Thursday, it announced, with much fanfare and plenty of admiring media coverage, that it was going to allow users to opt out of certain kinds of targeted ads. Stripped of any context, this would normally be considered a good thing. (Come to think of it, are there any two words, excluding “Ayn Rand,” that capture the essence of libertarianism better than “opt out”?)

Of course, the announcement about opting out was just a bait-and-switch designed to distract people from the fact that Facebook was actually vastly increasing the omniscience of its ongoing ad-targeting program. Even as it dangled the opt-out olive branch, Facebook also revealed that it would now start incorporating your entire browsing history, as well as information gathered by your smartphone apps, into its ad targeting database. (Previously, ads served by Facebook limited themselves to targeting users based on their activity on Facebook. Now, everything goes!)



The move was classic Facebook: A design change that — as Jeff Chester, executive director of the Center for Digital Democracy, told the Washington Post – constitutes “a dramatic expansion of its spying on users.”

Of course, even while Facebook is spying on us, we certainly could be using Facebook to organize against dictators, or to follow 3-D gun maestro Cody Wilson, or to topple annoyingly un-libertarian congressional House majority leaders.

It’s confusing, this situation we’re in, where the digital tools of liberation are simultaneously tools of manipulation. It would be foolish to say that there is no utility to our digital infrastructure. But we need to at least ask ourselves the question — is it possible that in some important ways, we are less free now than before the Internet entered our lives? Because it’s not just Facebook that is spying on us; it’s everyone.

* * *

A week or so ago, I received a tweet from someone who had apparently read a story in which I was critical of the “sharing” economy:

I’ll be honest — I’m not exactly sure what “gun-yielding regulator thugs” are. (Maybe he meant gun-wielding?) But I was intrigued by the combination of the right to constitutionally guaranteed “free association” with the right of companies like Airbnb and Uber to operate free of regulatory oversight. The “sharing” economy is often marketed as some kind of hippy-dippy post-capitalist paradise — full of sympathy and abounding trust — but it is also apparent that the popularity of these services taps into a deep reservoir of libertarian yearning. In the libertopia, we’ll dispense with government and even most corporations. All we’ll need will be convenient platforms that enable us to contract with each other for every essential service!

But what’s missing here is the realization that those ever-so-convenient platforms are actually far more intrusive and potentially oppressive than the incumbent regimes that they are displacing. Operating on a global scale, companies like Airbnb and Uber are amassing vast databases of information about what we do and where we go. They are even figuring out the kind of people that we are, through our social media profiles and the ratings and reputation systems that they deploy to enforce good behavior. They have our credit card numbers and real names and addresses. They’re inside our phones. The cab ride you paid for with cash last year was an entirely anonymous transaction. Not so for the ride on Lyft or Uber. The sharing economy, it turns out, is an integral part of the surveillance economy. In our race to let Silicon Valley mediate every consumer experience, we are voluntarily imprisoning ourselves in the Panopticon.

The more data we generate, the more we open ourselves up to manipulation based on how that data is investigated and acted upon by algorithmic rules. Earlier this month, Slate published a fascinating article, titled “Data-Driven Discrimination: How algorithms can perpetuate poverty and inequality.”

It reads:

Unlike the mustache-twiddling racists of yore, conspiring to segregate and exploit particular groups, redlining in the Information Age can happen at the hand of well-meaning coders crafting exceedingly complex algorithms. One reason is that, because algorithms learn from one another and iterate into new forms, making them inscrutable even to the coders responsible for creating them, it’s harder for concerned parties to find the smoking gun of wrongdoing.

A potential example of such information redlining:

A transportation agency may pledge to open public transit data to inspire the creation of applications like “Next Bus,” which simplify how we plan trips and save time. But poorer localities often lack the resources to produce or share transit data, meaning some neighborhoods become dead zones—places your smartphone won’t tell you to travel to or through.

And that’s a well-meaning example of how an algorithm can go awry! What happens when algorithms are designed purposely to discriminate? The most troubling aspect of our new information infrastructure is that the opportunities to manipulate us via our data are greatly expanded in an age of digital intermediation. The recommendations we get from Netflix or Amazon, the ads we see on Facebook, the search results we generate on Google — they’re all connected and influenced by hard data on what we read and buy and watch and seek. Is this freedom? Or is it a more insidious set of constraints than we could ever possibly have imagined the first time we logged in and started exploring the online universe?

How the digitization of everything means that corporations can monitor your every move

In the future, insurance companies will make sure that you exercise  

(Credit: YouraPechkin via iStock/Salon)

Imagine this scene: You go to the kitchen for a beer, but your home has automatically locked the fridge; a text explains that you’ve already exceeded your calorie count for the day. You consider driving to a bar, but the beer you already drank sets off your car’s alcohol sensors: Driving privileges revoked! You decide to fire up the BBQ, but the gas line to your grill has been shut off — likely because you set off your smoke alarm twice this week while frying bacon. In the future, you can’t hide from the Internet of Things.

The Internet of Things is a venerable (in Internet time) catchphrase commonly used to describe a universe of connected hardware devices all talking to each other. In your home, for example, your fridge, your coffee maker and your baby monitor will all be two-way “smart” devices — configurable from afar and constantly transmitting data back into the cloud. Want to turn up the heat before you get home from work? No problem!  Curious about your energy consumption habits? Google will help you analyze the data! But that’s just the beginning.  The Internet of Things will also be wearable — full of fitness trackers and smart-watches and Google Glass-style devices that keep tabs on your health and whereabouts. The Internet of Things is already in your car, if you happened to have purchased one recently. Just Google the word “telematics.”
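As a minimal sketch of what a “two-way smart device” amounts to in practice, here is a toy Python example of a thermostat object that accepts configuration from afar and reports telemetry upstream. The device name, message fields and values are invented for illustration; real products use their own protocols and cloud services.

```python
# Toy sketch of a "two-way" smart device: it accepts configuration from
# afar and continuously reports telemetry upstream. Names, fields, and
# values are invented for illustration only.

import json
import time

class SmartThermostat:
    def __init__(self, device_id: str, setpoint_f: float = 68.0):
        self.device_id = device_id
        self.setpoint_f = setpoint_f

    def apply_remote_config(self, message: str) -> None:
        """Handle a downstream command, e.g. raising the heat before you get home."""
        config = json.loads(message)
        self.setpoint_f = config.get("setpoint_f", self.setpoint_f)

    def telemetry(self, current_temp_f: float) -> str:
        """Produce the upstream report a cloud service would log and analyze."""
        return json.dumps({
            "device_id": self.device_id,
            "timestamp": int(time.time()),
            "current_temp_f": current_temp_f,
            "setpoint_f": self.setpoint_f,
        })

if __name__ == "__main__":
    stat = SmartThermostat("living-room")
    stat.apply_remote_config('{"setpoint_f": 72}')  # "turn up the heat before I get home"
    print(stat.telemetry(current_temp_f=66.5))      # the data that ends up in the cloud
```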

Hype over the Internet of Things has been flowing out of Silicon Valley for at least 15 years, but this week the rhetoric bumped up a couple of notches, after a rumor made the rounds that Apple is planning to roll out a new software platform that will turn your iPhone into a remote control for each and every device in your house. More than one industry analyst interpreted the purported move as a strategic response to Google’s $3.2 billion cash acquisition of Nest, the “smart” thermostat and smoke alarm maker, in January — a move that many took as a sign that the Internet of Things was finally upon us.



(Silicon Valley and San Francisco are already getting their own dedicated cellular network, just to accommodate the tremendous outgrowth of traffic that such “things” might require.)

The Silicon Valley utopians tell us to get ready for a glorious future in which our “things” figure out what we want before we know we want it. Our thermostats will “learn” when we typically get up in the morning and automatically turn up the heat. Our fitness trackers will count the calories we consume and burn, watch our heart rates and make sure we’re sleeping right. Our cars will know where the nearest highly rated restaurants on Yelp are or whether we’re getting sleepy behind the wheel. Per this vision, the Internet of Things will work for us, enhancing the safety and quality of our lives.

There are qualms, of course. For one thing, the Internet of Things is eminently hackable. (Do we really want Internet script-kiddies with the power to turn off our refrigerators, jack up our thermostats or verbally abuse our families through the baby monitor?) But that’s just surface distraction. There’s a much deeper problem with the Internet of Things: It won’t be deployed merely to work for us. It will also be deployed to control us.

Why? Because that’s how the next generation of insurance companies is going to turn a profit.

The Internet of Things is poised to become the most efficient tool ever invented for helping insurance companies figure out potential risks, adjust their premiums accordingly, and reward — or punish — us for straying beyond the bounds of properly prescribed behavior.

Your smoke alarm went off three times this month? Your home insurance provider is not going to like that. You typically drive 10 miles faster than the speed limit in residential areas? Expect to get a rap on the knuckles from Geico or State Farm. You stayed in all weekend sucking down beers? Your healthcare insurer will not be pleased.

That’s the future promised by ubiquitous connected devices — a future in which insurance companies know everything about our lives, and have extraordinary powers to use that information to influence how we live.

“Insurance is going to be the native business model for the Internet of Things,” said Tim O’Reilly, founder and CEO of O’Reilly Media, at a conference in San Francisco last Thursday.

In other words, the real money to be made from the Internet of Things won’t be coming from the sale of hardware devices. It will come from exploiting the data gathered by those devices. And if what O’Reilly is suggesting turns out to be correct, then the most avid consumers of that data will be insurance companies themselves.

* * *

The name of the game in insurance is properly assessing risk. Are you a safe driver? A responsible homeowner? Do you take good care of your body? How long are you likely to live?

Want to buy auto insurance? The first thing your provider will do is check to see how many speeding tickets and other traffic violations are on your record. Do you live in a flood zone or an area prone to earthquakes? Your home insurance premiums will be more expensive (if coverage is available at all). And prior to the passage of the Affordable Care Act, preexisting health conditions meant you could just forget about getting any kind of reasonably priced health plan.

Whichever company assesses risk the most accurately and prices its coverage accordingly makes the most money. What the Internet of Things promises in that regard is access to a vastly expanded universe of data that can be applied to creating “risk profiles.” The Internet of Things will enable a new kind of redlining. Discrimination based on your race or location will be replaced by discrimination tied to the data you generate.

This is already being implemented in auto coverage, through so-called pay-as-you-drive plans that monitor your vehicle usage through information transmitted directly back to your insurance provider by technology incorporated in your car. We’re going to see a lot more of this, for obvious reasons.

(And, to be fair, that’s not necessarily a bad thing — if the Internet of Things knows when you’re too inebriated to drive, for example.)
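As a rough sketch of how a pay-as-you-drive plan might turn telematics data into a price, here is a simplified Python example. The base premium, the risk weights and the event counts are all hypothetical; a real insurer’s actuarial model would be far more involved.

```python
# Hypothetical sketch of usage-based ("pay-as-you-drive") pricing:
# a base premium is scaled by a risk factor computed from telematics data.
# All weights and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class TelematicsSummary:
    miles_driven: float        # total miles in the billing period
    hard_braking_events: int   # sudden decelerations logged by the device
    night_miles: float         # miles driven late at night
    speeding_minutes: float    # minutes spent over the posted limit

def risk_factor(t: TelematicsSummary) -> float:
    """Combine driving behavior into a multiplier around 1.0 (made-up weights)."""
    factor = 1.0
    factor += 0.002 * t.hard_braking_events
    factor += 0.10 * (t.night_miles / max(t.miles_driven, 1.0))
    factor += 0.001 * t.speeding_minutes
    return min(max(factor, 0.8), 1.5)   # clamp the discount/surcharge range

def monthly_premium(base: float, t: TelematicsSummary) -> float:
    return round(base * risk_factor(t), 2)

if __name__ == "__main__":
    cautious = TelematicsSummary(600, 2, 20, 5)
    aggressive = TelematicsSummary(900, 40, 200, 120)
    print(monthly_premium(100.0, cautious))    # close to the base rate
    print(monthly_premium(100.0, aggressive))  # noticeably higher
```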

It’s easy to see how the same principles could also be applied to home insurance. Google, through Nest, will eventually have access to all kinds of information about what kind of homeowner you are, giving it a tremendous amount of power. It may choose to sell that information to home insurance companies, or could even decide to get in the insurance game directly itself. Like the music and publishing and hotel and taxi industries before it, the insurance industry may just be next on the list of “about-to-be-disrupted” business sectors.

But the most interesting, and potentially creepy, application of the Internet of Things is as it applies to healthcare: Insurance companies could start offering discounts to people who kept their calorie intake within prescribed limits or exercised each week along prescribed parameters. And they’ll know when you break the rules, because a condition of those discounts will be constant monitoring via fitness trackers.

There are all kinds of nightmare scenarios to consider once you invite HAL into your kitchen. “Sorry, Dave, you’ve hit your two-beer maximum, so I’m locking the refrigerator. No more fried chicken for you until you’ve ridden 20 miles on your bike.” The Internet of Things will be a Panopticon that has been enabled to do more than just watch. (And if you really want to freak out, well, how about brain implants designed to treat psychiatric disorders?! The Internet of Things in your brain!)

According to one observer, the healthcare industry may even be required to start incentivizing specific behaviors through technology by provisions built into the Affordable Care Act. “Outrage over NSA spying on Americans is nothing compared to how people may react to the upcoming collision with wearable computing, medical privacy and new insurance rules,” wrote Seattle Times technology correspondent Brier Dudley.

Not everyone wants to have a little computer on the wrist or head keeping track of what a wearer does around the clock. But I wonder if they won’t have much choice in the future, under new insurance laws that invite companies to scrutinize and monitor their employees’ health and fitness…

The Affordable Care Act now lets employers charge employees different health-insurance rates, based on whether they exercise, eat healthful foods and other “wellness” choices they make outside of work.

Employees (and insured family members) who don’t submit to the screening and participate in wellness programs face steep penalties; they may have to pay up to 30 percent more for their share of health-insurance costs. The law calls this a “reward” for participation. Flip it around and it’s a penalty for not authorizing your employer to manage and monitor how you live outside of work.

Dudley isn’t the only person who has noticed a potential connection between “wellness programs,” the ACA and fitness trackers. For fitness trackers to work most effectively, they will need incentives. It’s not hard at all to imagine a future in which your employer tells you your healthcare insurance is contingent on participating in a “wellness” program that requires active monitoring.

Of course, healthier lifestyles should be everyone’s goal. But what level of micromanaging are we willing to endure in exchange for insurance security? The prospect of an Internet of Things-enabled insurance future raises all kinds of thorny questions. For example, the Affordable Care Act is supposed to end the practice of healthcare insurers’ discriminating against people with preexisting conditions. But wouldn’t mandatory wellness programs be a kind of discrimination against people with unhealthy lifestyles?

In every realm of insurance, how long will it be before discounts offered for “good” behavior are matched by price hikes that penalize “bad” behavior? Liberals scream bloody murder when conservatives try to make welfare programs contingent on drug-testing. How different will it be when we run the risk of higher health insurance premiums because we ate too many potato chips?

There are also obvious class implications. The rich will be able to afford gold-plated insurance plans that come free of prying eyes, while the poor will only be able to afford insurance plans that come equipped with onerous behavior modification shackles.

The connected society is coming. In a best-case scenario, perhaps, as a society, we’ll be able to draw lines between what is acceptable surveillance and what isn’t. But before we can do that, we need to constantly be asking an important question every time we hear about what the Internet of Things will do for us. What’s in it for them?

 

Who Really Owns The Internet?


Why are a tiny handful of people making so much money off of material produced for nothing or next-to-nothing by so many others? Why do we make it so easy for Internet moguls to avoid stepping on to what one called “the treadmill of paying for content”? Who owns the Internet?

In her excellent new book The People’s Platform, Astra Taylor thinks through issues of money and power in the age of the Internet with clarity, nuance, and wit. (The book is fun to read, even as it terrifies you about the future of culture and of the economy.) She brings to bear her estimable experience as a documentary filmmaker—she is the director of two engaging films about philosophy, Zizek! and Examined Life—as well as a publisher and musician. For the past several months, she has been on the road performing with the reunion tour of Neutral Milk Hotel (she is married to the band’s lead singer, Jeff Mangum). We spoke over coffee on the Lower East Side during a brief break from her tour.

Can we solve the issues that you talk about without radically reorganizing the economy?

No. (Laughs) Which I think is why I’ve been so active. I’ve been thinking about this in connection with all these writers who are coming up who found each other through Occupy, and why all of us were willing to participate in that uprising despite all the problems and the occasional ridiculousness of it.

But the economy can be revolutionized or the economy can be reformed, and I don’t discount the latter option. That level of social change happens in unpredictable ways. It’s actually harder to think of a revolutionary event that has had a positive outcome, whereas there have been lots of reforms and lots of things that people have done on the edges that have had powerful consequences. Would I like to see an economic revolution? Definitely. But I think there are a lot of ways to insert a kind of friction into the system that can be beneficial.

This book is about economics, and the amazing, probably very American ability to not talk about economics—particularly with technology, which is supposed to be this magical realm, so pure and disruptive and unpredictable that it transcends economic conditions and constraints. The basic idea is that that’s not the case.

To a lot of people this is self-evident, but I was surprised at how outside the mainstream conversation that insight was. When money is brought up, there’s this incredible romanticism, like the Yochai Benkler quote about being motivated by things other than money. But we’re talking about platforms that go to Goldman Sachs to handle their IPOs. Money is here. Wake up!

The people at the top are making money.

In that conversation about creativity and work, there’s so much ire directed at cultural elites. And rightly so. Newspapers suck. They’re not doing the job that they could do for us. Book publishers publish crap. Cultural elites deserve criticism. The punching bags of this Web 2.0 conversation all deserve it. But when we let the economic elites off the hook, that’s feeding into the tradition of right-wing populism. Ultimately, the guys getting rich behind the curtain aren’t being treated as the real enemies.

You mention that when you wrote to people who posted your films online, you either received no response or a very angry response.

One thing I took away from that experience is that it’s almost as though people really believe that the Internet is a library. “I should be able to watch on YouTube a full-length film about philosophy. It’s a library, it should be full of edifying, enlightening things!”

My response was that I spent two years making this film, and I want a window—I didn’t ask them to take it down forever, I asked for a grace period of I think two months. Conceptually, we’re not grasping the fact that even though there are private platforms that increase our access to things, first those things have to exist. How have we not thought through how these products are funded?

I empathized with the person on the other end, who wanted these films. I made them because I hoped that people would want them. But I can’t invest another two years of my life in an esoteric and expensive production if all I can do is put it on YouTube and pray that it goes viral.

And even if it goes viral, you might not make any money from it.

Right. The whole model doesn’t work in that context. And I can see both sides. Especially on the copyright issue. As a documentary filmmaker, you’re so dependent on gleaning from the world, gleaning from other people’s creations. You’re not always the author of the words on the screen. I don’t want some closed, locked-down scenario where every utterance is monitored by algorithms that have no ethical imperatives and no nuance and don’t understand fair use.

Another person I’ve talked to for this series is Benjamin Kunkel, who said his introduction to Marxist theory is already a bit antiquated because of Jesse Myerson’s Rolling Stone article, which recommends, among other things, a universal basic income. As I was reading your book, I was thinking about how a universal basic income might help.

I actually mention universal income in passing, in the chapter that looks at the enthusiasm for amateurism that was actually a bit more prominent a few years ago, when I started writing. “We finally have a platform that allows non-professionals to participate!” There were things in that conversation that were so reminiscent of utopian predictions from centuries past about how machines would free us to live the life of a poet. “We’ll only work four hours a day.” Why didn’t those visions come to pass? Because those machines were not harnessed by the people. They were harnessed by the ruling elite.

I was struck by how ours is a diminished utopianism. It wasn’t that we would use these machines to free us from labor; it was that now in our stolen minutes after work we can go online and be on social media. How did it come to this, that’s that all we can hope for? And the answer is in how the economy has been reshaped by neoliberalism or whatever you want to call it over the last few decades.

The idea of labor-saving devices has been around. Oscar Wilde, Keynes. But it was pretty common in the 1960s, when there was a robust social safety net. So I think you’re exactly right, that we need something in the public-policy and social sphere, not the technological sphere, to address these issues.

It’s great that people are talking about a universal income, at least in our little tiny circles. You step outside the bubble of the young intellectual left of New York, and people will say: What the fuck are you talking about?

We think this idea is getting traction, but it’s because we all follow each other online and we’re all reading the same magazines. Not everyone is reading Kathi Weeks or Ben Kunkel in their free time. I don’t agree with Ben that his book is out of date because of that one article in Rolling Stone. We need to keep harping on these basic concepts.

I think it’s a ripe time for it, considering recent research into the employment prospects for millennials. College indebtedness is insane right now. That’s why I got involved with StrikeDebt. When the economy is forcing you to separate the romantic idea of what you consider your calling from what you have to do for money since there are no fucking jobs that have anything to do with your degree, you start to think that a universal income might make a lot of sense.

If the economy won’t support you to do what you love for a living, you’re already halfway there.

Can you talk about Occupy and how you got involved?

I was working on this book before Occupy, and the tech realm was where a lot of our political hopes were being invested. If you think back, there wasn’t a vibrant protest movement in the US. Instead, there was this idea of democracy through social media, and technologically-enabled protests abroad. That might account a bit for why I gravitated towards this subject.

Then Occupy happened. If anything, it distracted me from The People’s Platform. I wound up putting out five issues of the Occupy Gazette with n+1. Then I got roped into, or rather I roped myself into, this offshoot of Occupy called StrikeDebt that has been doing the Rolling Jubilee campaign.

But my work with the Occupy campaign suffused my analysis more and more. Calling attention to the economic elite fits very well into Occupy’s idea of the ninety-nine percent and the one percent. The amount of value being hoarded by these companies is just mind-boggling.

So these projects did go in tandem. Both of them are thinking about power today. In this book, I was trying to think through how power operates in the technological sphere generally, but particularly in relationship to media. So no longer are you just watching what’s been chosen for you on television. Now you’re supposed to be the agent of your own destiny, clicking around. But there’s still power; there’s still money.

People will say, “How can you criticize these technological tools that helped people overthrow dictators?” We constantly use this framework of the people against the authoritarian dictator. There was a lot of buzz about how social media empowered the protests in the Middle East, most of which turned out to be false. But what about the US? There’s no dictator. There’s a far more complicated power dynamic. The challenge of our generation is how you build economic association and aggregate economic power when you’re not going to be doing conventional workplace organizing, because there are no jobs, let alone stable, long-term jobs.

So, this is depressing. Could you talk about solutions?

The solutions aren’t that radical: The library model that we project onto the Internet but that doesn’t quite fit—we can invent something analogous to it. There are lots of cool things we could be doing. But we’re locked into this model that’s really stupid and inefficient: the advertising model. That’s the most ridiculous way to create these services and platforms. The advertising model is commonsensical because it’s common, but it’s not sensical.

What would socialized social media—and non-social media—look like?

Ben Kunkel has an essay where he talks a bit about this. But first, we have to get away from the idea that the government is the bad guy. One thing that we’ve learned in the wake of the NSA scandals is that the public and private sectors are really intertwined; government surveillance piggybacks off of corporate surveillance. It might be less technological and more about funding things for their own sake. If you look at countries with robust cultural policies, under the broadcast model a lot of them instate quotas. There would be a lot of protectionist regulations, and they would invest in their own work.

Quotas are complicated, obviously. But you can look to the model of public broadcasting. Public broadcasting wasn’t a government propaganda machine. Liberals and conservatives both worry that this would create something bland. But when public broadcasting came under fire, it was usually for being too edgy and provocative. There are mechanisms that you can introduce to prevent whatever visions of sad iron-curtain art you have in your head.

One thing that comes up a lot in some liberal critiques of Edward Snowden is that he might be a libertarian.

I don’t know Snowden, so I can’t comment on him. But I think that a lot of us are libertarians. Libertarianism is the default ideology of our day because there’s something deeply appealing about the idea of free agents—people on their own in charge of their own destinies. That has to do with the retreat of institutions from our lives, which results in an inability to imagine a positive role for them to play. We’re still dependent on institutions; we just don’t recognize it or give them much credit.

This ubiquitous libertarianism, particularly in tech circles, was a major target of my book. All of these things you want these tools to bring about—an egalitarian sphere, a sphere where the best could rise to the top, one that is not dominated by old Goliaths—within the libertarian framework, you’ll never get there. You have to have a more productive economic critique.

But I also think that if you’re on the left, you need to recognize what’s appealing about libertarianism. It’s the emphasis on freedom. We need to articulate a left politics that has freedom at its center. We can’t be afraid of freedom or individuality, and we need to challenge the idea that equality and freedom are somehow contradictions.

At the same time, even on the radical left, there’s a knee-jerk suspicion of institutions. When we criticize institutions that serve as buffers or bastions against market forces, the right wins out more. It’s a complicated thing.

In defending institutions in this book, I knew I might provoke my more radical friends. The position that everything is corrupt—journalism is corrupt, educational institutions are corrupt, publishers are corrupt—sounds great. And on some level it’s true. They’ve disappointed us. But we need more and better—more robust, more accountable—institutions. So I tried to move beyond just criticizing those arrangements and enumerating all their flaws and all the ways they’ve failed us. What happens when we’ve burned all these institutions to the ground and it’s just us and Google?

One of my favorite aspects of your book is your emphasis on the physical aspects of the Internet. It reminded me of the scene in Examined Life where Zizek is standing on the garbage heap, talking about how material stuff disappears.

That image we have of the Internet as weightless—it’s so high-tech it doesn’t really exist!—is part of why we misunderstand it. There are some people doing good work around this, people like Andrew Blum, who wrote the book Tubes, asking what the Internet is. There’s infrastructure. It’s immense, and it’s of great consequence, especially as more and more of our lives move online. The materiality is really important to keep in mind.

We’re moving to a place where we have a better grasp of this. People are finally realizing that the online and the offline are not separate realms. It’s not really like I have my online life where I’m pretending to be a 65-year-old man in a chat room, and then I’m Astra at the coffee shop. Those identities are as complicated and as coherent as any human identity has ever been. That can extend towards thinking about objects.

The other night I was re-reading Vance Packard’s The Waste Makers—the sort of book that makes you feel like you’re just reheated whatever, and that this person did it so much better the first time around. He outlines planned obsolescence, stuff made to break. It’s so relevant to our gadgets, our technology. He wrote it in 1960, at that moment where the economy had been saturated, so everybody had their fridge and their car. So how do you keep GDP going up? It’s actually patriotic to make things that break.

You talk about Steve Jobs in that context.

Steve Jobs is the ultimate incarnation of that plan. You have to have a new iPod every year. But he presented himself as this artist-craftsman who would never sacrifice quality. That’s such a lie.

You talk about how both sides of the Internet debate, if you will, see a radical break with the past, whereas you see more continuity.

I think that that’s crucial to understanding where we’re at. This standard assumption that there would be a massive transformation blocked us from seeing the obvious outcomes and set us back in terms of having a grasp on our current condition. If we had gone into it with a bit more realism, more respect for the power of the market, less faith in technology’s ability to transcend it, we’d be better off.

Could you say more about respect for the power of the market?

You don’t want to be too deterministic, I suppose, but the market drives the development of these tools. Especially once you’ve gone public and you’re beholden to your shareholders.

There’s confusion because we’ve been here before, with the first tech boom. One thing that got me thinking about this—and that confused me—was that I came to New York right at the tail end of it. I didn’t work for a startup or anything like that, but I had friends who did, friends who were fired. I followed what was happening in the Bay Area, where hundreds of thousands of jobs were lost. You think: okay, we learned from that. We learned that because of the way the market sought investments, some really stupid ideas got propped up, there was a bubble, and it burst. What’s amazing to me is that fifteen years later, the same commentators are suddenly back, talking about social media and Web 2.0, and making proclamations about how the culture will evolve. You were wrong then, partially because you ignored the financial aspect of what was going on, and here you are again, ignoring the money. Give the market its due.

Do you have advice for what people—people like me—who write or produce other work for the Internet can do about this situation?

I’m encouraged by all these little magazines that have started in the last few years. Building institutions, even if they’re small, is a very powerful thing, so that we’re less isolated. When you’re isolated, you’re forced into the logic of building your own brand. If you build something together, you’re more able to focus on endeavors that don’t immediately feed into that. That’s what an institution can buy you—the space to focus on other things.

What would help creators more than anything else in this country are things that would help other workers: Real public health care, real social provisions. Artists are people like everybody else; we need the same things as our barista.

I quote John Lennon: “You think you’re so clever and classless and free.” One thing we need is an end to artist exceptionalism. When we can see our connection to other precarious people in the economy, that’s when interesting things could happen. When we justify our position with our own specialness…

You talk about how Steve Jobs would tell his employees that they were artists.

Right. How could you ask to be properly compensated? Don’t you see that you’re supposed to be an artist? Grad students were given that advice, too.

That’s where this ties in to Miya Tokumitsu’s essay on the problems with the concept of “Do What You Love.”

Exactly. Now, precarity shouldn’t be a consequence of being an artist. Everyone should have more security. But it’s more and more the condition of our time. One thing I say in passing is that the ethos of the artist—someone who is willing to work around the clock with no security, and who will keep on working after punching out the clock—that attitude is more and more demanded of everyone in the economy. Maybe artists can be at the vanguard of saying no to that. But yes, there would have to be a psychological shift where people would have to accept being less special.

David Burr Gerrard’s debut novel, Short Century, has just been released by Rare Bird Books. He can be followed on Twitter. The interview has been condensed and edited.

http://www.theawl.com/2014/04/who-owns-the-internet

Say goodbye to TV’s golden age: Why Comcast’s rise and net neutrality’s downfall will change everything

The economic model that sustains quality TV is under assault. Does the future have room for any more Walter Whites?

Say goodbye to TV's golden age: Why Comcast's rise and net neutrality's downfall will change everything
Bryan Cranston as Walter White in “Breaking Bad” (Credit: AMC/Frank Ockenfels 3)

Viewers be warned! The golden age of television is coming to an end, and here’s how it’s going to happen: An unholy cabal of judges, government regulators and “cord-cutting” millennials will decapitate it. Like the similarly beheaded Ned Stark, on HBO’s “Game of Thrones,” we will miss it dearly when it’s gone.

For years, the “future of TV” has been an evergreen topic of discussion, but rarely have we seen weeks like the one just past, in which a cascade of news-breaking developments all but overwhelms our ability to make sense of them.

To recap: On Tuesday, Netflix lambasted the proposed Comcast-Time Warner merger, declaring it “a long-term threat” to the healthy ecosystem of the Internet. Comcast promptly riposted, dismissing Netflix as a querulous, hypocritical whiner with a shaky grasp on the facts. Then, on Wednesday, HBO sucker-punched Netflix by agreeing to stream HBO shows through Amazon Prime, and AT&T fired a warning shot across everyone’s bow by announcing its own plans to get into the streaming video business with a “Netflix-like” service. Also on Wednesday, the Supreme Court heard arguments in the case of American Broadcasting Companies, Inc. v. Aereo, Inc., which could end up resulting in the most influential high court ruling regarding how TV programs are distributed in decades.

And, to cap it all off, the FCC on Thursday released a tentative set of new guidelines for net neutrality that were immediately greeted by critics as an appalling sell-out of “net neutrality,” and the hallowed guiding principles of the “Open Internet.”

Whew!



To fully explore the intricacies of what’s at stake in any single one of these industry-reshaping eruptions would gobble up more hours than a marathon binge-watch of all five seasons of “Breaking Bad.” But there’s a crucial common thread: In every case, the economic model that currently underpins television (and bankrolls our amazing proliferation of high-quality productions) is under sustained assault. This is happening both from the bottom-up, as so-called cord-cutters seek an à la carte programming future; and from the top down, as telecom companies consolidate near-monopoly control of broadband. In the process, an inevitable transfer of power — from the content creators who make “Mad Men,” “Game of Thrones” and “Justified,” to the cable and satellite and telephone companies that distribute those TV shows — is underway.

It’s a complex witches’ brew: The changing habits of television consumers, the disrupting influence of technological innovation, and the decisions of both courts and regulators are enabling this shift. How everything will shake out is far from certain, but there’s a more-than-good chance that when we get to the other side, the landscape of television will be altered for the worse.

The reason why is simple: Great television requires an awful lot of money. If content creators end up with a smaller piece of a shrinking pie … well, you don’t have to be Don Draper to figure out which way the wind will blow.

 

 

* * *

Before we can figure out how all the pieces in this crazy jigsaw puzzle fit together, we need to take a close look at how the TV business currently functions, and understand why so many people are unhappy with it. Because here’s the funny thing about the golden age of television: Even though there are more outstanding shows on the air than we can squeeze into our overstuffed DVRs, there’s also no shortage of grumbling. Specifically, there is widespread dissatisfaction with how hundreds of TV channels are packaged into all-or-nothing “bundles.” And people just don’t seem to like bundling.

It’s easy to see why. If you aren’t a sports fan, why should you be paying for ESPN or the NBA on TNT? If you don’t have kids, what’s the use of Nickelodeon and the Disney Channel? If all you crave is “The Americans” and “Fargo” and “Girls” and “Silicon Valley,” why must you still be forced to flip through scores of channels stuffed with reality TV and “Big Bang Theory” reruns? This isn’t how we consume our music or our news in the Internet era. Why are we still stuck in this antediluvian age when it comes to TV?

Last Friday I ranted against the notion that the rising numbers of “cord-cutters” — people canceling their cable TV subscriptions (or never signing up in the first place) — posed any kind of meaningful threat to the profitability of Comcast and the handful of other cable TV giants. After all, even cord-cutters need to get their Internet bandwidth from someone, and most often, that someone involves a “cord” owned by their local cable company. Indeed, just this week, when Comcast announced its bang-up second quarter earnings, the company reported 383,000 new Internet subscribers. (Comcast also added 24,000 cable TV subscribers, undermining the “cord-cutter apocalypse” hypothesis.)

Readers challenged my thesis. Cord-cutting might not kill Comcast, they argued, but the practice of switching from cable to online consumption of TV did present a serious threat to bundling. The future, they declared, will be an “à la carte” utopia in which we only pay for exactly what we want. Just as iTunes ended the tyranny of albums over singles, the Internet will blow up the TV bundle. That’s just the way things work in our point, click and consume era.

The argument makes intuitive sense. Why spend $80 a month for Comcast, when some combination of Netflix and Hulu and YouTube and Amazon Prime and iTunes (and BitTorrent!) can get you almost as far as you’d like to go for a lot less than a cable subscription? But there’s a missing piece, and it’s the key to understanding everything that’s going on in the crazily evolving world of TV: Bundling is what pays the rent. An unbundled world weakens content creators, strengthens the ISPs, and puts net neutrality under huge pressure. So be careful what you wish for.

Without bundling, most channels that currently exist wouldn’t be economically feasible, and those channels that do survive would cost considerably more than they do now. Today, bundling funnels billions of dollars from the owners of the distribution infrastructure — the cable companies and satellite operators — to the networks and channels that create the shows that we want to watch. But all that will change if bundling goes away. In the worst-case scenario, the producers of content will actually be forced to pay the distributors to get their shows to the people.

The current system works like this: Popular channels charge a “subscriber fee” to the cable and satellite companies for the right to rebroadcast their shows. The fees can run as high as $5 per subscriber (as in the case of ESPN, the most valuable cable property). The fees are under constant negotiation, rising and falling according to the channel’s perceived popularity. In an environment where advertising revenue is under constant assault from technological innovation (your DVR) and Internet competition, those subscriber fees can add up to as much as half the total revenue for a channel.

In this system, the popular channels subsidize the unpopular ones, because the pay-TV distributors group everything together and charge a single fee to their own subscribers that more than makes up for the costs of acquiring programming. However, unbundling would undo that arrangement. If you subtract the subscriber fee revenue, even the most popular channels would be forced to raise their direct-to-consumer prices to cover their costs. For example, if sports fans were to buy ESPN directly, for ESPN to maintain its current profitability, it would have to charge its audience vastly more than it charges the cable companies. In an apocalyptic research note published by Needham Research last year, analyst Laura Martin predicted that unbundling would devastate the TV economy:

Unbundling dwarfs any other risk to the TV ecosystem, as we calculate that ~50% of total TV ecosystem revenue (about $70 billion) would evaporate and fewer than 20 channels would survive in an a la carte world where consumers are required to bear 100 percent of the cost of the channel.

Martin may or may not be exaggerating the total damage from unbundling, but the key insight to grasp here is that there is a significant flow of funds from cable companies like Comcast to ESPN and ABC and FX and TBS. Unbundling attacks that flow of funds. Yes, in many cases those channels may be raking in more revenue than they would be able to in a purely competitive market. But a purely competitive market would constrict the flow of cash necessary to produce great television. In a purely competitive market, low-traffic, high-prestige channels like AMC would be struggling to survive, because many of us would be unwilling to pay the high premiums necessary to cover the costs of making something like “Mad Men.”
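To make that arithmetic concrete, here is a rough back-of-envelope sketch in Python. The roughly $5-per-subscriber ESPN fee comes from the figures above; the household counts are illustrative assumptions, not reported data.

```python
# Back-of-envelope sketch of why unbundling raises a channel's direct price.
# The ~$5/month ESPN subscriber fee comes from the article; the household
# counts below are illustrative assumptions, not reported figures.

fee_per_subscriber = 5.00          # $/month ESPN charges pay-TV distributors
bundled_households = 100_000_000   # assumed households covered by bundles
direct_buyers = 20_000_000         # assumed households that would buy ESPN a la carte

bundled_revenue = fee_per_subscriber * bundled_households   # fee revenue today
break_even_price = bundled_revenue / direct_buyers          # price to keep revenue flat

print(f"Bundled fee revenue: ${bundled_revenue / 1e6:,.0f}M per month")
print(f"Break-even a la carte price: ${break_even_price:.2f} per month")
# -> $500M per month and a ~$25/month direct price: five times the bundled fee,
#    before replacing any of the advertising revenue that would also shrink.
```

Even on these generous assumptions, the direct price lands at several times the bundled fee, which is the squeeze Martin’s note describes.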

* * *

With all that in mind, let’s return to the events of the past week and see how they plug into the picture.

Let’s start with the HBO-to-Amazon announcement, along with the news that AT&T is getting into the streaming business. Both business moves implicitly acknowledge the fundamental change in user behavior that’s going on. Audiences are moving online, and they’re picking and choosing what they want to see. They’re not actually cutting cords, but they are voting with their feet against the bundling status quo. We are headed to an extraordinarily competitive future — one in which a handful of players compete on both price and selection to get us to hit the buy button.

But the move to online means that whoever controls the bandwidth pipes has extraordinary power. That’s why, for example, Netflix is opposed to the Comcast-Time Warner merger. The merger would give Comcast a dominating position in the provision of broadband Internet access. That dominance, in principle, would grant Comcast the power to dictate terms to anyone who wanted to deliver content over the Internet, be that Netflix or Amazon or Apple or Google. Hey, guess what — Comcast happens to own a big stake in Hulu, which raises the obvious question: Does Hulu get a break on bandwidth charges? Is Hulu subject to data caps? Right now, no. But …

Of course, price discrimination à la the Hulu hypothetical is exactly what net neutrality is supposed to stop, but the new guidelines released by the FCC — now chaired by Tom Wheeler, a former lobbyist for the cable industry — strongly suggest that, going forward, the telecom companies will be allowed to charge content providers for access to their Internet pipes — except when “commercially unreasonable,” whatever that means.

I’m predicting a great deal of litigation, in the future, over exactly what constitutes “commercially unreasonable,” but for now, it is clear that the FCC has given Internet service providers such as Comcast a leg up over the content companies. To get “House of Cards” to you, Netflix is already paying Comcast a hefty fee. It seems crazy, but what happens when HBO has to pay Comcast to stream “Game of Thrones” via Amazon? Maybe “Game of Thrones’” humongous production budget takes a hit?

Interestingly, the Supreme Court’s Aereo case may be the most relevant of all this week’s developments to the future of TV. The details and legal issues at play in that case are fascinating and complex. (For a comprehensive explanation you can’t do better than SCOTUS Blog. And, for a more digestible introduction to the issues, Salon’s own Sarah Gray has an excellent Q&A right here.) But the bottom line is that this is a fight about money. Aereo has figured out an (arguably) legal technological hack that allows it to distribute broadcast TV content over the Internet without paying the same fees that cable companies or satellite TV operators are forced to pay to distribute that same content to their subscribers. If Aereo can get away with this, the pay-TV operators are likely to argue that they too should be able to do so. And whooooosh – there goes another huge stream of income previously enjoyed by the broadcast TV channels.

And if Aereo doesn’t get away with it, that opens up another huge can of worms, potentially affecting the entire emerging industry of cloud computing. But that’s a separate issue. The truth is, whether Aereo succeeds or fails in court may be moot. So far, content companies have had about as much success resisting the disruptive influence of Internet technologies as English kings have had in resisting the tide. We’ve seen this narrative play out before, when Internet distribution models upended the traditional business models of the music and publishing industries. You can even argue, as does Needham’s Laura Martin, that the Internet resulted in the “unbundling” of both the album and the newspaper — and in the process, sucked a huge amount of revenue out of both industries.

The great mystery is how TV has managed, so far, to resist this disruption. The answer, now that we’ve digested all the things that have happened this week, is that it might just be a matter of time. Yes, we are paying too much for cable bills, and we will find more inexpensive options as time goes on. But that, in turn, may make the next “Game of Thrones” a lot chintzier than the current offering. Enjoy it while it lasts.

 

 

The Internet’s destructive gender gap: Why the Web can’t abandon its misogyny

People like Ezra Klein are showered with opportunity, while women face an online world hostile to their ambitions

The Internet’s destructive gender gap: Why the Web can’t abandon its misogyny
Ezra Klein (Credit: MSNBC)
This piece originally appeared on TomDispatch.

The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.

An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.

In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.

Powerful and exceedingly familiar hierarchies have come to define the digital realm, whether you’re considering its economics or the social world it reflects and represents.  Not surprisingly, then, well-off white men are wildly overrepresented both in the tech industry and online.

Just take a look at gender, and the Web comes quickly into focus, leaving you with a vivid sense of which direction the Internet is heading in — and, small hint, it’s not toward equality or democracy.

Experts, Trolls, and What Your Mom Doesn’t Know

As a start, in the perfectly real world women shoulder a disproportionate share of household and child-rearing responsibilities, leaving them substantially less leisure time to spend online. Though a handful of high-powered celebrity “mommy bloggers” have managed to attract massive audiences and ad revenue by documenting their daily travails, they are the exceptions, not the rule. In professional fields like philosophy, law, and science, where blogging has become popular, women are notoriously underrepresented; by one count, for instance, only around 20% of science bloggers are women.



An otherwise optimistic white paper by the British think tank Demos touching on the rise of amateur creativity online reported that white males are far more likely to be “hobbyists with professional standards” than other social groups, while you won’t be shocked to learn that low-income women with dependent children lag far behind. Even among the highly connected college-age set, research reveals a stark divergence in rates of online participation.

Socioeconomic status, race, and gender all play significant roles in a who’s who of the online world, with men considerably more likely to participate than women. “These findings suggest that Internet access may not, in and of itself, level the playing field when it comes to potential pay-offs of being online,” warns Eszter Hargittai, a sociologist at Northwestern University. Put simply, closing the so-called digital divide still leaves a noticeable gap; the more privileged your background, the more likely that you’ll reap the additional benefits of new technologies.

Some of the obstacles to online engagement are psychological, unconscious, and invidious. In a revealing study conducted twice over a span of five years — and yielding the same results both times — Hargittai tested and interviewed 100 Internet users and found that there was no significant variation in their online competency. In terms of sheer ability, the sexes were equal. The difference was in their self-assessments.

It came down to this: The men were certain they did well, while the women were wracked by self-doubt. “Not a single woman among all our female study subjects called herself an ‘expert’ user,” Hargittai noted, “while not a single male ranked himself as a complete novice or ‘not at all skilled.’” As you might imagine, how you think of yourself as an online contributor deeply influences how much you’re likely to contribute online.

The results of Hargittai’s study hardly surprised me. I’ve seen endless female friends be passed over by less talented, more assertive men. I’ve had countless people — older and male, always — assume that someone else must have conducted the interviews for my documentary films, as though a young woman couldn’t have managed such a thing without assistance. Research shows that people routinely underestimate women’s abilities, not least women themselves.

When it comes to specialized technical know-how, women are assumed to be less competent unless they prove otherwise. In tech circles, for example, new gadgets and programs are often introduced as being “so easy your mother or grandmother could use them.” A typical piece in the New York Times was titled “How to Explain Bitcoin to Your Mom.” (Presumably, dad already gets it.) This kind of sexism leapt directly from the offline world onto the Web and may only have intensified there.

And it gets worse. Racist, sexist, and homophobic harassment or “trolling” has become a depressingly routine aspect of online life.

Many prominent women have spoken up about their experiences being bullied and intimidated online — scenarios that sometimes escalate into the release of private information, including home addresses, e-mail passwords, and social security numbers, or simply devolve into an Internet version of stalking. Esteemed classicist Mary Beard, for example, “received online death threats and menaces of sexual assault” after a television appearance last year, as did British activist Caroline Criado-Perez after she successfully campaigned to get more images of women onto British banknotes.

Young women musicians and writers often find themselves targeted online by men who want to silence them. “The people who were posting comments about me were speculating as to how many abortions I’ve had, and they talked about ‘hate-fucking’ me,” blogger Jill Filipovic told the Guardian after photos of her were uploaded to a vitriolic online forum. Laurie Penny, a young political columnist who has faced similar persecution and recently published an ebook called Cybersexism, touched a nerve by calling a woman’s opinion the “short skirt” of the Internet: “Having one and flaunting it is somehow asking an amorphous mass of almost-entirely male keyboard-bashers to tell you how they’d like to rape, kill, and urinate on you.”

Alas, the trouble doesn’t end there. Women who are increasingly speaking out against harassers are frequently accused of wanting to stifle free speech. Or they are told to “lighten up” and that the harassment, however stressful and upsetting, isn’t real because it’s only happening online, that it’s just “harmless locker-room talk.”

As things currently stand, each woman is left alone to devise a coping mechanism as if her situation were unique. Yet these are never isolated incidents, however venomously personal the insults may be. (One harasser called Beard — and by online standards of hate speech this was mild — “a vile, spiteful excuse for a woman, who eats too much cabbage and has cheese straws for teeth.”)

Indeed, a University of Maryland study strongly suggests just how programmatic such abuse is. Those posting with female usernames, researchers were shocked to discover, received 25 times as many malicious messages as those whose designations were masculine or ambiguous. The findings were so alarming that the authors advised parents to instruct their daughters to use sex-neutral monikers online. “Kids can still exercise plenty of creativity and self-expression without divulging their gender,” a well-meaning professor said, effectively accepting that young girls must hide who they are to participate in digital life.

Over the last few months, a number of black women with substantial social media presences conducted an informal experiment of their own. Fed up with the fire hose of animosity aimed at them, Jamie Nesbitt Golden and others adopted masculine Twitter avatars. Golden replaced her photo with that of a hip, bearded, young white man, though she kept her bio and continued to communicate in her own voice. “The number of snarky, condescending tweets dropped off considerably, and discussions on race and gender were less volatile,” Golden wrote, marveling at how simply changing a photo transformed reactions to her. “Once I went back to Black, it was back to business as usual.”

Old Problems in New Media

Not all discrimination is so overt. A study summarized on the Harvard Business Review website analyzed social patterns on Twitter, where female users actually outnumbered males by 10%. The researchers reported “that an average man is almost twice [as] likely to follow another man [as] a woman” while “an average woman is 25% more likely to follow a man than a woman.” The results could not be explained by varying usage since both genders tweeted at the same rate.

Online as off, men are assumed to be more authoritative and credible, and thus deserving of recognition and support. In this way, long-standing disparities are reflected or even magnified on the Internet.

In his 2008 book The Myth of Digital Democracy, Matthew Hindman, a professor of media and public affairs at George Washington University, reports that of the top 10 blogs, only one belonged to a female writer. A wider census of every political blog with an average of over 2,000 visitors a week, or a total of 87 sites, found that only five were run by women, nor were there “identifiable African Americans among the top 30 bloggers,” though there was “one Asian blogger, and one of mixed Latino heritage.” In 2008, Hindman surveyed the blogosphere and found it less diverse than the notoriously whitewashed op-ed pages of print newspapers. Nothing suggests that, in the intervening six years, things have changed for the better.

Welcome to the age of what Julia Carrie Wong has called “old problems in new media,” as the latest well-funded online journalism start-ups continue to be helmed by brand-name bloggers like Ezra Klein and Nate Silver. It is “impossible not to notice that in the Bitcoin rush to revolutionize journalism, the protagonists are almost exclusively — and increasingly — male and white,” Emily Bell lamented in a widely circulated op-ed. It’s not that women and people of color aren’t doing innovative work in reporting and cultural criticism; it’s just that they get passed over by investors and financiers in favor of the familiar.

As Deanna Zandt and others have pointed out, such real-world lack of diversity is also regularly seen on the rosters of technology conferences, even as speakers take the stage to hail a democratic revolution on the Web, while audiences that look just like them cheer. In early 2013, in reaction to the announcement of yet another all-male lineup at a prominent Web gathering, a pledge was posted on the website of the Atlantic asking men to refrain from speaking at events where women are not represented. The list of signatories was almost immediately removed “due to a flood of spam/trolls.” The conference organizer, a successful developer, dismissed the uproar over Twitter. “I don’t feel [the] need to defend this, but am happy with our process,” he stated. Instituting quotas, he insisted, would be a “discriminatory” way of creating diversity.

This sort of rationalization means technology companies look remarkably like the old ones they aspire to replace: male, pale, and privileged. Consider Instagram, the massively popular photo-sharing and social networking service, which was founded in 2010 but only hired its first female engineer last year. While the percentage of computer and information sciences degrees women earned rose from 14% to 37% between 1970 and 1985, that share had depressingly declined to 18% by 2008.

Those women who do fight their way into the industry often end up leaving — their attrition rate is 56%, or double that of men — and sexism is a big part of what pushes them out. “I no longer touch code because I couldn’t deal with the constant dismissing and undermining of even my most basic work by the ‘brogramming’ gulag I worked for,” wrote one woman in a roundup of answers to the question: Why are there so few female engineers?

In Silicon Valley, Facebook’s Sheryl Sandberg and Yahoo’s Marissa Mayer excepted, the notion of the boy genius prevails.  More than 85% of venture capitalists are men generally looking to invest in other men, and women make 49 cents for every dollar their male counterparts rake in — enough to make a woman long for the wage inequities of the non-digital world, where on average they take home a whopping 77 cents on the male dollar. Though 40% of private businesses are women-owned nationwide, only 8% of the venture-backed tech start-ups are.

Established companies are equally segregated. The National Center for Women and Information Technology reports that in the top 100 tech companies, only 6% of chief executives are women. The numbers of Asians who get to the top are comparable, despite the fact that they make up one-third of all Silicon Valley software engineers. In 2010, not even 1% of the founders of Silicon Valley companies were black.

Making Your Way in a Misogynist Culture

What about the online communities that are routinely held up as exemplars of a new, networked, open culture? One might assume from all the “revolutionary” and “disruptive” rhetoric that they, at least, are better than the tech goliaths. Sadly, the data doesn’t reflect the hype. Consider Wikipedia. A survey revealed that women make up less than 15% of the contributors to the site, despite the fact that they use the resource in equal numbers to men.

In a similar vein, collaborative filtering sites like Reddit and Slashdot, heralded by the digerati as the cultural curating mechanisms of the future, cater to users who are up to 87% male and overwhelmingly young, wealthy, and white. Reddit, in particular, has achieved notoriety for its misogynist culture, with threads where rapists have recounted their exploits and photos of underage girls got posted under headings like “Chokeabitch,” “N*****jailbait,” and “Creepshots.”

Despite the open source movement’s reputation as a paragon of political virtue, evidence suggests that as few as 1.5% of open source programmers are women, a share far lower than in the computing profession as a whole. In response, analysts have blamed everything from chauvinism, assumptions of inferiority, and outrageous examples of impropriety (including sexual harassment at conferences where programmers gather) to a lack of women mentors and role models. Yet the advocates of open-source production continue to insist that their culture exemplifies a new and ethical social order ruled by principles of equality, inclusivity, freedom, and democracy.

Unfortunately, it turns out that openness, when taken as an absolute, actually aggravates the gender gap. The peculiar brand of libertarianism in vogue within technology circles means a minority of members — a couple of outspoken misogynists, for example — can disproportionately affect the behavior and mood of the group under the cover of free speech. As Joseph Reagle, author of Good Faith Collaboration: The Culture of Wikipedia, points out, women are not supposed to complain about their treatment, but if they leave — that is, are essentially driven from — the community, that’s a decision they alone are responsible for.

“Urban” Planning in a Digital Age

The digital is not some realm distinct from “real” life, which means that the marginalization of women and minorities online cannot be separated from the obstacles they confront offline. Comparatively low rates of digital participation and the discrimination faced by women and minorities within the tech industry matter — and not just because they give the lie to the egalitarian claims of techno-utopians. Such facts and figures underscore the relatively limited experiences and assumptions of the people who design the systems we depend on to use the Internet — a medium that has, after all, become central to nearly every facet of our lives.

In a powerful sense, programmers and the corporate officers who employ them are the new urban planners, shaping the virtual frontier into the spaces we occupy, building the boxes into which we fit our lives, and carving out the routes we travel. The choices they make can segregate us further or create new connections; the algorithms they devise can exclude voices or bring more people into the fold; the interfaces they invent can expand our sense of human possibility or limit it to the already familiar.

What vision of a vibrant, thriving city informs their view? Is it a place that fosters chance encounters or does it favor the predictable? Are the communities they create mixed or gated? Are they full of privately owned shopping malls and sponsored billboards or are there truly public squares? Is privacy respected? Is civic engagement encouraged? What kinds of people live in these places and how are they invited to express themselves? (For example, is trolling encouraged, tolerated, or actively discouraged or blocked?)

No doubt, some will find the idea of engineering online platforms to promote diversity unsettling and — a word with some irony embedded in it — paternalistic, but such criticism ignores the ways online spaces are already contrived with specific outcomes in mind.  They are, as a start, designed to serve Silicon Valley venture capitalists, who want a return on investment, as well as advertisers, who want to sell us things. The term “platform,” which implies a smooth surface, misleads us, obscuring the ways technology companies shape our online lives, prioritizing certain purposes over others, certain creators over others, and certain audiences over others.

If equity is something we value, we have to build it into the system, developing structures that encourage fairness, serendipity, deliberation, and diversity through a process of trial and error. The question of how we encourage, or even enforce, diversity in so-called open networks is not easy to answer, and there is no obvious and uncomplicated solution to the problem of online harassment. As a philosophy, openness can easily rationalize its own failure, chalking people’s inability to participate up to choice, and keeping with the myth of the meritocracy, blaming any disparities in audience on a lack of talent or will.

That’s what the techno-optimists would have us believe, dismissing potential solutions as threats to Internet freedom and as forceful interference in a “natural” distribution pattern. The word “natural” is, of course, a mystification, given that technological and social systems are not found growing in a field, nurtured by dirt and sun. They are made by human beings and so can always be changed and improved.

Astra Taylor is a writer, documentary filmmaker (including Zizek! and Examined Life), and activist. Her new book, The People’s Platform: Taking Back Power and Culture in the Digital Age (Metropolitan Books), has just been published. This essay is adapted from it. She also helped launch the Occupy offshoot Strike Debt and its Rolling Jubilee campaign.


http://www.salon.com/2014/04/10/the_internets_destructive_gender_gap_why_the_web_cant_abandon_its_misogyny_partner/?source=newsletter

The Internet Is the Greatest Legal Facilitator of Inequality in Human History

It doesn’t have to be.
JAN 28 2014, 4:34 PM ET

In the 1990s, the venture capitalist John Doerr famously predicted that the Internet would lead to “the largest legal creation of wealth in the history of the planet.” Indeed, the Internet has created a tremendous amount of personal wealth. Just look at the rash of Internet billionaires and millionaires, the investors both small and large that have made fortunes investing in Internet stocks, and the list of multibillion-dollar Internet companies—Google, Facebook, LinkedIn, and Amazon. Add to the list the recent Twitter stock offering, which created a reported 1,600 millionaires.

Then there’s the superstar effect. The Internet multiplies the earning power of the very best high-frequency traders, currency speculators, and entertainers, who reap billions while the merely good are left to slog it out.

But will the Internet also create the greatest economic inequality the global economy has ever known? And will poorly designed government policies aimed at ameliorating the problem of inequality end up empowering the Internet-driven redistribution process?

As the Internet goes about its work making the economy more efficient, it is reducing the need for travel agents, post office employees, and dozens of other jobs in corporate America. The increased interconnectivity created by the Internet forces many middle and lower class workers to compete for jobs with low-paid workers in developing countries. Even skilled technical workers are finding that their jobs can be outsourced to trained engineers and technicians in India and Eastern Europe.

That’s the old news.

The new news is that Internet-based companies may well be the businesses of the future, but they create opportunities for only a select few. Google has a little over 54,000 employees and generates around $50 billion in sales, or about $1 million per employee. The numbers are similar for Facebook. Amazon is running at a $70 billion annual revenue rate with around 110,000 employees, or a little over $600,000 in sales per employee. In the U.S., each non-farm worker adds a little over $120,000 to domestic output.

That means that, in order to justify hiring an employee, a highly productive Internet company must generate five to ten times the sales per worker of the average domestic company.
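As a rough check on those per-employee figures, here is a small sketch using the approximate revenue and headcount numbers cited above (rounded, and dated to when they were reported):

```python
# Sales per employee, using the approximate figures cited above.
companies = {
    "Google": {"revenue": 50e9, "employees": 54_000},
    "Amazon": {"revenue": 70e9, "employees": 110_000},
}
US_OUTPUT_PER_NONFARM_WORKER = 120_000  # approximate, from the text

for name, c in companies.items():
    per_employee = c["revenue"] / c["employees"]
    ratio = per_employee / US_OUTPUT_PER_NONFARM_WORKER
    print(f"{name}: ~${per_employee / 1e3:,.0f}K in sales per employee "
          f"(~{ratio:.1f}x the average non-farm worker's output)")
# Google: ~$926K per employee (~7.7x); Amazon: ~$636K (~5.3x) -- the
# "five to ten times" gap described above.
```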

In the past, the most efficient businesses created lots of middle-class jobs. In 1914, Henry Ford shocked the industrial world by doubling the pay of assembly-line workers to $5 a day. Ford wasn’t merely being generous. He helped create the middle class, reasoning that a higher-paid workforce would be able to buy more cars and thus grow his business.

Ford’s success trickled down as other companies followed his lead. Automotive companies not only employed numerous well-paid workers, they also created a large demand for other products and services that employed millions more: steel, glass, machine tools, dealerships, gas stations, mechanics, bridges, roads, and construction equipment. The workers in those industries purchased homes, appliances, and clothes, creating still more jobs.

One reason we are failing to create a vibrant middle class is that the Internet affects the economy differently than the new businesses of the past did. It forces businesses and their workers to face increased global competition. It reduces the barriers to moving jobs overseas. And it has a smaller economic trickle-down effect.

Doing some of the obvious things like raising the minimum wage to fight the effects of the Internet will probably worsen the problem. For example, it will make it more difficult for bricks-and-mortar retailers to compete with online retailers.

Surprisingly, the much-vilified Walmart probably does more to help middle-class families raise their median income than the more productive Amazon. Walmart hires about one employee for every $200,000 in sales, which translates to roughly three times more jobs per dollar of sales than Amazon. Raising the minimum wage would also make it more difficult to bring manufacturing jobs back to the U.S. The Internet is not the sole force driving income inequality in the U.S.; our languishing education system is a major contributor to the problem. But two things are certain: the Internet is creating many of those in the ultra-wealthy 1%, and it forces businesses to compete with capable international rivals while providing the tools to squeeze inefficiency out of the system in order to remain competitive.
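A brief sketch of that jobs-per-sales-dollar comparison, using the $200,000-per-employee Walmart figure cited above and Amazon’s roughly $70 billion in sales across about 110,000 employees:

```python
# Jobs supported per $1 billion of sales, using the approximate figures above.
# Walmart's $200,000-in-sales-per-employee figure is cited in the text; the
# Amazon figure is derived from ~$70B in revenue and ~110,000 employees.
walmart_sales_per_employee = 200_000
amazon_sales_per_employee = 70e9 / 110_000  # ~ $636,000

def jobs_per_billion(sales_per_employee):
    return 1e9 / sales_per_employee

print(f"Walmart: ~{jobs_per_billion(walmart_sales_per_employee):,.0f} jobs per $1B of sales")
print(f"Amazon:  ~{jobs_per_billion(amazon_sales_per_employee):,.0f} jobs per $1B of sales")
# Roughly 5,000 vs. 1,570 -- about three times as many jobs per dollar of sales.
```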

If the government is going to be in the business of redistributing wealth, a better approach would be to raise the earned income tax credit and increase taxes to pay for it. Not only would this raise the income of low-paid workers, it would also subsidize businesses so they would be more competitive in world markets and encourage them to create jobs. Since the minimum wage would not go up, moving jobs overseas would be a less attractive alternative.

If policy makers want to attack income inequality, they must pay more attention to the ways in which the Internet is reshaping businesses. If we ignore the power of the Internet when making policy decisions, we are in danger of allowing it to become the greatest legal facilitator of income inequality in the history of the planet.