“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption — and paid a cruel price

"The Internet's Own Boy": How the government destroyed Aaron Swartz
Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary “The Internet’s Own Boy: The Story of Aaron Swartz,” which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.



That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the “Guerilla Open Access Manifesto” Swartz wrote in 2008, at age 21, led directly to his overzealous federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been funded largely at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard research fellow at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney in Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony counts that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching copyright-enforcement bill that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, look in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.

“The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

http://www.salon.com/2014/06/24/the_internets_own_boy_how_the_government_destroyed_aaron_swartz/?source=newsletter

The Internet’s destructive gender gap: Why the Web can’t abandon its misogyny

People like Ezra Klein are showered with opportunity, while women face an online world hostile to their ambitions

Astra Taylor, TomDispatch.com

This piece originally appeared on TomDispatch.

The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.

An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.
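
For the technically curious, here is a minimal sketch of what that 80/20 pattern looks like in practice. It is illustrative only, not drawn from any study cited here: per-site traffic is sampled from a Pareto distribution (the textbook power law) and we measure how much of the total the top fifth of sites captures.

```python
import random

# Toy illustration of a power-law ("80/20") traffic distribution.
# Per-site traffic is drawn from a Pareto distribution with shape ~1.16,
# the value for which roughly 80% of the total goes to the top 20%.
# All numbers here are hypothetical.
random.seed(42)
traffic = sorted((random.paretovariate(1.16) for _ in range(100_000)), reverse=True)

top_fifth = traffic[: len(traffic) // 5]
share = sum(top_fifth) / sum(traffic)
print(f"Top 20% of sites capture about {share:.0%} of all traffic")
```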

In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.

Powerful and exceedingly familiar hierarchies have come to define the digital realm, whether you’re considering its economics or the social world it reflects and represents.  Not surprisingly, then, well-off white men are wildly overrepresented both in the tech industry and online.

Just take a look at gender and the Web comes quickly into focus, leaving you with a vivid sense of which direction the Internet is heading in and — small hint — it’s not toward equality or democracy.

Experts, Trolls, and What Your Mom Doesn’t Know

As a start, in the perfectly real world women shoulder a disproportionate share of household and child-rearing responsibilities, leaving them substantially less leisure time to spend online. Though a handful of high-powered celebrity “mommy bloggers” have managed to attract massive audiences and ad revenue by documenting their daily travails, they are the exceptions not the rule. In professional fields like philosophy, law, and science, where blogging has become popular, women are notoriously underrepresented; by one count, for instance, only around 20% of science bloggers are women.



An otherwise optimistic white paper by the British think tank Demos touching on the rise of amateur creativity online reported that white males are far more likely to be “hobbyists with professional standards” than other social groups, while you won’t be shocked to learn that low-income women with dependent children lag far behind. Even among the highly connected college-age set, research reveals a stark divergence in rates of online participation.

Socioeconomic status, race, and gender all play significant roles in a who’s who of the online world, with men considerably more likely to participate than women. “These findings suggest that Internet access may not, in and of itself, level the playing field when it comes to potential pay-offs of being online,” warns Eszter Hargittai, a sociologist at Northwestern University. Put simply, closing the so-called digital divide still leaves a noticeable gap; the more privileged your background, the more likely that you’ll reap the additional benefits of new technologies.

Some of the obstacles to online engagement are psychological, unconscious, and invidious. In a revealing study conducted twice over a span of five years — and yielding the same results both times — Hargittai tested and interviewed 100 Internet users and found that there was no significant variation in their online competency. In terms of sheer ability, the sexes were equal. The difference was in their self-assessments.

It came down to this: The men were certain they did well, while the women were wracked by self-doubt. “Not a single woman among all our female study subjects called herself an ‘expert’ user,” Hargittai noted, “while not a single male ranked himself as a complete novice or ‘not at all skilled.’” As you might imagine, how you think of yourself as an online contributor deeply influences how much you’re likely to contribute online.

The results of Hargittai’s study hardly surprised me. I’ve seen endless female friends be passed over by less talented, more assertive men. I’ve had countless people — older and male, always — assume that someone else must have conducted the interviews for my documentary films, as though a young woman couldn’t have managed such a thing without assistance. Research shows that people routinely underestimate women’s abilities, not least women themselves.

When it comes to specialized technical know-how, women are assumed to be less competent unless they prove otherwise. In tech circles, for example, new gadgets and programs are often introduced as being “so easy your mother or grandmother could use them.” A typical piece in the New York Times was titled “How to Explain Bitcoin to Your Mom.”  (Presumably, dad already gets it.)  This kind of sexism leapt directly from the offline world onto the Web and may only have intensified there.

And it gets worse. Racist, sexist, and homophobic harassment or “trolling” has become a depressingly routine aspect of online life.

Many prominent women have spoken up about their experiences being bullied and intimidated online — scenarios that sometimes escalate into the release of private information, including home addresses, e-mail passwords, and social security numbers, or simply devolve into an Internet version of stalking. Esteemed classicist Mary Beard, for example, “received online death threats and menaces of sexual assault” after a television appearance last year, as did British activist Caroline Criado-Perez after she successfully campaigned to get more images of women onto British banknotes.

Young women musicians and writers often find themselves targeted online by men who want to silence them. “The people who were posting comments about me were speculating as to how many abortions I’ve had, and they talked about ‘hate-fucking’ me,” blogger Jill Filipovic told the Guardian after photos of her were uploaded to a vitriolic online forum. Laurie Penny, a young political columnist who has faced similar persecution and recently published an ebook called Cybersexism, touched a nerve by calling a woman’s opinion the “short skirt” of the Internet: “Having one and flaunting it is somehow asking an amorphous mass of almost-entirely male keyboard-bashers to tell you how they’d like to rape, kill, and urinate on you.”

Alas, the trouble doesn’t end there. Women who are increasingly speaking out against harassers are frequently accused of wanting to stifle free speech. Or they are told to “lighten up” and that the harassment, however stressful and upsetting, isn’t real because it’s only happening online, that it’s just “harmless locker-room talk.”

As things currently stand, each woman is left alone to devise a coping mechanism as if her situation were unique. Yet these are never isolated incidents, however venomously personal the insults may be. (One harasser called Beard — and by online standards of hate speech this was mild — “a vile, spiteful excuse for a woman, who eats too much cabbage and has cheese straws for teeth.”)

Indeed, a University of Maryland study strongly suggests just how programmatic such abuse is. Those posting with female usernames, researchers were shocked to discover, received 25 times as many malicious messages as those whose designations were masculine or ambiguous. The findings were so alarming that the authors advised parents to instruct their daughters to use sex-neutral monikers online. “Kids can still exercise plenty of creativity and self-expression without divulging their gender,” a well-meaning professor said, effectively accepting that young girls must hide who they are to participate in digital life.

Over the last few months, a number of black women with substantial social media presences conducted an informal experiment of their own. Fed up with the fire hose of animosity aimed at them, Jamie Nesbitt Golden and others adopted masculine Twitter avatars. Golden replaced her photo with that of a hip, bearded, young white man, though she kept her bio and continued to communicate in her own voice. “The number of snarky, condescending tweets dropped off considerably, and discussions on race and gender were less volatile,” Golden wrote, marveling at how simply changing a photo transformed reactions to her. “Once I went back to Black, it was back to business as usual.”

Old Problems in New Media

Not all discrimination is so overt. A study summarized on the Harvard Business Review website analyzed social patterns on Twitter, where female users actually outnumbered males by 10%. The researchers reported “that an average man is almost twice [as] likely to follow another man [as] a woman” while “an average woman is 25% more likely to follow a man than a woman.” The results could not be explained by varying usage since both genders tweeted at the same rate.

Online as off, men are assumed to be more authoritative and credible, and thus deserving of recognition and support. In this way, long-standing disparities are reflected or even magnified on the Internet.

In his 2008 book The Myth of Digital Democracy, Matthew Hindman, a professor of media and public affairs at George Washington University, reports that of the top 10 blogs, only one belonged to a female writer. A wider census of every political blog with an average of over 2,000 visitors a week, or a total of 87 sites, found that only five were run by women, nor were there “identifiable African Americans among the top 30 bloggers,” though there was “one Asian blogger, and one of mixed Latino heritage.” In 2008, Hindman surveyed the blogosphere and found it less diverse than the notoriously whitewashed op-ed pages of print newspapers. Nothing suggests that, in the intervening six years, things have changed for the better.

Welcome to the age of what Julia Carrie Wong has called “old problems in new media,” as the latest well-funded online journalism start-ups continue to be helmed by brand-name bloggers like Ezra Klein and Nate Silver. It is “impossible not to notice that in the Bitcoin rush to revolutionize journalism, the protagonists are almost exclusively — and increasingly — male and white,” Emily Bell lamented in a widely circulated op-ed. It’s not that women and people of color aren’t doing innovative work in reporting and cultural criticism; it’s just that they get passed over by investors and financiers in favor of the familiar.

As Deanna Zandt and others have pointed out, such real-world lack of diversity is also regularly seen on the rosters of technology conferences, even as speakers take the stage to hail a democratic revolution on the Web, while audiences that look just like them cheer. In early 2013, in reaction to the announcement of yet another all-male lineup at a prominent Web gathering, a pledge was posted on the website of the Atlantic asking men to refrain from speaking at events where women are not represented. The list of signatories was almost immediately removed “due to a flood of spam/trolls.” The conference organizer, a successful developer, dismissed the uproar over Twitter. “I don’t feel [the] need to defend this, but am happy with our process,” he stated. Instituting quotas, he insisted, would be a “discriminatory” way of creating diversity.

This sort of rationalization means technology companies look remarkably like the old ones they aspire to replace: male, pale, and privileged. Consider Instagram, the massively popular photo-sharing and social networking service, which was founded in 2010 but only hired its first female engineer last year. While the percentage of computer and information sciences degrees women earned rose from 14% to 37% between 1970 and 1985, that share had depressingly declined to 18% by 2008.

Those women who do fight their way into the industry often end up leaving — their attrition rate is 56%, or double that of men — and sexism is a big part of what pushes them out. “I no longer touch code because I couldn’t deal with the constant dismissing and undermining of even my most basic work by the ‘brogramming’ gulag I worked for,” wrote one woman in a roundup of answers to the question: Why are there so few female engineers?

In Silicon Valley, Facebook’s Sheryl Sandberg and Yahoo’s Marissa Mayer excepted, the notion of the boy genius prevails.  More than 85% of venture capitalists are men generally looking to invest in other men, and women make 49 cents for every dollar their male counterparts rake in — enough to make a woman long for the wage inequities of the non-digital world, where on average they take home a whopping 77 cents on the male dollar. Though 40% of private businesses are women-owned nationwide, only 8% of the venture-backed tech start-ups are.

Established companies are equally segregated. The National Center for Women and Information Technology reports that in the top 100 tech companies, only 6% of chief executives are women. The numbers of Asians who get to the top are comparable, despite the fact that they make up one-third of all Silicon Valley software engineers. In 2010, not even 1% of the founders of Silicon Valley companies were black.

Making Your Way in a Misogynist Culture

What about the online communities that are routinely held up as exemplars of a new, networked, open culture? One might assume from all the “revolutionary” and “disruptive” rhetoric that they, at least, are better than the tech goliaths. Sadly, the data doesn’t reflect the hype. Consider Wikipedia. A survey revealed that women make up less than 15% of the contributors to the site, despite the fact that they use the resource in equal numbers to men.

In a similar vein, collaborative filtering sites like Reddit and Slashdot, heralded by the digerati as the cultural curating mechanisms of the future, cater to users who are up to 87% male and overwhelmingly young, wealthy, and white. Reddit, in particular, has achieved notoriety for its misogynist culture, with threads where rapists have recounted their exploits and photos of underage girls got posted under headings like “Chokeabitch,” “N*****jailbait,” and “Creepshots.”

Though open source is often held up as a paragon of political virtue, evidence suggests that as few as 1.5% of open source programmers are women, a number far lower than in the computing profession as a whole. In response, analysts have blamed everything from chauvinism, assumptions of inferiority, and outrageous examples of impropriety (including sexual harassment at conferences where programmers gather) to a lack of women mentors and role models. Yet the advocates of open-source production continue to insist that their culture exemplifies a new and ethical social order ruled by principles of equality, inclusivity, freedom, and democracy.

Unfortunately, it turns out that openness, when taken as an absolute, actually aggravates the gender gap. The peculiar brand of libertarianism in vogue within technology circles means a minority of members — a couple of outspoken misogynists, for example — can disproportionately affect the behavior and mood of the group under the cover of free speech. As Joseph Reagle, author of Good Faith Collaboration: The Culture of Wikipedia, points out, women are not supposed to complain about their treatment, but if they leave — that is, essentially are driven from — the community, that’s a decision they alone are responsible for.

“Urban” Planning in a Digital Age

The digital is not some realm distinct from “real” life, which means that the marginalization of women and minorities online cannot be separated from the obstacles they confront offline. Comparatively low rates of digital participation and the discrimination faced by women and minorities within the tech industry matter — and not just because they give the lie to the egalitarian claims of techno-utopians. Such facts and figures underscore the relatively limited experiences and assumptions of the people who design the systems we depend on to use the Internet — a medium that has, after all, become central to nearly every facet of our lives.

In a powerful sense, programmers and the corporate officers who employ them are the new urban planners, shaping the virtual frontier into the spaces we occupy, building the boxes into which we fit our lives, and carving out the routes we travel. The choices they make can segregate us further or create new connections; the algorithms they devise can exclude voices or bring more people into the fold; the interfaces they invent can expand our sense of human possibility or limit it to the already familiar.

What vision of a vibrant, thriving city informs their view? Is it a place that fosters chance encounters or does it favor the predictable? Are the communities they create mixed or gated? Are they full of privately owned shopping malls and sponsored billboards or are there truly public squares? Is privacy respected? Is civic engagement encouraged? What kinds of people live in these places and how are they invited to express themselves? (For example, is trolling encouraged, tolerated, or actively discouraged or blocked?)

No doubt, some will find the idea of engineering online platforms to promote diversity unsettling and — a word with some irony embedded in it — paternalistic, but such criticism ignores the ways online spaces are already contrived with specific outcomes in mind.  They are, as a start, designed to serve Silicon Valley venture capitalists, who want a return on investment, as well as advertisers, who want to sell us things. The term “platform,” which implies a smooth surface, misleads us, obscuring the ways technology companies shape our online lives, prioritizing certain purposes over others, certain creators over others, and certain audiences over others.

If equity is something we value, we have to build it into the system, developing structures that encourage fairness, serendipity, deliberation, and diversity through a process of trial and error. The question of how we encourage, or even enforce, diversity in so-called open networks is not easy to answer, and there is no obvious and uncomplicated solution to the problem of online harassment. As a philosophy, openness can easily rationalize its own failure, chalking people’s inability to participate up to choice, and keeping with the myth of the meritocracy, blaming any disparities in audience on a lack of talent or will.

That’s what the techno-optimists would have us believe, dismissing potential solutions as threats to Internet freedom and as forceful interference in a “natural” distribution pattern. The word “natural” is, of course, a mystification, given that technological and social systems are not found growing in a field, nurtured by dirt and sun. They are made by human beings and so can always be changed and improved.

Astra Taylor is a writer, documentary filmmaker (including Zizek! and Examined Life), and activist. Her new book, “The People’s Platform: Taking Back Power and Culture in the Digital Age” (Metropolitan Books), has just been published. This essay is adapted from it. She also helped launch the Occupy offshoot Strike Debt and its Rolling Jubilee campaign.


http://www.salon.com/2014/04/10/the_internets_destructive_gender_gap_why_the_web_cant_abandon_its_misogyny_partner/?source=newsletter

25 things you might not know about the web on its 25th birthday

It sprang from the brain of one man, Tim Berners-Lee, and is the fastest-growing communication medium of all time. A quarter-century on, we examine how the web has transformed our lives

 


 

1 The importance of “permissionless innovation”

The thing that is most extraordinary about the internet is the way it enables permissionless innovation. This stems from two epoch-making design decisions made by its creators in the early 1970s: that there would be no central ownership or control; and that the network would not be optimised for any particular application: all it would do is take in data-packets from an application at one end, and do its best to deliver those packets to their destination.

It was entirely agnostic about the contents of those packets. If you had an idea for an application that could be realised using data-packets (and were smart enough to write the necessary software) then the network would do it for you with no questions asked. This had the effect of dramatically lowering the bar for innovation, and it resulted in an explosion of creativity.
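
For readers who like to see the idea in code, here is a minimal sketch of that agnosticism (an illustration, not anything from the internet's actual software): an application hands the network a destination address and an opaque bag of bytes, and the network's only job is delivery.

```python
import socket

# The network treats the payload as opaque bytes: it looks only at the
# destination address, never at what the bytes "mean" to the application.
payload = b"any application-defined data: a web page, a song, a protocol not yet invented"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # one UDP datagram
sock.sendto(payload, ("127.0.0.1", 9999))  # hypothetical local receiver
sock.close()
```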

What the designers of the internet created, in effect, was a global machine for springing surprises. The web was the first really big surprise and it came from an individual – Tim Berners-Lee – who, with a small group of helpers, wrote the necessary software and designed the protocols needed to implement the idea. And then he launched it on the world by putting it on the Cern internet server in 1991, without having to ask anybody’s permission.

2 The web is not the internet

Although many people (including some who should know better) often confuse the two. Neither is Google the internet, nor Facebook the internet. Think of the net as analogous to the tracks and signalling of a railway system, and applications – such as the web, Skype, file-sharing and streaming media – as kinds of traffic which run on that infrastructure. The web is important, but it’s only one of the things that runs on the net.

3 The importance of having a network that is free and open

The internet was created by government and runs on open source software. Nobody “owns” it. Yet on this “free” foundation, colossal enterprises and fortunes have been built – a fact that the neoliberal fanatics who run internet companies often seem to forget. Berners-Lee could have been as rich as Croesus if he had viewed the web as a commercial opportunity. But he didn’t – he persuaded Cern that it should be given to the world as a free resource. So the web in its turn became, like the internet, a platform for permissionless innovation. That’s why a Harvard undergraduate was able to launch Facebook on the back of the web.

4 Many of the things that are built on the web are neither free nor open

Mark Zuckerberg was able to build Facebook because the web was free and open. But he hasn’t returned the compliment: his creation is not a platform from which young innovators can freely spring the next set of surprises. The same holds for most of the others who have built fortunes from exploiting the facilities offered by the web. The only real exception is Wikipedia.

5 Tim Berners-Lee is Gutenberg’s true heir

In 1455, with his revolution in printing, Johannes Gutenberg single-handedly launched a transformation in mankind’s communications environment – a transformation that has shaped human society ever since. Berners-Lee is the first individual since then to have done anything comparable.

6 The web is not a static thing

The web we use today is quite different from the one that appeared 25 years ago. In fact it has been evolving at a furious pace. You can think of this evolution in geological “eras”. Web 1.0 was the read-only, static web that existed until the late 1990s. Web 2.0 is the web of blogging, Web services, mapping, mashups and so on – the web that American commentator David Weinberger describes as “small pieces, loosely joined”. The outlines of web 3.0 are only just beginning to appear as web applications that can “understand” the content of web pages (the so-called “semantic web”), the web of data (applications that can read, analyse and mine the torrent of data that’s now routinely published on websites), and so on. And after that there will be web 4.0 and so on ad infinitum.

7 Power laws rule OK

In many areas of life, the law of averages applies – most things are statistically distributed in a pattern that looks like a bell. This pattern is called the “normal distribution”. Take human height. Most people are of average height and there are relatively few very tall and very short people. But very few – if any – online phenomena follow a normal distribution. Instead they follow what statisticians call a power law distribution, which is why a very small number of the billions of websites in the world attract the overwhelming bulk of the traffic while the long tail of other websites has very little.

8 The web is now dominated by corporations

Despite the fact that anybody can launch a website, the vast majority of the top 100 websites are run by corporations. The only real exception is Wikipedia.

9 Web dominance gives companies awesome (and unregulated) powers

Take Google, the dominant search engine. If a Google search doesn’t find your site, then in effect you don’t exist. And this will get worse as more of the world’s business moves online. Every so often, Google tweaks its search algorithms in order to thwart those who are trying to “game” them in what’s called search engine optimisation. Every time Google rolls out the new tweaks, however, entrepreneurs and organisations find that their online business or service suffers or disappears altogether. And there’s no real comeback for them.

10 The web has become a memory prosthesis for the world

Have you noticed how you no longer try to remember some things because you know that if you need to retrieve them you can do so just by Googling?

11 The web shows the power of networking

The web is based on the idea of “hypertext” – documents in which some terms are dynamically linked to other documents. But Berners-Lee didn’t invent hypertext – Ted Nelson did in 1963 and there were lots of hypertext systems in existence long before Berners-Lee started thinking about the web. But the existing systems all worked by interlinking documents on the same computer. The twist that Berners-Lee added was to use the internet to link documents that could be stored anywhere. And that was what made the difference.

12 The web has unleashed a wave of human creativity

Before the web, “ordinary” people could publish their ideas and creations only if they could persuade media gatekeepers (editors, publishers, broadcasters) to give them prominence. But the web has given people a global publishing platform for their writing (Blogger, WordPress, Typepad, Tumblr), photographs (Flickr, Picasa, Facebook), audio and video (YouTube, Vimeo); and people have leapt at the opportunity.

13 The web should have been a read-write medium from the beginning

Berners-Lee’s original desire was for a web that would enable people not only to publish, but also to modify, web pages, but in the end practical considerations led to the compromise of a read-only web. Anybody could publish, but only the authors or owners of web pages could modify them. This led to the evolution of the web in a particular direction and it was probably the factor that guaranteed that corporations would in the end become dominant.

14 The web would be much more useful if web pages were machine-understandable

Web pages are, by definition, machine-readable. But machines can’t understand what they “read” because they can’t do semantics. So they can’t easily determine whether the word “Casablanca” refers to a city or to a movie. Berners-Lee’s proposal for the “semantic web” – ie a way of restructuring web pages to make it easier for computers to distinguish between, say, Casablanca the city and Casablanca the movie – is one approach, but it would require a lot of work upfront and is unlikely to happen on a large scale. What may be more useful are increasingly powerful machine-learning techniques that will make computers better at understanding context.
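
One way to picture the idea is with structured annotations such as JSON-LD over the schema.org vocabulary (a hedged illustration, not Berners-Lee's own specification): the page states explicitly which "Casablanca" it means, so a machine no longer has to guess from context.

```python
import json

# Two JSON-LD descriptions using schema.org types. A human sees the same
# word "Casablanca" in both; a machine reading the @type field can tell
# the film from the city without doing any semantics of its own.
movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Casablanca",
    "datePublished": "1942",
}
city = {
    "@context": "https://schema.org",
    "@type": "City",
    "name": "Casablanca",
    "containedInPlace": {"@type": "Country", "name": "Morocco"},
}
print(json.dumps(movie, indent=2))
print(json.dumps(city, indent=2))
```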

15 The importance of killer apps

A killer application is one that makes the adoption of a technology a no-brainer. The spreadsheet was the killer app for the first Apple computer. Email was the first killer app for the Arpanet – the internet’s precursor. The web was the internet’s first killer app. Before the web – and especially before the first graphical browser, Mosaic, appeared in 1993 – almost nobody knew or cared about the internet (which had been running since 1983). But after the web appeared, suddenly people “got” it, and the rest is history.

16 WWW is linguistically unique

Well, perhaps not, but Douglas Adams claimed that it was the only set of initials that took longer to say than the thing it was supposed to represent.

17 The web is a startling illustration of the power of software

Software is pure “thought stuff”. You have an idea; you write some instructions in a special language (a computer program); and then you feed it to a machine that obeys your instructions to the letter. It’s a kind of secular magic. Berners-Lee had an idea; he wrote the code; he put it on the net, and the network did the rest. And in the process he changed the world.

18 The web needs a micro-payment system

In addition to being just a read-only system, the other initial drawback of the web was that it did not have a mechanism for rewarding people who published on it. That was because no efficient online payment system existed for securely processing very small transactions at large volumes. (Credit-card systems are too expensive and clumsy for small transactions.) But the absence of a micro-payment system led to the evolution of the web in a dysfunctional way: companies offered “free” services that had a hidden and undeclared cost, namely the exploitation of the personal data of users. This led to the grossly tilted playing field that we have today, in which online companies get users to do most of the work while only the companies reap the financial rewards.

19 We thought that the HTTPS protocol would make the web secure. We were wrong

HTTP is the protocol (agreed set of conventions) that normally regulates conversations between your web browser and a web server. But it’s insecure because anybody monitoring the interaction can read it. HTTPS (stands for HTTP Secure) was developed to encrypt in-transit interactions containing sensitive data (eg your credit card details). The Snowden revelations about US National Security Agency surveillance suggest that the agency may have deliberately weakened this and other key internet protocols.
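
A small illustration of the difference, using nothing but Python's standard library (example.com is simply a placeholder host): the two requests are identical at the application level, but only the second travels inside an encrypted TLS tunnel.

```python
import http.client

# Plain HTTP: request and response cross the network as readable text,
# visible to anyone on the path (an ISP, a wifi snooper, an agency).
plain = http.client.HTTPConnection("example.com", 80, timeout=10)
plain.request("GET", "/")
print("HTTP status:", plain.getresponse().status)
plain.close()

# HTTPS: the same exchange wrapped in TLS, so intermediaries see only
# which host was contacted and a stream of encrypted bytes.
secure = http.client.HTTPSConnection("example.com", 443, timeout=10)
secure.request("GET", "/")
print("HTTPS status:", secure.getresponse().status)
secure.close()
```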

20 The web has an impact on the environment. We just don’t know how big it is

The web is largely powered by huge server farms located all over the world that need large quantities of electricity for computers and cooling. (Not to mention the carbon footprint and natural resource costs of the construction of these installations.) Nobody really knows what the overall environmental impact of the web is, but it’s definitely non-trivial. A couple of years ago, Google claimed that its carbon footprint was on a par with that of Laos or the United Nations. The company now claims that each of its users is responsible for about eight grams of carbon dioxide emissions every day. Facebook claims that, despite its users’ more intensive engagement with the service, it has a significantly lower carbon footprint than Google.

21 The web that we see is just the tip of an iceberg

The web is huge – nobody knows how big it is, but what we do know is that the part of it that is reached and indexed by search engines is just the surface. Most of the web is buried deep down – in dynamically generated web pages, pages that are not linked to by other pages and sites that require logins – which are not reached by these engines. Most experts think that this deep (hidden) web is several orders of magnitude larger than the 2.3 billion pages that we can see.

22 Tim Berners-Lee’s boss was the first of many people who didn’t get it initially

Berners-Lee’s manager at Cern scribbled “vague but interesting” on the first proposal Berners-Lee submitted to him. Most people confronted with something that is totally new probably react the same way.

23 The web has been the fastest-growing communication medium of all time

One measure is how long a medium takes to reach the first 50 million users. It took broadcast radio 38 years and television 13 years. The web got there in four.

24 Web users are ruthless readers

The average page visit lasts less than a minute. The first 10 seconds are critical for users’ decision to stay or leave. The probability of their leaving is very high during these seconds. They’re still highly likely to leave during the next 20 seconds. It’s only after they have stayed on a page for about 30 seconds that the chances improve that they will finish it.

25 Is the web making us stupid?

Writers like Nick Carr are convinced that it is. He thinks that fewer people engage in contemplative activities because the web distracts them so much. “With the exception of alphabets and number systems,” he writes, “the net may well be the single most powerful mind-altering technology that has ever come into general use.” But technology giveth and technology taketh away. For every techno-pessimist like Carr, there are thinkers like Clay Shirky, Jeff Jarvis, Yochai Benkler, Don Tapscott and many others (including me) who think that the benefits far outweigh the costs.

John Naughton’s From Gutenberg to Zuckerberg is published by Quercus

 

http://www.theguardian.com/technology/2014/mar/09/25-years-web-tim-berners-lee

Let’s build our own internet, with blackjack and hookers.

 

The Pirate Bay, delving further into the anti-censorship battle, may have just invented a new type of internet, hosted peer-to-peer, and maintained using the Bitcoin protocol.

Love them or hate them, The Pirate Bay are always ahead of the curve on digital rights, especially copyright, DRM and censorship. Now I’m not one to say ‘they give me free shit, awesome hur dur’. Artist remuneration is important to me, and in many senses TPB circumvents it. But the current copyright system is broken. Fractions of the dollar go to the artists, and archaic content distribution models mean lots of content can’t be seen legally without 100 channels of cable or a $40 DVD.

Media pirates

People consume media differently and the market largely hasn’t caught up. Progressive media groups, like Netflix, actually use TPB stats to work out which programs to book. It’s acknowledged that freely distributing your content is a great way to get exposure. Most bands will seed a torrent in the hope it goes viral. So clearly there’s merit to the model.

 


Now if all TPB did was make it easier for people to OD on Game of Thrones I’d still be impressed. Their fractured, cloud-hosted solutions and domain hopping have been a beacon of hope to everyone who feels uncomfortable with bolder and bolder attempts to centralise and regulate an internet built by and for free thinkers.

But what matters now is what they’re doing to bypass censorship.

Thought police

You see, the internet, and its contents, is a bit like an ocean. It’s huge, it’s untamed, it has dangerous, disgusting depths and beautiful vistas. More and more, however, you, the user, are shunted onto the tourist beaches for your own good. You don’t even see “no access” signs for the areas that aren’t safe. Through the wizardry of IP blocking they make it so you can’t even see they were ever there. So instead you paddle in the shallows, reading 9gag and sharing snapchats of your cat’s hat.

TPB’s first step was the Pirate Bay Browser, very similar to the Tor browser but without IP masking (so you aren’t anonymous). This browser means users aren’t limited in their access because of their location.

It’s not just China that limits its internet access: most countries live in a media bubble, from blocking access to movies and shows because licensing doesn’t allow it, to restricting the news that is readily available. The people in office aren’t even being subtle anymore. Consider the porn filter in the UK: they are restricting content based on the views of a moral minority who happen to hold political (and one would assume economic) power. If you think this is going to be anything other than more prevalent in the near future, or that this doesn’t affect you, then you need a better understanding of the role of free speech in government accountability.

 


Fighting back

However, even with IP masking, governments can still get right to the source: block an IP address, confiscate servers, and basically kill a website. All well and good to stop child porn and nuclear warhead plans from being distributed, but this is also more than likely to be used to silence boat rockers, dissidents and anyone who challenges the current politico-economic paradigm that keeps the suits in limos. Consider WikiLeaks, who have been under attack merely for holding the government’s own actions up to the light for scrutiny.

The way TPB are addressing this is with a decentralised, peer-to-peer internet.

You heard me right.

This means domain blocking is impossible, servers can’t be seized, and the powers that be lose their usual tools for limiting free speech that challenges the political or economic status quo.

Decentralise everything

The way it works is that it stores a site’s indexable data on your computer, so you host little chunks of the sites you visit, in much the same way as people host chunks of data when maintaining a seed for a torrent file.

Users will be able to register their ‘domain’ using bitcoin, on a first come, first served basis, renewing every year. This means that even the registration system is decentralised, in fact relying on a completely different decentralised network. That is one hell of a built-in redundancy.

It will use its own alternative DNS-like system, but there is no real IP address to take down, as the database will be scattered across a global decentralised network of users. No points of failure and no centralised control mechanisms mean it could become a very robust platform for maintaining free speech.
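
To picture just the registration side, here is a toy sketch. The Pirate Bay had published no protocol details at the time, so every name and rule below is invented for illustration: an append-only ledger that every peer replicates, with names handed out first come, first served and expiring after a year.

```python
import time
from dataclasses import dataclass

YEAR_SECONDS = 365 * 24 * 3600

@dataclass
class Registration:
    name: str           # e.g. "example.p2p" (hypothetical naming scheme)
    owner_key: str      # owner identified by a public key, not a registrar account
    registered_at: float

class Ledger:
    """Append-only record list replicated by every peer; no central registrar to seize."""

    def __init__(self):
        self.records = []

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        live = [r for r in self.records
                if r.name == name and now - r.registered_at < YEAR_SECONDS]
        return live[-1] if live else None

    def register(self, name, owner_key, now=None):
        now = time.time() if now is None else now
        if self.resolve(name, now) is not None:
            return False            # first come, first served: name is still held
        self.records.append(Registration(name, owner_key, now))
        return True                 # renewal = registering again once the year is up

ledger = Ledger()
print(ledger.register("example.p2p", "key-alice"))  # True: name was free
print(ledger.register("example.p2p", "key-bob"))    # False: already taken
```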

There are issues: for example, what happens if you unwittingly host illegal content, or if the bulk of the sites you use are very data-hungry? The system has only just been announced, so further news may quash or exacerbate these concerns.

Do we need it?

In a world where the original ideals of a free internet are being consumed by data discrimination, PRISM, the NSA and the TPP, this pirate web may be one of the few places where true subversive discussion can occur. It may just halt part of a concerted effort to turn the net into a homogenised tracking device, used to buy iPads and photograph food, whilst being spied on and lied to.

While people may ask why it is needed, it must be remembered that a benign government only stays so under constant scrutiny and absolute accountability to the governed. That can only happen where there is a completely unfettered platform for free speech and sharing.

Love them or hate them, what The Pirate Bay have done, are doing and will do with this peer-to-peer protocol may be key to your political freedoms and human rights in the future.


The Golden Age of Journalism?

Tomgram: Engelhardt, The Rise of the Reader
Posted by Tom Engelhardt at 8:08am, January 21, 2014.

Your Newspaper, Your Choice
By Tom Engelhardt

It was 1949.  My mother — known in the gossip columns of that era as “New York’s girl caricaturist” — was freelancing theatrical sketches to a number of New York’s newspapers and magazines, including the Brooklyn Eagle.  That paper, then more than a century old, had just a few years of life left in it.  From 1846 to 1848, its editor had been the poet Walt Whitman.  In later years, my mother used to enjoy telling a story about the Eagle editor she dealt with who, on learning that I was being sent to Walt Whitman kindergarten, responded in the classically gruff newspaper manner memorialized in movies like His Girl Friday: “Are they still naming things after that old bastard?”

In my childhood, New York City was, you might say, papered with newspapers.  The Daily News, the Daily Mirror, the Herald Tribune, the Wall Street Journal… there were perhaps nine or 10 significant ones on newsstands every day and, though that might bring to mind some golden age of journalism, it’s worth remembering that a number of them were already amalgams.  The Journal-American, for instance, had once been the Evening Journal and the American, just as the World-Telegram & Sun had been a threesome, the World, the Evening Telegram, and the Sun.  In my own household, we got the New York Times (disappointingly comic-strip-less), the New York Post (then a liberal, not a right-wing, rag that ran Pogo and Herblock’s political cartoons) and sometimes the Journal-American (Believe It or Not and The Phantom).

Then there were always the magazines: in our house, Life, the Saturday Evening Post, Look, the New Yorker — my mother worked for some of them, too — and who knows what else in a roiling mass of print.  It was a paper universe all the way to the horizon, though change and competition were in the air.  After all, the screen (the TV screen, that is) was entering the American home like gangbusters. Mine arrived in 1953 when the Post assigned my mother to draw the Army-McCarthy hearings, which — something new under the sun — were to be televised live by ABC.

Still, at least in my hometown, it seemed distinctly like a golden age of print news, if not of journalism.  Some might reserve that label for the shake-up, breakdown era of the 1960s, that moment when the New Journalism arose, an alternative press burst onto the scene, and for a brief moment in the late 1960s and early 1970s, the old journalism put its mind to uncovering massacres, revealing the worst of American war, reporting on Washington-style scandal, and taking down a president.  In the meantime, magazines like Esquire and Harper’s came to specialize in the sort of chip-on-the-shoulder, stylish voicey-ness that would, one day, become the hallmark of the online world and the age of the Internet.  (I still remember the thrill of first reading Tom Wolfe’s “The Kandy-Kolored Tangerine-Flake Streamline Baby” on the world of custom cars.  It put the vrrrooom into writing in a dazzling way.)

However, it took the arrival of the twenty-first century to turn the journalistic world of the 1950s upside down and point it toward the trash heap of history.  I’m talking about the years that shrank the screen and put it first on your desk, then in your hand, next in your pocket, and one day soon on your eyeglasses, and made it the way you connected with everyone on Earth and they — whether as friends, enemies, the curious, voyeurs, corporate sellers and buyers, or the NSA — with you.  Only then did it become apparent that, throughout the print era, all those years of paper running off presses and newsboys and newsstands, from Walt Whitman to Woodward and Bernstein, the newspaper had been misnamed.

Journalism’s amour propre had overridden a clear-eyed assessment of what exactly the paper really was.  Only then would it be fully apparent that it always should have been called the “adpaper.”  When the corporation and the “Mad Men” who worked for it spied the Internet and saw how conveniently it gathered audiences and what you could learn about their lives, preferences, and most intimate buying habits, the ways you could slice and dice demographics and sidle up to potential customers just behind the ever-present screen, the ad began to flee print for the online world.  It was then, of course, that papers (as well as magazines) — left with overworked, ever-smaller staffs, evaporating funding, and the ad-less news — began to shudder, shrink, and in some cases collapse (as they might not have done if the news had been what fled).

New York still has four dailies (Murdoch’s Post, the Daily News, the New York Times, and the Wall Street Journal).  However, in recent years, many two-paper towns like Denver and Seattle morphed into far shakier one-paper towns as papers like the Rocky Mountain News and the Seattle Post-Intelligencer passed out of existence (or into only digital existence).  Meanwhile, the Detroit News and Detroit Free Press went over to a three-day-a-week home delivery print edition, and the Times Picayune of New Orleans went down to a three-day-a-week schedule (before returning as a four-day Picayune and a three-day-a-week tabloid in 2013).  The Christian Science Monitor stopped publishing a weekday paper altogether.  And so it went.  In those years, newspaper advertising took a terrible hit, circulation declined, sometimes precipitously, and bankruptcies were the order of the day.

The least self-supporting sections, like book reviews, simply evaporated, and in the one place of significance where a book review section remained, the New York Times, it shrank.  Sunday magazines shriveled up.  Billionaires began to buy papers at bargain-basement prices as, in essence, vanity projects.  Jobs and staffs were radically cut (as were the TV versions of the same, so that, for example, if you tune in to NBC’s Nightly News with Brian Williams, you often have the feeling that the estimable Richard Engel, with the job title of chief foreign correspondent, is the only “foreign correspondent” still on the job, flown eternally from hot spot to hot spot around the globe).

No question about it, if you were an established reporter of a certain age or anyone who worked in a newsroom, this was proving to be the aluminum age of journalism.  Your job might be in jeopardy, along with maybe your pension, too.  In these years, stunned by what was suddenly happening to them, the management of papers stood for a time frozen in place like the proverbial deer in the headlights as the voicey-ness of the Internet broke over them, turning their op-ed pages into the grey sisters of the reading world.  Then, in a blinding rush to save what could be saved, recapture the missing ad, or find any other path to a new model of profitability, from digital advertising (disappointing) to pay walls (a mixed bag), papers rushed online.  In the process, they doubled the work of the remaining journalists and editors, who were now to service both the new newspaper and the old.

The Worst of Times, the Best of Times

In so many ways, it’s been, and continues to be, a sad, even horrific, tale of loss.  (A similar tale of woe involves the printed book.  Its only advantage: there were no ads to flee the premises, but it suffered nonetheless — already largely crowded out of the newspaper as a non-revenue producer and out of consciousness by a blitz of new ways of reading and being entertained. And I say that as someone who has spent most of his life as an editor of print books.)  The keening and mourning about the fall of print journalism has gone on for years.  It’s a development that represents — depending on who’s telling the story — the end of an age, the fall of all standards, or the loss of civic spirit and the sort of investigative coverage that might keep a few more politicians and corporate heads honest, and so forth and so on.

Let’s admit that the sins of the Internet are legion and well-known: the massive programs of government surveillance it enables; the corporate surveillance it ensures; the loss of privacy it encourages; the flamers and trolls it births; the conspiracy theorists, angry men, and strange characters to whom it gives a seemingly endless moment in the sun; and the way, among other things, it tends to sort like and like together in a self-reinforcing loop of opinion.  Yes, yes, it’s all true, all unnerving, all terrible.

As the editor of TomDispatch.com, I’ve spent the last decade-plus plunged into just that world, often with people half my age or younger.  I don’t tweet.  I don’t have a Kindle or the equivalent.  I don’t even have a smart phone or a tablet of any sort.  When something — anything — goes wrong with my computer I feel like a doomed figure in an alien universe, wish for the last machine I understood (a typewriter), and then throw myself on the mercy of my daughter.

I’ve been overwhelmed, especially at the height of the Bush years, by cookie-cutter hate emails — sometimes scores or hundreds of them at a time — of a sort that would make your skin crawl.  I’ve been threatened.  I’ve repeatedly received “critical” (and abusive) emails, blasts of red-hot anger that would startle anyone, because the Internet, so my experience tells me, loosens inhibitions, wipes out taboos, and encourages a sense of anonymity that in the older world of print, letters, or face-to-face meetings would have been far less likely to take center stage.  I’ve seen plenty that’s disturbed me. So you’d think, given my age, my background, and my present life, that I, too, might be in mourning for everything that’s going, going, gone, everything we’ve lost.

But I have to admit it: I have another feeling that, at a purely personal level, outweighs all of the above.  In terms of journalism, of expression, of voice, of fine reporting and superb writing, of a range of news, thoughts, views, perspectives, and opinions about places, worlds, and phenomena that I wouldn’t otherwise have known about, there has never been an experimental moment like this.  I’m in awe.  Despite everything, despite every malign purpose to which the Internet is being put, I consider it a wonder of our age.  Yes, perhaps it is the age from hell for traditional reporters (and editors) working double-time, online and off, for newspapers that are crumbling, but for readers, can there be any doubt that now, not the 1840s or the 1930s or the 1960s, is the golden age of journalism?

Think of it as the upbeat twin of NSA surveillance.  Just as the NSA can reach anyone, so in a different sense can you.  Which also means, if you’re a website, anyone can, at least theoretically, find and read you.  (And in my experience, I’m often amazed at who can and does!)  And you, the reader, have in remarkable profusion the finest writing on the planet at your fingertips.  You can read around the world almost without limit, follow your favorite writers to the ends of the Earth.

The problem of this moment isn’t too little.  It’s not a collapsing world.  It’s way too much.  These days, in a way that was never previously imaginable, it’s possible to drown in provocative and illuminating writing and reporting, framing and opining.  In fact, I challenge you in 2014, whatever the subject and whatever your expertise, simply to keep up.

The Rise of the Reader

In the “golden age of journalism,” here’s what I could once do.  In the 1960s and early 1970s, I read the New York Times (as I still do in print daily), various magazines ranging from the New Yorker and Ramparts to “underground” papers like the Great Speckled Bird when they happened to fall into my hands, and I.F. Stone’s Weekly (to which I subscribed), as well as James Ridgeway and Andrew Kopkind’s Hard Times, among other publications of the moment.  Somewhere in those years or thereafter, I also subscribed to a once-a-week paper that had the best of the Guardian, the Washington Post, and Le Monde in it.  For the time, that covered a fair amount of ground.

Still, the limits of that “golden” moment couldn’t be more obvious now.  Today, after all, if I care to, I can read online every word of the Guardian, the Washington Post, and Le Monde (though my French is way too rusty to tackle it). And that’s every single day — and that, in turn, is nothing.

It’s all out there for you.  Most of the major dailies and magazines of the globe, trade publications, propaganda outfits, Pentagon handouts, the voiciest of blogs, specialist websites, the websites of individual experts with a great deal to say, websites, in fact, for just about anyone from historians, theologians, and philosophers to techies, book lovers, and yes, those fascinated with journalism.  You can read your way through the American press and the world press.  You can read whole papers as their editors put them together or — at least in your mind — you can become the editor of your own op-ed page every day of the week, three times, six times a day if you like (and odds are that it will be more interesting to you, and perhaps others, than the op-ed offerings of any specific paper you might care to mention).

You can essentially curate your own newspaper (or magazine) once a day, twice a day, six times a day.  Or — a particular blessing in the present ocean of words — you can rely on a new set of people out there who have superb collection and curating abilities, as well as fascinating editorial eyes.  I’m talking about teams of people at what I like to call “riot sites” — for the wild profusion of headlines they sport — like Antiwar.com (where no story worth reading about conflict on our planet seems to go unnoticed) or Real Clear Politics (Real Clear World/Technology/Energy/etc., etc., etc.).  You can subscribe to an almost endless range of curated online newsletters targeted to specific subjects, like the “morning brief” that comes to me every weekday filled with recommended pieces on cyberwar, terrorism, surveillance, and the like from the Center on National Security at Fordham Law School.  And I’m not even mentioning the online versions of your favorite print magazine, or purely online magazines like Salon.com, or the many websites I visit like Truthout, Alternet, Commondreams, and Truthdig with their own pieces and picks.  And in mentioning all of this, I’m barely scratching the surface of the world of writing that interests me.

There has, in fact, never been a DIY moment like this when it comes to journalism and coverage of the world.  Period.  For the first time in history, you and I have been put in the position of the newspaper editor.  We’re no longer simply passive readers at the mercy of someone else’s idea of how to “cover” or organize this planet and its many moving parts.  To one degree or another, to the extent that any of us have the time, curiosity, or energy, all of us can have a hand in shaping, reimagining, and understanding our world in new ways.

Yes, it is a journalistic universe from hell, a genuine nightmare; and yet, for a reader, it’s also an experimental world, something thrillingly, unexpectedly new under the sun.  For that reader, a strangely democratic and egalitarian Era of the Word has emerged.  It’s chaotic; it’s too much; and make no mistake, it’s also an unstable brew likely to morph into god knows what.  Still, perhaps someday, amid its inanities and horrors, it will also be remembered, at least for a brief historical moment, as a golden age of the reader, a time when all the words you could ever have needed were freely offered up for you to curate as you wish.  Don’t dismiss it.  Don’t forget it.

Tom Engelhardt, a co-founder of the American Empire Project and author of The United States of Fear as well as a history of the Cold War, The End of Victory Culture, runs the Nation Institute’s TomDispatch.com. His latest book, co-authored with Nick Turse, is Terminator Planet: The First History of Drone Warfare, 2001-2050.


Copyright 2014 Tom Engelhardt

Are Young People Harmed by Online Porn?

 

Isolation and voyeurism have big downsides.

 


January 8, 2014

This article originally appeared on The Fix, which covers addiction and recovery, straight up.

Men are reporting difficulty achieving intimacy in relationships at younger ages than ever, and are struggling well into adulthood to regain normal sexual function, according to sex addiction experts.

High-speed Internet pornography, more specifically the addiction to seeking novel and increasingly shocking images, is to blame for these sexual problems, according to therapists who counsel men and boys as young as preteens. “There seems to be a classic pattern that is emerging which is that the addiction to pornography develops in the adolescent years, stays hidden for a time, and not until the teen grows into adulthood and experiences serious marital conflict [does he] seek treatment,” said psychotherapist Matt Bulkley, counselor at the Youth Pornography Addiction Center in St. George, Utah.

Young viewers of Internet pornography are more likely to suffer long-term physiological and psychological damage lasting into adulthood because the exposure happened during a time when their brains were not yet finished developing, Bulkley explained. “In some cases, erectile dysfunction is the result of the brain being trained to be aroused by pornography,” he said.

The problems arise when a younger viewer who has not yet had any real-life romantic or sexual experience learns the “birds and the bees” from watching pornography. Teens may immediately experience feelings of confusion, isolation and shame when they view pornographic content. When that teen moves into adulthood seeking a relationship, he may have problems with sexual interest, arousal and monogamy. “When it comes to understanding intimacy, porn is masterful at distorting what it is that is involved in a real relationship,” Bulkley said.

How is Internet Pornography Addictive?

Scientists are just beginning to link heavy pornography viewing with the same pleasure-reward responses that occur in drug addiction. When viewing pornography, the brain releases large amounts of the neurotransmitter dopamine, the same chemical that drives reward-seeking behavior in substance addictions, according to Psychology Today contributor Gary Wilson.

Wilson is co-author of the book “Cupid’s Arrow” and the mastermind behind YourBrainOnPorn.com, a website that explores topics relating to neuroscience, behavioral addiction and sexual conditioning. In his article, “Why Shouldn’t Johnny Watch Porn if He Likes?” Wilson shows how younger brains are particularly susceptible to the thrill-seeking effect of dopamine as compared to adult viewers. Teen brains are the most sensitive to dopamine at around age 15 and react up to four times more strongly to images perceived as exciting. On top of the increased thrill-seeking, teens have a higher capacity to log long hours in front of a computer screen without experiencing burnout. Additionally, teens act based on emotional impulses rather than logical planning. These traits combined make the adolescent brain especially vulnerable to addiction.

Pornography addiction during adolescence is particularly troubling because of the way neuron pathways in the brain form during this period. The circuitry in the brain undergoes an explosion of growth followed by a rapid pruning of neuron pathways between ages 10 and 13. Wilson describes this as the “use it or lose it” period of a teen’s development.

“We restrict our options — without realizing how critical our choices were during our final, pubescent, neuronal growth spurt,” Wilson wrote. “ … This is one reason why polls asking teens how Internet porn use is affecting them are unlikely to reveal the extent of porn’s effects. Kids who have never masturbated without porn have no idea how it is affecting them.”

Teens are left without an understanding of normal sexual behavior because they have been repeatedly exposed to the superstimuli of constant novelty and constant searching provided by Internet pornography.

Lasting Effects of Internet Pornography Addiction at an Early Age

The very components that define Internet pornography — isolation, voyeurism, multiplicity, variety — also explain why online porn is more addictive and damaging than the pornography of yesterday. “There was a time when people looked at pornography in print magazines and some [viewers] were specifically drawn to it more than others,” psychotherapist Alexandra Katehakis told The Fix. “Then, over time, there was video pornography and that grabbed the brain differently than print did. Now, internet pornography is so powerful that it is literally rewiring the brains of men.”

Young viewers are unintentionally training their bodies to become aroused by the unique conditions provided by internet pornography, explained Katehakis, who is also a certified sex addiction therapist and clinical director of the Center for Healthy Sex in Los Angeles. “What happens is when these neuronal networks start to fire together, they become wired together,” she said. “With internet porn, the images are so incredibly powerful and visceral that it is shocking to the system and a person gets a massive dose of dopamine … over time, they need more and more [dopamine].”

While most of those who identify as having a pornography addiction are male, females are also susceptible and can experience lasting damage as well, Katehakis said. 

The same principles apply — sexual response is wired to what was learned by watching porn. For females, this can distort perceptions of validation, pleasure and their role in sex. “Parents need to have conversations with their kids,” Katehakis added. “They need to talk about what is the purpose of sex, what is the meaning of sex and why people have sex.” Without those conversations, teens move into adulthood without real knowledge of healthy relationships. “Later in life there may be intimacy problems, the inability to connect with another human being and the inability to maintain a long-term monogamous relationship,” she said.

Seeking Help for Pornography Addiction

The stigma surrounding pornography addiction — many treatment centers do not yet recognize it — leads many of the afflicted to feel isolated and depressed, which can heighten the need for the feel-good response triggered by the addiction itself.

The simplest treatment may also be the hardest. “The most important thing to do is to stop looking at it,” Katehakis said. “For the young men we’ve treated, they literally have to go on a porn diet for three to five months to get an erection again.”

“Also, stopping looking at images isn’t enough,” she continued. “Often a person can find himself still looking at images in his head. Some people can look at [pornography] like some people can have a glass of wine and not have another, while other people can really never look at it again.”

Centers which treat sex addiction will often also treat pornography addiction, although the two are very different: pornography involves pixels and not another human being.

“The main thing that the general population needs to understand is that [pornography] can really become a bona fide addiction and to not underestimate the potential impact of this on a teen’s life,” Bulkley said. Teens who are addicted to online pornography may show symptoms such as increased time spent in isolation, increased time spent viewing technological devices, changes in attitude or behavior such as hypersexual language or dress, and decreased focus in school and other activities.

Counselors at the Youth Pornography Addiction Center in Utah help teens reset their thinking by uncovering the underlying issues that existed before or were aggravated by the addiction. “An addiction is a coping mechanism,” Bulkley explained. “Rather than solving the problem, they turn to this temporary escape.” Helping teens create an action plan to identify problems and how to overcome urges is one formula used for outpatient counseling at Bulkley’s center.

For more intensive treatment, the center also has a wilderness program where teens “detox” from not only technology and internet pornography, but also from the highly sexualized images that are prevalent everywhere from bus bench advertisements to cosmetic product packaging.

However, as with many things, problems can be averted early on by having conversations with your family, Bulkley said. “Parents need to understand, like it or not, kids are going to be exposed to pornography … You can do everything you can to protect them, but with the sexualization of our culture and the ease of access, it’s not if, it’s when.”

“It’s about having an ongoing conversation with your kids,” Bulkley continued, “and it really has to be an early discussion and ongoing dialogue that continues through their growing-up years.”

Sarah Peters has written for the Los Angeles Times, The Daily Pilot and the California Health Report. This is her first story for The Fix.

The Internet is one giant hoax

 

Shia LaBeouf reminds us that everything online is just a remix of something else. So why are we paying attention?

 


 

This piece originally appeared on Pajiba.

I don’t know what’s real anymore.

Shia LaBeouf has made the Internet slightly weirder and angrier than usual this week, at least in film critic and pop culture circles, when it became clear that he’d ripped off dialogue and visuals in his short film HowardCantour.com from a Daniel Clowes comic called Justin M. Damiano. He issued an apology, only it turns out that chunks of that apology are lifted whole cloth from a random entry on plagiarism found on Yahoo Answers that dates back a few years. It also turns out that his description of short films as a form that can “capture the essence of storytelling without the encumbrance of pop-psychology” (punctuation sic, and there’s more) is also stolen from a statement made by British producer Keith Phillips. Short of the Week, the site that debuted the clip the other day after it originally played Cannes last year (yep), has been distancing itself from the film as things have gotten worse.

On one hand, this is not exactly new turf for LaBeouf, who was involved in some pointless tiff with other celebrities earlier this year and decided to scold them all with an email lecture about how to be a man that he’d actually boosted from a 2009 Esquire piece. Frankly, it’s amazing he hasn’t been caught doing more stuff like this already. (Then again, when you release a short film about film critics, you should probably expect that a few of them will do their homework.) He’s also published comics that have ripped off Charles Bukowski and other writers. Perversely, this just seems to be LaBeouf’s thing.



On the other, I have no idea what to believe about stuff like this. I have zero feelings toward LaBeouf one way or the other. He’s made some mostly forgettable movies, and he used to be on the Disney Channel when he was a kid. He’s just this guy, as far as I’ve been concerned. That he’s resorted to plagiarism and intellectual property theft is sad and strange, but I’d be lying if I said it felt surprising.

Part of it has to do with the obvious stuff: on the Internet, everything is a remix, and repurposing content for your own ends is just one of the crappy things we’ve accepted in the social contract that gave us the most powerful communication tool in human history. At least a quarter of what’s out there is duplicate content. That things are stolen is bad, but it doesn’t feel unusual anymore. That familiarity with piracy, the shrug that greets each successive admission of “I fucked up”, is sad in its own right. (Students try to pull it off all the time, just because the opportunity is there.) But on a bigger level, I honestly don’t know what’s going on in the minds of any of these creators, and trying to untangle their methods and intentions can be frustrating, too. LaBeouf ripped off Clowes, but when called upon to apologize, he plagiarized a Yahoo Answers post about plagiarism. He tweeted another apology on the morning of Dec. 18 that was a copy-and-paste from one Tiger Woods made a few years earlier. He followed that up with yet another apology, this time lifted from Robert McNamara’s statement about Vietnam. Is LaBeouf just screwing around? Did he even care? Does it even matter to him that people think he stole? Does he even think that’s what he did? Could he actually mean what he’s saying, or is he just trolling people? Is he lashing out because he was caught, or because he just likes to get a rise out of readers? What the hell is even happening in his head?

What makes things like this so hard to puzzle out is the fact that so much of what we experience online is fake these days. We’ve been hit with a spate of hoaxes and pranks and just weird stuff lately, from the reality TV producer who acted out a fight with an imaginary enemy on a plane, to a woman who wrote about being poor even though her story didn’t quite check out, to the forwards and reposts and Likes that flood our social feeds with things that are demonstrably untrue. LaBeouf’s behavior isn’t just plagiarism, but part of a broader trend by some creative types to spit on the line between real and fake, and to center their output on the discomfort we feel when they confuse us.

There’s a video maker named Casey Neistat who comes to mind, too. After making a few clips that went viral (like this one about the foolishness of bike lanes, or this one about Apple’s former iPod battery replacement program), he was hired by Nike to make a video promoting their FuelBand and the attendant slogan “Make it count.” The video he made positions itself as a rogue assignment, kicking off with a title card that reads: “Instead of making their movie I spent the entire budget traveling around the world with my friend Max. We’d keep going until the money ran out.” From far away, this looks like a screw-you to the man, except Neistat wore the FuelBand and made the video and turned in a project that’s pretty much right in line with Nike’s whole m.o. anyway. So he got to look cool, Nike got to look like they’d both a) been beaten by the little guy and b) played it cool when he made his own special thing, and everybody got to pat each other’s back as the video passed 10 million views. It’s impossible to know just exactly how the whole thing went down, but saying, in essence, “I took the company’s money and did the thing they asked me to do” is not exactly the punk rock move Neistat makes it out to be.

More intriguingly, he just did the same thing again. 20th Century Fox approached Neistat about doing a promo video with the theme “Live your dreams” to plug their upcoming The Secret Life of Walter Mitty, so Neistat said he wanted to take the $25,000 budget and travel to the Philippines and spend it on disaster relief. The studio, per the video, said OK, and the result is this video in which Neistat does just that. Now, again, I have no way of knowing what actually transpired as the video was being planned, shot, and produced — that’s the whole point. All I know is that a guy who helped one massive corporation gain some street cred with a very specific stunt says he was contracted by another massive corporation to do basically the same thing. I have a hard time imagining the studio didn’t see this coming, or that they weren’t overjoyed at the idea of spending a few thousand dollars to help promote a $90 million film as a force for moral good and not, simply, a piece of holiday entertainment.

So we have an actor stealing things and then not caring about it, a video maker generating street cred for global corporations who don’t need it, and a series of people riding the gullibility of others to modest fame and donated fortune. And at the root of all of it is one thing: we have no idea what these people really think, or believe, or want to do. We have the effects of their actions, and the queasy feeling that comes when we’re tricked, but we don’t know how we got here, and we certainly don’t know what motivates these people to do what they’re doing. It seems unlikely that LaBeouf was unaware of what he was doing, but it’s impossible to tell. It seems unlikely that Neistat isn’t far more involved in the PR nature of some of these stunts, but it’s impossible to tell. It seems unlikely that frauds think they can pull it off, but — well, you know.

What I do know is that it’s wearing me down, and I’m not sure how to find a way out. In addition to hoaxes, the Internet can manufacture rage and indignation with horrifying ease, so each of these stories has been met with anger and vitriol before fading into the background and becoming just one more thing that silently informs our experience of the world. That’s understandable: anger and rage are clear, almost tangible reactions, and it can feel good in the wake of something weird or dishonest to lash out and seek to punish the one that fooled you. But after those, or underneath them, there has to be a way to come to terms with what it means that so much of what’s out there is just artificial or stolen bullshit. We have to find a way to police ourselves, guard our experiences, and — maybe most importantly — shut out the fakes and frauds. I don’t know how to do it yet, but it’s got to stop. I don’t know what’s real or what’s fake anymore, and I don’t want to get to the point where I stop caring about the difference.

 
