How the Internet Is Loosening Our Grip on the Truth

Next week, if all goes well, someone will win the presidency. What happens after that is anyone’s guess. Will the losing side believe the results? Will the bulk of Americans recognize the legitimacy of the new president? And will we all be able to clean up the piles of lies, hoaxes and other dung that have been hurled so freely in this hyper-charged, fact-free election?

Much of that remains unclear, because the internet is distorting our collective grasp on the truth. Polls show that many of us have burrowed into our own echo chambers of information. In a recent Pew Research Center survey, 81 percent of respondents said that partisans not only differed about policies, but also about “basic facts.”

For years, technologists and other utopians have argued that online news would be a boon to democracy. That has not been the case.

More than a decade ago, as a young reporter covering the intersection of technology and politics, I noticed the opposite. The internet was filled with 9/11 truthers, and partisans who believed against all evidence that George W. Bush stole the 2004 election from John Kerry, or that Barack Obama was a foreign-born Muslim. (He was born in Hawaii and is a practicing Christian.)

Of course, America has long been entranced by conspiracy theories. But the online hoaxes and fringe theories appeared more virulent than their offline predecessors. They were also more numerous and more persistent. During Mr. Obama’s 2008 presidential campaign, every attempt to debunk the birther rumor seemed to raise its prevalence online.

In a 2008 book, I argued that the internet would usher in a “post-fact” age. Eight years later, in the death throes of an election that features a candidate who once led the campaign to lie about President Obama’s birth, there is more reason to despair about truth in the online age.

Why? Because if you study the dynamics of how information moves online today, pretty much everything conspires against truth.

You’re Not Rational

The root of the problem with online news is something that initially sounds great: We have a lot more media to choose from.

In the last 20 years, the internet has overrun your morning paper and evening newscast with a smorgasbord of information sources, from well-funded online magazines to muckraking fact-checkers to the three guys in your country club whose Facebook group claims proof that Hillary Clinton and Donald J. Trump are really the same person.

A wider variety of news sources was supposed to be the bulwark of a rational age — “the marketplace of ideas,” the boosters called it.

But that’s not how any of this works. Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

This dynamic becomes especially problematic in a news landscape of near-infinite choice. Whether navigating Facebook, Google or The New York Times’s smartphone app, you are given ultimate control — if you see something you don’t like, you can easily tap away to something more pleasing. Then we all share what we found with our like-minded social networks, creating closed-off, shoulder-patting circles online.

That’s the theory, at least. The empirical research on so-called echo chambers is mixed. Facebook’s data scientists have run large studies on the idea and found it wanting. The social networking company says that by exposing you to more people, Facebook adds diversity to your news diet.

Others disagree. A study published last year by researchers at the IMT School for Advanced Studies Lucca, in Italy, found that homogeneous online networks help conspiracy theories persist and grow online.

“This creates an ecosystem in which the truth value of the information doesn’t matter,” said Walter Quattrociocchi, one of the study’s authors. “All that matters is whether the information fits in your narrative.”

No Power in Proof

Digital technology has blessed us with better ways to capture and disseminate news. There are cameras and audio recorders everywhere, and as soon as something happens, you can find primary proof of it online.

You would think that greater primary documentation would lead to a better cultural agreement about the “truth.” In fact, the opposite has happened.

Consider the difference in the examples of the John F. Kennedy assassination and 9/11. While you’ve probably seen only a single film clip of the scene from Dealey Plaza in 1963 when President Kennedy was shot, hundreds of television and amateur cameras were pointed at the scene on 9/11. Yet neither issue is settled for Americans; in one recent survey, about as many people said the government was concealing the truth about 9/11 as those who said the same about the Kennedy assassination.

Documentary proof seems to have lost its power. If the Kennedy conspiracies were rooted in an absence of documentary evidence, the 9/11 theories benefited from a surfeit of it. So many pictures from 9/11 flooded the internet, often without much context about what was being shown, that conspiracy theorists could pick and choose among them to show off exactly the narrative they preferred. There is also the looming specter of Photoshop: Now, because any digital image can be doctored, people can freely dismiss any bit of inconvenient documentary evidence as having been somehow altered.

This gets to the deeper problem: We all tend to filter documentary evidence through our own biases. Researchers have shown that two people with differing points of view can look at the same picture, video or document and come away with strikingly different ideas about what it shows.

That dynamic has played out repeatedly this year. Some people look at the WikiLeaks revelations about Mrs. Clinton’s campaign and see a smoking gun, while others say it’s no big deal, and that besides, it’s been doctored or stolen or taken out of context. Surveys show that people who liked Mr. Trump saw the Access Hollywood tape where he casually referenced groping women as mere “locker room talk”; those who didn’t like him considered it the worst thing in the world.

Lies as an Institution

One of the apparent advantages of online news is persistent fact-checking. Now when someone says something false, journalists can show they’re lying. And if the fact-checking sites do their jobs well, they’re likely to show up in online searches and social networks, providing a ready reference for people who want to correct the record.

But that hasn’t quite happened. Today dozens of news outlets routinely fact-check the candidates and much else online, but the endeavor has proved largely ineffective against a tide of fakery.

That’s because the lies have also become institutionalized. There are now entire sites whose only mission is to publish outrageous, completely fake news online (like real news, fake news has become a business). Partisan Facebook pages have gotten into the act; a recent BuzzFeed analysis of top political pages on Facebook showed that right-wing sites published false or misleading information 38 percent of the time, and lefty sites did so 20 percent of the time.

“Where hoaxes before were shared by your great-aunt who didn’t understand the internet, the misinformation that circulates online is now being reinforced by political campaigns, by political candidates or by amorphous groups of tweeters working around the campaigns,” said Caitlin Dewey, a reporter at The Washington Post who once wrote a column called “What Was Fake on the Internet This Week.”

Ms. Dewey’s column began in 2014, but by the end of last year, she decided to hang up her fact-checking hat because she had doubts that she was convincing anyone.

“In many ways the debunking just reinforced the sense of alienation or outrage that people feel about the topic, and ultimately you’ve done more harm than good,” she said.

Other fact-checkers are more sanguine, recognizing the limits of exposing online hoaxes, but also standing by the utility of the effort.

“There’s always more work to be done,” said Brooke Binkowski, the managing editor of Snopes.com, one of the internet’s oldest rumor-checking sites. “There’s always more. It’s Sisyphean — we’re all pushing that boulder up the hill, only to see it roll back down.”

Yeah. Though soon, I suspect, that boulder is going to squash us all.

“Stranger Things” is a show about the internet’s dark sides

We’re all living in the “Upside Down”

Under the irresistible ’80s pastiche, “Stranger Things” explores the monstrous capabilities of digital technology

Winona Ryder in “Stranger Things” (Credit: Netflix/Screen Montage by Salon)

“Stranger Things” is hotter than Kanye’s Twitter feed right now and Netflix just announced that a second season is on its way. A trailer recently posted online reveals that when 2017 arrives, fans pining for more ’80s pop culture references will be returning to Hawkins, Indiana, in the fall of 1984 as the town faces the aftermath of its first contact with the alternate universe known as “the Upside Down” and the predatory “Demogorgons” that dwell therein. [Note: Spoilers ahead.]

Right now most reviews and discussions about “Stranger Things” focus on its aesthetics: how the Duffer brothers crafted such an eloquent piece of pastiche that takes the best elements of 1980s pop culture and reassembles them into something fresh and entertaining. Of course, that’s worthy of discussion in and of itself. Who wouldn’t jump at a chance to spot a “Goonies,” “Evil Dead” or “Stand By Me” reference and gush adoringly to their friends about it? But then again, there’s a lot of TV, music and film doing the same thing these days. So it’s worth asking — especially while we wait for the next season to arrive — if there is something more to “Stranger Things” that accounts for its popularity and the mass amounts of speculation surrounding its mysterious aura. I think the answer is yes, and I think that’s worth talking about, too.

Even by the end of the first episode, one gets the sense that the Duffer brothers are more than just writers and directors who have an extensive knowledge of ’80s gold. They seem to be filmic philosophers who emerged out of the shadows with this series to deliver an intriguing commentary about a world like ours, a world rife with communication technologies that can harm us just as much as they can help us.

“Stranger Things” is set in small-town America in the 1980s. On the fringes of this town is a mysterious Department of Energy compound conducting weird experiments somehow related to the U.S. military and espionage. People are not really worried about this organization, though, until they have to be; technology, espionage, militarism, fear and surveillance were all staples of the Cold War era. (But, in fact, without the Cold War, much of the technology we have today — like the internet and personal computers — simply wouldn’t exist.)

But then, all of a sudden, the worst thing that could possibly happen to a mother, an older brother and a small community happens. A young boy is stolen away in the night to . . . where exactly? Will Byers, like other characters both major and minor in the show — #TeamBarb! — has been kidnapped and possibly eaten by a predatory creature that his friends have dubbed the Demogorgon, after a monster in their Dungeons & Dragons game. The Demogorgon travels through numerous gates between our world and another dimension — the Upside Down — that is sort of like our world but way darker and scarier. The alternate dimension and the Demogorgon are somehow connected to electricity, wires, appliances and communication devices. And they all go nuts whenever the monster lurks about in the real world.

Likewise, people trapped in the Upside Down can reach the real world only through electronic devices like telephones and radios, and the bigger the device, the better the contact. We see this clearly when Will’s mom Joyce (Winona Ryder) creates a primitive codex-like thing by painting a wall with the alphabet and assigning a light to each letter, which allows her to use it as a keyboard to communicate between dimensions with her missing son. She asks questions and Will spells out answers by flashing lights above the appropriate letters. It seems like magic, but it’s just primitive computing technology.
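The wall works, in effect, as a one-bulb-per-letter keyboard. As a toy illustration only (nothing from the show or its production, just an assumption about how such a scheme would behave), a few lines of Python can model the encoding: each character of a message maps to the bulb above that letter, flashed in order.

```python
# Toy model of the alphabet wall: one painted letter per bulb, flashed in sequence.
import string
import time

ALPHABET = string.ascii_uppercase  # A-Z, one bulb per letter

def flash_message(message, delay=0.5):
    """'Send' a message by flashing the bulb above each letter in turn."""
    for ch in message.upper():
        if ch in ALPHABET:
            print(f"*flash* bulb over {ch}")
            time.sleep(delay)

flash_message("RUN")  # spell out a short answer, one flash per letter
```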

All of this sounds like the foundations of the internet, doesn’t it? A network of electricity, phone lines, communication devices and flashing lights that work together to connect distant worlds so that disconnected people can meet and dialogue with each other. I would wager that this intriguing little detail represents a key theme in “Stranger Things.” It also explains why the show has become so popular. Just like now, the ’80s was a paradoxical time of both hope in technology and fear of it. Deep into the Cold War by 1983, both the East and West knew that technological development was both their biggest threat and their biggest hope.

The race to space known as Star Wars — Reagan’s missile defense program, not the films — and the mass ramping up of innovations in personal computing, information technologies, communication devices and consumer entertainment systems, all combined to strike people dumb with awe even as they cowered in fear. Sure, an atomic bomb could hit a town at any moment and a commie spy could be running a local grocery store, but at least people had some newfangled digital devices and entertainment while they waited for all that horrible stuff to go down. Meanwhile, the military would be using all this technology to gain the upper hand on the enemy.

All of this explains why the Demogorgon and the sinister Department of Energy lab are secondary issues in “Stranger Things.” The primary issue is the Upside Down, the alternate dimension that the Department of Energy accidentally discovers. If this gate had not been opened up by Eleven, a mysterious child with telepathic and telekinetic gifts who is forced to go into that world and inadvertently make contact with the Demogorgon, the monster would have never shown up in Hawkins to kidnap people in the first place. So it’s the Demogorgon’s network, the world it lives in and connects to, that enables the monster to be anywhere at any time to snatch poor Will and Barb away to the Upside Down. That is the real threat to everyone in Hawkins.

Which is to say, the Upside Down starts looking very much like an analog for the internet. Yes, the internet allows people to connect everywhere at every time, but this is not always a good thing. Just look at Kanye’s Twitter feed. Or, on a more serious note, consider the devastating social-media harassment campaign that has targeted actress Leslie Jones. Or reflect on the degree to which even our most banal online activities and conversations are being watched and collected by government agencies.

This also explains why there are more seasons to come: by the end of the first season, the Demogorgon appears to be dead, but the network itself, as well as its effects, remains. Will Byers might be back in the real world to go on D&D quests with his buddies and be bullied at school, yet the network has made an impression on him that he can’t shed. Now he is a part of the Upside Down network and he’s brought it back to Hawkins. We see this when he coughs up a little Demogorgon slug in the concluding episode. The network and its demons are taking root and growing. Pikachu is in our world now and we’d better watch out.

Once you start pulling at these thematic threads, you begin to see a deeper philosophical discussion at play in “Stranger Things.” Of course, some fans love this show because it captures several of their favorite memories of the ’80s. But at the same time, and perhaps more important, we’re intrigued by the experience of watching characters in “Stranger Things” encounter the consequences of rapid technological advancement.

There’s no better example of this than the scene when Joyce Byers finally makes semi-physical contact with Will through a translucent window that appears behind the wallpaper in the Byers’ living room. For a glimmering moment Joyce is able to reach out to Will through a translucent screen to see that he’s alive. And Will reaches back. But that window evaporates as quickly as it appeared. So Joyce takes an ax to the wall to break through to the young Will trapped in the Upside Down. She chops right through the wall and finds . . . nothing — just the world outside. Will is gone. The network escapes Joyce’s grasp, her son is still kidnapped and her life is still in shambles.

It turns out that technology can’t bring Will back; only humans can. And humans eventually do. This draws out a key thesis of the show: We don’t need more or better technology to solve our greatest problems. What we need is more courageous people — like Joyce and Sheriff Hopper; Eleven; Will’s friends Mike, Lucas and Dustin; Will’s brother, Jonathan; Mike’s sister, Nancy; and (eventually) her boyfriend, Steve.

It may be difficult for some younger viewers to think about what the world was like before digital technology and the internet arrived on the scene and developed into what it is and does to us today. But artistically and philosophically “Stranger Things” helps fans get to (or back to, depending on the viewers’ age) that place. And once we’re there, we’re pressed to explore ethical questions similar to those encountered by the kids and adults of Hawkins. Will we or won’t we make contact with the Upside Down and dimensions and technologies of that kind? And if we do, how will we act when things get strange, volatile and perhaps even violent?

Michael Morelli is a PhD student who studies theological ethics, culture, and technology. Follow him on Twitter @mchlmorelli

How the Internet changed the way we read


The Daily Dot

As a professor of literature, rhetoric, and writing at the University of California at Irvine, I’ve discovered that one of the biggest lies about American culture (propagated even by college students) is that Americans don’t read.

The truth is that most of us read continuously in a perpetual stream of incestuous words, but instead of reading novels, book reviews, or newspapers like we used to in the ancien régime, we now read text messages, social media, and bite-sized entries about our protean cultural history on Wikipedia.

In the great epistemic galaxy of words, we have become both reading junkies and professional text skimmers. Reading has become a clumsy science, which is why we keep fudging the lab results. But in diagnosing our own textual attention deficit disorder (ADD), who can blame us for skimming? We’re inundated by so much opinion posing as information, much of it the same material with permutating and exponential commentary. Skimming is practically a defense mechanism against the avalanche of info-opinion that has collectively hijacked narrative, reportage, and good analysis.

We now skim everything, it seems, to find evidence for our own belief system. We read to comment on reality (read: to prove our own belief system). Reading has become a relentless exercise in self-validation, which is why we get impatient when writers don’t come out and simply tell us what they’re arguing. Which reminds me: What the hell am I arguing? With the advent of microblogging platforms, Twitter activism, self-publishing companies and professional trolling, everyone has a microphone now, and yet no one actually listens to anyone else anymore. And this is literally because we’re too busy reading. And when we leave comments on an online article, it’s usually an argument we already agree with or one we completely reject before we’ve read the first paragraph. In the age of hyper-information, it’s practically impossible not to be blinded by our own confirmation bias. It’s hard not to be infatuated with Twitter shitstorms either, especially when we’re not the target.

E-novels, once the theater of the mind for experimental writers, are now mainstream things that look like long-winded websites. Their chapters bleed into the same cultural space on our screen as grocery lists, weather forecasts, calendar reminders, and email messages. What’s the real difference between reading a blog post online by an eloquent blowhard and reading one chapter of a Jonathan Franzen novel? We can literally swipe from one text to another on our Kindle without realizing we changed platforms. What’s the real difference between skimming an informed political critique on a political junkie Tumblr account and reading a focused tirade on the Washington Post’s blog written by putative experts?


That same blog post will get reposted on other news sites, and the same news article will get reposted on other blogs, interchangeably. Content—whether thought-provoking, regurgitated, analytically superficial, impeccably researched, politically doctrinaire, or grammatically atrocious—now occupies the same cultural space, the same screen space, and the same mental space in the public imagination. After a while, we just stop keeping track of what’s legitimately good because it takes too much energy to separate the crème from the foam.

As NPR digitizes itself in the 21st century, buries the “R” in its name, and translates its obsolete podcasts into online news features, every one of its articles now bleeds with its comment section, much of it written by posters who haven’t even read the article in question—essentially erasing the dividing lines between expert, echo chamber, and dilettante, journalist, hack, and self-promoter, reportage, character assassination, and mob frenzy.

One silver lining is that the technological democratization of social media has effectively deconstructed the one-sided power of the Big Bad Media in general and influential writing in particular, which in theory makes this era freer and more decentralized than ever. One downside to technological democratization is that it hasn’t led to a thriving marketplace of ideas, but to a greater retreat into the Platonic cave of self-identification with the shadow world. We have never needed a safer and quieter place to collect our thoughts from the collective din of couch quarterbacking than we do now, which is why it’s so easy to preemptively categorize the articles we read before we actually read them to save ourselves the heartache and the controversy.

The abundance of texts in this zeitgeist creates a tunnel effect of amnesia.  We now have access to so much information that we actually forget the specific nuances of what we read, where we read them, and who wrote them. We forget what’s available all the time because we live in an age of hyperabundant textuality. Now, when we’re lost, we’re just one click away from the answer. Even the line separating what we know and what we don’t know is blurry.


It is precisely because we now consume writing from the moment we wake until the moment we crash—most of it mundane, redundant, speculative, badly researched, partisan, and emojian—that we no longer have the same appetite (or time) for literary fiction, serious think pieces, or top-shelf journalism, even though they’re all readily available. If an article on the Daily Dot shows up on page 3 of a Google search, it might as well not exist at all. The New York Times article we half-read on our iPhone while standing up in the Los Angeles Metro ends up blurring with the 500 modified retweets about that same article on Twitter. Authors aren’t privileged anymore because everyone writes commentary somewhere and everyone’s commentary shows up some place. Only the platform and the means of production have changed.

Someday, the Centers for Disease Control will create a whole new branch of research dedicated to studying the infectious disease of cultural memes.  Our continuous consumption of text is intricately linked to our continuous forgetting, our continuous reinfection, and our continuous thumbs up/thumbs down approach to reality, which is why we keep reading late into the night, looking for the next place to leave a comment someone has already made somewhere. Whether we like it or not, we’re all victims and perpetrators of this commentary fractal. There seems to be no way out except deeper inside the sinkhole or to go cold turkey from the sound of our own voices.

Jackson Bliss is a hapa fiction writer and a lecturer in the English department at the University of California, Irvine. He has a BA in comp lit from Oberlin College, an MFA in fiction from the University of Notre Dame, and an MA in English and a Ph.D. in Literature and Creative Writing from USC. His short stories and essays have appeared in many publications.

 

http://www.dailydot.com/opinion/how-internet-changed-way-we-read/

L0pht’s warnings about the Internet drew notice but little action

NET OF INSECURITY

A disaster foretold — and ignored

Published on June 22, 2015

The seven young men sitting before some of Capitol Hill’s most powerful lawmakers weren’t graduate students or junior analysts from some think tank. No, Space Rogue, Kingpin, Mudge and the others were hackers who had come from the mysterious environs of cyberspace to deliver a terrifying warning to the world.

The making of a vulnerable Internet: This story is the third of a multi-part project on the Internet’s inherent vulnerabilities and why they may never be fixed.

Part 1: The story of how the Internet became so vulnerable
Part 2: The long life of a ‘quick fix’

Your computers, they told the panel of senators in May 1998, are not safe — not the software, not the hardware, not the networks that link them together. The companies that build these things don’t care, the hackers continued, and they have no reason to care because failure costs them nothing. And the federal government has neither the skill nor the will to do anything about it.

“If you’re looking for computer security, then the Internet is not the place to be,” said Mudge, then 27 and looking like a biblical prophet with long brown hair flowing past his shoulders. The Internet itself, he added, could be taken down “by any of the seven individuals seated before you” with 30 minutes of well-choreographed keystrokes.

The senators — a bipartisan group including John Glenn, Joseph I. Lieberman and Fred D. Thompson — nodded gravely, making clear that they understood the gravity of the situation. “We’re going to have to do something about it,” Thompson said.

What happened instead was a tragedy of missed opportunity, and 17 years later the world is still paying the price in rampant insecurity.

The testimony from L0pht, as the hacker group called itself, was among the most audacious of a rising chorus of warnings delivered in the 1990s as the Internet was exploding in popularity, well on its way to becoming a potent global force for communication, commerce and criminality.

Hackers and other computer experts sounded alarms as the World Wide Web brought the transformative power of computer networking to the masses. This created a universe of risks for users and the critical real-world systems, such as power plants, rapidly going online as well.

Officials in Washington and throughout the world failed to forcefully address these problems as trouble spread across cyberspace, a vast new frontier of opportunity and lawlessness. Even today, many serious online intrusions exploit flaws in software first built in that era, such as Adobe Flash, Oracle’s Java and Microsoft’s Internet Explorer.

“We have the same security problems,” said Space Rogue, whose real name is Cris Thomas. “There’s a lot more money involved. There’s a lot more awareness. But the same problems are still there.”

L0pht, born of the bustling hacker scene in the Boston area, rose to prominence as a flood of new software was introducing such wonders as sound, animation and interactive games to the Web. This software, which required access to the core functions of each user’s computer, also gave hackers new opportunities to manipulate machines from afar.

Breaking into networked computers became so easy that the Internet, long the realm of idealistic scientists and hobbyists, gradually grew infested with the most pragmatic of professionals: crooks, scam artists, spies and cyberwarriors. They exploited computer bugs for profit or other gain while continually looking for new vulnerabilities.

Tech companies sometimes scrambled to fix problems — often after hackers or academic researchers revealed them publicly — but few companies were willing to undertake the costly overhauls necessary to make their systems significantly more secure against future attacks. Their profits depended on other factors, such as providing consumers new features, not warding off hackers.

“In the real world, people only invest money to solve real problems, as opposed to hypothetical ones,” said Dan S. Wallach, a Rice University computer science professor who has been studying online threats since the 1990s. “The thing that you’re selling is not security. The thing that you’re selling is something else.”

The result was a culture within the tech industry often derided as “patch and pray.” In other words, keep building, keep selling and send out fixes as necessary. If a system failed — causing lost data, stolen credit card numbers or time-consuming computer crashes — the burden fell not on giant, rich tech companies but on their customers.

The members of L0pht say they often experienced this cavalier attitude in their day jobs, where some toiled as humble programmers or salesmen at computer stores. When they reported bugs to software makers, company officials often asked: Does anybody else know about this?

CONTINUED:

http://www.washingtonpost.com/sf/business/2015/06/22/net-of-insecurity-part-3/

The meme-ification of Ayn Rand

How the grumpy author became an Internet superstar

“Feminist” T-shirts are her latest viral sensation. Why the objectivist’s writings lend themselves to the Web


Ayn Rand (Credit: Wikimedia)
This article originally appeared on The Daily Dot.

Ayn Rand is not a feminist icon, but it speaks volumes about the Internet that some are implicitly characterizing her that way, so much so that she’s even become a ubiquitous force on the meme circuit.

Last week, Maureen O’Connor of The Cut wrote a piece about a popular shirt called the Unstoppable Muscle Tee, which features the quote: “The question isn’t who is going to let me, it’s who is going to stop me.”

As The Quote Investigator determined, this was actually a distortion of a well-known passage from one of Rand’s better-known novels, The Fountainhead:

“Do you mean to tell me that you’re thinking seriously of building that way, when and if you are an architect?”

“Yes.”

“My dear fellow, who will let you?”

“That’s not the point. The point is, who will stop me?”

Ironically, Rand not only isn’t responsible for this trendy girl power mantra, but was actually an avowed enemy of feminism. As The Atlas Society explains in their article about feminism in the philosophy of Objectivism (Rand’s main ideological legacy), Randians may have supported certain political and social freedoms for women—the right to have an abortion, the ability to rise to the head of business based on individual merit—but they subscribed fiercely to cultural gender biases. Referring to herself as a “male chauvinist,” Rand argued that sexually healthy women should feel a sense of “hero worship” for the men in their life, expressed disgust at the idea that any woman would want to be president, and deplored progressive identity-based activist movements as inherently collectivist in nature.



How did Rand get so big on the Internet, which has become a popular place for progressive memes? A Pew Research study from 2005 discovered that “the percentage of both men and women who go online increases with the amount of household income,” and while both genders are equally likely to engage in heavy Internet use, white men statistically outnumber white women. This is important because Rand, despite iconoclastically eschewing ideological labels herself, is especially popular among libertarians, who are attracted to her pro-business, anti-government, and avowedly individualistic ideology. Self-identified libertarians and libertarian-minded conservatives, in turn, were found by a Pew Research study from 2011 to be disproportionately white, male, and affluent. Indeed, the sub-sect of the conservative movement that Pew determined was most likely to identify with the libertarian label was the so-called “Business Conservatives,” who are “the only group in which a majority (67 percent) believes the economic system is fair to most Americans rather than unfairly tilted in favor of the powerful.” They are also very favorably inclined toward the potential presidential candidacy of Rep. Paul Ryan (79 percent), who is well-known within the Beltway as an admirer of Rand’s work (once telling The Weekly Standard that “I give out Atlas Shrugged [by Ayn Rand] as Christmas presents, and I make all my interns read it.”).

Rand’s fans, in other words, are one of the most visible forces on the Internet, and ideally situated to distribute her ideology. Rand’s online popularity is the result of this fortuitous intersection of power and interests among frequent Internet users. If one date can be established as the turning point for the flourishing of Internet libertarianism, it would most likely be May 16, 2007, when footage of former Rep. Ron Paul’s sharp non-interventionist rebuttal to Rudy Giuliani in that night’s Republican presidential debate became a viral hit. Ron Paul’s place in the ideological/cultural milieu that encompasses Randism is undeniable, as evidenced by exposés on their joint influence on college campuses and Paul’s upcoming cameo in the movie Atlas Shrugged: Part 3. During his 2008 and 2012 presidential campaigns, Paul attracted considerable attention for his remarkable ability to raise money through the Internet, and to this day he continues to root his cause in cyberspace through an online political opinion channel bearing his name—while his son, Sen. Rand Paul, has made no secret of his hope to tap into his father’s base for his own likely presidential campaign in 2016. Even though the Pauls don’t share Rand’s views on many issues, the self-identified libertarians that infused energy and cash into their national campaigns are part of the same Internet phenomenon as the growth of Randism.

As the Unstoppable Muscle Tee hiccup makes clear, however, Rand’s Internet fashionability isn’t always tied to libertarianism or Objectivism (the name she gave her own ideology). It also has a great deal to do with the psychology of meme culture. In the words of Annalee Newitz, a writer who frequently comments on the cultural effects of science and technology:

To share a story is in part to take ownership of it, especially because you are often able to comment on a story that you are sharing on social media. If you can share a piece of information that’s an absolute truth—whether that’s how to uninstall apps on your phone, or what the NSA is really doing—you too become a truth teller. And that feels good. Just as good as it does to be the person who has the cutest cat picture on the Internet.

If there is one quality in Rand’s writing that was evident even to her early critics, it was the tone of absolute certainty that dripped from her prose, which manifests itself in the quotes appearing in memes such as “I swear by my life and my love of it that I will never live for the sake of another man, nor ask another man to live for mine,” or  “A creative man is motivated by the desire to achieve, not by the desire to beat others” and “The ladder of success is best climbed by stepping on the rungs of opportunity.” Another Rand meme revolves around the popular quote: “Individual rights are not subject to a public vote; a majority has no right to vote away the rights of a minority; the political function of rights is precisely to protect minorities from oppression by majorities (and the smallest minority on Earth is the individual).”

What’s particularly noteworthy about these observations, aside from their definitiveness, is the fact that virtually no one adhering to a mainstream Western political ideology would disagree with them. Could you conceive of anyone on the left, right, or middle arguing that they’d accept being forced to live for another’s sake or want another to live solely for their own? Or that their ambitions are not driven by a desire to beat others? Or that they don’t think success comes from seizing on opportunities? Or that they think majorities should be able to vote away the rights of minorities?

These statements are platitudes, compellingly worded rhetorical catch-alls with inspiring messages that are unlikely to be contested when taken solely at face value. Like the erroneously attributed “The question isn’t who is going to let me, it’s who is going to stop me,” they can mean whatever the user wishes for them to mean. Conservatives can and will be found who claim that only they adhere to those values while liberals do not, many liberals will say the same thing about conservatives, and, of course, Rand wrote each of these statements with her own distinctly Objectivist contexts in mind. Because each one contains a generally accepted “absolute truth” (at least insofar as the strict text itself is concerned), they are perfect fodder for those who spread memes through pictures, GIFs, and online merchandise—people who wish to be “truth tellers.”

Future historians may marvel at the perfect storm of cultural conditions that allowed this Rand boom to take place. After all, there is nothing about the rise of Internet libertarianism that automatically guarantees the rise of meming as a trend, or vice versa. In retrospect, however, the fact that both libertarianism and meming are distinct products of the Internet age—one for demographic reasons, the other for psychological ones—made the explosion of Randisms virtually inevitable. Even if they’re destined to be used by movements with which she’d want no part, Ayn Rand isn’t going to fade away from cyberspace anytime soon.

http://www.salon.com/2014/11/18/how_ayn_rand_became_an_internet_superstar_partner/?source=newsletter

Buyer Beware: Online Shopping Prices Vary From User to User


Companies may be charging you more because of your geographic location or computer model.

People have a mental model of shopping that is based on experiences from brick-and-mortar stores. We intuitively understand how this process works: all available products are displayed around the store and the prices are clearly marked. Many stores offer deals via coupons, membership cards, or to special classes of people such as students or AARP members. Typically, everyone is aware of these discounts and has an equal opportunity to use them.

Many people assume this same mental model of shopping applies just as well to e-commerce websites. However, as we are discovering, this is not the case.

In 2000, shoppers realized that Amazon was charging different users different prices for the same DVD, a practice known as price discrimination or price differentiation. In 2012, the Wall Street Journal revealed that Staples was charging users different prices based on their geographic location. The paper also reported that travel retailer Orbitz was showing more expensive hotels to users browsing from Mac computers, a practice known as price steering.

These reports of price discrimination and steering provoked a great deal of negative publicity for the companies involved. The lack of transparency also raises many disturbing questions. How widespread are the e-commerce practices of manipulating search results and customizing prices? What customer information do companies use to do it? When e-commerce sites personalize prices or search results, by how much do prices change?

Price Discrimination and Steering in the Wild

My colleagues and I at Northeastern University have taken an initial stab at answering these questions in a new study. We examined ten major e-retailers – including Walmart and Home Depot – along with six hotel/rental car sites – including Orbitz and Expedia – to determine if they implement price discrimination or steering, and if so, what user attributes trigger the personalization.

We recruited 300 people from the crowdsourcing site Mechanical Turk to run product searches on the 16 sites. We paired each of these real users, who each had their own real, idiosyncratic browser history, with an automated browser that ran the same searches at the same time as the real users, but did not store any cookies.

By comparing the search results shown to these automated controls and to the real users, we identified several cases of personalization. We saw price steering from Sears, with the order of search results varying from user to user. We saw price discrimination from Home Depot, Sears, Cheaptickets, Orbitz, Priceline, Expedia, and Travelocity, with product prices varying from user to user.
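To make the paired-control design concrete, here is a minimal sketch of the comparison (my own illustration, not the study’s code; the URL and the crude price parsing are placeholders): run the same search at the same time in a cookie-carrying session and in a fresh, cookie-less control session, then flag queries where the two sets of prices differ.

```python
# Sketch of the paired-measurement idea: personalized session vs. clean control.
import re
import requests

SEARCH_URL = "https://retailer.example/search"  # placeholder, not a real endpoint

def extract_prices(html):
    # Crude stand-in for real HTML parsing: grab anything that looks like $123.45.
    return [float(m) for m in re.findall(r"\$(\d+(?:\.\d{2})?)", html)]

def get_prices(query, session):
    resp = session.get(SEARCH_URL, params={"q": query}, timeout=10)
    return extract_prices(resp.text)

def check_personalization(query, personalized_session):
    control = requests.Session()  # fresh session: no cookies, no history
    seen = get_prices(query, personalized_session)
    baseline = get_prices(query, control)
    if seen != baseline:
        print(f"possible personalization for {query!r}: {seen} vs {baseline}")
```

Because the lists keep the order in which prices appear, a difference in ordering alone (steering) shows up just as readily as a difference in the prices themselves (discrimination).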

So what user attributes trigger personalization? The problem is that real users have a long history of browsed sites, searches, clicks, and online purchases that we as researchers don’t know. Thus, when we observe personalized results in our experiments, we can’t tease out the underlying cause.

What Makes You Seem Like You Want to Pay More?

To figure out what user attributes drive e-commerce personalization, we conducted another round of testing using fake accounts that we created. All the accounts were identical except for one specific attribute that we changed. In particular, we tested for personalization based on browser (e.g. Chrome, Firefox, IE), platform (e.g. Windows, OSX, iOS, Android), logging in to a user account, and purchase history (we had one account book cheap hotels and rental cars for a week, while another account booked expensive hotel rooms and rental cars).

Our fake accounts uncovered many different personalization strategies employed by e-commerce sites. For example, Travelocity reduced the prices on 5% of hotel rooms shown in search results by around US$15 per night for smartphone users. Interestingly, Cheaptickets and Orbitz gave unadvertised “Members Only” discounts of about US$12 per night on 5% of hotel rooms to users who were logged in to their accounts on the site.


Price discrimination on Cheaptickets: users who log into the site receive ‘Members Only’ discounts of about $12/night on 5% of hotels. Aniko Hannak et al.

Expedia and Hotels.com conduct what marketers and engineers call A/B tests to steer a subset of their users toward more expensive hotels. By dividing visitors into different groups, companies are able to use A/B tests to see how users respond to new website features and algorithms. In this case, visitors to Expedia and Hotels.com were randomly assigned to groups A, B or C based on the cookies stored on their computers. Users in groups A and B were shown hotels with an average price of US$187/night, while users in group C were shown hotels with an average price of US$170/night.
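The mechanics behind that kind of bucketing are simple. The sketch below is a generic illustration of cookie-based A/B/C assignment (an assumption about the usual technique, not Expedia’s or Hotels.com’s actual code): hash a stable cookie value into one of three groups, so the same visitor always lands in the same group and consistently sees that group’s variant.

```python
# Generic cookie-based A/B/C bucketing: deterministic, so a visitor stays in one group.
import hashlib

def assign_bucket(cookie_id: str, buckets=("A", "B", "C")) -> str:
    digest = hashlib.sha256(cookie_id.encode("utf-8")).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]

# The site can then consistently show groups A and B a pricier slice of inventory
# than group C, and measure how each group responds.
print(assign_bucket("visitor-cookie-42"))  # always the same group for this cookie
```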

Home Depot served almost completely different products to users on desktops versus mobile devices. A desktop user searching Home Depot typically received 24 search results, with an average price per item of US$120. In contrast, mobile users received 48 search results, with an average price per item of US$230. Bizarrely, products were also US$0.41 more expensive on average for Android users.

Why Do Sites Do This?

Initially, we assumed that the sites would not personalize content, given the extremely negative PR that Amazon, Staples, and Orbitz received when earlier cases were revealed. To our surprise, this was not the case.

Unfortunately, the business logic underlying much of this personalization remains a mystery. None of the discounts we located in our experiments were advertised on sites’ homepages, so the deals do not appear to be part of marketing campaigns. When we spoke to representatives from Orbitz and Expedia, they confirmed our findings, but did not elaborate on the rationale for the design of their websites. Representatives from Travelocity confirmed that they do offer deals for mobile users, with the goal being to motivate them to use the site more and install the Travelocity app.

What’s a Bargain-Hunting Shopper to Do?

What is clear from our study is that price discrimination and steering on e-commerce sites are becoming more prevalent and more sophisticated. As a user, it’s almost impossible to know if the prices you are being shown have been altered, or if cheaper products have been hidden from search results.

If you are looking for the best deal and are willing to work for it, we recommend searching for products in your normal desktop browser, an incognito or private browser window, and your mobile device. Of course, e-commerce companies are constantly experimenting with new personalization techniques, so in the future, an entirely different attribute may trigger personalization.
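For readers who want to automate that spot check, here is a rough sketch (placeholder URL, crude price parsing, and made-up User-Agent strings; a real page will need real selectors) that requests the same product page as a desktop and as a mobile device and prints the prices each one sees.

```python
# Rough spot check: does the same page quote different prices per device?
import re
import requests

PRODUCT_URL = "https://retailer.example/product/12345"  # placeholder

USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 10_0 like Mac OS X)",
}

for device, ua in USER_AGENTS.items():
    html = requests.get(PRODUCT_URL, headers={"User-Agent": ua}, timeout=10).text
    prices = re.findall(r"\$\d+(?:\.\d{2})?", html)
    print(device, prices[:5])  # compare the first few prices each device is shown
```

Running it once in a normal session and once in a private window (which starts without cookies) covers the other attribute the article recommends varying.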

Ultimately, we hope our study will encourage companies to be more transparent about how they personalize prices and search results. Rather than using opaque and creepy algorithms to secretly alter content, companies could stick to the kinds of real-world incentives that shoppers already know and love, like coupons and sales.

“The Internet’s Own Boy”: How the government destroyed Aaron Swartz

A film tells the story of the coder-activist who fought corporate power and corruption — and paid a cruel price

"The Internet's Own Boy": How the government destroyed Aaron Swartz
Aaron Swartz (Credit: TakePart/Noah Berger)

Brian Knappenberger’s Kickstarter-funded documentary “The Internet’s Own Boy: The Story of Aaron Swartz,” which premiered at Sundance barely a year after the legendary hacker, programmer and information activist took his own life in January 2013, feels like the beginning of a conversation about Swartz and his legacy rather than the final word. This week it will be released in theaters, arriving in the middle of an evolving debate about what the Internet is, whose interests it serves and how best to manage it, now that the techno-utopian dreams that sounded so great in Wired magazine circa 1996 have begun to ring distinctly hollow.

What surprised me when I wrote about “The Internet’s Own Boy” from Sundance was the snarky, dismissive and downright hostile tone struck by at least a few commenters. There was a certain dark symmetry to it, I thought at the time: A tragic story about the downfall, destruction and death of an Internet idealist calls up all of the medium’s most distasteful qualities, including its unique ability to transform all discourse into binary and ill-considered nastiness, and its empowerment of the chorus of belittlers and begrudgers collectively known as trolls. In retrospect, I think the symbolism ran even deeper. Aaron Swartz’s life and career exemplified a central conflict within Internet culture, and one whose ramifications make many denizens of the Web highly uncomfortable.

For many of its pioneers, loyalists and self-professed deep thinkers, the Internet was conceived as a digital demi-paradise, a zone of total freedom and democracy. But when it comes to specifics things get a bit dicey. Paradise for whom, exactly, and what do we mean by democracy? In one enduringly popular version of this fantasy, the Internet is the ultimate libertarian free market, a zone of perfect entrepreneurial capitalism untrammeled by any government, any regulation or any taxation. As a teenage programming prodigy with an unusually deep understanding of the Internet’s underlying architecture, Swartz certainly participated in the private-sector, junior-millionaire version of the Internet. He founded his first software company following his freshman year at Stanford, and became a partner in the development of Reddit in 2006, which was sold to Condé Nast later that year.



That libertarian vision of the Internet – and of society too, for that matter – rests on an unacknowledged contradiction, in that some form of state power or authority is presumably required to enforce private property rights, including copyrights, patents and other forms of intellectual property. Indeed, this is one of the principal contradictions embedded within our current form of capitalism, as the Marxist scholar David Harvey notes: Those who claim to venerate private property above all else actually depend on an increasingly militarized and autocratic state. And from the beginning of Swartz’s career he also partook of the alternate vision of the Internet, the one with a more anarchistic or anarcho-socialist character. When he was 15 years old he participated in the launch of Creative Commons, the immensely important content-sharing nonprofit, and at age 17 he helped design Markdown, an open-source, newbie-friendly markup format that remains in widespread use.

One can certainly construct an argument that these ideas about the character of the Internet are not fundamentally incompatible, and may coexist peaceably enough. In the physical world we have public parks and privately owned supermarkets, and we all understand that different rules (backed of course by militarized state power) govern our conduct in each space. But there is still an ideological contest between the two, and the logic of the private sector has increasingly invaded the public sphere and undermined the ancient notion of the public commons. (Former New York Mayor Rudy Giuliani once proposed that city parks should charge admission fees.) As an adult Aaron Swartz took sides in this contest, moving away from the libertarian Silicon Valley model of the Internet and toward a more radical and social conception of the meaning of freedom and equality in the digital age. It seems possible and even likely that the “Guerilla Open Access Manifesto” Swartz wrote in 2008, at age 21, led directly to his exaggerated federal prosecution for what was by any standard a minor hacking offense.

Swartz’s manifesto didn’t just call for the widespread illegal downloading and sharing of copyrighted scientific and academic material, which was already a dangerous idea. It explained why. Much of the academic research held under lock and key by large institutional publishers like Reed Elsevier had been largely funded at public expense, but was now being treated as private property – and as Swartz understood, that was just one example of a massive ideological victory for corporate interests that had penetrated almost every aspect of society. The actual data theft for which Swartz was prosecuted, the download of a large volume of journal articles from the academic database called JSTOR, was largely symbolic and arguably almost pointless. (As a Harvard graduate student at the time, Swartz was entitled to read anything on JSTOR.)

But the symbolism was important: Swartz posed a direct challenge to the private-sector creep that has eaten away at any notion of the public commons or the public good, whether in the digital or physical worlds, and he also sought to expose the fact that in our age state power is primarily the proxy or servant of corporate power. He had already embarrassed the government twice previously. In 2006, he downloaded and released the entire bibliographic dataset of the Library of Congress, a public document for which the library had charged an access fee. In 2008, he downloaded and released about 2.7 million federal court documents stored in the government database called PACER, which charged 8 cents a page for public records that by definition had no copyright. In both cases, law enforcement ultimately concluded Swartz had committed no crime: Dispensing public information to the public turns out to be legal, even if the government would rather you didn’t. The JSTOR case was different, and the government saw its chance (one could argue) to punish him at last.

Knappenberger could only have made this film with the cooperation of Swartz’s family, which was dealing with a devastating recent loss. In that context, it’s more than understandable that he does not inquire into the circumstances of Swartz’s suicide in “Inside Edition”-level detail. It’s impossible to know anything about Swartz’s mental condition from the outside – for example, whether he suffered from undiagnosed depressive illness – but it seems clear that he grew increasingly disheartened over the government’s insistence that he serve prison time as part of any potential plea bargain. Such an outcome would have left him a convicted felon and, he believed, would have doomed his political aspirations; one can speculate that was the point. Carmen Ortiz, the U.S. attorney for Boston, along with her deputy Stephen Heymann, did more than throw the book at Swartz. They pretty much had to write it first, concocting an imaginative list of 13 felony indictments that carried a potential total of 50 years in federal prison.

As Knappenberger explained in a Q&A session at Sundance, that’s the correct context in which to understand Robert Swartz’s public remark that the government had killed his son. He didn’t mean that Aaron had actually been assassinated by the CIA, but rather that he was a fragile young man who had been targeted as an enemy of the state, held up as a public whipping boy, and hounded into severe psychological distress. Of course that cannot entirely explain what happened; Ortiz and Heymann, along with whoever above them in the Justice Department signed off on their display of prosecutorial energy, had no reason to expect that Swartz would kill himself. There’s more than enough pain and blame to go around, and purely on a human level it’s difficult to imagine what agony Swartz’s family and friends have put themselves through.

One of the most painful moments in “The Internet’s Own Boy” arrives when Quinn Norton, Swartz’s ex-girlfriend, struggles to explain how and why she wound up accepting immunity from prosecution in exchange for information about her former lover. Norton’s role in the sequence of events that led to Swartz hanging himself in his Brooklyn apartment 18 months ago has been much discussed by those who have followed this tragic story. I think the first thing to say is that Norton has been very forthright in talking about what happened, and clearly feels torn up about it.

Norton was a single mom living on a freelance writer’s income, who had been threatened with an indictment that could have cost her both her child and her livelihood. When prosecutors offered her an immunity deal, her lawyer insisted she should take it. For his part, Swartz’s attorney says he doesn’t think Norton told the feds anything that made Swartz’s legal predicament worse, but she herself does not agree. It was apparently Norton who told the government that Swartz had written the 2008 manifesto, which had spread far and wide in hacktivist circles. Not only did the manifesto explain why Swartz had wanted to download hundreds of thousands of copyrighted journal articles on JSTOR, it suggested what he wanted to do with them and framed it as an act of resistance to the private-property knowledge industry.

Amid her grief and guilt, Norton also expresses an even more appropriate emotion: the rage of wondering how in hell we got here. How did we wind up with a country where an activist is prosecuted like a major criminal for downloading articles from a database for noncommercial purposes, while no one goes to prison for the immense financial fraud of 2008 that bankrupted millions? As a person who has made a living as an Internet “content provider” for almost 20 years, I’m well aware that we can’t simply do away with the concept of copyright or intellectual property. I never download pirated movies, not because I care so much about the bottom line at Sony or Warner Bros., but because it just doesn’t feel right, and because you can never be sure who’s getting hurt. We’re not going to settle the debate about intellectual property rights in the digital age in a movie review, but we can say this: Aaron Swartz had chosen his targets carefully, and so did the government when it fixed its sights on him. (In fact, JSTOR suffered no financial loss, and urged the feds to drop the charges. They refused.)

A clean and straightforward work of advocacy cinema, blending archival footage and contemporary talking-head interviews, Knappenberger’s film makes clear that Swartz was always interested in the social and political consequences of technology. By the time he reached adulthood he began to see political power, in effect, as another system of control that could be hacked, subverted and turned to unintended purposes. In the late 2000s, Swartz moved rapidly through a variety of politically minded ventures, including a good-government site and several different progressive advocacy groups. He didn’t live long enough to learn about Edward Snowden or the NSA spy campaigns he exposed, but Swartz frequently spoke out against the hidden and dangerous nature of the security state, and played a key role in the 2011-12 campaign to defeat the Stop Online Piracy Act (SOPA), a far-reaching anti-piracy bill that began with wide bipartisan support and appeared certain to sail through Congress. That campaign, and the Internet-wide protest of American Censorship Day in November 2011, looks in retrospect like the digital world’s political coming of age.

Earlier that year, Swartz had been arrested by MIT campus police, after they noticed that someone had plugged a laptop into a network switch in a server closet. He was clearly violating some campus rules and likely trespassing, but as the New York Times observed at the time, the arrest and subsequent indictment seemed to defy logic: Could downloading articles that he was legally entitled to read really be considered hacking? Wasn’t this the digital equivalent of ordering 250 pancakes at an all-you-can-eat breakfast? The whole incident seemed like a momentary blip in Swartz’s blossoming career – a terms-of-service violation that might result in academic censure, or at worst a misdemeanor conviction.

Instead, for reasons that have never been clear, Ortiz and Heymann insisted on a plea deal that would have sent Swartz to prison for six months, an unusually onerous sentence for an offense with no definable victim and no financial motive. Was he specifically singled out as a political scapegoat by Eric Holder or someone else in the Justice Department? Or was he simply bulldozed by a prosecutorial bureaucracy eager to justify its own existence? We will almost certainly never know for sure, but as numerous people in “The Internet’s Own Boy” observe, the former scenario cannot be dismissed easily. Young computer geniuses who embrace the logic of private property and corporate power, who launch start-ups and seek to join the 1 percent before they’re 25, are the heroes of our culture. Those who use technology to empower the public commons and to challenge the intertwined forces of corporate greed and state corruption, however, are the enemies of progress and must be crushed.

”The Internet’s Own Boy” opens this week in Atlanta, Boston, Chicago, Cleveland, Denver, Los Angeles, Miami, New York, Toronto, Washington and Columbus, Ohio. It opens June 30 in Vancouver, Canada; July 4 in Phoenix, San Francisco and San Jose, Calif.; and July 11 in Seattle, with other cities to follow. It’s also available on-demand from Amazon, Google Play, iTunes, Vimeo, Vudu and other providers.

http://www.salon.com/2014/06/24/the_internets_own_boy_how_the_government_destroyed_aaron_swartz/?source=newsletter