How Facebook Killed the Internet

Death by Ten Billion Status Updates


by DAVID ROVICS

Facebook killed the internet, and I’m pretty sure that the vast majority of people didn’t even notice.

I can see the look on many of your faces, and hear the thoughts. Someone’s complaining about Facebook again.  Yes, I know it’s a massive corporation, but it’s the platform we’re all using.  It’s like complaining about Starbucks.  After all the independent cafes have been driven out of town and you’re an espresso addict, what to do?  What do you mean “killed”?  What was killed?

I’ll try to explain.  I’ll start by saying that I don’t know what the solution is.  But I think any solution has to start with solidly identifying the nature of the problem.

First of all, Facebook killed the internet, but if it wasn’t Facebook, it would have been something else.  The evolution of social media was probably as inevitable as the development of cell phones that could surf the internet.  It was the natural direction for the internet to go in.

Which is why it’s so especially disturbing.  Because the solution is not Znet or Ello.  The solution is not better social media, better algorithms, or social media run by a nonprofit rather than a multibillion-dollar corporation.  Just as the solution to the social alienation caused by everybody having their own private car is not more electric vehicles.  Just as the solution to the social alienation caused by everyone having their own cell phone to stare at is not a collectively-owned phone company.

Many people from the grassroots to the elites are thrilled about the social media phenomenon.  Surely some of the few people who will read this are among them.  We throw around phrases like “Facebook revolution” and we hail these new internet platforms that are bringing people together all over the world.  And I’m not suggesting they don’t have their various bright sides.  Nor am I suggesting you should stop using social media platforms, including Facebook.  That would be like telling someone in Texas they should bike to work, when the whole infrastructure of every city in the state is built for sport utility vehicles.

But we should understand the nature of what is happening to us.

From the time that newspapers became commonplace up until the early 1990s, for the overwhelming majority of the planet’s population, the closest we came to writing in a public forum was a letter to the editor, and only a very few of us ever bothered to write one.  A tiny, tiny fraction of the population were authors or journalists who had a public forum that way on an occasional or a regular basis, depending.  Some people wrote up the pre-internet equivalent of an annual Christmas-time blog post which they photocopied and sent around to a few dozen friends and relatives.

In the 1960s there was a massive flowering of independent, “underground” press in towns and cities across the US and other countries.  There was a vastly increased diversity of views and information that could be easily accessed by anyone who lived near a university, could walk to a newsstand and had an extra few cents to spend.

In the 1990s, with the development of the internet – websites, email lists – there was an explosion of communication that made the underground press of the ’60s pale in comparison.  In my experience, most people in places like the US virtually stopped using phones to actually talk on.  Many people who never wrote letters or much of anything else started using computers and writing emails to each other, and even to multiple people at once.

Those very few of us who were in the habit in the pre-internet era of sending around regular newsletters featuring our writing, our thoughts, our list of upcoming gigs, products or services we were trying to sell, etc., were thrilled with the advent of email, and the ability to send our newsletters out so easily, without spending a fortune on postage stamps, without spending so much time stuffing envelopes.  For a brief period of time, we had access to the same audience, the same readers we had before, but now we could communicate with them virtually for free.

This, for many of us, was the internet’s golden age – 1995-2005 or so.  There was the increasing problem of spam of various sorts.  Like junk mail, only more of it.  Spam filters started getting better, and largely eliminated that problem for most of us.

The listservs that most of us bothered to read were moderated announcements lists.  The websites we used the most were interactive, but moderated, such as Indymedia.  In cities throughout the world, big and small, there were local Indymedia collectives.  Anyone could post stuff, but there were actual people deciding whether it should get published, and if so, where.  As with any collective decision-making process, this was challenging, but many of us felt it was a challenge that was worth the effort.  As a result of these moderated listservs and moderated Indymedia sites, we all had an unprecedented ability to find out about and discuss ideas and events that were taking place in our cities, our countries, our world.

Then came blogging, and social media.  Every individual with a blog, Facebook page, Twitter account, etc., became their own individual broadcaster.  It’s intoxicating, isn’t it?  Knowing that you have a global audience of dozens or hundreds, maybe thousands of people (if you’re famous to begin with, or something goes viral) every time you post something.  Being able to have conversations in the comments sections with people from around the world who will never physically meet each other.  Amazing, really.

But then most people stopped listening.  Most people stopped visiting Indymedia.  Indymedia died, globally, for the most part.  Newspapers – right, left and center – closed, and are closing, whether offline or online ones.  Listservs stopped existing.  Algorithms replaced moderators.  People generally began to think of librarians as an antiquated phenomenon.

Now, in Portland, Oregon, one of the most politically plugged-in cities in the US, there is no listserv or website you can go to that will tell you what is happening in the city in any kind of readable, understandable format.  There are different groups with different websites, Facebook pages, listservs, etc., but nothing for the progressive community as a whole.  Nothing functional, anyway.  Nothing that approaches the functionality of the announcements lists that existed in cities and states throughout the country 15 years ago.

Because of the internet’s technical limitations at the time, there was for a few years a happy medium between a small elite providing most of the written content that most people in the world read, and the situation we now find ourselves in: drowning in Too Much Information, most of it meaningless drivel, white noise, fog that prevents you from seeing any further than the low beams can illuminate at a given time.

It was a golden age, but for the most part an accidental one, and a very brief one.  As it became easy for people to start up a website, a blog, a Myspace or Facebook page, to post updates, etc., the new age of noise began, inevitably, the natural evolution of the technology.

And most people didn’t notice that it happened.

Why do I say that?  First of all, I didn’t just come up with this shit.  I’ve been talking to a lot of people for many years, and a lot of people think social media is the best thing since sliced bread.  And why shouldn’t they?

The bottom line is, there’s no reason most people would have had occasion to notice that the internet died, because they weren’t content providers (as we call authors, artists, musicians, journalists, organizers, public speakers, teachers, etc. these days) in the pre-internet age or during the first decade or so of the internet as a popular phenomenon.  And if you weren’t a content provider back then, why would you know that anything changed?

I and others like me know – because the people who used to read and respond to stuff I sent out on my email list aren’t there anymore.  They don’t open the emails anymore, and if they do, they don’t read them.  And it doesn’t matter what medium I use – blog, Facebook, Twitter, etc.  Of course some people do, but most people are now doing other things.

What are they doing?  I spent most of last week in Tokyo, going all over town, spending hours each day on the trains.  Most people sitting in the trains back during my first visit to Japan in 2007 were sleeping, as they are now.  But those who weren’t sleeping, seven years ago, were almost all reading books.  Now, there’s hardly a book to be seen.  Most people are looking at their phones.  And they’re not reading books on their phones.  (Yes, I peeked.  A lot.)  They’re playing games or, more often, looking at their Facebook “news feeds.”  And it’s the same in the US and everywhere else that I have occasion to travel to.

Is it worth it to replace moderators with algorithms?  Editors with white noise?  Investigative journalists with pictures of your cat?  Independent record labels and community radio stations with a multitude of badly-recorded podcasts?  Independent Media Center collectives with a million Facebook updates and Twitter feeds?

I think not.  But that’s where we’re at.  How do we get out of this situation, and clear the fog, and use our brains again?  I wish I knew.

David Rovics is a singer/songwriter based in Portland, Oregon.

 

http://www.counterpunch.org/2014/12/24/how-facebook-killed-the-internet/

William Gibson: I never imagined Facebook

The brilliant science-fiction novelist who imagined the Web tells Salon how writers missed social media’s rise


Even if you’ve never heard of William Gibson, you’re probably familiar with his work. Arguably the most important sci-fi writer of his generation, Gibson has a cyber-noir imagination that has shaped everything from the Matrix aesthetic to geek culture to the way we conceptualize virtual reality. In a 1982 short story, Gibson coined the term “cyberspace.” Two years later, his first and most famous novel, “Neuromancer,” helped launch the cyberpunk genre. By the 1990s, Gibson was writing about big data, imagining Silk Road-esque Internet enclaves, and putting his characters on reality TV shows — a full four years before the first episode of “Big Brother.”

Prescience is flashy, but Gibson is less an oracle than a kind of speculative sociologist. A very contemporary flavor of dislocation seems to be his specialty. Gibson’s heroes shuttle between wildly discordant worlds: virtual paradises and physical squalor; digital landscapes and crumbling cities; extravagant wealth and poverty.

In his latest novel, “The Peripheral,” which came out on Tuesday, Gibson takes this dislocation to new extremes. Set in mid-21st century Appalachia and far-in-the-future London, “The Peripheral” is partly a murder mystery, and partly a time-travel mind-bender. Gibson’s characters aren’t just dislocated in space, now. They’ve become unhinged from history.

Born in South Carolina, Gibson has lived in Vancouver since the 1960s. Over the phone, we spoke about surveillance, celebrity and the concept of the eternal now.

You’re famous for writing about hackers, outlaws and marginal communities. But one of the heroes of “The Peripheral” is a near-omniscient intelligence agent. She has surveillance powers that the NSA could only dream of. Should I be surprised to see you portray that kind of character so positively?

Well, I don’t know. She’s complicated, because she is this kind of terrifying secret police person in the service of a ruthless global kleptocracy. At the same time, she seems to be slightly insane and rather nice. It’s not that I don’t have my serious purposes with her, but at the same time she’s something of a comic turn.

Her official role is supposed to be completely terrifying, but at the same time her role is not a surprise. It’s not like, “Wow, I never even knew that that existed.”



Most of the characters in “The Peripheral” assume that they’re being monitored at all times. That assumption is usually correct. As a reader, I was disconcerted by how natural this state of constant surveillance felt to me.

I don’t know if it would have been possible 30 years ago to convey that sense to the reader effectively, without the reader already having some sort of cultural module in place that can respond to that. If we had somehow been able to read this text 30 years ago, I don’t know how we would even register that. It would be a big thing for a reader to get their head around without a lot of explaining. It’s a scary thing, the extent to which I don’t have to explain why [the characters] take that surveillance for granted. Everybody just gets it.

You’re considered a founder of the cyberpunk genre, which tends to feature digital cowboys — independent operators working on the frontiers of technology. Is the counterculture ethos of cyberpunk still relevant in an era when the best hackers seem to be working for the Chinese and U.S. governments, and our most famous digital outlaw, Edward Snowden, is under the protection of Vladimir Putin?

It’s seemed to me for quite a while now that the most viable use for the term “cyberpunk” is in describing artifacts of popular culture. You can say, “Did you see this movie? No? Well, it’s really cyberpunk.” Or, “Did you see the cyberpunk pants she was wearing last night?”

People know what you’re talking about, but it doesn’t work so well describing human roles in the world today. We’re more complicated. I think one of the things I did in my early fiction, more or less for effect, was to depict worlds where there didn’t really seem to be much government. In “Neuromancer,” for example, there’s no government really on the case of these rogue AI experiments that are being done by billionaires in orbit. If I had been depicting a world in which there were governments and law enforcement, I would have depicted hackers on both sides of the fence.

In “Neuromancer,” I don’t think there’s any evidence of anybody who has any parents. It’s kind of a very adolescent book that way.

In “The Peripheral,” governments are involved on both sides of the book’s central conflict. Is that a sign that you’ve matured as a writer? Or are you reflecting changes in how governments operate?

I hope it’s both. This book probably has, for whatever reason, more of my own, I guess I could now call it adult, understanding of how things work. Which, I suspect, is as it should be. People in this book live under governments, for better or worse, and have parents, for better or worse.

In 1993, you wrote an influential article about Singapore for Wired magazine, in which you wondered whether the arrival of new information technology would make the country more free, or whether Singapore would prove that “it is possible to flourish through the active repression of free expression.” With two decades of perspective, do you feel like this question has been answered?

Well, I don’t know, actually. The question was, when I asked it, naive. I may have posed innocently a false dichotomy, because some days when you’re looking out at the Internet both things are possible simultaneously, in the same place.

So what do you think is a better way to phrase that question today? Or what would have been a better way to phrase it in 1993?

I think you would end with something like “or is this just the new normal?”

Is there anything about “the new normal” in particular that surprises you? What about the Internet today would you have been least likely to foresee?

It’s incredible, the ubiquity. I definitely didn’t foresee the extent to which we would all be connected almost all of the time without needing to be plugged in.

That makes me think of “Neuromancer,” in which the characters are always having to track down a physical jack, which they then use to plug themselves into this hyper-futuristic Internet.

Yes. It’s funny, when the book was first published, when it was just out — and it was not a big deal the first little while it was out, it was just another paperback original — I went to a science fiction convention. There were guys there who were, by the standards of 1984, far more computer-literate than I was. And they very cheerfully told me that I got it completely wrong, and I knew nothing. They kept saying over and over, “There’s never going to be enough bandwidth, you don’t understand. This could never happen.”

So, you know, here I am, this many years later with this little tiny flat thing in my hand that’s got more bandwidth than those guys thought was possible for a personal device to ever have, and the book is still resonant for at least some new readers, even though it’s increasingly hung with the inevitable obsolescence of having been first published in 1984. It holds up now not really in the particulars, but in the broader outline.

You wrote “Neuromancer” on a 1927 Hermes typewriter. In an essay of yours from the mid-1990s, you specifically mention choosing not to use email. Does being a bit removed from digital culture help you critique it better? Or do you feel that you’re immersed in that culture, now?

I no longer have the luxury of being as removed from it as I was then. I was waiting for it to come to me. When I wrote [about staying off email], there was a learning curve involved in using email, a few years prior to the Web.

As soon as the Web arrived, I was there, because there was no learning curve. The interface had been civilized, and I’ve basically been there ever since. But I think I actually have a funny kind of advantage, in that I’m not generationally of [the Web]. Just being able to remember the world before it, some of the perspectives are quite interesting.

Drones and 3-D printing play major roles in “The Peripheral,” but social networks, for the most part, are obsolete in the book’s fictional future. How do you choose which technological trends to amplify in your writing, and which to ignore?

It’s mostly a matter of which ones I find most interesting at the time of writing. And the absence of social media in both those futures probably has more to do with my own lack of interest in that. It would mean a relatively enormous amount of work to incorporate social media into both those worlds, because it would all have to be invented and extrapolated.

Your three most recent novels, before “The Peripheral,” take place in some version of the present. You’re now returning to the future, which is where you started out as a writer in the 1980s. Futuristic sci-fi often feels more like cultural criticism of the present than an exercise in prediction. What is it about the future that helps us reflect on the contemporary world?

When I began to write science fiction, I already assumed that science fiction about the future is only ostensibly written about the future, that it’s really made of the present. Science fiction has wound up with a really good cultural toolkit — an unexpectedly good cultural toolkit — for taking apart the present and theorizing on how it works, in the guise of presenting an imagined future.

The three previous books were basically written to find out whether or not I could use the toolkit that I’d acquired writing fictions about imaginary futures on the present, but use it for more overtly naturalistic purposes. I have no idea at this point whether my next book will be set in an imaginary future or the contemporary present or the past.

Do you feel as if sci-fi has actually helped dictate the future? I was speaking with a friend earlier about this, and he phrased the question well: Did a book like “Neuromancer” predict the future, or did it establish a dress code for it? In other words, did it describe a future that people then tried to live out?

I think that the two halves of that are in some kind of symbiotic relationship with one another. Science fiction ostensibly tries to predict the future. And the people who wind up making the future sometimes did what they did because they read a piece of science fiction. “Dress code” is an interesting way to put it. It’s more like … it’s more like attitude, really. What will our attitude be toward the future when the future is the present? And that’s actually much more difficult to correctly predict than what sort of personal devices people will be carrying.

How do you think that attitude has changed since you started writing? Could you describe the attitude of our current moment?

The day the Apple Watch was launched, late in the day someone on Twitter announced that it was already over. They cited some subject, they linked to something, indicating that our moment of giddy future shock was now over. There’s just some sort of endless now, now.

Could you go into that a little bit more, what you mean by an “endless now”?

Fifty years ago, I think now was longer. I think that the cultural and individual concept of the present moment was a year, or two, or six months. It wasn’t measured in clicks. Concepts of the world and of the self couldn’t change as instantly or in some cases as constantly. And I think that has resulted in there being a now that’s so short that in a sense it’s as though it’s eternal. We’re just always in the moment.

And it takes something really horrible, like some terrible, gripping disaster, to lift us out of that, or some kind of extra-strong sense of outrage, which we know that we share with millions of other people. Unfortunately, those are the things that really perk us up. This is where we get perked up, perked up for longer than over a new iPhone, say.

The worlds that you imagine are enchanting, but they also tend to be pretty grim. Is it possible to write good sci-fi that doesn’t have some sort of dystopian edge?

I don’t know. It wouldn’t occur to me to try. The world today, considered in its totality, has a considerable dystopian edge. Perhaps that’s always been true.

I often work in a form of literature that is inherently fantastic. But at the same time that I’m doing that, I’ve always shared concerns with more naturalistic forms of writing. I generally try to make my characters emotionally realistic. I do now, at least; I can’t say I always have done that. And I want the imaginary world they live in and the imaginary problems that they have to reflect the real world, and to some extent real problems that real people are having.

It’s difficult for me to imagine a character in a work of contemporary fiction who wouldn’t have any concerns with the more dystopian elements of contemporary reality. I can imagine one, but she’d be a weird … she’d be a strange character. Maybe some kind of monster. Totally narcissistic.

What makes this character monstrous? The narcissism?

Well, yeah, someone sufficiently self-involved. It doesn’t require anything like the more clinical forms of narcissism. But someone who’s sufficiently self-involved as to just not be bothered with the big bad things that are happening in the world, or the bad things — regular-size bad things — that are happening to one’s neighbors. There certainly are people like that out there. The Internet is full of them. I see them every day.

You were raised in the South, and you live in Vancouver, but, like Philip K. Dick, you’ve set some of your most famous work in San Francisco. What is the appeal of the city for technological dreamers? And how does the Silicon Valley of today fit into that Bay Area ethos?

I’m very curious to go back to San Francisco while on tour for this book, because it’s been a few years since I’ve been there, and it was quite a few years before that when I wrote about San Francisco in my second series of books.

I think one of the reasons I chose it was that it was a place that I would get to fairly frequently, so it would stay fresh in memory, but it also seemed kind of out of the loop. It was kind of an easy canvas for me, an easier canvas to set a future in than Los Angeles. It seemed to have fewer moving parts. And that’s obviously no longer the case, but I really know contemporary San Francisco now more by word of mouth than I do from first-person experience. I really think it sounds like a genuinely new iteration of San Francisco.

Do you think that Google and Facebook and this Silicon Valley culture are the heirs to the Internet that you so presciently imagined in the 1980s? Or do they feel like they’ve taken the Web in different directions than what you expected?

Generally it went in directions that didn’t occur to me. It seems to me now that if I had been a very different kind of novelist, I would have been more likely to foresee something like Facebook. But you know, if you try to imagine that somebody in 1982 writes this novel that totally and accurately predicted what it would be like to be on Facebook, and then tried to get it published? I don’t know if you would be able to get it published. Because how exciting is that, or what kind of crime story could you set there?

Without even knowing it, I was limited by the kind of fiction of the imaginary future that I was trying to write. I could use detective gangster stories, and there is a real world of the Internet that’s like that, you know? Very much like that. Although the crimes are so different. The ace Russian hacker mobs are not necessarily crashing into the global corporations. They’re stealing your Home Depot information. If I’d put that as an exploit in “Neuromancer,” nobody would have gotten it. Although it would have made me seem very, very prescient.

You’ve written often and eloquently about cults of celebrity and the surrealness of fame. By this point you’re pretty famous yourself. Has writing about fame changed the way you experience it? Does experiencing fame change the way you write about it?

Writers in our society, even today, have a fairly homeopathic level of celebrity compared to actors and really popular musicians, or Kardashians. I think in [my 1993 novel] “Virtual Light,” I sort of predicted Kardashian. Or there’s an implied celebrity industry in that book that’s very much like that. You become famous just for being famous. And you can keep it rolling.

But writers, not so much. Writers get just a little bit of it on a day-to-day basis. Writers are in an interesting place in our society to observe how that works, because we can be sort of famous, but not really famous. Partly I’d written about fame because I’d seen little bits of it, but the bigger reason is the extent to which it seems that celebrity is the essential postmodern product, and the essential post-industrial product. The so-called developed world pioneered it. So it’s sort of inherently in my ballpark. It would be weird if it wasn’t there.

You have this reputation of being something of a Cassandra. I don’t want to put you on the spot and ask for predictions. But I’m curious: For people who are trying to understand technological trends, and social trends, where do you recommend they look? What should they be observing?

I think the best advice I’ve ever heard on that was from Samuel R. Delany, the great American writer. He said, “If you want to know how something works, look at one that’s broken.” I encountered that remark of his before I began writing, and it’s one of my fridge magnets for writing.

Anything I make, and anything I’m describing in terms of its workings — even if I were a non-literary futuristic writer of some kind — I think that statement would be very resonant for me. Looking at the broken ones will tell you more about what the thing actually does than looking at one that’s perfectly functioning, because then you’re only seeing the surface, and you’re only seeing what its makers want you to see. If you want to understand social media, look at troubled social media. Or maybe failed social media, things like that.

Do you think that’s partly why so much science fiction is crime fiction, too?

Yeah, it might be. Crime fiction gives the author the excuse to have a protagonist who gets her nose into everything and goes where she’s not supposed to go and asks questions that will generate answers that the author wants the reader to see. It’s a handy combination. Detective fiction is in large part related to literary naturalism, and literary naturalism was quite a radical concept that posed that you could use the novel to explore existing elements of society which had previously been forbidden, like the distribution of capital and class, and what sex really was. Those were all naturalistic concerns. They also yielded to detective fiction. Detective fiction and science fiction are an ideal cocktail, in my opinion.

 

http://www.salon.com/2014/11/09/william_gibson_i_never_imagined_facebook/?source=newsletter

US government-funded database created to track “subversive propaganda” online


By Matthew MacEgan
30 August 2014

The creation of the Truthy database by Indiana University researchers has drawn sharp criticism from free-speech advocates and others concerned over government censorship of political expression.

According to the award abstract accompanying the funding provided by the National Science Foundation (NSF), the Truthy project aims to demonstrate “why some ideas cause viral explosions while others are quickly forgotten.” In order to answer this and other questions, the resulting database will actively “[collect] and [analyze] massive streams of public microblogging data.”

Once the database is up and running, anyone can use its “service” to monitor “trends, bursts, and suspicious memes.” Several of the researchers suggested that the public will be able to discover the use of “shady machinery” by election campaigners who push faulty information to social media users to manipulate them politically.

As a seeming afterthought, the abstract concludes that this open-source project “could mitigate the diffusion of false and misleading ideas, detect hate speech and subversive propaganda, and assist in the preservation of open debate.”

This last statement provoked widespread criticism as troubling and even Orwellian. Right-wing media outlets Fox News and the Washington Times attacked the reference to “hate speech,” in which they specialize, without highlighting the reference to “subversive propaganda,” a term of abuse usually reserved for left-wing criticism of American government and society.

While the leaders of this government-funded operation have sought to fend off attacks with the explanation that this database is merely designed to study the diffusion of information on social media networks, there is no mistaking the repressive overtones of the project.

Filippo Menczer, the project’s principal investigator and a professor at Indiana University, has responded to allegations by issuing a statement through the Center for Complex Networks and Systems Research, explaining that Truthy is not “a political watchdog, a government probe of social media,” or “an attempt to suppress free speech.” He states that Truthy is incapable of determining whether a particular scrap of data constitutes “misinformation,” and reiterates that the project’s target is merely the study of “patterns of information diffusion.”

However, within the same statement, Menczer also echoes the abstract’s final conclusion, stating that “an important goal of the Truthy project is to better understand how social media can be abused.” This seems to contradict the claim that the database is focused only on how information is diffused, rather than its content.

Results of the project have already been widely published in peer-reviewed journals and have been presented at several conferences around the world. One of these studies shows how the researchers, including Menczer, studied the growth of Occupy Wall Street over a 15-month period. This was done by identifying Occupy-related content on Twitter and creating a dataset that “contained approximately 1.82 million tweets produced by 447,241 distinct accounts.”

In addition, the researchers also selected 25,000 of these users at random and monitored their behavior in order to study how these users may have changed over time. This effort included the compilation of the hashtags used by each user, their engagement with foreign social movements, and the extent to which these users interacted with one another.
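
To make that methodology concrete, here is a minimal sketch of this kind of analysis in Python. It is an illustration only: it assumes the tweets have already been collected as simple records with “user,” “hashtags” and “mentions” fields, and the tag list, field names and sampling seed are hypothetical stand-ins, not Truthy’s actual schema or code.

    import random
    from collections import Counter, defaultdict

    # Hypothetical record format: each tweet is a dict with a "user" id,
    # a list of "hashtags", and a list of "mentions" (other user ids).
    OCCUPY_TAGS = {"ows", "occupy", "occupywallstreet"}  # illustrative subset

    def build_dataset(tweets):
        # Keep only tweets carrying at least one movement-related hashtag.
        return [t for t in tweets
                if OCCUPY_TAGS & {h.lower() for h in t["hashtags"]}]

    def sample_panel(dataset, k=25_000, seed=1):
        # Draw a random panel of users, as in the 25,000-user follow-up.
        users = sorted({t["user"] for t in dataset})
        random.seed(seed)
        return set(random.sample(users, min(k, len(users))))

    def compile_profiles(dataset, panel):
        # Per-user hashtag counts and within-panel interaction counts.
        hashtags = defaultdict(Counter)
        interactions = defaultdict(Counter)
        for t in dataset:
            user = t["user"]
            if user not in panel:
                continue
            hashtags[user].update(h.lower() for h in t["hashtags"])
            interactions[user].update(m for m in t["mentions"] if m in panel)
        return hashtags, interactions

Repeated over a 15-month window, counters like these are enough to show a user’s hashtag vocabulary and social ties shifting over time, which is essentially what the published study measured.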

In other words, while the creators of Truthy have presented their service as a means for the public to expose elected officials who inject misleading information into news feeds for electoral propaganda purposes, one of the primary uses is to track and keep tabs on individuals who engage in political discussions deemed “subversive” by US authorities. A previous report has already shown that local police departments were engaged in similar coordinated efforts to spy on Occupy protesters throughout the same 15-month period.

The revelations of Edward Snowden and WikiLeaks have shown the extent of national governments’ domestic spying on their own citizens and the erosion of Constitutional rights to privacy and freedom of expression. Despite Menczer’s claim that the system was not “designed” to be a government watchdog program, there is no assurance that this project will not be used for that purpose.

The 25,000 Twitter users who were studied and tracked by the project’s developers certainly did not give permission to have their behaviors and tweets recorded and studied. Truthy will enable anyone, including federal officials, to similarly track and follow the actions of groups and individuals deemed to be “diffusing” ideas labeled as “misleading.” The fact that the United States government has already contributed more than $900,000 to this project only exacerbates this fear.

America’s Stupid and Self-Obsessed Capitalist Culture, Perfectly Lampooned by … Weird Al?

 



Why the nerd comic might be the most relevant artist of the moment.


Remember Weird Al Yankovic? That geekmeister from the ’80s who did hilarious parodies of pop hits? He’s back, and critics are calling him the most relevant comedian of the moment, one going so far as to pronounce him “America’s greatest living artist.” His new album, “Mandatory Fun,” just rocketed to the top of the Billboard 200 on its debut week — the first parodic album ever to do so.

Looks like something’s percolating in pop culture, revealing our growing discontent with America’s twisted brand of capitalism. Is it any wonder? We know we’re lied to. We know we’re manipulated. We get that the country is stuck in airtight self-obsession. So we’re starting to gravitate toward artists who confront our slow-boiling anxiety. If death-obsessed pop siren Lana Del Rey (whose “Ultraviolence” album topped the charts earlier in July) is the zombie bride of capitalism, Weird Al is the court jester.

Right sorely do we need him just now.

Who is this guy, anyway?

Raised on Mad Magazine and encouraged by his parents to learn the accordion, Weird Al cut his comedic teeth on Dr. Demento’s radio show in the late ’70s and early ’80s, where he began to conjure catchy parodies of songs like “My Sharona” (“My Bologna”) and “Another One Bites the Dust” (“Another One Rides the Bus”). If you’re Gen X, you remember gleefully sharing these tunes along with your Cheetos during lunchtime.

Eventually he grabbed the national spotlight with his 1984 monster hit “Eat It,” a parody of Michael Jackson’s “Beat It.”  A hero to sci-fi nerds and to every kid burdened with an inner bullshit detector on high alert, Weird Al became a crusader against clichés and an antidote to the toxic inanities of pop culture. Somewhere along the way, he started moving beyond simply goofy and spoofy to something deeper. Obesity, grunge rock, the Amish — there was no sacred cow he would not poke. He held up a funhouse mirror to our foibles.

By 2006, he was introducing a younger generation to his comedic gifts with the hit “White and Nerdy,” a send-up of the hip-hop song “Ridin’,” in which Weird Al portrays a Dungeons & Dragons-playing science nerd who yearns to hang with the gangstas.

Off the Charts

Comedians typically get less cred than other artists, but they are no less essential to society. With “Mandatory Fun,” Weird Al takes his rightful place among those who have explored our strained relationship with the American dream, forcing us to grapple with it. From Charlie Chaplin up through the Yes Men, Russell Brand and Stephen Colbert, these tricksters have connected us to our pain and channeled our collective revulsion.

Why does Weird Al stick to comedy? His answer, in typical fashion, mocks the question. “There’s enough people that do unfunny music,” Weird Al once said. “I’ll leave the serious stuff to Paris Hilton and Kevin Federline.”

For his most recent blockbuster album, Weird Al cleverly used social media to market and grab viral attention, releasing eight videos on YouTube one at a time. More than 46 million people watched. Album sales surged.

In “First World Problems,” done in the style of the Pixies, Weird Al takes on our bourgeois obsession with comfort and consumption, while simultaneously poking fun at the indie rock preoccupations of suburban white kids who complain about their cushy lives: “My house is so big I can’t get wi-fi in the kitchen,” whines the douchey blonde kid Al plays in the video.

“Tacky,” set to the tune of Pharrell’s overplayed hit “Happy,” skewers not only the tackiness of dressing cluelessly, but wandering the Earth in a solipsistic bubble: “Nothing wrong with wearin’ stripes and plaid/I Instagram every meal I’ve had…Can’t nothin’ bring me shame.” The brilliance lies in Weird Al’s intimation that the happiness sold by slick pop icons like Pharrell is predicated on a state of oblivious solipsism that cuts us off from the plight of our fellow humans.

Perhaps the best song of all is the Crosby, Stills & Nash-inspired “Mission Statement,” made for everyone who has found herself sinking in the mire of meaningless gibberish that flows through the modern corporate office. In the video, which features that annoyingly overused trope of a hand scribbling illustrations, the despair of office alienation is juxtaposed with the relentlessly upbeat buzzwords and conventions taught in MBA schools. What’s particularly resonant about this song is how Weird Al skewers the corporate capitalism which promised us all the wonders of efficiency, harmony and prosperity, only to deliver us to Dilbert’s cubicle of despair.

In “Mission Statement,” the dreams of love and peace echoed in ’60s folk tunes have congealed into a nightmare in which we can’t escape capitalism’s relentless propaganda, brought to a kind of posthuman wretchedness in which we are forced to speak in the tongues of abstract gods of the market.

As students of the human psyche know, the line between humor and horror is often thin. Weird Al gets us to laugh when we might ordinarily scream. Lighthearted though Weird Al may seem, there’s a deeply moral theme in “Mandatory Fun,” about how capitalism’s servants — narcissism, greed, vulgarity, and all-around douchiness — have to carry out its orders to beat us into a pulverized pulp of compliance.

Weird Al gets our number because he does what we all yearn to do: He bites back.

Lynn Parramore is an AlterNet senior editor. She is cofounder of Recessionwire, founding editor of New Deal 2.0, and author of “Reading the Sphinx: Ancient Egypt in Nineteenth-Century Literary Culture.” She received her Ph.D. in English and cultural theory from NYU. She is the director of AlterNet’s New Economic Dialogue Project. Follow her on Twitter @LynnParramore.

 

http://www.alternet.org/culture/americas-stupid-and-self-obsessed-capitalist-culture-perfectly-lampooned-weird-al?akid=12077.265072.WSyO4n&rd=1&src=newsletter1013718&t=6&paging=off&current_page=1#bookmark

Could you “free” yourself of Facebook?

A 99-day challenge offers a new kind of social media experiment


Let’s try a new experiment now, Facebook. And this time, you’re the subject.

Remember just last month, when the monolithic social network revealed that it had been messing with its users’ minds as part of an experiment? Writing in PNAS, Facebook researchers disclosed the results of a study showing that the company had tinkered with the news feeds of nearly 700,000 users, highlighting either more positive or more negative content, to learn if “emotional contagion occurs without direct interaction between people.” What they found was that “When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.” More significantly, after the news of the study broke, they discovered that people get pretty creeped out when they feel like their personal online space is being screwed with, and that their reading and posting activity is being silently monitored and collected – even when the terms of service they agreed to granted permission to do just that. And they learned that lawmakers in the U.S. and around the world question the ethics of Facebook’s intrusion.

Now, a new campaign out of Europe is aiming to do another experiment involving Facebook, its users and their feelings. But this time Facebook users aren’t unwitting participants but willing volunteers. And the first step involves quitting Facebook. The 99 Days of Freedom campaign started as an office joke at Just, a creative agency in the Netherlands. But the company’s art director Merijn Straathof says it quickly evolved into a bona fide cause. “As we discussed it internally, we noted an interesting tendency: Everyone had at least a ‘complicated’ relationship with Facebook. Whether it was being tagged in unflattering photos, getting into arguments with other users or simply regretting time lost through excessive use, there was a surprising degree of negative sentiment.” When the staff learned that Facebook’s 1.2 billion users “spend an average of 17 minutes per day on the site, reading updates, following links or browsing photos,” they began to wonder what that time might be differently applied to – and whether users would find it “more emotionally fulfilling.”



The challenge – one that close to 9,000 people have already taken – is simple. Change your FB avatar to the “99 Days of Freedom” one to let friends know you’re not checking in for the next few months. Create a countdown. Opt in, if you wish, to be contacted after 33, 66 and 99 days to report on your satisfaction with life without Facebook. Straathof says everyone at Just is also participating, to “test that one firsthand.”

Straathof and company say the goal isn’t to knock Facebook, but to show users the “obvious emotional benefits to moderation.” And, he adds, “Our prediction is that the experiment will yield a lot of positive personal experiences and, 99 days from now, we’ll know whether that theory has legs.” The anecdotal data certainly seems to support it. Seductive as FB, with its constant flow of news and pet photos, may be, you’d be hard-pressed to find a story about quitting it that doesn’t make getting away from it sound pretty great. It’s true that grand experiments, especially of a permanent nature, have never gotten off the ground. Four years ago, a group of disgruntled users tried to gather momentum for a Quit Facebook Day that quietly went nowhere. But individual tales certainly make a compelling case for, if not going cold turkey, at least scaling back. Elizabeth Lopatto recently wrote in Forbes of spending the past eight years Facebook free and learning that “If you really are interested in catching up with your friends, catch up with your friends. You don’t need Facebook to do it.” And writing on EliteDaily this past winter, Rudolpho Sanchez questioned why “We allow our successes to be measured in little blue thumbs” and declared, “I won’t relapse; I’ve been liberated. It’s nice not knowing what my fake friends are up to.” Writing a few weeks later in Business Insider, Dylan Love, who’d been on FB since he was an incoming college student 10 years ago, gave it up and reported his life, if not improved, remarkably unchanged, “except I’m no longer devoting mental energy to reading about acquaintances from high school getting married or scrolling through lots of pictures of friends’ vacation meals.” And if you want a truly persuasive argument, try this: My teenager has not only never joined Facebook, she dismissively asserts that she doesn’t want to because “It’s for old people.”

Facebook, of course, doesn’t want you to consider that you might be able to maintain your relationships or your sense of delight in the world without it. When my mate and I went away for a full week recently, we didn’t check in on social media once the whole time. Every day, with increasing urgency, we received emails from Facebook alerting us to activity in our feeds that we surely wanted to check. And since I recently gutted my friend list, I’ve been receiving a bevy of suggested people I might know. Why so few friends, lonely lady? Why so few check-ins? Don’t you want more, more, more?

I don’t know if I need to abandon Facebook entirely – I like seeing what people I know personally and care about are up to, especially those I don’t get to see in the real world that often. That connection has often been valuable, especially through our shared adventures in love, illness and grief, and I will always be glad for it. But a few months ago I deleted the FB app, which makes avoiding Facebook when I’m not at my desk a no-brainer. No more stealth checking my feed from the ladies’ room. No more spending time expressing my “like” of someone’s recent baking success when I’m walking down the street. No more “one more status update before bed” time sucks. And definitely no more exasperation when FB insistently twiddles with my news feed to show “top stories” when I prefer “most recent.” It was never a huge part of my life, but it’s an even smaller part of it now, and yeah, it does feel good. I recommend it. Take Just’s 99-day challenge or just a tech Sabbath or just scale back a little. Consider it an experiment. One in which the user, this time, is the winner.

Mary Elizabeth Williams is a staff writer for Salon and the author of “Gimme Shelter: My Three Years Searching for the American Dream.” Follow her on Twitter: @embeedub.

http://www.salon.com/2014/07/11/could_you_free_yourself_of_facebook/?source=newsletter

After you’re gone, what happens to your social media and data?

Web of the dead: When Facebook profiles of the deceased outnumber the living


There’s been chatter — and even an overly hyped study — predicting the eventual demise of Facebook.

But what about the actual death of Facebook users? What happens when a social media presence lives beyond the grave? Where does the data go?

The folks over at WebpageFX looked into what they called “digital demise,” and made a handy infographic to fully explain what happens to your Web presence when you’ve passed.

It was estimated that 30 million Facebook users died in the first eight years of the social media site’s existence, according to the Huffington Post. Facebook even has settings to memorialize a deceased user’s page.

Facebook isn’t the only site with policies in place to handle a user’s passing. Pinterest, Google, LinkedIn and Twitter all handle death and data differently. For instance, to deactivate a Facebook profile you must provide proof that you are an immediate family member; for Twitter, however, you must produce the death certificate and your identification. All of the sites pinpointed by WebpageFX stated that your data belongs to you — some with legal or family exceptions.

Social media sites are, in general, a young Internet phenomenon — Facebook only turned 10 this year. So are a majority of their users. (And according to Mashable, Facebook still has a large number of teen adopters.) Currently, profiles of the living far outweigh those of the dead.



However, according to calculations done by XKCD, that will not always be the case. It presented two hypothetical scenarios. If Facebook loses its “cool” and market share, dead users will outnumber the living in 2065. If Facebook keeps up its growth, the site won’t be a digital graveyard until the mid-2100s.
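
For a sense of how a projection like that works, here is a toy cohort model in Python. Every parameter in it (the starting population, signup volume, death rate and its upward drift) is invented for illustration; XKCD’s actual estimate rests on real age demographics and actuarial tables, so the years this prints are not the comic’s numbers.

    # Toy model: when do cumulative dead profiles outnumber living users?
    def crossover_year(start_year=2014, living=1.3e9, signups=1.5e8,
                       signup_growth=1.0, death_rate=0.005, horizon=2300):
        dead = 0.0
        for year in range(start_year, horizon):
            deaths = living * death_rate
            living += signups - deaths
            dead += deaths
            signups *= signup_growth                   # >1 grows, <1 shrinks
            death_rate = min(death_rate * 1.02, 0.03)  # user base slowly ages
            if dead > living:
                return year
        return None  # no crossover before the horizon

    # A shrinking site reaches the crossover decades sooner...
    print(crossover_year(signup_growth=0.90))
    # ...than one whose signups keep growing.
    print(crossover_year(signup_growth=1.02))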

Check out the fascinating infographic here.

h/t Mashable

http://www.salon.com/2014/06/24/web_of_the_dead_when_facebook_profiles_of_the_deceased_outnumber_the_living/?source=newsletter

Facebook is giving folks more control over which ads they’ll get, while also plowing deeper into user data

Facebook’s faux transparency: The company is rolling out a new ad plan while digging deeper into user data

 


Today you may have received a notification from Facebook that said: “We’re improving ads based on apps and sites you use, and giving you control. Learn more.” Clicking on that notification probably brought you to a short explainer video.

Facebook has long been sharing user info with advertisers based on what you might “like” on Facebook, list as an interest or click on in your news feed, according to The Verge. And now, they’re actually notifying users about the process.

It is part of a set of new Facebook advertising features. Next week, users will be able to click a drop-down menu on a particular advertisement and see why they are being targeted with that particular ad. Users will also be able to view their entire “ad preferences” and make alterations to them. Seems pretty great, right? After all, Facebook did say they are “giving you control.”

Well, yes and no. The New York Times explains:

“Facebook’s move also comes as the Federal Trade Commission and the White House have called on Congress to pass legislation that would better protect consumers’ private data, including requiring companies to give people more control over the digital files collected on them.

“It is unclear how privacy advocates and public officials will react to Facebook’s efforts to provide more clarity about how its ads work. The F.T.C., which was briefed on the company’s intentions, had no immediate comment. Users will start seeing the changes within the next few weeks.

“Although Facebook will now give its users a way to modify the customer profiles that drive the ads they see, users can’t completely get rid of ads. If people were to delete everything Facebook had collected about them, they would simply see generic pitches. Nor is it clear what level of detail a user can control.”



At the end of the day, our “control” really just helps the company learn more about us, and target us with more specific ads. You know, because you really wanted Facebook to help you buy a new TV, or suggest a new brand. The move is disingenuous and creepy. And while there is the possibility of changing ads so they don’t fit who you are, you are still being bombarded with ads — for stuff you don’t even like. Basically, users are feeding this tech giant more information so it can make money, under the guise of user control.

To top it all off, Facebook also announced that it is going to start using more than just “likes” and other Facebook activity: it is also going to dig into Web browser and smartphone data to help target ads. According to The Verge, Facebook has had this data for a while, but was mainly using it for security purposes.

Users can opt out of this data sharing, but they’ll have to visit the Digital Advertising Alliance on their computers, and adjust settings on their smartphones.

h/t The Verge, The New York Times

http://www.salon.com/2014/06/12/facebooks_faux_transparency_the_company_is_rolling_out_a_new_ad_plan_while_digging_deeper_into_user_data/?source=newsletter