On Monday, a senior Facebook executive repented some more, reporting that $100,000 from Russian-sponsored troll farms bought 4.4 million page views before the 2016 election. “We understand more about how our service was abused and we will continue to investigate to learn all we can,” said vice president Elliot Schrage.
The Facebook leadership, like the U.S. government and the rest of us, is belatedly facing up to what Zuckerberg once denied: the social harms that can be inflicted by digital platform monopolies. The contrition and the voluntary remedies, notes Quartz, are “designed to head off looming regulations.”
What Is To Be Done
Facebook came to dominate social media with an ingenious interface that enables users to escape the Wild West of the open internet and join a sentimental community of family and friends, knitted together by likes, links, timelines, photos and videos.
Along the way, the company employed a scalable and amoral business model: use algorithms applied to people’s personal data to mix the messages of “promoted posts” with family news and friendly mementos. It’s an automated system that is profitable because it requires relatively little human intervention and can be used by anyone who wants to influence the behavior of Facebook users.
When the Russian government wanted to use the platform to confuse and demoralize Democratic voters and promote favorite son Donald Trump, Facebook was ready, willing and able to monetize the opportunity. As sociologist Zeynep Tufekci has explained, “Facebook’s Ad Scandal Isn’t a ‘Fail,’ It’s a Feature.”
The question is, what can government and civil society do to protect the public interest from a $300 billion monopoly with 2 billion users? “Facebook is so gargantuan,” says Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia, “it’s exceeded our capability to manage it.”
One tool is traditional antitrust laws, created in the late 19th century and early 20th century to control railroads, the oil industry and electrical utilities. The reformers, in the Progressive era and the New Deal, passed legislation like the Sherman Anti-Trust Act and the Glass-Steagall Act to prevent and break up concentrations of economic power.
The problem is that since the 1970s, antitrust law has been interpreted through the lens of University of Chicago “free-market” economics. In this view, the test of a monopoly is the short-term harm it does to consumers; i.e., does it raise prices?
If a monopoly doesn’t raise prices, the Chicago School claims, it’s not doing any harm. As a result, most of the legal precedents in antitrust law, developed over the last 40 years, are ideologically hostile to the notion of a “public interest.”
As Frank Pasquale, a law professor at the University of Maryland, said, “We need to have institutions that guarantee algorithmic accountability.”
1. FCC regulation
Jeff John Roberts of Fortune compares Facebook to the highly regulated TV broadcast networks: “At a time when Facebook has become the equivalent of a single TV channel showing a slew of violence and propaganda, the time may have come to treat Facebook as the broadcaster it is.”
In the immediate aftermath of the Las Vegas shooting, a Facebook search yielded a page created by a chronic hoaxer who calls himself an investigative journalist for Alex Jones’ Infowars. “To Facebook’s algorithms, it’s just a fast-growing group with an engaged community,” notes Alexis Madrigal of the Atlantic.
“Just imagine if CBS inadvertently sold secret political ads to the Chinese or broadcast a gang rape—the FCC, which punished the network over a Super Bowl nipple incident, would come down like a ton of bricks.”
This would require rewriting the Federal Communications Act to include platform monopolies. Not impossible, but not likely, and probably not the right regulatory regime for diminishing Facebook’s monopoly power over information.
2. FEC regulation
Last week, Democrats in the House and Senate sent a letter to the Federal Election Commission urging it to “develop new guidance” on how to prevent illicit foreign spending in U.S. elections. The letter was signed by all of the possible 2020 Democratic presidential aspirants in the Senate, including Elizabeth Warren (Mass.), Sherrod Brown (Ohio), Cory Booker (N.J.), and Kamala Harris (Calif.).
Another Democratic proposal floated in Congress would require digital platforms with more than 1 million users to publicly log any “electioneering communications” purchased by anyone who spends more than $10,000 in political ads online. The FEC defines electioneering communications as ads “that refer to a federal candidate, are targeted to voters and appear within 30 days of a primary or 60 days of a general election.”
But such measures probably would not have prevented—or called attention to—the Russian intervention in 2016, because the Russian-sponsored ads usually played on social divisions without referencing a federal candidate, and buyers could have evaded the reporting requirement with smaller payments.
Such measures address the symptoms of Facebook’s dominance, not the causes.
3. Empower Users
Luigi Zingales and Guy Rolnik, professors at the University of Chicago Booth School of Business, have a market solution: empower Facebook users to take their friends and their “likes” elsewhere. They propose giving Facebook users something they do not now possess: “ownership of all the digital connections” that they create, or a “social graph.”
Right now Facebook owns your social graph, but that is not inevitable.
“If we owned our own social graph, we could sign into a Facebook competitor — call it MyBook — and, through that network, instantly reroute all our Facebook friends’ messages to MyBook, as we reroute a phone call.”
The idea is to foster the emergence of new social networks and diminish the power of Facebook’s monopoly.
Such a reform alone isn’t going to undermine Facebook. In conjunction with other measures to create competition, it could be helpful.
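The Zingales-Rolnik idea can be pictured as simple data portability. The sketch below is purely illustrative: the JSON layout, the function names and “MyBook” are invented for this example and do not correspond to any real Facebook or competitor API.

```python
import json

# Hypothetical illustration of a user-owned "social graph": a portable
# document of connections that any competing network could import.

def export_social_graph(user_id, friends, likes):
    """Serialize a user's connections into a portable JSON document."""
    return json.dumps({
        "owner": user_id,
        "friends": sorted(friends),  # stable ordering keeps exports comparable
        "likes": sorted(likes),
    })

def import_social_graph(document):
    """A competitor ("MyBook" in the article's example) rebuilds the graph."""
    data = json.loads(document)
    return {
        "owner": data["owner"],
        "friends": set(data["friends"]),
        "likes": set(data["likes"]),
    }

exported = export_social_graph("alice", {"bob", "carol"}, {"cycling"})
graph = import_social_graph(exported)  # the user's connections, rerouted
```

The point of the sketch is that nothing technical prevents such portability; what prevents it today is that the incumbent, not the user, owns the export.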
4. Make data expire
Last January, opponents of President Trump organized the Women’s March on Facebook, and several million people participated. Web developer Maciej Ceglowski points out what that means for the participants:
“The list of those who RSVP’d is now stored on Facebook servers and will be until the end of time, or until Facebook goes bankrupt, or gets hacked, or bought by a hedge fund, or some rogue sysadmin decides that list needs to be made public.”
To ensure privacy and protect dissent, Ceglowski says, “There should be a user-configurable time horizon after which messages and membership lists in these places evaporate.”
Again, this is a small but worthwhile step. If Facebook won’t implement it voluntarily, it could be compelled to do so.
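Ceglowski’s “time horizon” proposal amounts to attaching a user-chosen expiry to every message and membership record. A minimal sketch, with an invented store and field names, might look like this:

```python
import time

# A toy message store in which every item carries a user-configurable
# time horizon and simply evaporates after it passes. All names here
# are hypothetical illustrations, not a real platform API.

class ExpiringStore:
    def __init__(self):
        self._items = []  # list of (expires_at, payload) pairs

    def post(self, payload, ttl_seconds, now=None):
        """Store a message that lives for ttl_seconds from now."""
        now = time.time() if now is None else now
        self._items.append((now + ttl_seconds, payload))

    def read(self, now=None):
        """Return only messages whose horizon has not yet passed,
        and permanently drop the rest."""
        now = time.time() if now is None else now
        self._items = [(t, p) for (t, p) in self._items if t > now]
        return [p for (_, p) in self._items]

store = ExpiringStore()
store.post("march RSVP", ttl_seconds=60, now=1000.0)
store.post("old notice", ttl_seconds=5, now=1000.0)
visible = store.read(now=1010.0)  # ten seconds later, only the RSVP survives
```

The design choice worth noticing is that expiry happens in the storage layer itself, so an expired RSVP list cannot later be leaked, subpoenaed or sold.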
5. Break up Facebook
But Ceglowski has a more audacious idea: break up Facebook into different companies for social interaction and news consumption.
The problem, he said in an April 2017 talk, is the algorithms Facebook deploys to maximize engagement and thus ad revenue.
“The algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.”
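The dynamic Ceglowski describes is a generic property of engagement-maximizing feedback loops. The toy simulation below (engagement rates and names are invented for illustration) shows how a selector that only ever observes clicks drifts toward provocative content without anyone programming that preference in:

```python
import random

# Toy sketch: a greedy selector that serves whichever content style has the
# best observed engagement so far. The rates below are hypothetical.
random.seed(0)
ENGAGEMENT = {"provocative": 0.30, "measured": 0.10}  # simulated click rates

def feed_selector(history):
    """Pick the style with the highest observed click-through rate;
    unseen styles get an optimistic 1.0 so each is tried at least once."""
    rates = {
        style: (sum(clicks) / len(clicks)) if clicks else 1.0
        for style, clicks in history.items()
    }
    return max(rates, key=rates.get)

history = {"provocative": [], "measured": []}
for _ in range(1000):
    style = feed_selector(history)
    clicked = random.random() < ENGAGEMENT[style]
    history[style].append(1 if clicked else 0)

shown = {style: len(clicks) for style, clicks in history.items()}
# The loop ends up serving provocative content almost exclusively, though
# "provoke the users" appears nowhere in the code: it is an emergent
# consequence of optimizing for engagement.
```

The objective function, not any editorial intent, does the provoking.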
When a monopoly controls the algorithms of engagement, commercial power is converted into political power.
“Decisions like what is promoted to the top of a news feed can swing elections. Small changes in UI can drive big changes in user behavior. There are no democratic checks or controls on this power, and the people who exercise it are trying to pretend it doesn’t exist.”
So government has to step in, he says.
“Just like banks have a regulatory ‘Chinese wall’ between investment and brokerage, and newspapers have a wall between news and editorial, there must be a separation between social network features and news delivery.”
Most important is political imagination. The ascendancy of free-market thinking since the heyday of Ronald Reagan and Margaret Thatcher has transformed citizens into consumers and failed civil society in the process. The rise of income inequality is one result. The emergence of unaccountable platform monopolies is another.
Facebook, the website, is the creation of Zuckerberg and clever programmers. But their enormous power is the result of a selfish and short-sighted ideology that privatizes public space at the expense of most people.
With the Democrats incorporating anti-monopoly ideas into their “Better Deal” platform and right-wing nationalists such as Steve Bannon talking about regulating internet giants “like utilities,” the free-market ideology has lost credibility and there is a growing demand for action. As the Roosevelt Institute puts it, “Let’s Reimagine the Rules.”
Reining in Facebook is urgent because, if the public does not control its surveillance and engagement technologies, those techniques will be used to secretly manipulate, if not control, the public sphere, as they were in the 2016 election.
“Either we work with government to regulate algorithmic systems,” says Pasquale of the University of Maryland, “or we will see partnerships with governments and those running algorithmic systems to regulate and control us.”
Controlling Facebook, in other words, is a matter of self-protection.
The meeting of the United Nations General Assembly in New York is taking place under the shadow of the accelerating drive of the major powers, spearheaded by the United States, toward World War III. This found its most noxious expression in the fascistic speech delivered to the assembly on Tuesday by Donald Trump, in which the US president threatened to “destroy North Korea” and attack Iran and Venezuela.
Trump devoted a significant portion of his tirade to a denunciation of socialism and communism, reflecting the fear within the US ruling elite of the growth of social opposition and rise of anti-capitalist and socialist sentiment in the working class.
Another major focus of the assembly is the mounting campaign of the US and European governments to crack down on the exchange of information and views on the Internet. British Prime Minister Theresa May, French President Emmanuel Macron and Italian Prime Minister Paolo Gentiloni all used the pretext of fighting terrorism and “fake news” to call for more drastic measures by the major technology firms to censor the Internet, which Gentiloni called a “battlefield for hearts and minds.”
This attack on free speech is a central part of the response of the crisis-ridden capitalist ruling elites to the growth of global geo-political tensions and economic instability, and the political radicalization of broad masses of workers and youth.
In the US, the drive for Internet censorship has been spearheaded by the so-called “liberal” wing of the political establishment, concentrated in the Democratic Party, whose chief media organ is the New York Times. On the eve of the UN assembly, the Times published an unambiguous brief for censorship of the Internet in the form of an op-ed column by the ambassador to the UN under Barack Obama, Samantha Power.
Under the headline “Why Foreign Propaganda Is More Dangerous Now,” and on the pretext of combating Russian disinformation and subversion, Power calls for the use of “professional gatekeepers” to police public discourse on the Internet.
Power, a leading proponent of “human rights” imperialism, looks back nostalgically at the Cold War as a golden age of news dissemination, when “most Americans received their news and information via mediated platforms.” She continues: “Reporters and editors serving in the role of professional gatekeepers had almost full control over what appeared in the media. A foreign adversary seeking to reach American audiences did not have great options for bypassing these umpires, and Russian disinformation rarely penetrated.”
It is worth considering who is writing these lines. First as a key policy advisor to Obama, then as Washington’s representative to the United Nations, Power was a leading architect of the disastrous US-led destabilization operation in Libya that shattered that country’s society. She is a key propagandist of the American-instigated civil war in Syria, which has led to hundreds of thousands of deaths and the greatest refugee crisis since the Second World War.
Power longs for the time when, as was the case during the Korean War and the earlier part of the Vietnam War, the monopoly of the major broadcasters over public discourse could be used to keep the criminal policies of US imperialism under wraps.
She is bitter and resentful over the fact that, despite the best efforts of the corporate-controlled media to sell US operations in the Middle East to the public as anti-terrorist and humanitarian efforts, organizations such as Wikileaks and journalists such as Seymour Hersh have exposed the fact that the United States has cultivated alliances with forces linked to Al Qaeda and ISIS to pursue regime-change in Libya and Syria, totally undercutting the narrative of the “war on terror” that has been used to justify US imperialist policy since 2001.
If Power had her way, Chelsea Manning’s exposure of the murder of journalists and Iraqi civilians by the US military and Edward Snowden’s exposure of illegal dragnet surveillance by the NSA would be branded as “fake news” and blocked by technology giants such as Google, Apple and Facebook.
In her Times column, she mourns the passing of the overarching—and thoroughly repressive—anti-communist ideological framework of the Cold War period, writing: “During the Cold War, the larger struggle against communism created a mainstream consensus about what America stood for and against. Today, our society appears to be defined by a particularly vicious form of ‘partyism’ affecting Democrats and Republicans alike.”
Power presents the rise of the Internet, and consequent weakening of control over the flow of information and opinion by state-sanctioned and allied corporate media outlets such as the Times, as an altogether dangerous and negative development. Under conditions where the establishment media is increasingly discredited—“60 percent believe news stories today are ‘often inaccurate,’ according to Gallup”—Power notes, the fact that “two thirds of Americans are getting at least some of their news through social media” is a matter of the gravest concern.
The “growing reliance on new media—and the absence of real umpires,” she writes, have opened up the US to disinformation and subversion at the hands of a demonic Russia, with its all-powerful media outlets RT and Sputnik, and its “trolls, bots and thousands of fake Twitter and Facebook accounts that amplified damaging stories on Hillary Clinton.”
Here, the hysterical, neo-McCarthyite campaign against Russia, which the intelligence agencies, the Democratic Party and their media allies have used to whip up a war fever and pressure Trump into a more bellicose posture toward Moscow, converges with a growing attack on public access to anti-war, progressive and socialist web sites.
Power’s demands for state-sponsored censorship have already been put into practice by Internet giant Google. In the name of combating “fake news” and promoting “authoritative content” over “alternative viewpoints,” Google has implemented changes to its search engine that have slashed traffic to leading left-wing and alternative news web sites by 55 percent. The central target of this attack is the World Socialist Web Site, whose Google referrals have fallen by 75 percent.
By “gatekeepers,” Power means the thoroughly vetted and subservient editorial boards of newspapers such as the Times, which dutifully hide from the American people whatever the CIA and State Department do not want them to know, while dispensing state lies and propaganda in the guise of “news.”
In 2010, then-New York Times Executive Editor Bill Keller spelled out the policy of such “mediated” news outlets with unusual bluntness when he declared that “transparency is not an absolute good.” He added, “Freedom of the press includes freedom not to publish, and that is a freedom we exercise with some regularity.”
More than a quarter century after the dissolution of the Soviet Union, all factions of the US ruling elite are haunted by the realization that socialist politics are, as Hillary Clinton put it in her recently published book, tapping “into powerful emotional currents” within the population. The fact that in the 2016 Democratic primaries, 12 million Americans, mostly young people and workers, voted for a candidate, Bernie Sanders, who called himself a socialist, shocked and unnerved the ruling class.
Unable to advance any policies to address the social grievances of working people or turn away from its foreign agenda of militarism and war, the ruling elite responds to the growth of opposition by recourse to police methods. The escalating corporate-state attack on freedom of speech on the Internet makes all the more urgent the campaign of the World Socialist Web Site against Google censorship. We call on all of our readers and supporters to sign our petition demanding an end to the censorship, send statements of support for our campaign, and actively work to distribute WSWS articles as widely as possible via Facebook and other social media outlets.
In late June, Mark Zuckerberg announced the new mission of Facebook: “To give people the power to build community and bring the world closer together.”
The rhetoric of the statement is carefully selected, centered on empowering people, and in so doing, ushering in world peace, or at least something like it. Tech giants across Silicon Valley are adopting similarly utopian visions, casting themselves as the purveyors of a more connected, more enlightened, more empowered future. Every year, these companies articulate their visions onstage at internationally streamed pep rallies, Apple’s WWDC and Google’s I/O being the best known.
But companies like Facebook can only “give people the power” because we first ceded it to them, in the form of our attention. After all, that is how many Silicon Valley companies thrive: Our attention, in the form of eyes and ears, provides a medium for them to advertise to us. And the more time we spend staring at them, the more money Facebook and Twitter make — in effect, it’s in their interest that we become psychologically dependent on the self-esteem boost from being wired in all the time.
This quest for our eyeballs doesn’t mesh well with Silicon Valley’s utopian visions of world peace and people power. Earlier this year, many sounded alarm bells when a “60 Minutes” exposé revealed the creepy cottage industry of “brain-hacking,” industrial psychology techniques that tech giants use and study to make us spend as much time staring at screens as possible.
Indeed, it is Silicon Valley’s continual quest for attention that both motivates its utopian dreams and compromises them from the start. As a result, the tech industry often operates with compromised ethics when it comes to product design.
Case in point: At January’s Consumer Electronics Show – a sort of Mecca for tech start-ups dreaming of making it big – I found myself in a suite with one of the largest kid-tech (children’s toys) developers in the world. A small flock of PR reps, engineers and executives hovered around the entryway as one development head walked my photographer and me through the mock setup. They were showing off the first voice assistant developed solely with kids in mind.
At the end of the tour, I asked if the company had researched or planned to research the effects of voice assistant usage on kids. After all, parents had been using tablets to occupy their kids for years by the time evidence of their less-than-ideal impact on children’s attention, behavior and sleep emerged.
The answer I received was gentle but firm: No, because we respect parents’ right to make decisions on behalf of their children.
This free-market logic – that says the consumer alone arbitrates the value of a product – is pervasive in Silicon Valley. What consumer, after all, is going to argue they can’t make their own decisions responsibly? But a free market only functions properly when consumers operate with full agency and access to information, and tech companies are working hard to limit both.
During a “60 Minutes” story on brain hacking, former product manager at Google Tristan Harris said, “There’s always this narrative that technology’s neutral. And it’s up to us to choose how we use it.”
The problem, according to Harris, is that “this is just not true… [Developers] want you to use it in particular ways and for long periods of time. Because that’s how they make their money.”
Harris was homing in on the fact that, increasingly, it isn’t the price tag on the platform itself that earns companies money, but the attention they control on said platform – whether it’s a voice assistant, operating system, app or website. We literally “pay” attention to ads or sponsored content in order to access websites.
But Harris went on to explain that larger platforms, using systems of rewards similar to slot machines, are working not only to monetize our attention, but also to monopolize it. And with that monopoly comes incredible power.
If Facebook, for instance, can control hours of people’s attention daily, it can not only determine the rate at which it will sell that attention to advertisers, but also decide which advertisers or content creators it will sell to. In other words, in an attention economy Facebook becomes a gatekeeper for content – one that mediates not only personalized advertising, but also news and information.
This sort of monopoly brings the expected fiscal payoff, and also the amassing of immeasurable social and cultural power.
So how does Facebook’s new mission statement fit into this attention economy?
Think of it in terms of optics. The carotid artery of Facebook, along with the other tech giants of Silicon Valley, is brand. Brand ubiquity means Facebook is the first thing people check when they take their phones out of their pockets, or when they open Chrome or Safari (brought to you by Google and Apple, respectively). It means Prime Day is treated like a real holiday. Just like Kleenex means tissues and Xerox means copy, online search has literally become synonymous with Google.
Yet all these companies are painfully aware of what a brand-gone-bad can do – or undo. The current generation of online platforms is built on the foundations of empires that rose and fell while the attention economy was still incipient. Today’s companies have maintained their centrality by consistently copying (Instagram Stories, a clone of Snapchat) or outright purchasing (YouTube) their fiercest competitors – all to maintain or expand their brand.
And perhaps as important, tech giants have made it near impossible to imagine a future without them, simply by being the most prominent public entities doing such imagining.
Facebook’s mission affixes the company in our shared future, and also injects it with a moral or at least charitable sensibility – even if it’s only in the form of “bring[ing] the world closer together”-type vagaries.
So how should we as average consumers respond?
In his award-winning essay “Stand Out of Our Light: Freedom and Persuasion in the Attention Economy,” James Williams argues, “We must … move urgently to assert and defend our freedom of attention.”
To assert our freedom is to recognize and honestly evaluate the demands on our attention that all these devices and digital services represent. To defend it entails two forms of action. The first is individual: not unplugging completely, as the self-styled prophets of Facebook and Twitter encourage (before logging back on after a few months of asceticism), but rather unplugging partially, habitually and ruthlessly.
Attention is the currency upon which tech giants are built. And the power of agency and free information is the power we cede when we turn over our attention wholly to platforms like Facebook.
But individual consumers can only do so much. The second way we must defend our freedom is through our demand for ethical practices from Silicon Valley.
Some critics believe government regulation is the only way to rein in Silicon Valley developers. The problem is, federal agencies that closely monitor the effects of product usage on consumers don’t have a good category for monitoring the effects of online platforms yet. The Food and Drug Administration (FDA) tracks medical technology. The Consumer Product Safety Commission (CPSC) focuses on physical risk to consumers. The Federal Communications Commission (FCC) focuses on content — not platform. In other words, we don’t have a precedent for monitoring social media or other online platforms and their methods for retaining users.
Currently, there is no corollary agency that leads dedicated research into the effects of platforms like Facebook on users. There is no Surgeon General’s warning. There is no real protection for consumers from unethical practices by tech giants — as long as those practices fall in the cracks between existing ethics standards.
While it might seem idealistic to hold out for the creation of a new government agency that monitors Facebook (especially given the current political regime), the first step toward curbing Silicon Valley’s power is simple: We must acknowledge freedom of attention as an inalienable right — one inextricable from our freedom to pursue happiness. So long as the companies producing the hardware surrounding us and the platforms orienting social life online face no strictures, they will actively work to control how users think, slowly eroding our society’s collective free will.
With so much at stake, and with so little governmental infrastructure in place, checking tech giants’ ethics might seem like a daunting task. The U.S. government, after all, has demonstrated a consistent aversion to challenging Silicon Valley’s business and consumer-facing practices before.
But while we fight for better policy and stronger ethics-enforcing bodies, we can take one more practical step: “pay” attention to ethics in Silicon Valley. Read about Uber’s legal battles and the most recent research on social media’s effects on the brain. Demand more ethical practices from the companies we patronize. Why? The best moderators of technology ethics thus far have been tech giants themselves — when such moderation benefits the companies’ brands.
In Silicon Valley, money talks, but attention talks louder. It’s time to reclaim our voice.
On March 2, a disturbing report hit the desks of U.S. counterintelligence officials in Washington. For months, American spy hunters had scrambled to uncover details of Russia’s influence operation against the 2016 presidential election. In offices in both D.C. and suburban Virginia, they had created massive wall charts to track the different players in Russia’s multipronged scheme. But the report in early March was something new.
It described how Russia had already moved on from the rudimentary email hacks against politicians it had used in 2016. Now the Russians were running a more sophisticated hack on Twitter. The report said the Russians had sent expertly tailored messages carrying malware to more than 10,000 Twitter users in the Defense Department. Depending on the interests of the targets, the messages offered links to stories on recent sporting events or the Oscars, which had taken place the previous weekend. When clicked, the links took users to a Russian-controlled server that downloaded a program allowing Moscow’s hackers to take control of the victim’s phone or computer–and Twitter account.
As they scrambled to contain the damage from the hack and regain control of any compromised devices, the spy hunters realized they faced a new kind of threat. In 2016, Russia had used thousands of covert human agents and robot computer programs to spread disinformation referencing the stolen campaign emails of Hillary Clinton, amplifying their effect. Now counterintelligence officials wondered: What chaos could Moscow unleash with thousands of Twitter handles that spoke in real time with the authority of the armed forces of the United States? At any given moment, perhaps during a natural disaster or a terrorist attack, Pentagon Twitter accounts might send out false information. As each tweet corroborated another, and covert Russian agents amplified the messages even further afield, the result could be panic and confusion.
For many Americans, Russian hacking remains a story about the 2016 election. But there is another story taking shape. Marrying a hundred years of expertise in influence operations to the new world of social media, Russia may finally have gained the ability it long sought but never fully achieved in the Cold War: to alter the course of events in the U.S. by manipulating public opinion. The vast openness and anonymity of social media has cleared a dangerous new route for antidemocratic forces. “Using these technologies, it is possible to undermine democratic government, and it’s becoming easier every day,” says Rand Waltzman of the Rand Corp., who ran a major Pentagon research program to understand the propaganda threats posed by social media technology.
Current and former officials at the FBI, at the CIA and in Congress now believe the 2016 Russian operation was just the most visible battle in an ongoing information war against global democracy. And they’ve become more vocal about their concern. “If there has ever been a clarion call for vigilance and action against a threat to the very foundation of our democratic political system, this episode is it,” former Director of National Intelligence James Clapper testified before Congress on May 8.
If that sounds alarming, it helps to understand the battlescape of this new information war. As they tweet and like and upvote their way through social media, Americans generate a vast trove of data on what they think and how they respond to ideas and arguments–literally thousands of expressions of belief every second on Twitter, Facebook, Reddit and Google. All of those digitized convictions are collected and stored, and much of that data is available commercially to anyone with sufficient computing power to take advantage of it.
That’s where the algorithms come in. American researchers have found they can use mathematical formulas to segment huge populations into thousands of subgroups according to defining characteristics like religion and political beliefs or taste in TV shows and music. Other algorithms can determine those groups’ hot-button issues and identify “followers” among them, pinpointing those most susceptible to suggestion. Propagandists can then manually craft messages to influence them, deploying covert provocateurs, either humans or automated computer programs known as bots, in hopes of altering their behavior.
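The segmentation step described above can be sketched in a few lines. Everything here is a hypothetical miniature: real influence operations work from vastly richer commercial data and statistical models, but the core move of partitioning a population by defining characteristics and surfacing each subgroup’s hot-button topic looks like this:

```python
from collections import Counter, defaultdict

# Invented toy records standing in for the commercially available data
# the article describes; the attributes and topics are illustrative only.
people = [
    {"id": 1, "religion": "A", "politics": "left",  "topics": ["healthcare", "jobs"]},
    {"id": 2, "religion": "A", "politics": "left",  "topics": ["healthcare"]},
    {"id": 3, "religion": "B", "politics": "right", "topics": ["immigration"]},
    {"id": 4, "religion": "B", "politics": "right", "topics": ["immigration", "jobs"]},
]

def segment(population, keys):
    """Partition the population by a tuple of defining characteristics."""
    groups = defaultdict(list)
    for person in population:
        groups[tuple(person[k] for k in keys)].append(person)
    return groups

def hot_button(group):
    """The topic mentioned most often within one subgroup."""
    counts = Counter(topic for person in group for topic in person["topics"])
    return counts.most_common(1)[0][0]

segments = segment(people, ["religion", "politics"])
issues = {seg: hot_button(members) for seg, members in segments.items()}
# Each segment now has an identified hot-button issue, to which tailored
# messages could then be targeted.
```

At scale, the same partition-and-rank logic, applied to thousands of attributes, is what lets a propagandist craft a different message for each subgroup.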
That is what Moscow is doing, more than a dozen senior intelligence officials and others investigating Russia’s influence operations tell TIME. The Russians “target you and see what you like, what you click on, and see if you’re sympathetic or not sympathetic,” says a senior intelligence official. Whether and how much they have actually been able to change Americans’ behavior is hard to say. But as they have investigated the Russian 2016 operation, intelligence and other officials have found that Moscow has developed sophisticated tactics.
In one case last year, senior intelligence officials tell TIME, a Russian soldier based in Ukraine successfully infiltrated a U.S. social media group by pretending to be a 42-year-old American housewife and weighing in on political debates with specially tailored messages. In another case, officials say, Russia created a fake Facebook account to spread stories on political issues like refugee resettlement to targeted reporters they believed were susceptible to influence.
As Russia expands its cyberpropaganda efforts, the U.S. and its allies are only just beginning to figure out how to fight back. One problem: the fear of Russian influence operations can be more damaging than the operations themselves. Eager to appear more powerful than they are, the Russians would consider it a success if you questioned the truth of your news sources, knowing that Moscow might be lurking in your Facebook or Twitter feed. But figuring out if they are is hard. Uncovering “signals that indicate a particular handle is a state-sponsored account is really, really difficult,” says Jared Cohen, CEO of Jigsaw, a subsidiary of Google’s parent company, Alphabet, which tackles global security challenges.
Like many a good spy tale, the story of how the U.S. learned its democracy could be hacked started with loose lips. In May 2016, a Russian military intelligence officer bragged to a colleague that his organization, known as the GRU, was getting ready to pay Clinton back for what President Vladimir Putin believed was an influence operation she had run against him five years earlier as Secretary of State. The GRU, he said, was going to cause chaos in the upcoming U.S. election.
What the officer didn’t know, senior intelligence officials tell TIME, was that U.S. spies were listening. They wrote up the conversation and sent it back to analysts at headquarters, who turned it from raw intelligence into an official report and circulated it. But if the officer’s boast seems like a red flag now, at the time U.S. officials didn’t know what to make of it. “We didn’t really understand the context of it until much later,” says the senior intelligence official. Investigators now realize that the officer’s boast was the first indication U.S. spies had from their sources that Russia wasn’t just hacking email accounts to collect intelligence but was also considering interfering in the vote. Like much of America, many in the U.S. government hadn’t imagined the kind of influence operation that Russia was preparing to unleash on the 2016 election. Fewer still realized it had been five years in the making.
In 2011, protests in more than 70 cities across Russia had threatened Putin’s control of the Kremlin. The uprising was organized on social media by a popular blogger named Alexei Navalny, who used his blog as well as Twitter and Facebook to get crowds in the streets. Putin’s forces broke out their own social media technique to strike back. When bloggers tried to organize nationwide protests on Twitter using #Triumfalnaya, pro-Kremlin botnets bombarded the hashtag with anti-protester messages and nonsense tweets, making it impossible for Putin’s opponents to coalesce.
Putin publicly accused then Secretary of State Clinton of running a massive influence operation against his country, saying she had sent “a signal” to protesters and that the State Department had actively worked to fuel the protests. The State Department said it had just funded pro-democracy organizations. Former officials say any such operations–in Russia or elsewhere–would require a special intelligence finding by the President and that Barack Obama was not likely to have issued one.
After his re-election the following year, Putin dispatched his newly installed head of military intelligence, Igor Sergun, to begin repurposing cyberweapons previously used for psychological operations in war zones for use in electioneering. Russian intelligence agencies funded “troll farms,” botnet spamming operations and fake news outlets as part of an expanding focus on psychological operations in cyberspace.
It turns out Putin had outside help. One particularly talented Russian programmer who had worked with social media researchers in the U.S. for 10 years had returned to Moscow and brought with him a trove of algorithms that could be used in influence operations. He was promptly hired by those working for Russian intelligence services, senior intelligence officials tell TIME. “The engineer who built them the algorithms is U.S.-trained,” says the senior intelligence official.
Soon, Putin was aiming his new weapons at the U.S. Following Moscow’s April 2014 invasion of Ukraine, the U.S. considered sanctions that would block the export of drilling and fracking technologies to Russia, putting out of reach some $8.2 trillion in oil reserves that could not be tapped without U.S. technology. As they watched Moscow’s intelligence operations in the U.S., American spy hunters saw Russian agents applying their new social media tactics on key aides to members of Congress. Moscow’s agents broadcast material on social media and watched how targets responded in an attempt to find those who might support their cause, the senior intelligence official tells TIME. “The Russians started using it on the Hill with staffers,” the official says, “to see who is more susceptible to continue this program [and] to see who would be more favorable to what they want to do.”
On Aug. 7, 2016, the infamous pharmaceutical executive Martin Shkreli declared that Hillary Clinton had Parkinson’s. That story went viral in late August, then took on a life of its own after Clinton fainted from pneumonia and dehydration at a Sept. 11 event in New York City. Elsewhere people invented stories saying Pope Francis had endorsed Trump and Clinton had murdered a DNC staffer. Just before Election Day, a story took off alleging that Clinton and her aides ran a pedophile ring in the basement of a D.C. pizza parlor.
Congressional investigators are looking at how Russia helped stories like these spread to specific audiences. Counterintelligence officials, meanwhile, have picked up evidence that Russia tried to target particular influencers during the election season who they reasoned would help spread the damaging stories. These officials have seen evidence of Russia using its algorithmic techniques to target the social media accounts of particular reporters, senior intelligence officials tell TIME. “It’s not necessarily the journal or the newspaper or the TV show,” says the senior intelligence official. “It’s the specific reporter that they find who might be a little bit slanted toward believing things, and they’ll hit him” with a flood of fake news stories.
Russia plays in every social media space. The intelligence officials have found that Moscow’s agents bought ads on Facebook to target specific populations with propaganda. “They buy the ads, where it says sponsored by–they do that just as much as anybody else does,” says the senior intelligence official. (A Facebook official says the company has no evidence of that occurring.) The ranking Democrat on the Senate Intelligence Committee, Mark Warner of Virginia, has said he is looking into why, for example, four of the top five Google search results the day the U.S. released a report on the 2016 operation were links to Russia’s TV propaganda arm, RT. (Google says it saw no meddling in this case.) Researchers at the University of Southern California, meanwhile, found that nearly 20% of political tweets posted between Sept. 16 and Oct. 21, 2016, were generated by bots of unknown origin; investigators are trying to figure out how many were Russian.
As they dig into the viralizing of such stories, congressional investigations are probing not just Russia’s role but whether Moscow had help from the Trump campaign. Sources familiar with the investigations say they are probing two Trump-linked organizations: Cambridge Analytica, a data-analytics company hired by the campaign that is partly owned by deep-pocketed Trump backer Robert Mercer; and Breitbart News, the right-wing website formerly run by Trump’s top political adviser Stephen Bannon.
The congressional investigators are looking at ties between those companies and right-wing web personalities based in Eastern Europe who the U.S. believes are Russian fronts, a source familiar with the investigations tells TIME. “Nobody can prove it yet,” the source says. In March, McClatchy newspapers reported that FBI counterintelligence investigators were probing whether far-right sites like Breitbart News and Infowars had coordinated with Russian botnets to blitz social media with anti-Clinton stories, mixing fact and fiction when Trump was doing poorly in the campaign.
There are plenty of people who are skeptical that any such conspiracy existed. Cambridge Analytica touts its ability to use algorithms to microtarget voters, but veteran political operatives have found its methods ineffective. Ted Cruz was the first to use them, during the primary, and his staff concluded they had wasted their money. Mercer, Bannon, Breitbart News and the White House did not answer questions about the congressional probes. A spokesperson for Cambridge Analytica says the company has no ties to Russia or individuals acting as fronts for Moscow and that it is unaware of the probe.
Democratic operatives searching for explanations for Clinton’s loss after the election investigated social media trends in the three states that tipped the vote for Trump: Michigan, Wisconsin and Pennsylvania. In each they found what they believe is evidence that key swing voters were being drawn to fake news stories and anti-Clinton stories online. Google searches for the fake pedophilia story circulating under the hashtag #pizzagate, for example, were disproportionately higher in swing districts and not in districts likely to vote for Trump.
The Democratic operatives created a package of background materials on what they had found, suggesting the search behavior might indicate that someone had successfully altered voter behavior in key districts in key states. They circulated it to fellow party members who are up for a vote in 2018.
Even as investigators try to piece together what happened in 2016, they are worrying about what comes next. Russia claims to be able to alter events using cyberpropaganda and is doing what it can to tout its power. In February 2016, a Putin adviser named Andrey Krutskikh compared Russia’s information-warfare strategies to the Soviet Union’s obtaining a nuclear weapon in the 1940s, David Ignatius of the Washington Post reported. “We are at the verge of having something in the information arena which will allow us to talk to the Americans as equals,” Krutskikh said.
But if Russia is clearly moving forward, it’s less clear how active the U.S. has been. Documents released by former National Security Agency contractor Edward Snowden and published by the Intercept suggested that the British were pursuing social media propaganda and had shared their tactics with the U.S. Chris Inglis, the former No. 2 at the National Security Agency, says the U.S. has not pursued this capability. “The Russians are 10 years ahead of us in being willing to make use of” social media to influence public opinion, he says.
There are signs that the U.S. may be playing in this field, however. From 2010 to 2012, the U.S. Agency for International Development established and ran a “Cuban Twitter” network designed to undermine communist control on the island. At the same time, according to the Associated Press, which discovered the program, the U.S. government hired a contractor to profile Cuban cell phone users, categorizing them as “pro-revolution,” “apolitical” or “antirevolutionary.”
Much of what is publicly known about the mechanics and techniques of social media propaganda comes from a program at the Defense Advanced Research Projects Agency (DARPA) that Rand Corp. researcher Rand Waltzman ran to study how propagandists might manipulate social media in the future. In the Cold War, operatives might distribute disinformation-laden newspapers to targeted political groups or insinuate an agent provocateur into a group of influential intellectuals. By harnessing computing power to segment and target literally millions of people in real time online, Waltzman concluded, you could potentially change behavior “on the scale of democratic governments.”
In the U.S., public scrutiny of such programs is usually enough to shut them down. In 2014, news articles appeared about the DARPA program and the “Cuban Twitter” project. It was only a year after Snowden had revealed widespread monitoring programs by the government. The DARPA program, already under a cloud, was allowed to expire quietly when its funding ran out in 2015.
In the wake of Russia’s 2016 election hack, the question is how to research social media propaganda without violating civil liberties. The need is all the more urgent because the technology continues to advance. While today humans are still required to tailor and distribute messages to specially targeted “susceptibles,” in the future crafting and transmitting emotionally powerful messages will be automated.
The U.S. government is constrained in what kind of research it can fund by various laws protecting citizens from domestic propaganda, government electioneering and intrusions on their privacy. Waltzman has started a group called the Information Professionals Association with several former information-operations officers from the U.S. military to develop defenses against social media influence operations.
Social media companies are beginning to realize that they need to take action. Facebook issued a report in April 2017 acknowledging that much disinformation had been spread on its pages and saying it had expanded its security. Google says it has seen no evidence of Russian manipulation of its search results but has updated its algorithms just in case. Twitter claims it has diminished cyberpropaganda by tweaking its algorithms to block cleverly designed bots. “Our algorithms currently work to detect when Twitter accounts are attempting to manipulate Twitter’s Trends through inorganic activity, and then automatically adjust,” the company said in a statement.
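Twitter does not disclose how its detection of "inorganic activity" works, but one widely discussed class of signals is the unnatural regularity of automated posting. The sketch below is a simplified, hypothetical heuristic of that kind, not Twitter's actual algorithm: it flags an account whose posting intervals are suspiciously uniform, measured by the coefficient of variation of the gaps between posts.

```python
from statistics import mean, pstdev

def looks_inorganic(timestamps, min_posts=5, cv_threshold=0.2):
    """
    Flag an account whose posting intervals are suspiciously regular.
    Automated accounts often post at near-constant intervals, so a low
    coefficient of variation (stdev / mean) of the gaps between posts
    is one crude signal. Timestamps are in seconds, sorted ascending.
    """
    if len(timestamps) < min_posts:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    if m == 0:
        return True  # many posts at the same instant
    return pstdev(gaps) / m < cv_threshold

bot_like = [0, 60, 120, 180, 240, 300]     # exactly one post per minute
human_like = [0, 45, 300, 310, 900, 2400]  # irregular bursts

print(looks_inorganic(bot_like))    # True
print(looks_inorganic(human_like))  # False
```

Real systems combine many such features — network structure, content similarity, client metadata — precisely because any single heuristic like this one is easy for "cleverly designed bots" to evade.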
In the meantime, America’s best option to protect upcoming votes may be to make it harder for Russia and other bad actors to hide their election-related information operations. When it comes to defeating Russian influence operations, the answer is “transparency, transparency, transparency,” says Rhode Island Democratic Senator Sheldon Whitehouse. He has written legislation that would curb the massive, anonymous campaign contributions known as dark money and the widespread use of shell corporations that he says make Russian cyberpropaganda harder to trace and expose.
But much damage has already been done. “The ultimate impact of [the 2016 Russian operation] is we’re never going to look at another election without wondering, you know, Is this happening, can we see it happening?” says Jigsaw’s Jared Cohen. By raising doubts about the validity of the 2016 vote and the vulnerability of future elections, Russia has achieved its most important objective: undermining the credibility of American democracy.
For now, investigators have added the names of specific trolls and botnets to their wall charts in the offices of intelligence and law-enforcement agencies. They say the best way to compete with the Russian model is by having a better message. “It requires critical thinkers and people who have a more powerful vision” than the cynical Russian view, says former NSA deputy Inglis. And what message is powerful enough to take on the firehose of falsehoods that Russia is deploying in targeted, effective ways across a range of new media? One good place to start: telling the truth.
–With reporting by PRATHEEK REBALA/WASHINGTON
Correction: The original version of this story misstated Jared Cohen’s title. He is CEO, not president.
The survey, published on Friday, concluded that Snapchat, Facebook and Twitter are also harmful. Among the five only YouTube was judged to have a positive impact.
The four platforms have a negative effect because they can exacerbate children’s and young people’s body image worries, and worsen bullying, sleep problems and feelings of anxiety, depression and loneliness, the participants said.
The findings follow growing concern among politicians, health bodies, doctors, charities and parents about young people suffering harm as a result of sexting, cyberbullying and social media reinforcing feelings of self-loathing and even the risk of them committing suicide.
“It’s interesting to see Instagram and Snapchat ranking as the worst for mental health and wellbeing. Both platforms are very image-focused and it appears that they may be driving feelings of inadequacy and anxiety in young people,” said Shirley Cramer, chief executive of the Royal Society for Public Health, which undertook the survey with the Young Health Movement.
She demanded tough measures “to make social media less of a wild west when it comes to young people’s mental health and wellbeing”. Social media firms should bring in a pop-up image to warn young people that they have been using it a lot, while Instagram and similar platforms should alert users when photographs of people have been digitally manipulated, Cramer said.
The 1,479 young people surveyed were asked to rate the impact of the five forms of social media on 14 different criteria of health and wellbeing, including their effect on sleep, anxiety, depression, loneliness, self-identity, bullying, body image and the fear of missing out.
Instagram emerged with the most negative score. It rated badly for seven of the 14 measures, particularly its impact on sleep, body image and fear of missing out – and also for bullying and feelings of anxiety, depression and loneliness. However, young people cited its upsides too, including self-expression, self-identity and emotional support.
YouTube scored very badly for its impact on sleep but positively in nine of the 14 categories, notably awareness and understanding of other people’s health experience, self-expression, loneliness, depression and emotional support.
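The survey's ranking method can be sketched in a few lines: each platform is rated on the 14 criteria and the ratings are netted into a single score. The platform names match the article, but the ratings below are invented placeholders (the survey's actual per-criterion figures are not reproduced here); they only mirror the reported pattern of YouTube positive in 9 of 14 categories and Instagram negative in 7.

```python
# Invented ratings per criterion: +1 positive impact, -1 negative, 0 neutral.
# Only the counts echo the article; the real survey data is not shown here.
ratings = {
    "YouTube":   [+1] * 9 + [-1] * 1 + [0] * 4,  # positive in 9 of 14, bad for sleep
    "Instagram": [-1] * 7 + [+1] * 3 + [0] * 4,  # negative in 7 of 14
}

def net_score(platform):
    """Sum a platform's per-criterion ratings into one net score."""
    return sum(ratings[platform])

ranked = sorted(ratings, key=net_score, reverse=True)
print(ranked)  # best to worst
```

This kind of simple additive scoring is exactly what Wessely's criticism (below) targets: it compresses 14 distinct effects, good and bad, into one number per platform.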
However, the leader of the UK’s psychiatrists said the findings were too simplistic and unfairly blamed social media for the complex reasons why the mental health of so many young people is suffering.
Prof Sir Simon Wessely, president of the Royal College of Psychiatrists, said: “I am sure that social media plays a role in unhappiness, but it has as many benefits as it does negatives. We need to teach children how to cope with all aspects of social media – good and bad – to prepare them for an increasingly digitised world. There is real danger in blaming the medium for the message.”
Tom Madders, director of campaigns and communications at the charity YoungMinds, said: “Prompting young people about heavy usage and signposting to support they may need, on a platform that they identify with, could help many young people.”
However, he also urged caution in how content accessed by young people on social media is perceived. “It’s also important to recognise that simply ‘protecting’ young people from particular content types can never be the whole solution. We need to support young people so they understand the risks of how they behave online, and are empowered to make sense of and know how to respond to harmful content that slips through filters.”
Parents and mental health experts fear that platforms such as Instagram can make young users feel worried and inadequate by facilitating hostile comments about their appearance or reminding them that they have not been invited to, for example, a party many of their peers are attending.
Theresa May, who has made children’s mental health one of her priorities, highlighted social media’s damaging effects in her “shared society” speech in January, saying: “We know that the use of social media brings additional concerns and challenges. In 2014, just over one in 10 young people said that they had experienced cyberbullying by phone or over the internet.”
In February, Jeremy Hunt, the health secretary, warned social media and technology firms that they could face sanctions, including through legislation, unless they did more to tackle sexting, cyberbullying and the trolling of young users.
The 2017 National Defense Authorization Act that was signed by President Obama on December 23 greenlights the creation of a new federal center ostensibly aimed at countering foreign “propaganda and disinformation.” Termed the Global Engagement Center, the body is granted broad and ill-defined powers to surveil the “populations most susceptible to propaganda,” compile reporting and social media messaging critical of the U.S. government and disseminate pro-American propaganda.
The head of the center will be appointed by the president, meaning that a Donald Trump nominee will likely sit at its helm.
The center was originally proposed in separate legislation introduced by U.S. senators Rob Portman (R-OH) and Chris Murphy (D-CT) before being inserted into the NDAA. “The purpose of the Center shall be to lead, synchronize, and coordinate efforts of the Federal Government to recognize, understand, expose, and counter foreign state and non-state propaganda and disinformation efforts aimed at undermining United States national security interests,” the NDAA states.
The center is tasked with generating and disseminating “fact-based narratives,” a directive likely to unleash a torrent of pro-American propaganda, as demonstrated by other government agencies.
The center will also be tasked with monitoring and tracking “counterfactual narratives abroad that threaten the national security interests of the United States and United States allies and partner nations.” While the precise meaning of this language is unclear, such instructions could be interpreted as targeting information and communications critical of the U.S. government.
The surveillance powers granted to the center are sweeping. The body is instructed to, “Identify the countries and populations most susceptible to propaganda and disinformation based on information provided by appropriate interagency entities.” It is not immediately clear from the text how the government will determine which populations qualify for this escalated surveillance.
The center will also “collect and store examples in print, online, and social media, disinformation, misinformation, and propaganda directed at the United States and its allies and partners.” The language indicates that federal authorities will have a new mechanism for monitoring social media and reporting that is critical of the U.S. government.
Michael Macleod-Ball, chief of staff for the Washington Legislative Office of the ACLU, told AlterNet it is not yet clear how this language will be put into practice.
“We just saw that the Department of Homeland Security is now collecting social media identifiers for people applying for visa waivers, so the collection, retention and sharing of social media information is going to be a growth industry for the federal government,” he said. “We have big concerns with the retention of that information and how it might be shared across agencies.”
He added, “There are already a whole bunch of government agencies collecting information. Whether you’re talking about law enforcement or intelligence officials, having the government in the business of monitoring individual communications is very troubling to us.”
Overshadowed by the holidays, the provision passed with little debate or notice, despite its potentially broad implications. The measure will be handed over to the administration of Trump, who has previously called for a ban on Muslims entering the country, a database to track Muslims within the United States, the mass deportation of 11 million undocumented people, and the authorization of torture.
The NDAA also rubber-stamps a massive military budget of nearly $619 billion and places limits on transfers from the Guantánamo Bay detention center, meaning the prison will almost certainly remain open despite Obama’s pledges to shut it down.
Sarah Lazare is a staff writer for AlterNet. A former staff writer for Common Dreams, she coedited the book About Face: Military Resisters Turn Against War. Follow her on Twitter at @sarahlazare.
I’m a millennial computer scientist who also writes books and runs a blog. Demographically speaking I should be a heavy social media user, but that is not the case. I’ve never had a social media account.
At the moment, this makes me an outlier, but I think many more people should follow my lead and quit these services. There are many issues with social media, from its corrosion of civic life to its cultural shallowness, but the argument I want to make here is more pragmatic: You should quit social media because it can hurt your career.
This claim, of course, runs counter to our current understanding of social media’s role in the professional sphere. We’ve been told that it’s important to tend to your so-called social media brand, as this provides you access to opportunities you might otherwise miss and supports the diverse contact network you need to get ahead. Many people in my generation fear that without a social media presence, they would be invisible to the job market.
In a recent New York magazine essay, Andrew Sullivan recalled when he started to feel obligated to update his blog every half-hour or so. It seemed as if everyone with a Facebook account and a smartphone now felt pressured to run their own high-stress, one-person media operation, and “the once-unimaginable pace of the professional blogger was now the default for everyone,” he wrote.
I think this behavior is misguided. In a capitalist economy, the market rewards things that are rare and valuable. Social media use is decidedly not rare or valuable. Any 16-year-old with a smartphone can invent a hashtag or repost a viral article. The idea that if you engage in enough of this low-value activity, it will somehow add up to something of high value in your career is the same dubious alchemy that forms the core of most snake oil and flimflam in business.
Professional success is hard, but it’s not complicated. The foundation to achievement and fulfillment, almost without exception, requires that you hone a useful craft and then apply it to things that people care about. This is a philosophy perhaps best summarized by the advice Steve Martin used to give aspiring entertainers: “Be so good they can’t ignore you.” If you do that, the rest will work itself out, regardless of the size of your Instagram following.
A common response to my social media skepticism is the idea that using these services “can’t hurt.” In addition to honing skills and producing things that are valuable, my critics note, why not also expose yourself to the opportunities and connections that social media can generate? I have two objections to this line of thinking.
First, interesting opportunities and useful connections are not as scarce as social media proponents claim. In my own professional life, for example, as I improved my standing as an academic and a writer, I began receiving more interesting opportunities than I could handle. I currently have filters on my website aimed at reducing, not increasing, the number of offers and introductions I receive.
My research on successful professionals underscores that this experience is common: As you become more valuable to the marketplace, good things will find you. To be clear, I’m not arguing that new opportunities and connections are unimportant. I’m instead arguing that you don’t need social media’s help to attract them.
My second objection concerns the idea that social media is harmless. Consider that the ability to concentrate without distraction on hard tasks is becoming increasingly valuable in an increasingly complicated economy. Social media weakens this skill because it’s engineered to be addictive. The more you use social media in the way it’s designed to be used — persistently throughout your waking hours — the more your brain learns to crave a quick hit of stimulus at the slightest hint of boredom.
Once this Pavlovian connection is solidified, it becomes hard to give difficult tasks the unbroken concentration they require, because your brain simply won’t tolerate such a long period without a fix. Indeed, part of my own rejection of social media comes from this fear that these services will diminish my ability to concentrate — the skill on which I make my living.
The idea of purposefully introducing into my life a service designed to fragment my attention is as scary to me as the idea of smoking would be to an endurance athlete, and it should be to you if you’re serious about creating things that matter.
Perhaps more important, however, than my specific objections to the idea that social media is a harmless lift to your career, is my general unease with the mind-set this belief fosters. A dedication to cultivating your social media brand is a fundamentally passive approach to professional advancement. It diverts your time and attention away from producing work that matters and toward convincing the world that you matter. The latter activity is seductive, especially for many members of my generation who were raised on this message, but it can be disastrously counterproductive.
Most social media is best described as a collection of somewhat trivial entertainment services that are currently having a good run. These networks are fun, but you’re deluding yourself if you think that Twitter messages, posts and likes are a productive use of your time.
If you’re serious about making an impact in the world, power down your smartphone, close your browser tabs, roll up your sleeves and get to work.
Cal Newport is an associate professor of computer science at Georgetown University and the author of “Deep Work: Rules for Focused Success in a Distracted World” (Grand Central).