In late June, Mark Zuckerberg announced the new mission of Facebook: “To give people the power to build community and bring the world closer together.”
The rhetoric of the statement is carefully selected, centered on empowering people, and in so doing, ushering in world peace, or at least something like it. Tech giants across Silicon Valley are adopting similarly utopian visions, casting themselves as the purveyors of a more connected, more enlightened, more empowered future. Every year, these companies articulate their visions onstage at internationally streamed pep rallies, Apple’s WWDC and Google’s I/O being the best known.
But companies like Facebook can only “give people the power” because we first ceded it to them, in the form of our attention. After all, that is how many Silicon Valley companies thrive: our attention, in the form of eyes and ears, provides a medium for them to advertise to us. And the more time we spend staring at our screens, the more money Facebook and Twitter make — in effect, it’s in their interest that we become psychologically dependent on the self-esteem boost of being wired in all the time.
This quest for our eyeballs doesn’t mesh well with Silicon Valley’s utopian visions of world peace and people power. Earlier this year, many sounded alarm bells when a “60 Minutes” exposé revealed the creepy cottage industry of “brain-hacking,” industrial psychology techniques that tech giants use and study to make us spend as much time staring at screens as possible.
Indeed, it is Silicon Valley’s continual quest for attention that both motivates its utopian dreams and compromises them from the start. As a result, the tech industry’s ethics are often compromised at the level of product design.
Case in point: At January’s Consumer Electronics Show – a sort of mecca for tech start-ups dreaming of making it big – I found myself in a suite with one of the largest kid-tech (children’s toys) developers in the world. A small flock of PR reps, engineers and executives hovered around the entryway as one development head walked my photographer and me through the mock setup. They were showing off the first voice assistant developed solely with kids in mind.
At the end of the tour, I asked if the company had researched or planned to research the effects of voice assistant usage on kids. After all, parents had been using tablets to occupy their kids for years by the time evidence of their less-than-ideal impact on children’s attention, behavior and sleep emerged.
The answer I received was gentle but firm: No, because we respect parents’ right to make decisions on behalf of their children.
This free-market logic – that says the consumer alone arbitrates the value of a product – is pervasive in Silicon Valley. What consumer, after all, is going to argue they can’t make their own decisions responsibly? But a free market only functions properly when consumers operate with full agency and access to information, and tech companies are working hard to limit both.
In the “60 Minutes” story on brain hacking, former Google product manager Tristan Harris said, “There’s always this narrative that technology’s neutral. And it’s up to us to choose how we use it.”
The problem, according to Harris, is that “this is just not true… [Developers] want you to use it in particular ways and for long periods of time. Because that’s how they make their money.”
Harris was homing in on the fact that, increasingly, it isn’t the price tag on the platform itself that earns companies money, but the attention they control on said platform – whether it’s a voice assistant, operating system, app or website. We literally “pay” attention to ads or sponsored content in order to access websites.
But Harris went on to explain that larger platforms, using systems of rewards similar to slot machines, are working not only to monetize our attention, but also to monopolize it. And with that monopoly comes incredible power.
If Facebook, for instance, can control hours of people’s attention daily, it can not only determine the rate at which it will sell that attention to advertisers, but also decide which advertisers or content creators it will sell to. In other words, in an attention economy Facebook becomes a gatekeeper for content – one that mediates not only personalized advertising, but also news and information.
This sort of monopoly brings the expected fiscal payoff, and also the amassing of immeasurable social and cultural power.
So how does Facebook’s new mission statement fit into this attention economy?
Think of it in terms of optics. The carotid artery of Facebook, as of the other tech giants of Silicon Valley, is brand. Brand ubiquity means Facebook is the first thing people check when they take their phones out of their pockets, or when they open Chrome or Safari (brought to you by Google and Apple, respectively). It means Prime Day is treated like a real holiday. And just as Kleenex means tissue and Xerox means copy, Google has become synonymous with online search.
Yet all these companies are painfully aware of what a brand-gone-bad can do – or undo. The current generation of online platforms is built on the foundations of empires that rose and fell while the attention economy was still incipient. Today’s companies have maintained their centrality by consistently copying (Instagram Stories, a clone of Snapchat) or outright purchasing (YouTube) their fiercest competitors – all to maintain or expand their brand.
And perhaps as important, tech giants have made it near impossible to imagine a future without them, simply by being the most prominent public entities doing such imagining.
Facebook’s mission affixes the company to our shared future, and also injects it with a moral, or at least charitable, sensibility – even if only in the form of “bring[ing] the world closer together”-type vagaries.
So how should we as average consumers respond?
In his award-winning essay “Stand Out of Our Light: Freedom and Persuasion in the Attention Economy,” James Williams argues, “We must … move urgently to assert and defend our freedom of attention.”
To assert our freedom is to recognize and evaluate the demands on our attention that all these devices and digital services represent. Defending our freedom entails two forms of action. The first is individual: not unplugging completely, as the self-styled prophets of Facebook and Twitter encourage (before logging back on after a few months of asceticism), but unplugging partially, habitually and ruthlessly.
Attention is the currency upon which tech giants are built. And the power of agency and free information is the power we cede when we turn over our attention wholly to platforms like Facebook.
But individual consumers can only do so much. The second way we must defend our freedom is through our demand for ethical practices from Silicon Valley.
Some critics believe government regulation is the only way to rein in Silicon Valley developers. The problem is, the federal agencies that closely monitor the effects of product usage on consumers don’t yet have a good category for online platforms. The Food and Drug Administration (FDA) tracks medical technology. The Consumer Product Safety Commission (CPSC) focuses on physical risk to consumers. The Federal Communications Commission (FCC) focuses on content — not platform. In other words, we have no precedent for monitoring social media or other online platforms and their methods for retaining users.
Currently, no agency leads dedicated research into the effects of platforms like Facebook on users. There is no Surgeon General’s warning. There is no real protection for consumers from unethical practices by tech giants — as long as those practices fall in the cracks between existing ethics standards.
While it might seem idealistic to hold out for the creation of a new government agency that monitors Facebook (especially given the current political regime), the first step toward curbing Silicon Valley’s power is simple: We must acknowledge freedom of attention as an inalienable right — one inextricable from our freedom to pursue happiness. So long as the companies producing the hardware surrounding us and the platforms orienting social life online face no strictures, they will actively work to control how users think, slowly eroding our society’s collective free will.
With so much at stake, and with so little governmental infrastructure in place, checking tech giants’ ethics might seem like a daunting task. The U.S. government, after all, has demonstrated a consistent aversion to challenging Silicon Valley’s business and consumer-facing practices.
But while we fight for better policy and stronger ethics-enforcing bodies, we can take one more practical step: “pay” attention to ethics in Silicon Valley. Read about Uber’s legal battles and the most recent research on social media’s effects on the brain. Demand more ethical practices from the companies we patronize. Why? The best moderators of technology ethics thus far have been tech giants themselves — when such moderation benefits the companies’ brands.
In Silicon Valley, money talks, but attention talks louder. It’s time to reclaim our voice.
In a Facebook post published on Monday, Robert Reich, former labor secretary under Bill Clinton, reported a conversation with one of his friends, “a former Republican member of Congress.” If this GOPer is as in-the-know as he claims to be, President Donald Trump could be in quite a bit of trouble.
According to Reich, his friend is hearing from his former colleagues who still serve that “Trump is out of his gourd.” He also claims that “stuff with [Attorney General Jeff] Sessions is pissing them off,” presumably a reference to how Trump has undercut his own attorney general over his decision to recuse himself from the Russia investigation. The colleague also pointed to Trump’s hiring of “horse’s ass” Anthony Scaramucci as communications director, a matter that may have been remedied by Trump’s subsequent firing of the same “horse’s ass.”
The Republican also said that his co-partisans are worried that Trump will hurt their reelection chances in 2018 and 2020 and “want him outa there,” although he doubts they’ll impeach him without special counsel Robert Mueller coming up with a “smoking gun” (he also doubts that Trump will fire Mueller).
So what does he think the plan is?
“Put someone else up in ’20,” Reich’s friend said. “Lots of maneuvering already. Pence, obviously. Cruz thinks he has a shot.”
Reich’s friend also claims that Republicans believe Trump has become mentally unstable. He attributed Trump’s firing of former White House chief of staff Reince Priebus, and his undermining of both Sessions and Secretary of State Rex Tillerson, to an alleged belief that they were plotting to invoke the 25th Amendment, which allows a president’s cabinet to remove him from power if he is mentally incompetent.
Reich said his friend dismissed any notion of disloyalty by Sessions and Tillerson as “ludicrous” but insisted that “Trump is fritzing out. Having manic delusions. He’s actually going nuts.” As a result, “my betting is he’s out of office before the midterms. And Pence is president.”
It is important to note that some of these predictions seem a tad disconnected from political reality. For one thing, the fact that Trump and his right-wing surrogates have been gradually building up a case for firing Mueller suggests that, at the very least, the notion that he could do so should not be dismissed out of hand. Similarly, there is no evidence from other media outlets that Trump believed Priebus, Sessions and Tillerson were plotting against him.
Finally, and perhaps most importantly, the Republican Party has so far only stood up to Trump when it comes to the specific issue of sanctions against Russia. Aside from that issue, their approach toward the president has been fearful bordering on obsequious, so it’s a stretch to say the least to imagine them plotting against him behind the scenes.
This morning I phoned my friend, a former Republican member of Congress.
Me: What’s going on? Seems like the White House is imploding, and Republicans are going down with the ship.
Him (chuckling): We’re officially a banana republic.
Me: Seriously, what are you hearing from your former colleagues on the Hill?
Him: They’re convinced Trump is out of his gourd.
Me: So what are they going to do about it?
Him: Remember what I told you at the start of this circus? They planned to use Trump’s antics for cover, to get done what they most wanted – big tax cuts, rollbacks of regulations, especially financial. They’d work with Pence behind the scenes and forget the crazy uncle in the attic.
Him: Well, I’m hearing a different story now. Stuff with Sessions is pissing them off. And now Trump’s hired that horse’s ass Scaramucci — a communications director who talks dirty on CNN! Plus Trump’s numbers are in freefall. They think he’s gonna hurt them in ’18 and ’20.
Me: So what’s the plan?
Him: They want him outa there.
Me: Really? Impeachment?
Him: Doubt it, unless Mueller comes up with a smoking gun.
Me: Or if he fires Mueller.
Him: Not gonna happen.
Me: So how do they get him out?
Him: Put someone else up in ’20. Lots of maneuvering already. Pence, obviously. Cruz thinks he has a shot.
Me: But that won’t help them in the midterms. What’s the plan before then?
Him: Lots think he’s fritzing out.
Me: Fritzing out?
Him: Going totally bananas. Paranoia. You want to know why he fired Priebus, wants Sessions out, and is now gunning for Tillerson?
Me: He wants to shake things up?
Him (chuckling): No. The way I hear it, he thinks they’ve been plotting against him.
Me: What do you mean?
Him: Twenty-fifth amendment! Read it! A Cabinet can get rid of a president who’s nuts. Trump thinks they’ve been preparing a palace coup. So one by one, he’s firing them.
Me: I find it hard to believe they’re plotting against him.
Him: Of course not! It’s ludicrous. Sessions is a loyal lapdog. Tillerson doesn’t know where the bathroom is. That’s my point. Trump is fritzing out. Having manic delusions. He’s actually going nuts.
Him: Well, it’s downright dangerous.
Me: Yeah, but that still doesn’t tell me what Republicans are planning to do about it.
Him: Look. How long do you think it will be before everyone in Washington knows he’s flipping out? I don’t mean just weird. I mean really off his rocker.
Me: I don’t know.
Him: Not all that long.
Me: So what are you telling me?
Him: They don’t have to plot against him. It will be obvious to everyone that he’s got to go. That’s where the twenty-fifth amendment really does come in.
Me: So you think…
Him: Who knows? But he’s losing it fast. My betting is he’s out of office before the midterms. And Pence is president.
Matthew Rozsa is a breaking news writer for Salon. He holds an MA in History from Rutgers University-Newark and his work has appeared in Mic, Quartz and MSNBC.
It’s a hot day in New York City. You’re thirsty, but your water bottle is empty. So you walk into a store and place your bottle in a machine. You activate the machine with an app on your phone, and it fills your bottle with tap water. Now you are no longer thirsty.
This is the future envisioned by the founders of a startup called Reefill. If the premise sounds oddly familiar, that’s because it is: Reefill has reinvented the water fountain as a Bluetooth-enabled subscription service. Customers pay $1.99 a month for the privilege of using its machines, located at participating businesses around Manhattan.
Predictably, the company has already come in for its fair share of ridicule. In Slate, Henry Grabar called it “tap water in a suit”. But while Reefill is a particularly cartoonish example, its basic business model is a popular one within tech. The playbook is simple: take a public service and build a private, app-powered version of it.
The most obvious examples are Uber and Lyft, which aspire not merely to eliminate the taxi industry, but to replace public transportation. They’re slowly succeeding: municipalities around America are now subsidizing ride-hailing fares instead of running public buses. And earlier this year, Lyft began offering a fixed-route, flat-rate service called Lyft Shuttle in Chicago and San Francisco – an aggressive bid to poach more riders from public transit.
These companies wouldn’t have customers if better public alternatives existed. It can be hard to find a water fountain in Manhattan, and public transit in American cities ranges from mediocre to nonexistent. But solving these problems by ceding them to the private sector ensures that public services will continue to deteriorate until they disappear.
Decades of defunding and outsourcing have already pushed public services to the brink. Now, fortified with piles of investor cash and the smartphone, tech companies are trying to finish them off.
Proponents of privatization believe this is a good thing. For years, they have advanced the argument that business will always perform a given task better than government, whether it’s running buses or schools, supplying healthcare or housing. The public sector is sclerotic, wasteful and undisciplined by the profit motive. The private sector is dynamic, innovative and, above all, efficient.
This belief has become common sense in political life. It is widely shared by the country’s elite, and has guided much policymaking over the past several decades. But like most of our governing myths, it collapses on closer inspection.
No word is invoked more frequently or more fervently by apostles of privatization than efficiency. Yet this is a strange basis on which to build their case, given the fact that public services are often more efficient than private ones. Take healthcare. The United States has one of the least efficient systems on the planet: we spend more money on healthcare than anyone else, and in return we receive some of the worst health outcomes in the west. Not coincidentally, we also have the most privatized healthcare system in the advanced world. By contrast, the UK spends a fraction of what we do and achieves far better results. It also happens to provision healthcare as a public service. Somehow, the absence of the profit motive has not produced an epidemic of inefficiency in British healthcare. Meanwhile, we pay nearly $10,000 per capita and a staggering 17% of our GDP to achieve a life expectancy somewhere between that of Costa Rica and Cuba.
A profit-driven system doesn’t mean we get more for our money – it means someone gets to make more money off of us. The healthcare industry posts record profits and rewards its chief executives with the highest salaries in the country. It takes a peculiar frame of mind to see this arrangement as anything resembling efficient.
Attacking public services on the grounds of efficiency isn’t just incorrect, however – it’s beside the point. Decades of neoliberalism have corroded our capacity to think in non-economic terms. We’ve been taught that all fields of human life should be organized as markets, and that government should be run like a business. This ideology has found its perverse culmination in the figure of Donald Trump, a celebrity billionaire with no prior political experience who catapulted himself into the White House by invoking his expertise as a businessman. The premise of Trump’s campaign was that America didn’t need a president – it needed a CEO.
Nowhere is the neoliberal faith embodied by Trump more deeply felt than in Silicon Valley. Tech entrepreneurs work tirelessly to turn more of our lives into markets and devote enormous resources towards “disrupting” government by privatizing its functions. Perhaps this is why, despite Silicon Valley’s veneer of liberal cosmopolitanism, it has a certain affinity for the president. On Monday, Trump met with top executives from Apple, Amazon, Google and other major tech firms to explore how to “unleash the creativity of the private sector to provide citizen services”, in the words of Jared Kushner. Between Trump and tech, never before have so many powerful people been so intent on transforming government into a business.
But government isn’t a business; it’s a different kind of machine. At its worst, it can be repressive and corrupt and autocratic. At its best, it can be an invaluable tool for developing and sustaining a democratic society. Among other things, this includes ensuring that everyone receives the resources they need to exercise the freedoms on which democracy depends. When we privatize public services, we don’t just risk replacing them with less efficient alternatives – we risk damaging democracy itself.
If this seems like a stretch, that’s because pundits and politicians have spent decades defining the idea of democracy downwards. It has come to mean little more than holding elections every few years. But this is the absolute minimum of democracy’s meaning. Its Greek root translates to “rule of the people” – not rule by certain people, such as the rich (plutocracy) or the priests (theocracy), but by all people. Democracy describes a way of organizing society in which the whole of the people determine how society should be organized.
What does this have to do with buses or schools or hospitals or houses? In a democracy, everyone gets to participate in the decisions that affect their lives. But that’s impossible if people don’t have access to the goods they need to survive – if they’re hungry or homeless or sick. And the reality is that when goods are rationed by the market, fewer people have access to them. Markets are places of winners and losers. You don’t get what you need – you get what you can afford.
By contrast, public services offer a more equitable way to satisfy basic needs. By taking things off the market, government can democratize access to the resources that people rely on to lead reasonably dignified lives. Those resources can be offered cheap or free, funded by progressive taxation. They can also be managed by publicly accountable institutions led by elected officials, or subject to more direct mechanisms of popular control.
These ideas are considered wildly radical in American politics. Yet other places around the world have implemented them with great success. When Oxfam surveyed more than 100 countries, they discovered that public services significantly reduce economic inequality. They shrink the distance between rich and poor by lowering the cost of living. They empower working people by making their survival less dependent on their bosses and landlords and creditors. Perhaps most importantly, they entitle citizens to a share of society’s wealth and a say over how it’s used.
But where will the money come from? This is the perennial question, posed whenever someone suggests raising the welfare state above a whisper. Fortunately, it has a simple answer. The United States is the richest country in the history of the world. It is so rich, in fact, that its richest people can afford to pour billions of dollars into a company such as Uber, which loses billions of dollars each year, in the hopes of getting just a little bit richer. In the face of such extravagance, diverting a modest portion of the prosperity we produce in common toward services that benefit everyone shouldn’t be controversial. It’s a small price to pay for making democracy mean more than a hollow slogan, or a sick joke.
Ben Tarnoff writes about technology and politics. He lives in San Francisco.
On March 2, a disturbing report hit the desks of U.S. counterintelligence officials in Washington. For months, American spy hunters had scrambled to uncover details of Russia’s influence operation against the 2016 presidential election. In offices in both D.C. and suburban Virginia, they had created massive wall charts to track the different players in Russia’s multipronged scheme. But the report in early March was something new.
It described how Russia had already moved on from the rudimentary email hacks against politicians it had used in 2016. Now the Russians were running a more sophisticated hack on Twitter. The report said the Russians had sent expertly tailored messages carrying malware to more than 10,000 Twitter users in the Defense Department. Depending on the interests of the targets, the messages offered links to stories on recent sporting events or the Oscars, which had taken place the previous weekend. When clicked, the links took users to a Russian-controlled server that downloaded a program allowing Moscow’s hackers to take control of the victim’s phone or computer–and Twitter account.
As they scrambled to contain the damage from the hack and regain control of any compromised devices, the spy hunters realized they faced a new kind of threat. In 2016, Russia had used thousands of covert human agents and robot computer programs to spread disinformation referencing the stolen campaign emails of Hillary Clinton, amplifying their effect. Now counterintelligence officials wondered: What chaos could Moscow unleash with thousands of Twitter handles that spoke in real time with the authority of the armed forces of the United States? At any given moment, perhaps during a natural disaster or a terrorist attack, Pentagon Twitter accounts might send out false information. As each tweet corroborated another, and covert Russian agents amplified the messages even further afield, the result could be panic and confusion.
For many Americans, Russian hacking remains a story about the 2016 election. But there is another story taking shape. Marrying a hundred years of expertise in influence operations to the new world of social media, Russia may finally have gained the ability it long sought but never fully achieved in the Cold War: to alter the course of events in the U.S. by manipulating public opinion. The vast openness and anonymity of social media has cleared a dangerous new route for antidemocratic forces. “Using these technologies, it is possible to undermine democratic government, and it’s becoming easier every day,” says Rand Waltzman of the Rand Corp., who ran a major Pentagon research program to understand the propaganda threats posed by social media technology.
Current and former officials at the FBI, at the CIA and in Congress now believe the 2016 Russian operation was just the most visible battle in an ongoing information war against global democracy. And they’ve become more vocal about their concern. “If there has ever been a clarion call for vigilance and action against a threat to the very foundation of our democratic political system, this episode is it,” former Director of National Intelligence James Clapper testified before Congress on May 8.
If that sounds alarming, it helps to understand the battlescape of this new information war. As they tweet and like and upvote their way through social media, Americans generate a vast trove of data on what they think and how they respond to ideas and arguments–literally thousands of expressions of belief every second on Twitter, Facebook, Reddit and Google. All of those digitized convictions are collected and stored, and much of that data is available commercially to anyone with sufficient computing power to take advantage of it.
That’s where the algorithms come in. American researchers have found they can use mathematical formulas to segment huge populations into thousands of subgroups according to defining characteristics like religion and political beliefs or taste in TV shows and music. Other algorithms can determine those groups’ hot-button issues and identify “followers” among them, pinpointing those most susceptible to suggestion. Propagandists can then manually craft messages to influence them, deploying covert provocateurs, either humans or automated computer programs known as bots, in hopes of altering their behavior.
That is what Moscow is doing, more than a dozen senior intelligence officials and others investigating Russia’s influence operations tell TIME. The Russians “target you and see what you like, what you click on, and see if you’re sympathetic or not sympathetic,” says a senior intelligence official. Whether and how much they have actually been able to change Americans’ behavior is hard to say. But as they have investigated the Russian 2016 operation, intelligence and other officials have found that Moscow has developed sophisticated tactics.
In one case last year, senior intelligence officials tell TIME, a Russian soldier based in Ukraine successfully infiltrated a U.S. social media group by pretending to be a 42-year-old American housewife and weighing in on political debates with specially tailored messages. In another case, officials say, Russia created a fake Facebook account to spread stories on political issues like refugee resettlement to targeted reporters they believed were susceptible to influence.
As Russia expands its cyberpropaganda efforts, the U.S. and its allies are only just beginning to figure out how to fight back. One problem: the fear of Russian influence operations can be more damaging than the operations themselves. Eager to appear more powerful than they are, the Russians would consider it a success if you questioned the truth of your news sources, knowing that Moscow might be lurking in your Facebook or Twitter feed. But figuring out if they are is hard. Uncovering “signals that indicate a particular handle is a state-sponsored account is really, really difficult,” says Jared Cohen, CEO of Jigsaw, a subsidiary of Google’s parent company, Alphabet, which tackles global security challenges.
Like many a good spy tale, the story of how the U.S. learned its democracy could be hacked started with loose lips. In May 2016, a Russian military intelligence officer bragged to a colleague that his organization, known as the GRU, was getting ready to pay Clinton back for what President Vladimir Putin believed was an influence operation she had run against him five years earlier as Secretary of State. The GRU, he said, was going to cause chaos in the upcoming U.S. election.
What the officer didn’t know, senior intelligence officials tell TIME, was that U.S. spies were listening. They wrote up the conversation and sent it back to analysts at headquarters, who turned it from raw intelligence into an official report and circulated it. But if the officer’s boast seems like a red flag now, at the time U.S. officials didn’t know what to make of it. “We didn’t really understand the context of it until much later,” says the senior intelligence official. Investigators now realize that the officer’s boast was the first indication U.S. spies had from their sources that Russia wasn’t just hacking email accounts to collect intelligence but was also considering interfering in the vote. Like much of America, many in the U.S. government hadn’t imagined the kind of influence operation that Russia was preparing to unleash on the 2016 election. Fewer still realized it had been five years in the making.
In 2011, protests in more than 70 cities across Russia had threatened Putin’s control of the Kremlin. The uprising was organized on social media by a popular blogger named Alexei Navalny, who used his blog as well as Twitter and Facebook to get crowds in the streets. Putin’s forces broke out their own social media technique to strike back. When bloggers tried to organize nationwide protests on Twitter using #Triumfalnaya, pro-Kremlin botnets bombarded the hashtag with anti-protester messages and nonsense tweets, making it impossible for Putin’s opponents to coalesce.
Putin publicly accused then Secretary of State Clinton of running a massive influence operation against his country, saying she had sent “a signal” to protesters and that the State Department had actively worked to fuel the protests. The State Department said it had just funded pro-democracy organizations. Former officials say any such operations–in Russia or elsewhere–would require a special intelligence finding by the President and that Barack Obama was not likely to have issued one.
After his re-election the following year, Putin dispatched his newly installed head of military intelligence, Igor Sergun, to begin repurposing cyberweapons previously used for psychological operations in war zones for use in electioneering. Russian intelligence agencies funded “troll farms,” botnet spamming operations and fake news outlets as part of an expanding focus on psychological operations in cyberspace.
It turns out Putin had outside help. One particularly talented Russian programmer who had worked with social media researchers in the U.S. for 10 years had returned to Moscow and brought with him a trove of algorithms that could be used in influence operations. He was promptly hired by those working for Russian intelligence services, senior intelligence officials tell TIME. “The engineer who built them the algorithms is U.S.-trained,” says the senior intelligence official.
Soon, Putin was aiming his new weapons at the U.S. Following Moscow’s April 2014 invasion of Ukraine, the U.S. considered sanctions that would block the export of drilling and fracking technologies to Russia, putting out of reach some $8.2 trillion in oil reserves that could not be tapped without U.S. technology. As they watched Moscow’s intelligence operations in the U.S., American spy hunters saw Russian agents applying their new social media tactics on key aides to members of Congress. Moscow’s agents broadcast material on social media and watched how targets responded in an attempt to find those who might support their cause, the senior intelligence official tells TIME. “The Russians started using it on the Hill with staffers,” the official says, “to see who is more susceptible to continue this program [and] to see who would be more favorable to what they want to do.”
On Aug. 7, 2016, the infamous pharmaceutical executive Martin Shkreli declared that Hillary Clinton had Parkinson’s. That story went viral in late August, then took on a life of its own after Clinton fainted from pneumonia and dehydration at a Sept. 11 event in New York City. Elsewhere people invented stories saying Pope Francis had endorsed Trump and Clinton had murdered a DNC staffer. Just before Election Day, a story took off alleging that Clinton and her aides ran a pedophile ring in the basement of a D.C. pizza parlor.
Congressional investigators are looking at how Russia helped stories like these spread to specific audiences. Counterintelligence officials, meanwhile, have picked up evidence that Russia tried to target particular influencers during the election season who they reasoned would help spread the damaging stories. These officials have seen evidence of Russia using its algorithmic techniques to target the social media accounts of particular reporters, senior intelligence officials tell TIME. “It’s not necessarily the journal or the newspaper or the TV show,” says the senior intelligence official. “It’s the specific reporter that they find who might be a little bit slanted toward believing things, and they’ll hit him” with a flood of fake news stories.
Russia plays in every social media space. The intelligence officials have found that Moscow’s agents bought ads on Facebook to target specific populations with propaganda. “They buy the ads, where it says sponsored by–they do that just as much as anybody else does,” says the senior intelligence official. (A Facebook official says the company has no evidence of that occurring.) The ranking Democrat on the Senate Intelligence Committee, Mark Warner of Virginia, has said he is looking into why, for example, four of the top five Google search results the day the U.S. released a report on the 2016 operation were links to Russia’s TV propaganda arm, RT. (Google says it saw no meddling in this case.) Researchers at the University of Southern California, meanwhile, found that nearly 20% of political tweets posted between Sept. 16 and Oct. 21, 2016, were generated by bots of unknown origin; investigators are trying to figure out how many were Russian.
As they dig into the viralizing of such stories, congressional investigations are probing not just Russia’s role but whether Moscow had help from the Trump campaign. Sources familiar with the investigations say they are probing two Trump-linked organizations: Cambridge Analytica, a data-analytics company hired by the campaign that is partly owned by deep-pocketed Trump backer Robert Mercer; and Breitbart News, the right-wing website formerly run by Trump’s top political adviser Stephen Bannon.
The congressional investigators are looking at ties between those companies and right-wing web personalities based in Eastern Europe who the U.S. believes are Russian fronts, a source familiar with the investigations tells TIME. “Nobody can prove it yet,” the source says. In March, McClatchy newspapers reported that FBI counterintelligence investigators were probing whether far-right sites like Breitbart News and Infowars had coordinated with Russian botnets to blitz social media with anti-Clinton stories, mixing fact and fiction when Trump was doing poorly in the campaign.
There are plenty of people who are skeptical that such a conspiracy existed. Cambridge Analytica touts its ability to use algorithms to microtarget voters, but veteran political operatives have found its methods ineffective at influencing them. Ted Cruz was the first to use those methods, during the primary, and his staff ended up concluding they had wasted their money. Mercer, Bannon, Breitbart News and the White House did not answer questions about the congressional probes. A spokesperson for Cambridge Analytica says the company has no ties to Russia or individuals acting as fronts for Moscow and that it is unaware of the probe.
Democratic operatives searching for explanations for Clinton’s loss after the election investigated social media trends in the three states that tipped the vote for Trump: Michigan, Wisconsin and Pennsylvania. In each they found what they believe is evidence that key swing voters were being drawn to fake news stories and anti-Clinton stories online. Google searches for the fake pedophilia story circulating under the hashtag #pizzagate, for example, were disproportionately higher in swing districts than in districts already likely to vote for Trump.
The Democratic operatives created a package of background materials on what they had found, suggesting the search behavior might indicate that someone had successfully altered voters’ behavior in key districts in key states. They circulated it to fellow party members who are up for re-election in 2018.
Even as investigators try to piece together what happened in 2016, they are worrying about what comes next. Russia claims to be able to alter events using cyberpropaganda and is doing what it can to tout its power. In February 2016, a Putin adviser named Andrey Krutskikh compared Russia’s information-warfare strategies to the Soviet Union’s obtaining a nuclear weapon in the 1940s, David Ignatius of the Washington Post reported. “We are at the verge of having something in the information arena which will allow us to talk to the Americans as equals,” Krutskikh said.
But if Russia is clearly moving forward, it’s less clear how active the U.S. has been. Documents released by former National Security Agency contractor Edward Snowden and published by the Intercept suggested that the British were pursuing social media propaganda and had shared their tactics with the U.S. Chris Inglis, the former No. 2 at the National Security Agency, says the U.S. has not pursued this capability. “The Russians are 10 years ahead of us in being willing to make use of” social media to influence public opinion, he says.
There are signs that the U.S. may be playing in this field, however. From 2010 to 2012, the U.S. Agency for International Development established and ran a “Cuban Twitter” network designed to undermine communist control on the island. At the same time, according to the Associated Press, which discovered the program, the U.S. government hired a contractor to profile Cuban cell phone users, categorizing them as “pro-revolution,” “apolitical” or “antirevolutionary.”
Much of what is publicly known about the mechanics and techniques of social media propaganda comes from a program at the Defense Advanced Research Projects Agency (DARPA) that researcher Rand Waltzman ran to study how propagandists might manipulate social media in the future. In the Cold War, operatives might distribute disinformation-laden newspapers to targeted political groups or insinuate an agent provocateur into a group of influential intellectuals. By harnessing computing power to segment and target literally millions of people in real time online, Waltzman concluded, you could potentially change behavior “on the scale of democratic governments.”
In the U.S., public scrutiny of such programs is usually enough to shut them down. In 2014, news articles appeared about the DARPA program and the “Cuban Twitter” project. It was only a year after Snowden had revealed widespread monitoring programs by the government. The DARPA program, already under a cloud, was allowed to expire quietly when its funding ran out in 2015.
In the wake of Russia’s 2016 election hack, the question is how to research social media propaganda without violating civil liberties. The need is all the more urgent because the technology continues to advance. While today humans are still required to tailor and distribute messages to specially targeted “susceptibles,” in the future crafting and transmitting emotionally powerful messages will be automated.
The U.S. government is constrained in what kind of research it can fund by various laws protecting citizens from domestic propaganda, government electioneering and intrusions on their privacy. Waltzman has started a group called Information Professionals Association with several former information operations officers from the U.S. military to develop defenses against social media influence operations.
Social media companies are beginning to realize that they need to take action. Facebook issued a report in April 2017 acknowledging that much disinformation had been spread on its pages and saying it had expanded its security. Google says it has seen no evidence of Russian manipulation of its search results but has updated its algorithms just in case. Twitter claims it has diminished cyberpropaganda by tweaking its algorithms to block cleverly designed bots. “Our algorithms currently work to detect when Twitter accounts are attempting to manipulate Twitter’s Trends through inorganic activity, and then automatically adjust,” the company said in a statement.
In the meantime, America’s best option to protect upcoming votes may be to make it harder for Russia and other bad actors to hide their election-related information operations. When it comes to defeating Russian influence operations, the answer is “transparency, transparency, transparency,” says Rhode Island Democratic Senator Sheldon Whitehouse. He has written legislation that would curb the massive, anonymous campaign contributions known as dark money and the widespread use of shell corporations that he says make Russian cyberpropaganda harder to trace and expose.
But much damage has already been done. “The ultimate impact of [the 2016 Russian operation] is we’re never going to look at another election without wondering, you know, Is this happening, can we see it happening?” says Jigsaw’s Jared Cohen. By raising doubts about the validity of the 2016 vote and the vulnerability of future elections, Russia has achieved its most important objective: undermining the credibility of American democracy.
For now, investigators have added the names of specific trolls and botnets to their wall charts in the offices of intelligence and law-enforcement agencies. They say the best way to compete with the Russian model is by having a better message. “It requires critical thinkers and people who have a more powerful vision” than the cynical Russian view, says former NSA deputy Inglis. And what message is powerful enough to take on the firehose of falsehoods that Russia is deploying in targeted, effective ways across a range of new media? One good place to start: telling the truth.
–With reporting by PRATHEEK REBALA/WASHINGTON
Correction: The original version of this story misstated Jared Cohen’s title. He is CEO, not president.
The survey, published on Friday, concluded that Snapchat, Facebook and Twitter are also harmful to young people’s mental health. Of the five platforms rated, only YouTube was judged to have a positive impact.
The four platforms have a negative effect because they can exacerbate children’s and young people’s body image worries, and worsen bullying, sleep problems and feelings of anxiety, depression and loneliness, the participants said.
The findings follow growing concern among politicians, health bodies, doctors, charities and parents about young people suffering harm as a result of sexting, cyberbullying and social media reinforcing feelings of self-loathing and even the risk of them committing suicide.
“It’s interesting to see Instagram and Snapchat ranking as the worst for mental health and wellbeing. Both platforms are very image-focused and it appears that they may be driving feelings of inadequacy and anxiety in young people,” said Shirley Cramer, chief executive of the Royal Society for Public Health, which undertook the survey with the Young Health Movement.
She demanded tough measures “to make social media less of a wild west when it comes to young people’s mental health and wellbeing”. Social media firms should introduce pop-up warnings when young people have been using a platform heavily, while Instagram and similar platforms should alert users when photographs of people have been digitally manipulated, Cramer said.
The 1,479 young people surveyed were asked to rate the impact of the five forms of social media on 14 different criteria of health and wellbeing, including their effect on sleep, anxiety, depression, loneliness, self-identity, bullying, body image and the fear of missing out.
Instagram emerged with the most negative score. It rated badly for seven of the 14 measures, particularly its impact on sleep, body image and fear of missing out – and also for bullying and feelings of anxiety, depression and loneliness. However, young people cited its upsides too, including self-expression, self-identity and emotional support.
YouTube scored very badly for its impact on sleep but positively in nine of the 14 categories, notably awareness and understanding of other people’s health experience, self-expression, loneliness, depression and emotional support.
However, the leader of the UK’s psychiatrists said the findings were too simplistic and unfairly blamed social media for the complex reasons why the mental health of so many young people is suffering.
Prof Sir Simon Wessely, president of the Royal College of Psychiatrists, said: “I am sure that social media plays a role in unhappiness, but it has as many benefits as it does negatives. We need to teach children how to cope with all aspects of social media – good and bad – to prepare them for an increasingly digitised world. There is real danger in blaming the medium for the message.”
Tom Madders, director of campaigns and communications at the charity YoungMinds, said: “Prompting young people about heavy usage and signposting to support they may need, on a platform that they identify with, could help many young people.”
However, he also urged caution in how content accessed by young people on social media is perceived. “It’s also important to recognise that simply ‘protecting’ young people from particular content types can never be the whole solution. We need to support young people so they understand the risks of how they behave online, and are empowered to make sense of and know how to respond to harmful content that slips through filters.”
Parents and mental health experts fear that platforms such as Instagram can make young users feel worried and inadequate by facilitating hostile comments about their appearance or reminding them that they have not been invited to, for example, a party many of their peers are attending.
Theresa May, who has made children’s mental health one of her priorities, highlighted social media’s damaging effects in her “shared society” speech in January, saying: “We know that the use of social media brings additional concerns and challenges. In 2014, just over one in 10 young people said that they had experienced cyberbullying by phone or over the internet.”
In February, Jeremy Hunt, the health secretary, warned social media and technology firms that they could face sanctions, including through legislation, unless they did more to tackle sexting, cyberbullying and the trolling of young users.
Are you addicted to technology? I’m certainly not. In my first sitting reading Adam Alter’s Irresistible, an investigation into why we can’t stop scrolling and clicking and surfing online, I only paused to check my phone four times. Because someone might have emailed me. Or texted me. One time I stopped to download an app Alter mentioned (research) and the final time I had to check the shares on my play brokerage app, Best Brokers (let’s call this one “business”).
Half the developed world is addicted to something, and Alter, a professor at New York University, informs us that, increasingly, that something isn’t drugs or alcohol, but behaviour. Recent studies suggest the most compulsive behaviour we engage in has to do with cyber connectivity; 40% of us have some sort of internet-based addiction – whether it’s checking your email (on average workers check it 36 times an hour), mindlessly scrolling through other people’s breakfasts on Instagram or gambling online.
Facebook was fun three years ago, Alter warns. Now it’s addictive. This tech zombie epidemic is not entirely our fault. Technology is designed to hook us, and to keep us locked in a refresh/reload cycle so that we don’t miss any news, cat memes or status updates from our friends. Tristan Harris, a “design ethicist” (whatever that is) tells the author that it’s not a question of willpower when “there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have”. After all, Steve Jobs gave the world the iPad, but made very sure his kids never got near one. Brain patterns of heroin users just after a hit and World of Warcraft addicts starting up a new game are nearly identical. The tech innovators behind our favourite products and apps understood that they were offering us endless portals to addiction. We’re the only ones late to the party.
Addiction isn’t inherent or genetic in certain people, as was previously thought. Rather, it is largely a function of environment and circumstance. Everyone is vulnerable; we’re all just a product or substance away from an uncomfortable attachment of some kind. And the internet, Alter writes, with its unpredictable but continuous loop of positive feedback, simulation of connectivity and culture of comparison, is “ripe for abuse”.
For one thing, it’s impossible to avoid; a recovering alcoholic can re-enter the slipstream of his life with more ease than someone addicted to online gaming – the alcoholic can avoid bars while the gaming addict still has to use a computer at work, to stay in touch with family, to be included in his micro-society.
Secondly, it’s bottomless. Everything is possible in the ideology of the internet – need a car in the middle of the night? Here you go. Want to borrow a stranger’s dog to play with for an hour, with no long-term responsibility for the animal? Sure, there’s an app for that. Want to send someone a message and see when it reaches their phone, when they read it and whether they like it? Even BlackBerry could do that.
Thirdly, it’s immersive – and even worse, it’s mobile. You can carry your addiction around with you. Everywhere. You don’t need to be locked in an airless room or unemployed in order to spend hours online. Moment, an app designed to track how often you pick up and look at your phone, estimates that the average smartphone user spends two to three hours on his or her mobile daily.
I downloaded Moment (the research I mentioned earlier) and uninstalled it after it informed me that, by noon, I had already fiddled away an hour of my time on the phone.
Though the age of mobile tech has only just begun, Alter believes that signs point to a crisis. In 2000, Microsoft Canada found that our average attention span was 12 seconds long. By 2013, it was eight seconds long. Goldfish, by comparison, can go nine seconds. Our ability to empathise, a slow-burning skill that requires immediate feedback on how our actions affect others, suffers the more we disconnect from real-life interaction in favour of virtual interfacing. Recent studies found that this decline in compassion was more pronounced among young girls. One in three teenage girls say their peers are cruel online (only one in 11 boys agree).
Sure, communication technology has its positives. It’s efficient and cheap, and has the ability to teach creatively, raise money for worldwide philanthropic causes and to disseminate news under and over the reach of censors, but the corrosive culture of online celebrity, fake news and trolling must have a downside, too – namely that we can’t seem to get away from it.
There is a tinge of first world problems in Irresistible. World of Warcraft support groups; a product Alter writes about called Realism (a plastic frame resembling a screenless smartphone, which you can hold to temper your raging internet addiction, but can’t actually use); a spike in girl gaming addicts fuelled by Kim Kardashian’s Hollywood app – it’s difficult to see why these things should elicit much sympathy while one in 10 people worldwide still lack access to clean drinking water. This very western focus on desire and goal orientation is one that eastern thinkers might consider a wrong view of the world and its material attachments, but Alter’s pop-scientific approach still makes for an entertaining break away from one’s phone.
Chris, an independent contractor in his midfifties, knows a lot about what it means to deal with an unstable job market, especially during those moments when you are between gigs and don’t know when you are going to get the next one. There was a period in 2012 where he hadn’t had a contracting job for a while, and he had no idea how he was going to pay his rent. He realized he might be able to make his rent for another month, but if he didn’t get a job soon, he might be homeless. He decided that he needed to get his body ready for this very likely possibility. “I started to sleep on the floor a few hours each night, as long as I could take it, so I could get used to sleeping on a sidewalk or on the dirt. That’s how bad it looked. It just seemed hopeless,” Chris said. Out of the blue, a staffing agency based in India contacted him and offered him a contract in the Midwest, giving him enough money to make it through this bad patch. But this stark moment, in which he saw homelessness around the corner, is part and parcel of the downside of careers made up of temporary jobs. Chris responded to this possibility in the way that you are supposed to if you are constantly enhancing yourself. He began to train his body for living on the streets, realizing that he needed to learn how to sleep without a bed. He was determined to be flexible and to adapt to potential new circumstances. Seeing the self as a bundle of skills, in practice, means that for some people enhancing your skills involves training yourself to survive being homeless. This too is a logical outcome of our contemporary employment model.
I have studied how people are responding to this new way of thinking about work and what it means to be a worker. In the United States, people are moving away from thinking that when they enter into an employment contract, they are metaphorically renting their capacities to an employer for a bounded period of time. Many people are no longer using a notion of the self-as-rented-property as an underlying metaphor and are starting to think of themselves as though they are a business, although not everyone likes this new metaphor or accepts all its implications. When you switch to thinking about the employment contract as a business-to-business relationship, much changes—how you present yourself as a desirable employee, what it means to be a good employer, what your relationships with your coworkers should be like, the relationship between a job and a career, and how you prepare yourself for the future.
The self-as-business metaphor makes a virtue of flexibility as well as the practical ways people might respond in their daily lives to conditions of instability and insecurity. As Gina Neff points out in Venture Labor, the model encourages people to embrace risk as a positive, even sought-out, element of how they individually should craft a career. Each time you switch jobs, you take on risk. You don’t know how long you will have at a job before having to find a new one, and you can’t know how lucky you will be at landing that job or the next one. With every job transition, you also risk the salary that you might make. When there are gaps between jobs, some people find that they no longer experience a reliable, steady, upward trajectory in their salaries as they navigate the contemporary job market. Yet this is what you are now supposed to embrace as liberating.
Chris’s experience of cycling between employment and increasing periods of unemployment was a familiar story for me. I interviewed many people in their late forties to early sixties who had held a few permanent jobs early in their careers. But as companies increasingly focused on having a more transient workforce, these white-collar workers found their career trajectories veering from what they first thought their working life would look like. They thought that they might climb the organizational ladder in one or maybe even three companies over the course of their lifetime. Instead, they found that at some point in their mid to late forties, they started having shorter and shorter stints at different companies. The jobs, some would say, would last as long as a project. And as they grew older, the gaps between permanent jobs could start growing longer and longer. They struggled to make do, often using up their savings or selling their homes as they hoped to get the next job. Some started to find consulting jobs in order to make ends meet before landing the hoped-for permanent job, and then found themselves trapped on the consulting track—living only in the gig economy. True, not everyone felt like contracting was plan B, the option they had to take because of bad luck. In their book about contractors, Steve Barley and Gideon Kunda talk about the people they interviewed who actively chose this life. I met these people too, but they weren’t the majority of the job seekers I interviewed. Because I was studying people looking for a wide range of types of jobs, instead of studying people who already had good relationships with staffing agencies that provided consultants, I tended to meet people who felt their bad luck had backed them into becoming permanent freelancers.
These were people who encountered the self-as-business metaphor as a relatively new model, one they felt they actively had to learn in order to survive in today’s workplace, as opposed to the younger people I interviewed, many of whom had grown up with the self-as-business model as their primary way to understand employment.
When you think of the employment contract in a new way, you often have to revisit what counts as moral behavior, since older frameworks offer substantively different answers to questions of moral business practice. People have to decide what it means for a company to behave well under this new framework. Consider the self-as-business model. What does a good company do to help its workers enhance themselves as allied businesses? What are the limits in what a company should do? What counts as exploitation under this new model? Can businesses do things that count as exploitation or bad practices now that might not have been considered problems earlier, or not considered problems for the same reasons (and thus are regulated or resolved differently)? Businesses are certainly deeply concerned that workers’ actions both at work and outside of work could threaten the company’s brand, a new worry—but this is the tip of the iceberg. And the moral behavior of companies isn’t the only issue. Can workers exploit the companies they align with now or behave badly toward them in new ways?
Yet while these two metaphors—the self-as-property and the self-as-business—encourage people to think about employment in different ways, there are still similarities in how the metaphors ask people to think about getting hired. In both cases, the metaphors are focusing on market choices and asking people to operate by a market logic. Deciding whether to rent your capacities is a slightly different question than deciding whether to enter into a business alliance with someone, but in both instances you are expected to make a decision based on the costs and benefits involved in the decision. In addition, both metaphorical contracts presume that people enter into these contracts as equals, and yet this equality doesn’t last in practice once you are hired. In most jobs, the moment you are hired, you are in a hierarchical relationship; you are taking orders from a boss. Some aspects of working have changed because of this shift in frameworks, but many aspects have stayed the same.
Avoiding Corporate Nostalgia
I talked to people who were thoughtfully ambivalent about this transition in the metaphors underlying employment. They didn’t like their current insecurity, but they pointed out that earlier workplaces weren’t ideal either. Before, people often felt trapped in jobs they disliked and confronted with office politics that were alienating and demoralizing. Like many people today, they dealt with companies in which they were constantly encountering sexism and racism. Not everyone had equal opportunities to move into the jobs they wanted or to be promoted or acknowledged for the work that they did well.
However, as anthropologist Karen Ho points out, when you have a corporate ladder that excludes certain groups of people, you also have a structure that you can potentially reform so that these groups will in the future have equal opportunities. When you have no corporate ladder—when all you have is the uncertainty of moving between companies or between freelance jobs—you no longer have a clear structure to target if you want to make a workplace a fairer environment. If there is more gender equality in the US workplace these days than there was thirty years ago, it is in part because corporate structures were stable enough and reformers stayed at companies long enough that specific business practices could be effectively targeted and reformed. Part of what has changed about workplaces today is that there has been a transformation in the kinds of solutions available to solve workplace problems.
I see what people said to me about their preference for the kinds of guarantees and rights people used to have at work as a form of critique, not a form of nostalgia. People didn’t necessarily want to return to the way things used to be. When people talked to me nostalgically about how workplaces used to function, it was often because they valued the protections they used to be able to rely on and a system they knew well enough to be able to imagine how to change it for the better.
Many people I spoke to were very unhappy with the contemporary workplace’s increasing instability. They worried a great deal about making it financially through the longer and longer dry spells of unemployment between jobs. I talked to a man who was doing reasonably well that year as a consultant, and he began reflecting on what the future would hold for his children. He didn’t want them to follow in his footsteps and become a computer programmer, because too many people like him were contingent workers. He wanted them to have their own families and reasoned: “If everybody thinks they can be laid-off in two weeks, how would they feel confident enough to be a parent and know that they’ve got twenty-one years of consistent investment?”
It is not that the people I spoke to necessarily wanted older forms of work. What many wanted was stability. No matter how many times people are told to embrace being flexible, to embrace risk, in practice many of the people I spoke to did not actually want to live with the downsides of this riskier life. The United States does not have enough safety nets in place to protect you during the moments when life doesn’t work out. Because you are supposed to be looking for a new job regularly over the course of a lifetime, the occasions on which you might become dramatically downwardly mobile increase. There are more possible moments in which you have to enhance your skills at surviving on much less money or even living rough.
Changing Notions of What Counts as a Good Employment Relationship
When people are thought of as businesses, significant aspects of the employment relationship change. The genre repertoire you use to get a job alters to reflect this understanding as you use resumes, interview answers, and other genres to represent yourself as a bundle of business solutions that can address the hiring company’s market-specific temporary needs. Networking has changed—what it means to manage your social relationships so that you can stay employed has shifted. Some people I met are now arguing that you treat the companies you are considering joining in the same way you would treat any other business investment: in terms of the financial and career risk involved in being allied with this company.
It is not just that you evaluate jobs differently when you know that your job is temporary—deciding you can put up with some kinds of inconveniences but not others. Instead, you see the job as a short-term investment of time and labor, and the job had better pay off—perhaps by providing you with new skills, new networks, or a new way of framing your work experiences that makes you potentially more desirable for the next job. What if this new framework allows workers to have new expectations of their employers, or can safeguard workers’ interests in new ways? If you have this perspective, what are the new kinds of demands that employees could potentially make of employers?
For Tom, this new vision of self-as-business was definitely guiding how he was judging the ways companies treated him and what was appropriate behavior. I first contacted Tom because I heard through the grapevine that he refused to use LinkedIn. I was curious: by that point I had been doing research for seven months and had come across only one other person who was not using LinkedIn (and that person has since rejoined). We talked about his refusal, and he explained to me that LinkedIn didn’t seem to offer enough in return for his data. He clearly saw himself in an exchange relationship with LinkedIn, providing data for it to use and in return having access to the platform. Fair enough, I thought: as far as I can tell, the data scientists at LinkedIn and Facebook whom I have met see the exchange relationship in similar ways. Yet Tom decided that what LinkedIn offered wasn’t good enough. It wasn’t worth providing the company with his personal data. So I asked him about various other sites he might use in which the exchange might be more equitable, and he lit up talking about these other sites. Because Tom saw himself as a business, and viewed his data as part of his assets, he was ready to see LinkedIn as offering a bad business arrangement, one he didn’t want to accept. The self-as-business framework allowed him to see the use of certain platforms as instances of participating in business alliances. Some alliances he was willing to enter into, but not all.
This wasn’t his only encounter with a potentially exploitative business arrangement. He typically worked as an independent contractor, and a company asked him to come in for a job interview. When he got there, his interviewer explained that the position was a sweat equity job—Tom wouldn’t get a salary, but rather he would get equity in the company in exchange for his labor. “Okay,” he replied. “So what is your business model?” His interviewer was surprised and discomfited to be asked this. He refused to answer; employees don’t need to know the details of the company’s business model, he said. Tom felt that this was wrong; because he was being asked to be an investor in the company—admittedly with his labor instead of with money—he felt he should be given the same financial details that any other investor in a company would expect before signing on. It sounded to me like Tom’s interviewer was caught between two models: wanting the possible labor arrangements now available but unwilling to adjust whom he told what. The interviewer was not willing to follow through on the implications of this new model of employment, and as a result, Tom wasn’t willing to take the job. This is one way in which the self-as-business model offers a new way to talk about what counts as exploitation and as inappropriate behavior—behavior that might not have been an issue decades ago, or would have been a problem for different reasons (perhaps because a couple of decades ago, few people found sweat equity an acceptable arrangement).
But this new model also opens up the possibility that companies can have obligations to their employees that they did not have in the same way before. Since companies often don’t offer stable employment, they now provide a temporary venue for people to express their passion and to enhance themselves. Can this look like an obligation that businesses have to their workers? Perhaps—businesses could take seriously what it means to provide workers with the opportunities to enhance themselves. Michel Feher argues that if people are now supposed to see themselves as human capital, there should be a renewed focus on what good investment in people looks like—regardless of whether workers stay at a single company.
Should companies now help provide training for an employee’s next job? Throughout the twentieth century, companies understood that they had to provide their workers training in order for them to do their job at the company to the best of their ability. Internal training made sense both for the company’s immediate interests and for its ability to retain a supply of properly trained workers over the life of the company. Now that jobs are so temporary, who is responsible for training workers is a bit more up in the air. Yet some companies are beginning to offer support for workers to train, not for the benefit of the company, but so that workers can pursue their passion, should they discover that working at that company is not their passion. Amazon, for example, began in 2012 to provide training for employees who potentially want radically different jobs. Jeff Bezos explained in his 2014 letter to shareholders: “We pre-pay 95% of tuition for our employees to take courses for in-demand fields, such as airplane mechanic or nursing, regardless of whether the skills are relevant to a career at Amazon. The goal is to enable choice.” It makes sense for a company to support its workers learning skills for a completely different career only under the contemporary perspective that people are businesses following their passions in temporary alliances with companies.
This model of self-as-business might give workers some new language to protest business practices that keep them from enhancing themselves or entering into as many business alliances as they would like. For example, just-in-time scheduling currently prevents retail workers from getting enough hours to earn as much as they would like in a week. This type of scheduling means that workers only find out that week how many hours they are working and when. They can’t expect to have certain hours reliably free, and they need to be available whenever their employer would like them to work. Marc Doussard has found that good workers are rewarded with more hours at work. While white-collar workers might get better pay in end-of-the-year bonuses for seeming passionate, retail workers get more hours in the week. If workers make special requests to have certain hours, Doussard discovered, their managers will often punish them in response, by either giving them fewer hours to work or only assigning them to shifts they find undesirable. In practice, this means that workers have trouble holding two jobs or taking classes to improve themselves, as unpredictable shifts will inevitably conflict with each other or with class times. Predictable work hours, in short, are essential for being able to plan for the future—either to make sure you are working enough hours in the week to support yourself or to educate yourself for other types of jobs. Since companies are now insisting that people imagine themselves as businesses, what would happen if workers protested when companies don’t allow them to “invest” in themselves or when they are thwarted from having as many business partnerships (that is, jobs) as possible? Perhaps employees should now be able to criticize and change employers’ practices when they are prevented from being the best businesses they can be because of their employers’ workplace strategies.