A man and woman are awakened by the cooing alarm emanating from a massive wall-mounted touchscreen. A wall of floor-to-ceiling photochromic windows gradually brightens to reveal the morning sun kissing a lush estate garden. The scene shifts to the woman brushing her teeth while checking work email from a bathroom mirror screen. Moments later, two girls in school uniforms stand in a gleaming white kitchen; one of them is playing with a touchscreen-covered refrigerator door while the father makes an omelet on a sleek high-tech induction stovetop interacting with yet another touchscreen embedded in the countertop.
Amid the tinkling of an electric keyboard, this five-minute promotional video from Gorilla Glass manufacturer Corning walks us through the day of this fictional wealthy family in an idealized version of a Manhattan-like “smart” city impossibly devoid of traffic. Corning isn’t just selling its durable glass, but its vision of future society.
In Corningland, everyone is happy, wealthy and living out fruitful, productive lives, surrounded by products of benevolent technological disruption. This world has no unhappy Uber drivers, Airbnb-fueled gentrification doesn’t exist and iPads in the classroom actually help to educate children. When tech marketing acknowledges social or global problems, it does so only as a setup to show how technology can solve them.
“It’s like you have one class [in tech-focused promotional material] and the class that you have is upper middle,” Chris Birks, associate professor of digital media at Benedictine University, told Salon. “You see a utopian vision, not one necessarily of everyone being super rich, but doing better than they were because of the new technology we have, which is not the case.”
As 18th-century English writer Samuel Johnson famously said: “Promise, large promise, is the soul of an advertisement.” It’s natural for product promotions to either depict the world in utopian terms or to engage in what’s known as “constructive discontent,” in which a problem is highlighted in order to show that a product or service is its solution.
But unlike, say, environmentally unfriendly laundry detergent or sugary carbonated beverages, the underlying assumptions proposed by ads for Google Glass, Amazon Prime, Microsoft Cloud and other innovative products often go unquestioned.
“Technology advertising is especially interesting because what it’s doing is saying all technological advances are good and all technology is beneficial to the people who will be lucky enough to adopt it,” John Carroll, assistant professor of mass communication at Boston University, who specializes in advertising and media criticism, told Salon. “There’s nothing that says an advertisement needs to point out the downside of a product, but one of the issues here is that the counterbalancing argument that not all innovation is beneficial doesn’t get the kind of exposure that might be helpful to the public.”
Indeed, visit any technology-focused media outlet, or the tech sections of many news organizations, and you’ll see that “gadget porn” videos, hagiographic profiles of startup founders or the regurgitation of lofty growth expectations from Wall Street analysts vastly outnumber critical analyses of technological disruption. The criticisms that do exist tend to focus on ancillary issues, such as Silicon Valley’s dismal lack of workplace diversity, how innovation is upsetting norms in the labor market, or the class-based digital divide; all are no doubt important topics, but they don’t question the overall assumption that innovation and disruption are at worst harmless, if not benevolent.
Carroll says that it’s up to the media, schools and even religious institutions to counterbalance the presumptions made in advertising, whose goal, he points out, is often to portray happiness “through acquisition as opposed to achievement.”
This idea of selling innovation as a pathway to universal prosperity isn’t new. In the 1980s, South Korean technology companies LG and Samsung were churning out idealistic portrayals of technology’s role in creating what Su-Ji Lee, a faculty member at Seoul National University who studies design and culture, described in a paper published in November as “technological utopianism.” The idea that technology will save us all emerged in South Korea during the country’s rapid economic development following decades of poverty.
In these ads, Samsung and LG portrayed consumers as happy or bewildered children, innocent and helpless, while technology lorded benevolently over them, bringing to them (and to Korea itself) a new era of postwar prosperity.
In these advertisements, Lee writes, “the corporations . . . [play] the leading role of progress towards the future and enlightenment of people.” In these advertising campaigns, she continued, “the hero is the corporation rather than the human.”
Birks, who has studied utopian depictions in web advertising, says that while innovation can be off-putting and is certainly not always benevolent, innovators have always viewed themselves as disruptors.
“For better or worse, they are changing the world,” he said.
Like any sector, the tech industry isn’t going to underscore the negative implications of its innovations in its own promotional materials. Helped by more objective and less fawning tech coverage, people can decide how much technology they want in their lives. Perhaps it would help them to know that many of the tech industry’s most celebrated heroes, including the late Steve Jobs, were so wary of emerging technologies that they kept their own children away from their own gadgets.
In a financial report released last week, ride-hailing app company Uber reported a staggering $708 million loss for the first three months of the year. Since the company was founded eight years ago, it’s burned through almost half of the $15 billion in private venture capital that it has raised.
But despite the mounting losses, the departure of more than a dozen company executives over the past year and a string of controversies that would send the typical company plunging into an irreversible death spiral, Uber CEO and co-founder Travis Kalanick’s net worth is immense.
According to Forbes, Kalanick is worth $6.3 billion, making him the world’s 226th wealthiest billionaire and the 35th richest magnate of the global tech industry. That makes him richer than Wal-Mart heiress Christy Walton and Liu Qiangdong, founder and head of Chinese e-commerce and retail giant JD.com, which recently reported $11 billion in quarterly sales and its first profit as a publicly traded company.
So how does a 40-year-old computer programmer heading a beleaguered and unprofitable company have a net worth greater than the gross domestic product of Barbados?
The short answer is: hopes, dreams and aspirations. Specifically, those of Uber’s financial backers, who believe the gospel that Uber is on its way to killing the global taxi industry.
Under normal circumstances, a startup faces intense pressure to attain profitability within a short period of time. According to the U.S. Small Business Administration, 1 in 5 new businesses goes under in the first year, while nearly half fail by their fifth year. And according to a 2015 study from Babson and Baruch Colleges, the typical entrepreneur provides nearly 60 percent of the funding needed for his or her business.
But in the world of Silicon Valley, profitability takes a back seat as deep-pocketed investors throw money at long-term aspirations. For years private investors have assigned sky-high valuations to tech industry startups in a bid to find the next Amazon or Google nestled in some Northern California office building or garage. Billionaire investors, private equity firms and sovereign wealth fund managers are willing to take considerable risks that mushroom the wealth of founders and CEOs to astronomical levels.
Kalanick is a billionaire because private investors have assigned a value to Uber based on its future potential; that’s where the hopey-dreamy stuff comes in. The company is currently valued at a sky-high $68 billion, according to CB Insights, more than half the value of global aerospace behemoth Boeing. Because Kalanick is a primary shareholder of Uber, his net worth is boosted by this potentially irrational valuation, making him a “paper” billionaire.
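Paper wealth of this kind is simple arithmetic: a founder’s stake multiplied by whatever valuation the latest private funding round implies. A minimal sketch, using a hypothetical ownership share for illustration (Kalanick’s actual stake has not been disclosed):

```python
def paper_net_worth(company_valuation: float, ownership_share: float) -> float:
    """Implied 'paper' wealth: stake multiplied by the latest private valuation."""
    return company_valuation * ownership_share

# Illustration only: a $68 billion valuation and a hypothetical
# 10 percent founder stake imply roughly $6.8 billion on paper.
uber_valuation = 68e9
hypothetical_stake = 0.10
print(paper_net_worth(uber_valuation, hypothetical_stake))
```

The point of the sketch is that none of this is realized cash: change the valuation input and the “fortune” changes with it, which is exactly why a down round or failed IPO can vaporize a paper billionaire.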
Though what he does with his equity is not publicly known, Kalanick can potentially leverage this net worth to grow his personal fortune by using his stake in Uber to engage in other business endeavors, like buying real estate or investing in securities, all based on what private investors think his startup is worth.
In the typical scenario, an executive at a private equity firm considering an investment in a private startup might compare the numbers offered in a business plan with those of a comparable publicly traded company and examine operating costs, profit margins and overall capital structure. If the startup has a prospectus with targets that seem viable compared with those of an existing competitor, investors will have some degree of confidence that they’ll wind up with a windfall of profit once the company is acquired or files an initial public offering.
But because of the strange nature of the tech industry, there often isn’t a comparable company on which investors can base their assessments. When Amazon was raising money in the mid-1990s, there was no existing competitor with a similar business model, so early investors had to base their hopes on estimates and assumptions. Tellingly, very few individuals invested in Amazon before its initial public offering.
In retrospect, offering seed money to Amazon was a no-brainer. Internet commerce was growing by a staggering 2,300 percent a year in 1994, and Jeff Bezos saw that light early, famously drawing up a business plan during a road trip to Seattle. Venture capital firm Kleiner Perkins Caufield & Byers was one of the few private investors that gave Bezos money early on, and it reaped a fortune after Amazon went public in 1997, just as the dot-com boom was gathering steam.
But successes like those of Amazon and Google are few and far between. Often a private investor’s decision on whether to back a technology startup rests on assumptions, best estimates and industrywide averages of publicly traded companies in the same sector.
While private equity firms have special access to review a startup’s books, CEO-founders have much more latitude in selling their plans and manipulating their numbers than the heads of established publicly traded companies, who face more regulatory scrutiny.
Once startups make their way to the public markets through initial public offerings, founder-CEOs can continue to reap billions from their company’s valuations without the companies making a dime in profit. Tesla CEO Elon Musk, who’s worth an estimated $16 billion, the head of Snap, Evan Spiegel ($4.7 billion) and Twitter’s Jack Dorsey ($1.8 billion) are notable examples of rich CEOs who head unprofitable companies.
These founder-CEOs can spend good portions of their lives as billionaire heads of money-losing companies as long as investors keep believing that the companies may someday strike it rich. But there’s always a make-or-break point, and paper billionaires are always at risk of seeing their fortunes sink while their investors lose their shirts. One thing is almost certain: Even if Uber crashes and burns, Kalanick would likely walk away from the wreckage a very wealthy computer programmer.
On March 2, a disturbing report hit the desks of U.S. counterintelligence officials in Washington. For months, American spy hunters had scrambled to uncover details of Russia’s influence operation against the 2016 presidential election. In offices in both D.C. and suburban Virginia, they had created massive wall charts to track the different players in Russia’s multipronged scheme. But the report in early March was something new.
It described how Russia had already moved on from the rudimentary email hacks against politicians it had used in 2016. Now the Russians were running a more sophisticated hack on Twitter. The report said the Russians had sent expertly tailored messages carrying malware to more than 10,000 Twitter users in the Defense Department. Depending on the interests of the targets, the messages offered links to stories on recent sporting events or the Oscars, which had taken place the previous weekend. When clicked, the links took users to a Russian-controlled server that downloaded a program allowing Moscow’s hackers to take control of the victim’s phone or computer–and Twitter account.
As they scrambled to contain the damage from the hack and regain control of any compromised devices, the spy hunters realized they faced a new kind of threat. In 2016, Russia had used thousands of covert human agents and robot computer programs to spread disinformation referencing the stolen campaign emails of Hillary Clinton, amplifying their effect. Now counterintelligence officials wondered: What chaos could Moscow unleash with thousands of Twitter handles that spoke in real time with the authority of the armed forces of the United States? At any given moment, perhaps during a natural disaster or a terrorist attack, Pentagon Twitter accounts might send out false information. As each tweet corroborated another, and covert Russian agents amplified the messages even further afield, the result could be panic and confusion.
For many Americans, Russian hacking remains a story about the 2016 election. But there is another story taking shape. Marrying a hundred years of expertise in influence operations to the new world of social media, Russia may finally have gained the ability it long sought but never fully achieved in the Cold War: to alter the course of events in the U.S. by manipulating public opinion. The vast openness and anonymity of social media has cleared a dangerous new route for antidemocratic forces. “Using these technologies, it is possible to undermine democratic government, and it’s becoming easier every day,” says Rand Waltzman of the Rand Corp., who ran a major Pentagon research program to understand the propaganda threats posed by social media technology.
Current and former officials at the FBI, at the CIA and in Congress now believe the 2016 Russian operation was just the most visible battle in an ongoing information war against global democracy. And they’ve become more vocal about their concern. “If there has ever been a clarion call for vigilance and action against a threat to the very foundation of our democratic political system, this episode is it,” former Director of National Intelligence James Clapper testified before Congress on May 8.
If that sounds alarming, it helps to understand the battlescape of this new information war. As they tweet and like and upvote their way through social media, Americans generate a vast trove of data on what they think and how they respond to ideas and arguments–literally thousands of expressions of belief every second on Twitter, Facebook, Reddit and Google. All of those digitized convictions are collected and stored, and much of that data is available commercially to anyone with sufficient computing power to take advantage of it.
That’s where the algorithms come in. American researchers have found they can use mathematical formulas to segment huge populations into thousands of subgroups according to defining characteristics like religion and political beliefs or taste in TV shows and music. Other algorithms can determine those groups’ hot-button issues and identify “followers” among them, pinpointing those most susceptible to suggestion. Propagandists can then manually craft messages to influence them, deploying covert provocateurs, either humans or automated computer programs known as bots, in hopes of altering their behavior.
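At its core, the segmentation step described above is ordinary clustering and ranking. A toy sketch, with entirely invented user features and thresholds, of how a population might be split into subgroups and a group’s most engaged (and therefore most targetable) members identified:

```python
# Toy illustration of audience segmentation: bucket users by a crude
# trait, then rank one bucket's members by an engagement proxy.
# All features, names and thresholds here are invented for illustration.

# Each user: (id, political_score in [-1, 1], engagement_rate in [0, 1])
users = [
    ("u1", -0.9, 0.8), ("u2", -0.7, 0.2), ("u3", 0.8, 0.9),
    ("u4", 0.6, 0.1), ("u5", -0.8, 0.7), ("u6", 0.9, 0.6),
]

# Crude segmentation: split the population by the sign of the score.
segments = {"left": [], "right": []}
for uid, politics, engagement in users:
    key = "left" if politics < 0 else "right"
    segments[key].append((uid, engagement))

# Within a segment, the most engaged users are the likeliest
# amplifiers, so a propagandist would message them first.
targets = sorted(segments["left"], key=lambda u: u[1], reverse=True)
print([uid for uid, _ in targets])  # ['u1', 'u5', 'u2']
```

Real operations would replace the single invented score with thousands of commercially available behavioral signals, but the pipeline, segment then rank then message, is the same shape.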
That is what Moscow is doing, more than a dozen senior intelligence officials and others investigating Russia’s influence operations tell TIME. The Russians “target you and see what you like, what you click on, and see if you’re sympathetic or not sympathetic,” says a senior intelligence official. Whether and how much they have actually been able to change Americans’ behavior is hard to say. But as they have investigated the Russian 2016 operation, intelligence and other officials have found that Moscow has developed sophisticated tactics.
In one case last year, senior intelligence officials tell TIME, a Russian soldier based in Ukraine successfully infiltrated a U.S. social media group by pretending to be a 42-year-old American housewife and weighing in on political debates with specially tailored messages. In another case, officials say, Russia created a fake Facebook account to spread stories on political issues like refugee resettlement to targeted reporters they believed were susceptible to influence.
As Russia expands its cyberpropaganda efforts, the U.S. and its allies are only just beginning to figure out how to fight back. One problem: the fear of Russian influence operations can be more damaging than the operations themselves. Eager to appear more powerful than they are, the Russians would consider it a success if you questioned the truth of your news sources, knowing that Moscow might be lurking in your Facebook or Twitter feed. But figuring out if they are is hard. Uncovering “signals that indicate a particular handle is a state-sponsored account is really, really difficult,” says Jared Cohen, CEO of Jigsaw, a subsidiary of Google’s parent company, Alphabet, which tackles global security challenges.
Like many a good spy tale, the story of how the U.S. learned its democracy could be hacked started with loose lips. In May 2016, a Russian military intelligence officer bragged to a colleague that his organization, known as the GRU, was getting ready to pay Clinton back for what President Vladimir Putin believed was an influence operation she had run against him five years earlier as Secretary of State. The GRU, he said, was going to cause chaos in the upcoming U.S. election.
What the officer didn’t know, senior intelligence officials tell TIME, was that U.S. spies were listening. They wrote up the conversation and sent it back to analysts at headquarters, who turned it from raw intelligence into an official report and circulated it. But if the officer’s boast seems like a red flag now, at the time U.S. officials didn’t know what to make of it. “We didn’t really understand the context of it until much later,” says the senior intelligence official. Investigators now realize that the officer’s boast was the first indication U.S. spies had from their sources that Russia wasn’t just hacking email accounts to collect intelligence but was also considering interfering in the vote. Like much of America, many in the U.S. government hadn’t imagined the kind of influence operation that Russia was preparing to unleash on the 2016 election. Fewer still realized it had been five years in the making.
In 2011, protests in more than 70 cities across Russia had threatened Putin’s control of the Kremlin. The uprising was organized on social media by a popular blogger named Alexei Navalny, who used his blog as well as Twitter and Facebook to get crowds in the streets. Putin’s forces broke out their own social media technique to strike back. When bloggers tried to organize nationwide protests on Twitter using #Triumfalnaya, pro-Kremlin botnets bombarded the hashtag with anti-protester messages and nonsense tweets, making it impossible for Putin’s opponents to coalesce.
Putin publicly accused then Secretary of State Clinton of running a massive influence operation against his country, saying she had sent “a signal” to protesters and that the State Department had actively worked to fuel the protests. The State Department said it had just funded pro-democracy organizations. Former officials say any such operations–in Russia or elsewhere–would require a special intelligence finding by the President and that Barack Obama was not likely to have issued one.
After his re-election the following year, Putin dispatched his newly installed head of military intelligence, Igor Sergun, to begin repurposing cyberweapons previously used for psychological operations in war zones for use in electioneering. Russian intelligence agencies funded “troll farms,” botnet spamming operations and fake news outlets as part of an expanding focus on psychological operations in cyberspace.
It turns out Putin had outside help. One particularly talented Russian programmer who had worked with social media researchers in the U.S. for 10 years had returned to Moscow and brought with him a trove of algorithms that could be used in influence operations. He was promptly hired by those working for Russian intelligence services, senior intelligence officials tell TIME. “The engineer who built them the algorithms is U.S.-trained,” says the senior intelligence official.
Soon, Putin was aiming his new weapons at the U.S. Following Moscow’s April 2014 invasion of Ukraine, the U.S. considered sanctions that would block the export of drilling and fracking technologies to Russia, putting out of reach some $8.2 trillion in oil reserves that could not be tapped without U.S. technology. As they watched Moscow’s intelligence operations in the U.S., American spy hunters saw Russian agents applying their new social media tactics on key aides to members of Congress. Moscow’s agents broadcast material on social media and watched how targets responded in an attempt to find those who might support their cause, the senior intelligence official tells TIME. “The Russians started using it on the Hill with staffers,” the official says, “to see who is more susceptible to continue this program [and] to see who would be more favorable to what they want to do.”
On Aug. 7, 2016, the infamous pharmaceutical executive Martin Shkreli declared that Hillary Clinton had Parkinson’s. That story went viral in late August, then took on a life of its own after Clinton fainted from pneumonia and dehydration at a Sept. 11 event in New York City. Elsewhere people invented stories saying Pope Francis had endorsed Trump and Clinton had murdered a DNC staffer. Just before Election Day, a story took off alleging that Clinton and her aides ran a pedophile ring in the basement of a D.C. pizza parlor.
Congressional investigators are looking at how Russia helped stories like these spread to specific audiences. Counterintelligence officials, meanwhile, have picked up evidence that Russia tried to target particular influencers during the election season who they reasoned would help spread the damaging stories. These officials have seen evidence of Russia using its algorithmic techniques to target the social media accounts of particular reporters, senior intelligence officials tell TIME. “It’s not necessarily the journal or the newspaper or the TV show,” says the senior intelligence official. “It’s the specific reporter that they find who might be a little bit slanted toward believing things, and they’ll hit him” with a flood of fake news stories.
Russia plays in every social media space. The intelligence officials have found that Moscow’s agents bought ads on Facebook to target specific populations with propaganda. “They buy the ads, where it says sponsored by–they do that just as much as anybody else does,” says the senior intelligence official. (A Facebook official says the company has no evidence of that occurring.) The ranking Democrat on the Senate Intelligence Committee, Mark Warner of Virginia, has said he is looking into why, for example, four of the top five Google search results the day the U.S. released a report on the 2016 operation were links to Russia’s TV propaganda arm, RT. (Google says it saw no meddling in this case.) Researchers at the University of Southern California, meanwhile, found that nearly 20% of political tweets posted between Sept. 16 and Oct. 21, 2016, were generated by bots of unknown origin; investigators are trying to figure out how many were Russian.
As they dig into the viralizing of such stories, congressional investigations are probing not just Russia’s role but whether Moscow had help from the Trump campaign. Sources familiar with the investigations say they are probing two Trump-linked organizations: Cambridge Analytica, a data-analytics company hired by the campaign that is partly owned by deep-pocketed Trump backer Robert Mercer; and Breitbart News, the right-wing website formerly run by Trump’s top political adviser Stephen Bannon.
The congressional investigators are looking at ties between those companies and right-wing web personalities based in Eastern Europe who the U.S. believes are Russian fronts, a source familiar with the investigations tells TIME. “Nobody can prove it yet,” the source says. In March, McClatchy newspapers reported that FBI counterintelligence investigators were probing whether far-right sites like Breitbart News and Infowars had coordinated with Russian botnets to blitz social media with anti-Clinton stories, mixing fact and fiction when Trump was doing poorly in the campaign.
There are plenty of people who are skeptical of such a conspiracy, if one existed. Cambridge Analytica touts its ability to use algorithms to microtarget voters, but veteran political operatives have found its methods ineffective. Ted Cruz’s campaign used them during the primaries, and his staff ended up concluding they had wasted their money. Mercer, Bannon, Breitbart News and the White House did not answer questions about the congressional probes. A spokesperson for Cambridge Analytica says the company has no ties to Russia or individuals acting as fronts for Moscow and that it is unaware of the probe.
Democratic operatives searching for explanations for Clinton’s loss after the election investigated social media trends in the three states that tipped the vote for Trump: Michigan, Wisconsin and Pennsylvania. In each they found what they believe is evidence that key swing voters were being drawn to fake news stories and anti-Clinton stories online. Google searches for the fake pedophilia story circulating under the hashtag #pizzagate, for example, were disproportionately higher in swing districts, but not in districts likely to vote for Trump.
The Democratic operatives created a package of background materials on what they had found, suggesting the search behavior might indicate that someone had successfully altered voter behavior in key districts in key states. They circulated it to fellow party members who are up for re-election in 2018.
Even as investigators try to piece together what happened in 2016, they are worrying about what comes next. Russia claims to be able to alter events using cyberpropaganda and is doing what it can to tout its power. In February 2016, a Putin adviser named Andrey Krutskikh compared Russia’s information-warfare strategies to the Soviet Union’s obtaining a nuclear weapon in the 1940s, David Ignatius of the Washington Post reported. “We are at the verge of having something in the information arena which will allow us to talk to the Americans as equals,” Krutskikh said.
But if Russia is clearly moving forward, it’s less clear how active the U.S. has been. Documents released by former National Security Agency contractor Edward Snowden and published by the Intercept suggested that the British were pursuing social media propaganda and had shared their tactics with the U.S. Chris Inglis, the former No. 2 at the National Security Agency, says the U.S. has not pursued this capability. “The Russians are 10 years ahead of us in being willing to make use of” social media to influence public opinion, he says.
There are signs that the U.S. may be playing in this field, however. From 2010 to 2012, the U.S. Agency for International Development established and ran a “Cuban Twitter” network designed to undermine communist control on the island. At the same time, according to the Associated Press, which discovered the program, the U.S. government hired a contractor to profile Cuban cell phone users, categorizing them as “pro-revolution,” “apolitical” or “antirevolutionary.”
Much of what is publicly known about the mechanics and techniques of social media propaganda comes from a program at the Defense Advanced Research Projects Agency (DARPA) that the Rand researcher, Waltzman, ran to study how propagandists might manipulate social media in the future. In the Cold War, operatives might distribute disinformation-laden newspapers to targeted political groups or insinuate an agent provocateur into a group of influential intellectuals. By harnessing computing power to segment and target literally millions of people in real time online, Waltzman concluded, you could potentially change behavior “on the scale of democratic governments.”
In the U.S., public scrutiny of such programs is usually enough to shut them down. In 2014, news articles appeared about the DARPA program and the “Cuban Twitter” project. It was only a year after Snowden had revealed widespread monitoring programs by the government. The DARPA program, already under a cloud, was allowed to expire quietly when its funding ran out in 2015.
In the wake of Russia’s 2016 election hack, the question is how to research social media propaganda without violating civil liberties. The need is all the more urgent because the technology continues to advance. While today humans are still required to tailor and distribute messages to specially targeted “susceptibles,” in the future crafting and transmitting emotionally powerful messages will be automated.
The U.S. government is constrained in what kind of research it can fund by various laws protecting citizens from domestic propaganda, government electioneering and intrusions on their privacy. Waltzman has started a group called Information Professionals Association with several former information operations officers from the U.S. military to develop defenses against social media influence operations.
Social media companies are beginning to realize that they need to take action. Facebook issued a report in April 2017 acknowledging that much disinformation had been spread on its pages and saying it had expanded its security. Google says it has seen no evidence of Russian manipulation of its search results but has updated its algorithms just in case. Twitter claims it has diminished cyberpropaganda by tweaking its algorithms to block cleverly designed bots. “Our algorithms currently work to detect when Twitter accounts are attempting to manipulate Twitter’s Trends through inorganic activity, and then automatically adjust,” the company said in a statement.
In the meantime, America’s best option to protect upcoming votes may be to make it harder for Russia and other bad actors to hide their election-related information operations. When it comes to defeating Russian influence operations, the answer is “transparency, transparency, transparency,” says Rhode Island Democratic Senator Sheldon Whitehouse. He has written legislation that would curb the massive, anonymous campaign contributions known as dark money and the widespread use of shell corporations that he says make Russian cyberpropaganda harder to trace and expose.
But much damage has already been done. “The ultimate impact of [the 2016 Russian operation] is we’re never going to look at another election without wondering, you know, Is this happening, can we see it happening?” says Jigsaw’s Jared Cohen. By raising doubts about the validity of the 2016 vote and the vulnerability of future elections, Russia has achieved its most important objective: undermining the credibility of American democracy.
For now, investigators have added the names of specific trolls and botnets to their wall charts in the offices of intelligence and law-enforcement agencies. They say the best way to compete with the Russian model is by having a better message. “It requires critical thinkers and people who have a more powerful vision” than the cynical Russian view, says former NSA deputy Inglis. And what message is powerful enough to take on the firehose of falsehoods that Russia is deploying in targeted, effective ways across a range of new media? One good place to start: telling the truth.
–With reporting by PRATHEEK REBALA/WASHINGTON
Correction: The original version of this story misstated Jared Cohen’s title. He is CEO, not president.
The survey, published on Friday, concluded that Snapchat, Facebook and Twitter are also harmful. Among the five, only YouTube was judged to have a positive impact.
The four platforms have a negative effect because they can exacerbate children’s and young people’s body image worries, and worsen bullying, sleep problems and feelings of anxiety, depression and loneliness, the participants said.
The findings follow growing concern among politicians, health bodies, doctors, charities and parents about young people suffering harm as a result of sexting, cyberbullying and social media reinforcing feelings of self-loathing and even the risk of them committing suicide.
“It’s interesting to see Instagram and Snapchat ranking as the worst for mental health and wellbeing. Both platforms are very image-focused and it appears that they may be driving feelings of inadequacy and anxiety in young people,” said Shirley Cramer, chief executive of the Royal Society for Public Health, which undertook the survey with the Young Health Movement.
She demanded tough measures “to make social media less of a wild west when it comes to young people’s mental health and wellbeing”. Social media firms should bring in a pop-up image to warn young people that they have been using it a lot, while Instagram and similar platforms should alert users when photographs of people have been digitally manipulated, Cramer said.
The 1,479 young people surveyed were asked to rate the impact of the five forms of social media on 14 different criteria of health and wellbeing, including their effect on sleep, anxiety, depression, loneliness, self-identity, bullying, body image and the fear of missing out.
Instagram emerged with the most negative score. It rated badly for seven of the 14 measures, particularly its impact on sleep, body image and fear of missing out – and also for bullying and feelings of anxiety, depression and loneliness. However, young people cited its upsides too, including self-expression, self-identity and emotional support.
YouTube scored very badly for its impact on sleep but positively in nine of the 14 categories, notably awareness and understanding of other people’s health experience, self-expression, loneliness, depression and emotional support.
However, the leader of the UK’s psychiatrists said the findings were too simplistic and unfairly blamed social media for the complex reasons why the mental health of so many young people is suffering.
Prof Sir Simon Wessely, president of the Royal College of Psychiatrists, said: “I am sure that social media plays a role in unhappiness, but it has as many benefits as it does negatives. We need to teach children how to cope with all aspects of social media – good and bad – to prepare them for an increasingly digitised world. There is real danger in blaming the medium for the message.”
Tom Madders, director of campaigns and communications at the charity YoungMinds, said: “Prompting young people about heavy usage and signposting to support they may need, on a platform that they identify with, could help many young people.”
However, he also urged caution in how content accessed by young people on social media is perceived. “It’s also important to recognise that simply ‘protecting’ young people from particular content types can never be the whole solution. We need to support young people so they understand the risks of how they behave online, and are empowered to make sense of and know how to respond to harmful content that slips through filters.”
Parents and mental health experts fear that platforms such as Instagram can make young users feel worried and inadequate by facilitating hostile comments about their appearance or reminding them that they have not been invited to, for example, a party many of their peers are attending.
May, who has made children’s mental health one of her priorities, highlighted social media’s damaging effects in her “shared society” speech in January, saying: “We know that the use of social media brings additional concerns and challenges. In 2014, just over one in 10 young people said that they had experienced cyberbullying by phone or over the internet.”
In February, Jeremy Hunt, the health secretary, warned social media and technology firms that they could face sanctions, including through legislation, unless they did more to tackle sexting, cyberbullying and the trolling of young users.
New York Times columnist Farhad Manjoo published a disquieting essay this week titled “Google, Not the Government, Is Building the Future.” His article followed a familiar formula for a New York Times opinion column: point out a so-called problem and follow it up with an anodyne technocratic solution.
“The tech giants that are building the future would like some help changing the world,” Manjoo wrote, identifying massive private investment in artificial intelligence research as a problem. “We would be wise to chip in,” Manjoo said about public investment in such technology, “or let them take over the future for themselves.”
But there’s a glaring problem with Manjoo’s logic — apparent in the headline.
Manjoo fundamentally misunderstands the reason that the tech industry exists. The tech industry does not exist to “build the future.” It does not exist to change the world. It does not exist to “disrupt” or “innovate” or to cull from any of the biz-speak buzzwords that the tech industry uses to mask its sole intent — which is, of course, to turn a profit.
There’s some kind of reality distortion field that pervades Silicon Valley, one that even an esteemed Times (and former Salon) columnist sometimes can’t see through. Hype goggles removed, there is no fundamental difference between Google and Monsanto; between Apple and Exxon; between Facebook and Raytheon. These publicly held corporations pay people money to make things and try to make sure that the amount they pay their workers is less than those goods sell for. That’s it. Anything else the tech industry tells you it does — say, trying to convince you that it’s not out to profit but to make the world amazing — is false. All the world-change rhetoric around Silicon Valley is an act of branding that lets the tech industry get away with far more than it should.
I should note that I am not questioning Manjoo’s secondary point, which is about government investment in research. Manjoo proposed that, lest the tech industry become too dominant in artificial intelligence research, the federal government should invest more in it. “Technology giants, not the government, are building the artificially intelligent future,” he wrote. “And unless the government vastly increases how much it spends on research into such technologies, it is the corporations that will decide how to deploy them.”
In general, that’s a great idea. Private research tends to enrich only private interests, while government research has the potential to more equitably distribute the gains.
What else did Manjoo prescribe we do to combat this so-called problem? Later on in his essay, Manjoo declared that there are only “two ways to respond to the tech industry’s huge investments in the intelligent future.”
“On the one hand,” he wrote, “you could greet the news with optimism and even gratitude. The technologies that Google and other tech giants are working on will have a huge impact on society. … But the tech industry’s huge investments in A.I. might also be cause for alarm, because they are not balanced by anywhere near that level of investment by the government.”
His notion that there are only “two ways” to respond is not true. There are far more than two. We could break up Google with anti-trust lawsuits — a move that has been argued for, and which is probably past due. We could make Twitter and Facebook into public, worker-owned entities or government-regulated monopolies. Given that they basically function as public utilities, this would remove some of their negative externalities, like their questionable privacy policies and their use of brain hacking that stem from their status as for-profit companies.
Or we could reduce the length of time that patents last or demand that the maker of any product that was made with publicly funded science pays royalties to the American people. We could even just get rid of tax loopholes and use the money to invest in science or even provide a basic income.
The imagination of elites struggles to comprehend political alternatives that involve bottom-up, rather than top-down, power. Indeed, if you’re reading this and having trouble imagining an alternative future, know that examples abound. There are a number of worker-owned tech enterprises in the model of gig economy companies; “platform cooperativism” is the term for this. “Platform cooperativism hopes to harness the power of tech to democratize the economy and advance labor rights,” Tom Ladendorf wrote in a 2016 article for In These Times. Ladendorf cited TransUnion Car Service as an example of a worker-owned, unionized taxi service (Uber but without the exploitation!) and Stocksy, a stock photo company whose artists are also voting members and co-owners of the agency.
The point is, Silicon Valley and The New York Times both suffer from a severe lack of imagination when it comes to considering what is politically possible. Silicon Valley believes that the future will be created from above, by the wise scions who impose their technological will on us. Farhad Manjoo believes that’s fine, but perhaps the government might spend a teensy bit more on science. These are both technocratic visions of a future ruled by technocrats from above. But we won’t have a future to build if we can’t imagine an alternative to the status quo.
The growth of social inequality is manifested in every facet of American life, including the health and lifespans of individuals. Inequality in life expectancy has grown substantially since 1980, a new study published May 8 in the American Medical Association’s JAMA Internal Medicine confirms. The study documents “large—and increasing—geographic disparities among counties in life expectancy over the past 35 years.”
Researchers from the University of Washington’s Institute for Health Metrics and Evaluation (IHME) and Erasmus University in the Netherlands analyzed death records and population counts from all US counties.
Their study, “Inequalities in Life Expectancy Among US Counties, 1980 to 2014: Temporal Trends and Key Drivers,” drew data from the National Center for Health Statistics (NCHS), along with population counts from the US Census Bureau, NCHS, and the Human Mortality Database. This data set allows for a fuller picture of the scale of inequality in life expectancy than other recent research has shown. (The IHME maintains an interactive county-level map.)
The study found that in 2014 life expectancy at birth for both sexes at the national level was 79.1 years (76.7 years for men and 81.5 years for women). The combined average amounts to a 5.3-year growth in life expectancy over the 1980 average of 73.8 years.
[Chart: Life expectancy at birth]
Behind this overall growth in lifespan, however, the study found a staggering 20.1-year gap between the lowest and highest life expectancy among all US counties.
Three wealthy counties in central Colorado—Summit, Eagle, and Pitkin—recorded the longest life expectancies in the country, at 86 years on average. At the other end of the spectrum, several counties in South and North Dakota had the lowest life expectancy, along with “counties along the lower half of the Mississippi [the Delta region] and in eastern Kentucky and southwestern West Virginia,” the study found. These areas “saw little, if any, improvement” since 1980. Thirteen counties registered a decline in life expectancy.
In the Dakotas, several of the shortest-lived counties encompass Native American reservations. Oglala Lakota County in South Dakota, home to the Pine Ridge Native American reservation, had the lowest life expectancy in the country in 2014, at just 66.8 years. In a press release, the IHME researchers noted that this was lower than the life expectancies of Sudan and Iraq—countries that have been torn apart by brutal wars over the course of decades.
“Looking at life expectancy on a national level masks the massive differences that exist at the local level, especially in a country as diverse as the United States,” lead author Laura Dwyer-Lindgren of IHME explained. “Risk factors like obesity, lack of exercise, high blood pressure, and smoking explain a large portion of the variation in lifespans, but so do socioeconomic factors like race, education, and income.”
The study found that all counties saw a decline in the risk of dying before age 5 since 1980, attributable to improvements in health programs for infants and children. At the same time, the data showed an increased risk of death for adults aged 25-45 in 11.5 percent of counties, a phenomenon partially explained by the rise in suicides and drug addiction.
Although the research points to “a combination of socioeconomic and race/ethnicity factors, behavioral and metabolic risk factors, and health care factors” to account for the disparities in life expectancy, all of the factors intersect with poverty. It is not a coincidence that the poorest areas recorded the shortest life expectancies and the wealthiest areas recorded the longest lifespans.
Risk factors like obesity, diabetes, high blood pressure, smoking, and physical inactivity are highly correlated with poverty, unemployment and lack of education. In areas where the population lacks access to preventive care or cannot afford basic health care, chronic conditions become debilitating. Cancers go undetected, mental illness is undiagnosed, pregnancies are carried without adequate prenatal care, heart disease is untreated, and work-related injuries are managed with highly addictive pain medications instead of physical therapy and rest.
Of the 10 counties where lifespans fell the most since 1980, eight are in the coalfields region of eastern Kentucky: Owsley (-3 percent), Lee (-2 percent), Leslie (-1.9 percent), Breathitt (-1.4 percent), Clay (-1.3 percent), Powell (-1.1 percent), Estill (-1 percent) and Perry (-0.8 percent). Kiowa County, Oklahoma (-0.7 percent) and Perry County, Alabama (-0.6 percent) round out the list of counties where life expectancy declined the most.
[Chart: Change in life expectancy at birth]
Residents of Owsley County, Kentucky saw a decline in life expectancy from 72.4 in 1980 to 70.2 in 2014—comparable to the life expectancy in Kyrgyzstan or North Korea.
Owsley County was found by a 2016 Al Jazeera analysis to be the poorest white-majority county in the US. Some 45 percent of the county’s 4,500 residents, and 56.3 percent of children, live below the poverty threshold. Official unemployment stands at 10 percent, but with only 35 percent of the working age population included in the labor force, real unemployment is approaching 75 percent. Per capita income as of 2015 stands at $15,158, according to federal Census Bureau data.
As with the rest of the Appalachian coalfields region, the counties where life expectancy has dropped have seen every metric of economic and social well-being decline over the past several decades. Coal mining employment in eastern Kentucky has fallen to levels not seen in a century. With hundreds of mines shuttered, counties have lost so-called coal-severance tax revenue paid by companies per ton of coal extracted. Thousands of families have left in search of work, triggering a further collapse in the tax base for local governments, school districts, and social programs. The elimination of thousands of coal mining jobs has left mostly low-wage occupations for residents.
Lee County, second to Owsley in terms of the decline in life expectancy, is home to “America’s poorest white town”—Beattyville, Kentucky, the county seat. Beattyville has seen an explosion of opioid addiction since the closure of its few coalmines and decline of the oil and timber industries. The median household income in the town stands at $14,871, less than a third of the national median. Like its measure of life expectancy, Lee County’s household income is lower today than it was in 1980.
Kentucky and neighboring West Virginia have among the highest opioid overdose rates in the country, with the coalfields counties especially hard-hit. In 2013, drug overdoses accounted for 56 percent of all accidental deaths in Kentucky; the state’s death rate for overdoses is 29.9 per 100,000. In the eastern counties, emergency services are less able to reach and save overdose victims and health providers have struggled to afford lifesaving anti-opioid treatments like Narcan.
Are you addicted to technology? I’m certainly not. In my first sitting reading Adam Alter’s Irresistible, an investigation into why we can’t stop scrolling and clicking and surfing online, I only paused to check my phone four times. Because someone might have emailed me. Or texted me. One time I stopped to download an app Alter mentioned (research) and the final time I had to check the shares on my play brokerage app, Best Brokers (let’s call this one “business”).
Half the developed world is addicted to something, and Alter, a professor at New York University, informs us that, increasingly, that something isn’t drugs or alcohol, but behaviour. Recent studies suggest the most compulsive behaviour we engage in has to do with cyber connectivity; 40% of us have some sort of internet-based addiction – whether it’s checking your email (on average workers check it 36 times an hour), mindlessly scrolling through other people’s breakfasts on Instagram or gambling online.
Facebook was fun three years ago, Alter warns. Now it’s addictive. This tech zombie epidemic is not entirely our fault. Technology is designed to hook us, and to keep us locked in a refresh/reload cycle so that we don’t miss any news, cat memes or status updates from our friends. Tristan Harris, a “design ethicist” (whatever that is), tells the author that it’s not a question of willpower when “there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have”. After all, Steve Jobs gave the world the iPad, but made very sure his kids never got near one. Brain patterns of heroin users just after a hit and World of Warcraft addicts starting up a new game are nearly identical. The tech innovators behind our favourite products and apps understood that they were offering us endless portals to addiction. We’re the only ones late to the party.
Addiction isn’t inherent or genetic in certain people, as was previously thought. Rather, it is largely a function of environment and circumstance. Everyone is vulnerable; we’re all just a product or substance away from an uncomfortable attachment of some kind. And the internet, Alter writes, with its unpredictable but continuous loop of positive feedback, simulation of connectivity and culture of comparison, is “ripe for abuse”.
For one thing, it’s impossible to avoid; a recovering alcoholic can re-enter the slipstream of his life with more ease than someone addicted to online gaming – the alcoholic can avoid bars while the gaming addict still has to use a computer at work, to stay in touch with family, to be included in his micro-society.
Secondly, it’s bottomless. Everything is possible in the ideology of the internet – need a car in the middle of the night? Here you go. Want to borrow a stranger’s dog to play with for an hour, with no long-term responsibility for the animal? Sure, there’s an app for that. Want to send someone a message and see when it reaches their phone, when they read it and whether they like it? Even BlackBerry could do that.
Thirdly, it’s immersive – and even worse, it’s mobile. You can carry your addiction around with you. Everywhere. You don’t need to be locked in an airless room or unemployed in order to spend hours online. Moment, an app designed to track how often you pick up and look at your phone, estimates that the average smartphone user spends two to three hours on his or her mobile daily.
I downloaded Moment (the research I mentioned earlier) and uninstalled it after it informed me that, by noon, I had already fiddled away an hour of my time on the phone.
Though the age of mobile tech has only just begun, Alter believes that signs point to a crisis. In 2000, Microsoft Canada found that our average attention span was 12 seconds long. By 2013, it was eight seconds long. Goldfish, by comparison, can hold theirs for nine seconds. Our ability to empathise, a slow-burning skill that requires immediate feedback on how our actions affect others, suffers the more we disconnect from real-life interaction in favour of virtual interfacing. Recent studies found that this decline in compassion was more pronounced among young girls. One in three teenage girls say their peers are cruel online (only one in 11 boys agree).
Sure, communication technology has its positives. It’s efficient and cheap, and has the ability to teach creatively, raise money for worldwide philanthropic causes and to disseminate news under and over the reach of censors, but the corrosive culture of online celebrity, fake news and trolling must have a downside, too – namely that we can’t seem to get away from it.
There is a tinge of first world problems in Irresistible. World of Warcraft support groups; a product Alter writes about called Realism (a plastic frame resembling a screenless smartphone, which you can hold to temper your raging internet addiction, but can’t actually use); a spike in girl gaming addicts fuelled by Kim Kardashian’s Hollywood app – it’s difficult to see why these things should elicit much sympathy while one in 10 people worldwide still lack access to clean drinking water. This very western focus on desire and goal orientation is one that eastern thinkers might consider a wrong view of the world and its material attachments, but Alter’s pop-scientific approach still makes for an entertaining break away from one’s phone.