Do You Have A Living Doppelgänger?


Folk wisdom has it that everyone has a doppelganger; somewhere out there there’s a perfect duplicate of you, with your mother’s eyes, your father’s nose and that annoying mole you’ve always meant to have removed. Now the BBC reports that last year Teghan Lucas set out to test the hypothesis that everyone has a living double. Armed with a public collection of photographs of U.S. military personnel and the help of colleagues from the University of Adelaide, Lucas painstakingly analyzed the faces of nearly four thousand individuals, measuring the distances between key features such as the eyes and ears. Next she calculated the probability that two people’s faces would match. What she found was good news for the criminal justice system, but likely to disappoint anyone pining for their long-lost double: the chances of sharing just eight dimensions with someone else are less than one in a trillion. Even with 7.4 billion people on the planet, that works out to only a one-in-135 chance that a single pair of doppelgangers exists.
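Those two figures can be sanity-checked with a little arithmetic. The sketch below is not from Lucas’s study; it assumes, purely for illustration, that every pair of people independently matches on all eight measurements with the same tiny probability, so the number of matching pairs is roughly Poisson distributed, and it back-solves the per-pair probability implied by the quoted one-in-135 figure.

```python
import math

# Back-of-the-envelope consistency check of the figures quoted above.
# Assumption (not from the study): each pair of people matches on all eight
# facial measurements independently with the same probability p, so the number
# of matching pairs is approximately Poisson with mean = (number of pairs) * p.

population = 7.4e9                                  # people on the planet, as quoted
pairs = population * (population - 1) / 2           # ~2.7e19 possible pairs

p_at_least_one_pair = 1 / 135                       # reported chance of any matching pair

# P(at least one pair) = 1 - exp(-pairs * p)  =>  p = -ln(1 - P) / pairs
implied_p = -math.log(1 - p_at_least_one_pair) / pairs

print(f"possible pairs:               {pairs:.2e}")
print(f"implied per-pair match prob.: {implied_p:.2e}")      # roughly 3e-22
print(f"below one in a trillion?      {implied_p < 1e-12}")  # True
```

On those assumptions the implied per-pair probability comes out around 3 × 10⁻²², comfortably consistent with the “less than one in a trillion” figure quoted above.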

Lucas says this study has provided much-needed evidence that facial anthropometric measurements are as accurate as fingerprints and DNA when it comes to identifying a criminal. “The use of video surveillance systems for security purposes is increasing and as a result, there are more and more instances of criminals leaving their ‘faces’ at a scene of a crime,” says Ms Lucas. “At the same time, criminals are getting smarter and are avoiding leaving DNA or fingerprint traces at a crime scene.” But that’s not the whole story. The study relied on exact measurements; if your doppelganger’s ears are 59mm but yours are 60mm, your likeness wouldn’t count. “It depends whether we mean ‘lookalike to a human’ or ‘lookalike to facial recognition software,'” says David Aldous. If fine details aren’t important, suddenly the possibility of having a lookalike looks a lot more realistic. That comes down to the way faces are stored in the brain: more like a map than an image. To ensure that friends and acquaintances can be recognized in any context, the brain employs an area known as the fusiform gyrus to tie all the pieces together.

This holistic ‘sum of the parts’ perception is thought to make recognizing friends a lot more accurate than it would be if their features were assessed in isolation. Using this type of analysis, and judging by the number of celebrity look-alikes out there, unless you have particularly rare features, you may have literally thousands of doppelgangers. “I think most people have somebody who is a facial lookalike unless they have a truly exceptional and unusual face,” says Francois Brunelle, who has photographed more than 200 pairs of doppelgangers for his I’m Not a Look-Alike project. “I think in the digital age which we are entering, at some point we will know because there will be pictures of almost everyone online.”

 

https://science.slashdot.org/story/16/07/15/2039233/do-you-have-a-living-doppelgnger

THE RISE OF FACEBOOK AND ‘THE OPERATING SYSTEM OF OUR LIVES’

Siva Vaidhyanathan, UVA’s Robertson Professor of Media Studies, is the director of the University of Virginia’s Center for Media and Citizenship. (Photo by Dan Addison)

Recent changes announced by social media giant Facebook have roiled the media community and raised questions about privacy. The company’s updates include giving higher news-feed priority to posts made by friends and family, and testing new end-to-end encryption inside its Messenger service.

With Facebook now boasting more than a billion users worldwide, both of these updates are likely to affect the way the world communicates. Even before the company’s news-feed algorithm change, a 2016 study from the Pew Research Center found that approximately 44 percent of American adults regularly read news content through Facebook.

UVA Today sat down with Siva Vaidhyanathan, the director of the University of Virginia’s Center for Media and Citizenship and Robertson Professor of Media Studies, to discuss the impact of these changes and the evolving role of Facebook in the world. Naturally, the conversation first aired on Facebook Live.

Excerpts from the conversation and the full video are available below.

Q. What is the change to Facebook’s News Feed?

A. Facebook has announced a different emphasis within its news feed. Now of course, your news feed is much more than news. It’s all of those links and photos and videos that your friends are posting and all of the sites that you’re following. So that could be an interesting combination of your cousin, your coworker, the New York Times and Fox News all streaming through.

A couple of years ago, the folks that run Facebook recognized that Facebook was quickly becoming the leading news source for many millions of Americans, and considering that they have 1.6 billion users around the world, and it’s growing fast, there was a real concern that Facebook should take that responsibility seriously. So one of the things that Facebook did was cut a deal with a number of publishers to be able to load up their content directly from Facebook servers, rather than just link to an original content server. That provided more dependable loading, especially of video, but also faster loading, especially through mobile.

But in recent weeks, Facebook has sort of rolled back on that. They haven’t removed the partnership program that serves up all that content in a quick form, but they’ve made it very clear that their algorithms that generate your news feed will be weighted much more heavily to what your friends are linking to, liking and commenting on, and what you’ve told Facebook over the years you’re interested in.

This has a couple of ramifications. One, it sort of downgrades the project of bringing legitimate news into the forefront by default, but it also makes sure that we are more likely to be rewarded with materials that we’ve already expressed an interest in. We’re much more likely to see material from publications and our friends we reward with links and likes. We’re much more likely to see material linked by friends with whom we have had comment conversations.

This can generate something that we call a “filter bubble.” A gentleman named Eli Pariser wrote a book called “The Filter Bubble.” It came out in 2011, and the problem he identified has only gotten worse since it came out. Facebook is a prime example of that because Facebook is in the business of giving you reasons to feel good about being on Facebook. Facebook’s incentives are designed to keep you engaged.

Q. How will this change the experience for publishers?

A. The change, or the announcement of the change, came about because a number of former Facebook employees told stories about how Facebook had guided their decisions to privilege certain items in the news feed in ways that seemed to diminish the content and arguments of conservative media.

Well, Facebook didn’t want that reputation, obviously. Facebook would rather not be mixed up or labeled as a champion of liberal causes over conservative causes in the U.S. That means that Facebook is still going to privilege certain producers of media – those producers of media that have signed contracts with Facebook. The Guardian is one, the New York Times is another. There are dozens of others. Those are still going to be privileged in Facebook’s algorithm, and among the news sources you encounter, you’re more likely to see those news sources than those that have not engaged in an explicit contract with Facebook. So Facebook is making editorial decisions based on their self-interest more than anything, and not necessarily on any sort of political ideology.

Q. You wrote “The Googlization of Everything” in 2011. Since then, have we progressed to the “Facebookization” of everything?

A. I wouldn’t say that it’s the Facebookization of everything – and that’s pretty clumsy anyway. I would make an argument that if you look at five companies that don’t even seem to do the same thing – Google, Facebook, Microsoft, Apple and Amazon – they’re actually competing in a long game, and it has nothing to do with social media. It has nothing to do with your phone, nothing to do with your computer and nothing to do with the Internet as we know it.

They’re all competing to earn our trust and manage the data flows that they think will soon run through every aspect of our lives – through our watches, through our eyeglasses, through our cars, through our refrigerators, our toasters and our thermostats. So you see companies – all five of these companies from Amazon to Google to Microsoft to Facebook to Apple – are all putting out products and services meant to establish ubiquitous data connections, whether it’s the Apple Watch or the Google self-driving car or whether it’s that weird obelisk that Amazon’s selling us [the Echo] that you can talk to or use to play music and things. These are all part of what I call the “operating system of our lives.”

Facebook is interesting because it’s part of that race. Facebook, like those other companies, is trying to be the company that ultimately manages our lives, in every possible way.

We often hear a phrase called the “Internet of things.” I think that’s a misnomer because what we’re talking about, first of all, is not like the Internet at all. It’s going to be a closed system, not an open system. Secondly, it’s not about things. It’s actually about our bodies. The reason that watches and glasses and cars are important is that they lie on and carry human bodies. What we’re really seeing is the full embeddedness of human bodies and human motion in these data streams and the full connectivity of these data streams to the human body.

So the fact that Facebook is constantly tracking your location, is constantly encouraging you to be in conversation with your friends through it – at every bus stop and subway stop, at every traffic light, even though you’re not supposed to – is a sign that they are doing their best to plug you in constantly. That phenomenon, and it’s not just about Facebook alone, is something that’s really interesting.

Q. What are the implications of that for society?

A. The implications of the emergence of an operating system of our lives are pretty severe. First of all, consider that we will consistently be outsourcing decision-making like “Turn left or turn right?,” “What kind of orange juice to buy?” and “What kind of washing detergent to buy?” All of these decisions will be guided by, if not determined by, contracts that these data companies will be signing with consumer companies.

… We’re accepting short-term convenience, a rather trivial reward, and deferring long-term harms. Those harms include a loss of autonomy, a loss of privacy and perhaps even a loss of dignity at some point. … Right now, what I am concerned about is the notion that we’re all plugging into these data streams and deciding to allow other companies to manage our decisions. We’re letting Facebook manage what we get to see and which friends we get to interact with.


10 Takeaways About the Gig Economy That Has Pushed Europe to Say No to These Predatory Capitalists

Europe knows what undermines economic stability, says author Steven Hill.

Photo Credit: rmnoa357 / Shutterstock.com

The gig economy, exemplified by ride-service companies like Uber, housing rental companies like Airbnb and freelance brokers like Upwork, is not a harbinger of an empowering new tech-driven economy. These are predatory corporations using age-old practices to exploit workers, dodge government oversight and evade taxes.

Those are the takeaways from the new book Raw Deal: How The Uber Economy and Runaway Capitalism are Screwing American Workers, by Steven Hill, a fellow at the New America Foundation. Hill, who splits his time between California and Europe, intriguingly notes that unlike the United States, the continent has not embraced the gig economy mainstays in its midst.

AlterNet talked with Hill about how Europe—particularly Germany, where he has lived for most of 2016—has not bought into the sector’s claims that it is somehow futuristic, different and above government regulation and public accountability.

“I first started looking at this myself because I live in San Francisco and I’ve been watching the impact of technology on jobs, and specific companies like Uber, TaskRabbit, and Upwork, which I think in some ways is the most alarming of these companies,” he said, referring to a firm that farms out freelance work to the lowest global bidders. “Having studied that here, and these companies are now operating in other countries as well, I wanted to see how other countries are reacting to the pressures that they’re getting from these companies.”

What follows are 10 of Hill’s observations about the burgeoning gig economy.

1. Not following the laws, anywhere. “The first thing for Americans to realize is that a lot of these companies aren’t following the laws, aren’t paying taxes. Uber comes in and just doesn’t follow local laws for taxis. Not only in terms of background checks, insurance laws, qualifications for drivers, but even in terms of paying livery taxes. Airbnb, same thing. Upwork and TaskRabbit, these companies aren’t following minimum wage laws.”

2. Europeans are not okay with that. “Americans sort of accept some of this. But the first thing you notice when you go to France or Germany, they say, ‘No, this is a taxi service. I don’t care that you are using technology to connect a driver to a passenger.’ In fact, Uber changed its name to Uber Technologies just so they could say they were a technology company, not a taxi company. Across Europe, people say, So what? We don’t care that you’re a technology company. It is the same service you offer, therefore we expect you to follow the same laws that we have for taxis. We expect you to follow the same laws that we have for hotels or any of these other services and platforms.

“So there is this interesting and I would say refreshing perspective: Of course, you’re going to follow the laws. You don’t get out of following laws just because you think you are something new and different.”

3. Anything but a new business model. “Airbnb, for example, will tell you, ‘Look, we want to pay taxes—the hotel and occupancy tax that hotels pay—and we want to follow local laws, but we’re in 34,000 cities and we just haven’t had time to research all of these cities and their laws, and we’re going to get to it.’

The thing that’s rather remarkable is they are asserting this new corporate right that you can set up operations first and figure out the local laws and taxes later on. If Boeing, for example, were to set up an airplane assembly plant here, and said we’ll figure out the taxes and local laws later on, we’d live in a very different world. Corporations don’t get to set up where they want and figure out the laws later on. But that’s what these companies are insisting on being able to do.”

4. Europeans are trying to reel this in. “In Berlin, where I was living for the last five months, they passed a law two years ago saying, ‘There’s going to be the new rules we are going to insist on [for Airbnb rentals]. We are not going to have it take effect for two years so everyone can have a chance to get ready.’

Well, the law just went into effect on May 1, and it basically says that you cannot own multiple properties, you can’t rent out your whole house—you can only rent out a spare room in your house or apartment; they put a percentage on it, and you have to register with the city. They are not looking to shut it down completely. Most people realize that the core idea of Airbnb, that you can allow people to rent out a spare room and make some extra money, is okay. But the problem is a lot of it has been completely taken over by professional real estate operatives, some of whom have dozens of properties. So cities like Berlin and Copenhagen are trying to return it to that earlier core business, where you can still have someone rent out a spare room, but you’re not going to create an opportunity for professionals to circumvent local laws or use Airbnb as a massive loophole.”

5. Pushback in U.S. lags far behind. “San Francisco just passed a law for tougher Airbnb rules. They already have a law that folks have to register, which has been in effect for almost a year and a half. This new law is going to put the burden on Airbnb and says you can’t list folks who haven’t registered. If you do list folks who have not registered, you are going to be fined… Other cities are looking at it.

“The problem, in terms of Airbnb, is that many of these laws have turned out to be unenforceable. Unless you have the data from Airbnb, it’s hard to know how many nights they’re renting out, how much they’re charging, these sorts of things. Airbnb is the only one that has that data, and has refused to give it up, despite requests. So this is a big problem in terms of enforcing any of these laws.”

6. Europe is increasing enforcement. “In Berlin, in contrast, they’re ramping up enforcement. They are actually going to have people go door to door. They are encouraging neighbors to start reporting on neighbors who are illegally operating as Airbnb hotels. That’s interesting, as it taps into a whole history of Germany reporting on neighbors, going back to the Stasi [secret police] and everything else. It’s just a lot of different approaches that are happening. We’ll see if any of them are effective.”

7. Europeans are more concerned. “The public in Europe, in general, expect corporations to be better citizens. There is more of a social dimension to the economy there. So when companies like Uber come in and say, Hey, we’re going to give you a new service, people take a second look.

First of all, in Europe, taxi service is pretty good, whereas here in the U.S. taxi service is not. There aren’t enough taxis on the road because of medallions and those sorts of things. But in Europe, there is not as big a need. There’s good public transit. That makes Europeans react by saying, If you want to operate here, that’s fine, but you have to follow the law. You can’t do it on the backs of your drivers, cutting their pay, and those sorts of things.”

8. The gig economy hasn’t exploded there. “It hasn’t hit there like it has here, partly because they insist that you follow the law. So Uber and Lyft and TaskRabbit and these other companies say it is a bigger uphill battle for them. They haven’t tried to push into there as much as in India or China. Europeans are aware of the gig economy. Some call it the digital economy. Some just call it the Internet economy. They’re aware that some of the ways that these companies operate really strike at the core of their social model.

“When you talk to business people in Germany and say, This is what these companies are; this is how they operate; they want a labor force they can turn on and off like a garden hose, they understand what that means for the economy. They have good labor relations and it’s part of their economic successes… So that is a barrier to entry for these kinds of companies.”

9. But European business people are worried. “The companies that are most concerning to them are a company like Upwork. It is based in San Francisco, in Silicon Valley, and has 250 employees who use technology to oversee 10 million freelancers from around the world. If you go on that platform, you see workers from Germany saying, I want 60 euros an hour for this job. And you can see a worker from Thailand or India saying, I’ll take two euros an hour. Some of those workers in Thailand or India are very skilled, have access to technology and can do the job.

“So, if you’re a business person in Germany, you feel torn because you can get someone for a lot less, but understand it undermines something crucial about the German economy, and the relationship between employer and employee, and the basis for their economic success. That’s the dilemma.”

10. American workers are less protected. “A lot of Americans who are working in these jobs just need the work. They know the jobs are not very good. If you look at Uber’s own numbers, it shows that 50 percent of their drivers last a year on their platform and then move on. So it’s a temporary job. It’s something they do because they can’t find anything better.”

The biggest takeaway 

Perhaps the biggest takeaway from listening to Hill is that the gig economy flourishes when the public faces a mix of economic anxieties and an absence of government oversight. On both those counts, the U.S. sits between Europe, which has resisted these exploitive firms, and Asia, where they are able to rapidly expand. Indeed, if Americans didn’t have as many economic anxieties at home, there would be less of a need to work longer hours and wrest more income from their assets.

Steven Rosenfeld covers national political issues for AlterNet, including America’s retirement crisis, democracy and voting rights, and campaigns and elections. He is the author of “Count My Vote: A Citizen’s Guide to Voting” (AlterNet Books, 2008).

ALTERNET

Why Hillary Clinton Should be Prosecuted for Reckless Abuses of National Security


Yesterday FBI Director James Comey described Hillary Clinton’s email communications as Secretary of State as “extremely careless.” His statement undermined the defenses Clinton had put forward: the FBI found 110 emails on Clinton’s server that were classified at the time they were sent or received; eight contained information classified at the highest level, “top secret,” at the time they were sent. That stands in direct contradiction to Clinton’s repeated insistence that she never sent or received any classified emails.

The FBI investigation found all the elements necessary to prove a felony violation of Title 18, Section 793(f) of the federal penal code, a law ensuring proper protection of highly classified information. Director Comey said that Clinton was “extremely careless” and “reckless” in handling such information. Contrary to the implications of the FBI statement, the law does not require showing that Clinton intended to harm the United States, only that she acted with gross negligence.

The recent State Department Inspector General (IG) report was clear that Clinton blithely disregarded safeguards to protect the most highly classified national security information and that she included on her unprotected email server the names of covert CIA officers. The disclosure of such information is a felony under the Intelligence Identities Protection Act.

While the FBI is giving Clinton a pass for not “intending” to betray state secrets, her staff has said Secretary Clinton stated she used her private email system because she did not want her personal emails to become accessible under FOI laws. This is damning on two counts – that she intended to disregard the protection of security information, and that she had personal business to conceal.

This is not the end of the Clinton email issues. Department of Justice officials filed a motion in federal court on June 29th requesting a 27-month delay in producing correspondence between former Secretary of State Hillary Clinton’s four top aides and officials with the Clinton Foundation and Teneo Holdings, a public relations firm that Bill Clinton helped launch.

Hillary Clinton deleted 30,000 emails, claiming they were ‘personal’. This is roughly equal to the volume of her emails designated as department business. If half of an employee’s email volume is for their personal business, they are not using their time for their job.

If Secretary Clinton was conducting personal business for her family Foundation through the Secretary of State’s Office, this is a matter the American public deserves to know about. As Secretary of State, Hillary Clinton routinely granted lucrative special contracts, weapons deals and government partnerships to Clinton Foundation donors. The Secretary of State’s office should not be a place to conduct private back room business deals.

The blurring of the lines between Clinton family private business and national security matters in the Secretary of State Office underscores evidence on many other fronts that Hillary Clinton is serving the 1%, not we the people.

Hillary Clinton’s failure to protect critical security information is not the only part of her tenure as Secretary that deserves the term reckless; so do her decision to pursue catastrophic regime change in Libya and her support for the overthrow of democratically elected governments in Ukraine and Honduras.


Corporate Globalization Has Been a Wrecking Ball to the American Dream

If the American Dream isn’t working for them, why should anyone, anywhere, believe it will work for their own children?

Photo Credit: pixabay.com

This piece originally appeared at Local Futures.

Implicit in all the rhetoric promoting globalization is the premise that the rest of the world can and should be brought up to the standard of living of the West, and America in particular. For much of the world the American Dream—though a constantly moving target—is globalization’s ultimate endpoint.

But if this is the direction globalization is taking the world, it is worth examining where America itself is headed. A good way to do so is to take a hard look at America’s children, since so many features of the global monoculture have been in place their whole lives. If the American Dream isn’t working for them, why should anyone, anywhere, believe it will work for their own children?

As it turns out, children in the US are far from “confident, self-reliant, tolerant, generous, and future-oriented.” One indication of this is that more than 8.3 million American children and adolescents require psychiatric drugs; over 2 million are on anti-depressants, and another 2 million are on anti-anxiety drugs. The age groups for which these drugs are prescribed are shockingly young: nearly half a million children 0-3 years old are taking drugs to combat anxiety.[1]

Most people in the “less developed” world will find it hard to imagine how a toddler could be so anxiety-ridden that they need psychiatric help. Equally difficult to fathom are many other symptoms of social breakdown among America’s children. Eating disorders, for example: the incidence of anorexia, bulimia and other eating disorders has doubled since the 1960s, and girls are developing these problems at younger and younger ages.[2]

If eating disorders are the bane of America’s young girls, violence is a more common problem for its boys. Consider the fact that there have been more than 150 school shootings in the US since 1990, claiming 165 lives. The youngest killer? A six-year-old boy.[3]

Sometimes the violence is directed inward, with suicide the result. In America today, suicide is the third leading cause of death for 15- to 24-year olds. In 2013, 17 percent of US high school students seriously considered suicide during the preceding year.[4]

What has made America’s children so insecure and troubled? A number of causes are surely involved, most of which can be linked to the global economy. For example, as corporations scour the world for bigger subsidies and lower costs, jobs move with them, and families as well: the typical American moves eleven times during their life, repeatedly severing connections with relatives, neighbors and friends.[5]

Within almost every family, the economic pressures on parents systematically rob them of time with even their own children. Americans put in longer hours than workers in any other industrialized country, with many breadwinners working two or more jobs just to make ends meet.[6] Increasing numbers of women are in the workforce, so there are no adults left at home; young children are relegated to day-care centers, while older children are left in the company of video games, the internet, or the corporate sponsors of their favorite television shows. According to a 2010 study of American children, the average 8- to 10-year-old spends nearly eight hours a day with various media; older children and teenagers spend more than 11 hours a day with media. Not surprisingly, time spent in nature—something essential for our well-being—has all but disappeared: only 10 percent of American children spend time outside on a daily basis.[7]

America’s screen-obsessed children no longer have flesh-and-blood role models—parents and grandparents, aunts and uncles, friends and neighbors—to look up to. Instead they have media and advertising images: rakish movie stars and music idols, steroid-enhanced athletes and airbrushed supermodels. Children who strive to emulate the manufactured “perfection” of these role models are left feeling insecure and inadequate. This is one reason cosmetic surgery is on the increase among America’s children. According to the president of the American Academy for Facial Plastic Surgery, “the more consumers are inundated with celebrity images via social media, the more they want to replicate the enhanced, re-touched images that are passed off as reality.” What’s more, he adds, “we are seeing a younger demographic than ever before.”[8]

It seems clear that what is often called ‘American culture’ is no longer a product of the American people: it is instead an artificial consumer culture created and projected by corporate advertising and media. This consumer culture is fundamentally different from the diverse cultures that for millennia were shaped by climate, topography, and the local biota—by a dialogue between humans and the natural world. This is a new phenomenon, something that has never happened before: a culture determined by technological and economic forces, rather than human and ecological needs. It is not surprising that American children, many of whom seem to “have everything,” are so unhappy: like their parents, their teachers and their peers, they have been put on a treadmill that is ever more stressful and competitive, ever more meaningless and lonely.

As the globalization juggernaut continues to advance, the number of victims worldwide is growing exponentially. Millions of children from Mongolia to Patagonia are today targeted by a fanatical and fundamentalist campaign to bring them into the consumer culture. The cost is massive in terms of self-rejection, psychological breakdown and violence. Like American children they are bombarded with sophisticated marketing messages telling them that this brand of make-up will inch them closer to perfection, that this brand of sneakers will make them more like their sports hero. But in the global South—where the ideal is often blue-eyed, blonde, and Western—children are even more vulnerable. It’s no wonder that sales of dangerous bleach to lighten the skin, and contact lenses advertised as “the color of eyes you wish you were born with,” are booming across the South.[9]

This psychological impoverishment is accompanied by a massive rise in material poverty. Even though more than 46 million Americans—nearly 15 percent of the population—live in poverty,[10] globalization aims to replicate the American model of development across the global South. Among the results are the elimination of small farmers and the gutting of rural communities, with hundreds of millions of people drawn into sweatshops or unemployment in rapidly growing urban slums. Meanwhile, many of those whose ways of life are threatened by the forces of globalization are turning to fundamentalism, even terrorism.

The central hope of the American Dream—that our children will have a better life than we do—seems to have vanished. Many people, in fact, no longer believe that our children really have any future at all.

Nonetheless policymakers insist that globalization is bringing a better world for everyone. How can there be such a gap between the cheerleading rhetoric and the lives of real people?

Part of the disconnect results from the way globalization’s promoters measure “progress.” The shallowest definition compares the modern consumer cornucopia with what was available 50 or 100 years ago—as though electronic gadgets and plastic gewgaws are synonymous with happiness and fulfillment. More often the baseline for comparison is the Dickensian period of the early industrial revolution, when exploitation and deprivation, pollution and squalor were rampant. From this starting point, our child-labor laws and 40-hour workweek look like real progress. Similarly, the baseline in the global South is the immediate post-colonial period, with its uprooted cultures, poverty, over-population and political instability. Based on the misery of these contrived starting points, political leaders can argue that our technologies and our economic system have brought a far better world into being, and that globalization will bring similar benefits to the “wretched, servile, fatalistic and intolerant human beings” in the remaining “undeveloped” parts of the world.

In reality, however, globalization is a continuation of a broad process that started with the age of conquest and colonialism in the South and the enclosures and the Industrial Revolution in the North. From then on a single economic system has relentlessly expanded, taking over other cultures, other peoples’ resources and labor. Far from elevating those people from poverty, the globalizing economic system has systematically impoverished them.

If there is to be any hope of a better world, it is vital that we connect the dots between “progress” and poverty. Erasing other cultures—replacing them with an artificial culture created by corporations and the media they control—can only lead to an increase in social breakdown and poverty. Even in the narrowest economic terms, globalization means continuing to rob, rather than enrich, the majority. According to a recent report by Oxfam, the world’s richest 62 people now have more wealth than the poorest half of the global population combined. Their assets have risen by more than $500 billion since 2010, while the bottom 3.5 billion people have become poorer by $1 trillion.[11] This is globalization at work.

While globalization systematically widens the gap between rich and poor, attempting in the name of equity to globalize the American standard of living is a fool’s errand. The earth is finite, and global economic activity has already outstripped the planet’s ability to provide resources and absorb wastes. When the average American uses 32 times more resources and produces 32 times more waste than the average resident of the global South, it is a criminal hoax to promise that development can enable everyone to live the American Dream.[12]

The spread of globalization has been profoundly destructive to people’s ability to survive in their own cultures, in their own place on the earth. It has even been destructive to those considered to be its most privileged beneficiaries. Continuing down this corporate-determined path will only lead to further social, psychological and environmental breakdown. Whether they know it or not, America’s children are telling us we need to go in a very different direction.

 

Helena Norberg-Hodge is founder and director of Local Futures (International Society for Ecology and Culture). A pioneer of the “new economy” movement, she has been promoting an economics of personal, social and ecological well-being for more than thirty years. She is the producer and co-director of the award-winning documentary, The Economics of Happiness, and is the author of Ancient Futures: Learning from Ladakh. She was honored with the Right Livelihood Award for her groundbreaking work in Ladakh, and received the 2012 Goi Peace Prize for contributing to “the revitalization of cultural and biological diversity, and the strengthening of local communities and economies worldwide.”

Steven Gorelick is Managing Programs Director at Local Futures (International Society for Ecology and Culture). He is the author of Small is Beautiful, Big is Subsidized (pdf), co-author of Bringing the Food Economy Home, and co-director of The Economics of Happiness. His writings have been published in The Ecologist and Resurgence magazines. He frequently teaches and speaks on local economics around the US.

http://www.alternet.org/local-peace-economy/how-globalization-impacts-american-dream?akid=14341.265072.CtYp-J&rd=1&src=newsletter1058139&t=8


Happy All the Time

As biometric tracking takes over the modern workplace, the old game of labor surveillance is finding new forms.

By Lynn Stuart Parramore

Call them soldiers, call them monks, call them machines: so they were but happy ones, I should not care.
– Jeremy Bentham, 1787

Housed in a triumph of architectural transparency in Cambridge, Massachusetts, is the Media Lab complex at MIT, a global hub of human-machine research. From the outside of its newest construction, you can see clear through the building. Inside are open workspaces, glittering glass walls, and screens, all encouragement for researchers to peek in on one another. Everybody always gets to observe everybody else.

Here, computational social scientist Alex Pentland, known in the tech world as the godfather of wearables, directs a team that has created technology applied in Google Glass, smart watches, and other electronic or computerized devices you can wear or strap to your person. In Pentland’s quest to reshape society by tracking human behavior with software algorithms, he has discovered you don’t need to look through a glass window to find out what a person is up to. A wearable device can trace subliminal signals in a person’s tone of voice, body language, and interactions. From a distance, you can monitor not only movements and habits; you can begin to surmise thoughts and motivations.

In the mid-2000s Pentland invented the sociometric badge, which looks like an ID card and tracks and analyzes the wearer’s interactions, behavior patterns, and productivity. It became immediately clear that the technology would appeal to those interested in a more hierarchical kind of oversight than that enjoyed by the gurus of MIT’s high-tech playgrounds. In 2010 Pentland cofounded Humanyze, a company that offers employers the chance to find out how employee behavior affects their business. It works like this: A badge hanging from your neck embedded with microphones, accelerometers, infrared sensors, and a Bluetooth connection collects data every sixteen milliseconds, tracking such matters as how far you lean back in your chair, how often you participate in meetings, and what kind of conversationalist you are. Each day, four gigabytes’ worth of information about your office behavior is compiled and analyzed by Humanyze. This data, which then is delivered to your supervisor, reveals patterns that supposedly correlate with employee productivity.
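To get a rough sense of scale for those figures, here is a quick arithmetic sketch; the eight-hour wear time is an assumption, and none of the numbers below come from Humanyze itself, only from the figures quoted above.

```python
# Rough consistency check of the badge figures quoted above: one reading every
# sixteen milliseconds and about four gigabytes of data per employee per day.
# The eight-hour wear time is an assumption, not a Humanyze specification.

SAMPLE_INTERVAL_S = 0.016                   # one reading every sixteen milliseconds
WORKDAY_S = 8 * 60 * 60                     # assumed eight hours of wear per day

samples_per_day = WORKDAY_S / SAMPLE_INTERVAL_S      # 1.8 million readings
bytes_per_day = 4 * 1024**3                           # "four gigabytes' worth"

print(f"readings per day:          {samples_per_day:,.0f}")
print(f"implied bytes per reading: {bytes_per_day / samples_per_day:,.0f}")  # ~2,400 bytes
```

In other words, the quoted numbers imply a payload on the order of a couple of kilobytes per sixteen-millisecond reading.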

Image: Discovery of Achilles on Skyros, by Nicolas Poussin, c. 1649. © Museum of Fine Arts, Boston / Juliana Cheney Edwards Collection / Bridgeman Images.

Humanyze CEO Ben Waber, a former student of Pentland’s, has claimed to take his cues from the world of sports, where “smart clothes” are used to measure the mechanics of a pitcher’s throw or the launch of a skater’s leap. He is determined to usher in a new era of “Moneyball for business,” a nod to baseball executive Billy Beane, whose data-driven approach gave his team, the Oakland Athletics, a competitive edge. With fine-grained biological data points, Waber promises to show how top office performers behave—what happy, productive workers do.

Bank of America hired Humanyze to use sociometric badges to study activity at the bank’s call centers, which employ more than ten thousand souls in the United States alone. By scrutinizing how workers communicated with one another during breaks, analysts came to the conclusion that allowing people to break together, rather than in shifts, reduced stress. This was indicated by voice patterns picked up by the badge, processed by the technology, and reported on an analyst’s screen. Employees grew happier. Turnover decreased.

The executives at Humanyze emphasize that minute behavior monitoring keeps people content. So far, the company has focused on loaning the badges to clients for limited study periods, but as Humanyze scales up, corporate customers may soon be able to use their own in-house analysts and deploy the badges around the clock.

Workers of the world can be happy all the time.

The optimists’ claim: technologies that monitor every possible dimension of biological activity can create faster, safer, and more efficient workplaces, full of employees whose behavior can be altered in accordance with company goals.

Widespread implementation is already underway. Tesco employees stock shelves with greater speed when they wear armbands that register their rate of activity. Military squad leaders are able to drill soldiers toward peak performance with the use of skin patches that measure vital signs. On Wall Street, experiments are ongoing to monitor the hormones of stock traders, the better to encourage profitable trades. According to cloud-computing company Rackspace, which conducted a survey in 2013 of four thousand people in the United States and United Kingdom, 6 percent of businesses provide wearable devices for workers. A third of the respondents expressed readiness to wear such devices, which are most commonly wrist- or head-mounted, if requested to do so.

The life of spies is to know, not be known.
– George Herbert, 1621

Biological scrutiny is destined to expand far beyond on-the-job performance. Workers of the future may look forward to pre-employment genetic testing, allowing a business to sort potential employees based on disposition toward anything from post-traumatic stress disorder to altitude sickness. Wellness programs will give employers reams of information on exercise habits, tobacco use, cholesterol levels, blood pressure, and body mass index. Even the monitoring of brain signals may become an office commonplace: at IBM, researchers bankrolled by the military are working on functional magnetic-resonance imaging, or fMRI, a technology that can render certain brain activities into composite images, turning thoughts into fuzzy external pictures. Such technology is already being used in business to divine customer preferences and detect lies. In 2006 a San Diego start-up called No Lie MRI expressed plans to begin marketing the brain-scanning technology to employers, highlighting its usefulness for employee screening. And in Japan, researchers at ATR Computational Neuroscience Laboratories have a dream-reading device in the pipeline that they claim can predict what a person visualizes during sleep. Ryan Hurd, who serves on the board of the International Association for the Study of Dreams, says such conditioning could be used to enhance performance. While unconscious, athletes could continue to practice; creative types could boost their imaginations.

The masterminds at Humanyze have grasped a fundamental truth about surveillance: a person watched is a person transformed. The man who invented the panopticon—a circular building with a central inspection tower that has a view of everything around it—gleaned this, too. But contrary to most discussions of the “all-seeing place,” the idea was conceived not for the prison, but for the factory.

Jeremy Bentham is usually credited with the idea of the panopticon, but it was his younger brother, Samuel Bentham, who saw the promise of panoptical observation in the 1780s while in the service of Grigory Potemkin, a Russian officer and statesman. Potemkin, mostly remembered for creating fake villages to fool his lover, Catherine the Great, was in a quandary: his factories, which churned out everything from brandy to sailcloth, were a hot managerial mess. He turned to Samuel, a naval engineer whose inventions for Potemkin also  included the Imperial Vermicular, a wormlike, segmented 250-foot barge that could navigate sinuous rivers. Samuel summoned skilled craftsmen from Britain and set them to the hopeless task of overseeing a refractory mass of unskilled peasant laborers who cursed and fought in a babel of languages. Determined to win Potemkin’s favor, he hit on a plan for a workshop at a factory in Krichev that would allow a person, or persons, to view the entire operation from a central inspector’s lodge “in the twinkling of an eye,” as his brother Jeremy would later write in a letter. The inspector could at once evoke the omnipresence of God and the traditional Russian noble surrounded by his peasants. Laborers who felt themselves to be under the constant eye of the inspector would give up their drunken brawls and wife-swapping in favor of work.

War thwarted Samuel’s plans for the Krichev factory, eventually forcing him to return home to Britain, where, in 1797, he drew up a second panoptical scheme, a workhouse for paupers. Six years earlier, in 1791, Jeremy had borrowed Samuel’s idea to publish a work on the panoptical prison, built so that guards could see all of the inmates while the latter could only presume they were being watched, fostering “the sentiment of a sort of omnipresence” and “a new mode of obtaining power of mind over mind.” In America, the Society for Alleviating the Miseries of Public Prisons adopted panoptical elements for the Eastern State Penitentiary in Philadelphia, adding solitary confinement with the idea of delivering the maximum opportunity for prisoner repentance and rehabilitation. Visiting the prison in 1842, Charles Dickens noted that its chief effect on inmates was to drive them insane.

Before the days of industrialization, employers had little use for surveillance schemes. The master craftsman lived in his workshop, and his five to ten apprentices, journeymen, and hirelings occupied the same building or adjacent cottages, taking their behavioral cues from his patriarchal authority. The blacksmith or master builder or shoemaker interacted with his underlings in a sociable and informal atmosphere, taking meals with them, playing cards, even tippling rum and cider. Large-scale manufacturing swept this all away. Workmen left the homes of their employers; by the early decades of the nineteenth century, the family-centered workplace—where employers provided models of behavior, food, and lodging—was becoming a thing of memory.

Image: Sacks full of Stasi files in the former Ministry for State Security headquarters, Berlin, 1996. © SZ Photo / Joker / David Ausserhofer / Bridgeman Images.

Proto-industrialists found that their new employees, an ever-shifting mass of migrants and dislocated farm boys, found ample opportunities for on-the-job drunkenness, inattention, and fractious behavior. In his classic work A Shopkeeper’s Millennium, historian Paul E. Johnson observes that in America an answer to this problem was found in the Protestant temperance movement just then blowing righteous winds across the Northeast. Managers found that the revival and the Sunday school could foster strict internal values that made constant supervision less important. Workers, if properly evangelized, would turn willingly from the bottle to the grueling business of tending power-driven machines. God would do the monitoring as He does it best—from the inside.

Unfortunately, God’s providential eye tended to blink in the absence of regular churchgoing. So in the 1880s and 1890s, mechanical engineer Frederick Winslow Taylor displaced God with scientific management systems, devising precise methods of judging and measuring workers to ensure uniformity of behavior and enhanced efficiency. Taylor’s zeal to scrutinize every aspect of work in the factory led to such inventions as a keystroke monitor that could measure the speed of a typist’s fingers. His methods of identifying underperforming cogs in the industrial machine became so popular that Joseph Wharton, owner of Bethlehem Steel, incorporated Taylor’s theories into the bachelor’s degree program in business he had founded at the University of Pennsylvania. Harvard University soon created a new master’s degree in business administration, the MBA, that focused on studying Taylorism.

Workplace surveillance didn’t evolve much beyond Taylor’s ideas until closed-circuit television brought prying to heights unimagined by the brothers Bentham. In 1990 the Olivetti Research Laboratory, in partnership with the University of Cambridge Computer Laboratory, announced an exciting new workplace-spying project aptly named Pandora. The Pandora’s Box processor handled video streams and controlled real-time data paths that allowed supervisors to peek in on remote workstations. An improved system launched in 1995 was named Medusa, after the Greek monster who turned victims to stone with her gaze.

By the early twenty-first century, electronic monitoring in the workplace became the de facto norm, with bosses peering into emails, computer files, and browser histories. From the lowest-rung laborers to the top of the ivory tower, no employee was safe. In 2013 Harvard University was found to have snooped in the email accounts of sixteen faculty deans for the source of a media leak during a cheating scandal. Global positioning systems using satellite technology, which came to maturity by 1994 and grew popular for tracking delivery trucks, opened new methods of watching. In 2013 Dennis Gray, owner of Accurid Pest Solutions, was able to satisfy a hunch that workers were straying from their tasks. He quietly installed GPS tracking software on the company-issued smartphones of five of the company’s drivers; one indeed was found to be meeting up with a woman during work hours. In 2015 Myrna Arias, a sales executive for money-transfer service Intermex, objected to her employer monitoring her personal time and turned off the GPS device that tracked her around the clock. She was fired.

Secrecy lies at the very core of power.
– Elias Canetti, 1960

Surveillance technology stirs up profound questions as to who may observe whom, under what conditions, for how long, and for what purpose. The argument for monitoring the vital signs of an airline pilot, whose job routinely holds lives at stake, may seem compelling, but less so for a part-time grocery store clerk. In a 1986 executive order President Ronald Reagan, expressing concern about the “serious adverse effects” of drug use on the workforce, which resulted in “billions of dollars of lost productivity each year,” instituted mandatory drug testing for all safety-sensitive executive-level and civil-service federal employees. A noble mission, perhaps, but prone to expand like kudzu: by 2006 it entangled up to three out of four jobseekers, from would-be Walmart greeters to washroom attendants, who were forced to submit to such degradations as peeing in a plastic jar, sometimes under the watchful eye of a lab employee. Thirty-nine percent could expect random tests after they were hired, as well as dismissal for using substances on or off the job and regardless of whether their use impaired performance. Job applicants often accordingly changed their behavior; one scheme involved ordering dog urine through the mail to fool the bladder inspectors.

At the 2014 Conference on Human Factors in Computing Systems, held in Toronto, participants noticed an unusual sign affixed to restroom doors: “Behavior at these toilets is being recorded for analysis.” It had been placed there by Quantified Toilets (slogan: “every day. every time.”), whose mission, posted on its website, states: “We analyze the biological waste process of buildings to make better spaces and happier people.” At the conference, Quantified Toilets was able to provide a real-time feed of analytical results. These piss prophets of the new millennium could tell if participants were pregnant, whether or not they had a sexually transmitted disease, or when they had drugs or alcohol in their system. (One man showed up with a blood-alcohol level of 0.0072 percent and a case of gonorrhea.)

Quantified Toilets, it turned out, was not a real company, but a thought experiment for the conference, designed to provoke discussion about issues of privacy in a world where every facial expression, utterance, heartbeat, and trip to the bathroom can be captured to generate a biometric profile. Workplace surveillance, after all, is a regulatory Wild West; employees have few rights to privacy on the job. A court order may be necessary for police to track a criminal suspect, but no such niceties prevent an employer from exploring the boundaries of new technologies. History suggests that abuses will be irresistible: in 1988 the Washington, DC, police department admitted using urine tests to screen female employees for pregnancy without their knowledge.

Biosurveillance has strong allies, including its own Washington lobbying firm, the Secure Identity and Biometrics Association, committed to bringing new products to government, commercial, and consumer spheres. The VeriChip, a human-implantable microchip using radio frequency identification, allows scanners in range of the implant to access records and information about a person. It received FDA approval in 2004 (though the company later merged and became PositiveID). In Mexico, eighteen workers at the attorney general’s office were required to have the rice-grain-sized chip injected under their skin to gain access to high-security areas. One anti-RFID crusader has called the technology the “mark of the beast,” as predicted in the Book of Revelation.

In the film Gattaca, set in the not-too-distant future, biometric surveillance is deployed to distinguish between genetically engineered superior humans and genetically natural inferior humans, who are forced to do menial jobs. We are quickly approaching such a world: employers who are able to identify—and create—workers with superior biological profiles are already turning the science fiction into reality.

Humanyze’s corporate materials insist that privacy is a top priority. The names of employees are stored separately from behavioral information, and individual conversations aren’t recorded, just the metadata—a distinction familiar to those following the story of the widespread phone-tapping program of the NSA. Still, it requires little imagination to see how employers could use such data for more extensive and rigorous surveillance of individual workers. A benign boss in the present may use data to decide the arrangements of break rooms and cubicles to enhance worker satisfaction and, in so doing, improve productivity. But in the future the same data may be retrieved and analyzed for unimagined possibilities. Observation is versatile in its application. In the face of capitalist demands for high performance and efficiency, abstract ideas like privacy and freedom can come to sound quaint and sentimental.

As optic and electronic watching give way to biosurveillance, the architecture of the Bentham brothers’ panopticon melts away and becomes internalized. The self-watching employee, under her own unwavering gaze, pre-adjusts behavior according to a boss’ desire. Biosurveillance is sold as a tool for boosting happiness, but it also promotes a particular idea of what happiness is—which probably looks a lot more like workers who don’t make trouble than like squeaky wheels or even like the champions of disruption touted in Silicon Valley. The power to make you happy is also the power to define your happiness.

With his mantra “the medium is the message,” Marshall McLuhan stressed that the changes wrought upon us by technology may be more significant than the information revealed by it. Devices that monitor our minds and movements become part of who we are. Back in the Cold War, the Western press routinely derided Communist-bloc news clips of happy workers toiling away, singing songs in the mills and fields. One anti-communist propaganda animation from 1949, Meet King Joe, depicts a Chinese peasant smiling only because he is unaware of the paltriness and restrictions of his conditions. Such promos, perhaps, were just ahead of their time. Modern capitalism is poised to do them one better.

Secrets are rarely betrayed or discovered according to any program our fear has sketched out.
– George Eliot, 1860

Despite its name, a company like Humanyze—which brings forth the next frontier of biometric, device-driven surveillance—can make us less ourselves, more like who we’re supposed to be according to the objectives of those who track our metrics. When we can feel, even on a cellular level, the gaze of the inspector, the invisible hand becomes the invisible eye, guiding as it does best, from within. Perhaps we will find true what we once feared: that contented workers are all alike. But so long as we are happy, who cares?

The future of American policing is the stuff of dystopian science fiction

Departments nationwide are adopting wild new technologies capable of destroying what’s left of our personal privacy

This piece originally appeared on TomDispatch.

Can’t you see the writing on the touchscreen? A techno-utopia is upon us. We’ve gone from smartphones at the turn of the twenty-first century to smart fridges and smart cars. The revolutionary changes to our everyday life will no doubt keep barreling along. By 2018, so predicts Gartner, an information technology research and advisory company, more than three million employees will work for “robo-bosses” and soon enough we — or at least the wealthiest among us — will be shopping in fully automated supermarkets and sleeping in robotic hotels.

With all this techno-triumphalism permeating our digitally saturated world, it’s hardly surprising that law enforcement would look to technology — “smart policing,” anyone? — to help reestablish public trust after the 2014 death of Michael Brown in Ferguson, Missouri, and the long list of other unarmed black men killed by cops in Anytown, USA. The idea that technology has a decisive role to play in improving policing was, in fact, a central plank of President Obama’s policing reform task force.

In its report, released last May, the Task Force on 21st Century Policing emphasized the crucial role of technology in promoting better law enforcement, highlighting the use of police body cameras in creating greater openness. “Implementing new technologies,” it claimed, “can give police departments an opportunity to fully engage and educate communities in a dialogue about their expectations for transparency, accountability, and privacy.”

Indeed, the report emphasized ways in which the police could engage communities, work collaboratively, and practice transparency in the use of those new technologies. Perhaps it won’t shock you to learn, however, that the on-the-ground reality of twenty-first-century policing looks nothing like what the task force was promoting. Police departments nationwide have been adopting powerful new technologies that are remarkably capable of intruding on people’s privacy, and much of the time these are being deployed in secret, without public notice or discussion, let alone permission.

And while the task force’s report says all the right things, a little digging reveals that the feds not only aren’t putting the brakes on improper police use of technology, but are encouraging it — even subsidizing the misuse of the very technology the task force believes will keep cops honest. To put it bluntly, a techno-utopia isn’t remotely on the horizon, but its flipside may be.

Getting Stung and Not Even Knowing It

Shemar Taylor was charged with robbing a pizza delivery driver at gunpoint. The police got a warrant to search his home and arrested him after learning that the cell phone used to order the pizza was located in his house. How the police tracked down the location of that cell phone is what Taylor’s attorney wanted to know.

The Baltimore police detective called to the stand in Taylor’s trial was evasive. “There’s equipment we would use that I’m not going to discuss,” he said. When Judge Barry Williams ordered him to discuss it, he still refused, insisting that his department had signed a nondisclosure agreement with the FBI.

“You don’t have a nondisclosure agreement with the court,” replied the judge, threatening to hold the detective in contempt if he did not answer. And yet he refused again. In the end, rather than reveal to the court the technology that had located Taylor’s cell phone, prosecutors decided to withdraw the evidence, jeopardizing their case.

And don’t imagine that this courtroom scene was unique or even out of the ordinary these days. In fact, it was just one sign of a striking nationwide attempt to keep an invasive, constitutionally questionable technology from being scrutinized, whether by courts or communities.

The technology at issue is known as a “Stingray,” a brand name for what’s generically called a cell-site simulator or IMSI catcher. By mimicking a cell phone tower, this device, developed for overseas battlefields, gets nearby cell phones to connect to it. It operates a bit like the children’s game Marco Polo. “Marco,” the cell-site simulator shouts out, and every cell phone on that network in the vicinity replies, “Polo, and here’s my ID!”

Thanks to this call-and-response process, the Stingray knows both which cell phones are in the area and where they are. In other words, it gathers information not only about a specific suspect but also about any bystanders in the area. While the police may indeed use this technology to pinpoint a suspect’s location, casting such a wide net also creates the potential for many kinds of constitutional abuses — for instance, sweeping up the identities of every person attending a demonstration or a political meeting. Some Stingrays are capable of collecting not only cell phone ID numbers but also the numbers those phones have dialed and even phone conversations. In other words, the Stingray is a technology that potentially opens the door for law enforcement to sweep up information that not so long ago wouldn’t have been available to it.
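For readers who want the mechanics spelled out, here is a minimal, purely illustrative sketch of that sweep. It models nothing about real Stingray internals or the cellular protocol; the classes, fields, and identifiers are invented stand-ins meant only to show why every phone in range, not just the suspect’s, ends up in the log.

```python
# Purely illustrative sketch of the "Marco Polo" sweep described above.
# Nothing here models real Stingray internals or the cellular protocol;
# the classes, fields, and IDs are invented stand-ins.
from dataclasses import dataclass

@dataclass
class Phone:
    subscriber_id: str       # the identifier a handset reveals when it attaches
    owner: str
    location: tuple          # (lat, lon) of the handset

class CellSiteSimulator:
    """Pretends to be the strongest nearby tower so that handsets attach to it."""
    def __init__(self):
        self.log = []

    def sweep(self, phones_in_range):
        # "Marco": broadcast like a legitimate tower; every phone in range
        # answers "Polo" with its ID -- suspect and bystanders alike.
        for phone in phones_in_range:
            self.log.append((phone.subscriber_id, phone.owner, phone.location))
        return self.log

crowd = [
    Phone("310-150-123456789", "suspect", (39.29, -76.61)),
    Phone("310-150-987654321", "bystander at a rally", (39.29, -76.61)),
    Phone("310-150-555555555", "passerby", (39.30, -76.62)),
]

for entry in CellSiteSimulator().sweep(crowd):
    print(entry)   # the dragnet: every ID and location, not just the target's
```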

All of this raises the sorts of constitutional issues that might normally be settled through the courts and public debate… unless, of course, the technology is kept largely secret, which is exactly what’s been happening.

After the use of Stingrays was first reported in 2011, the American Civil Liberties Union (ACLU) and other activist groups attempted to find out more about how the technology was being used, only to quickly run into heavy resistance from police departments nationwide. Served with “open-records requests” under Freedom of Information Act-like state laws, they almost uniformly resisted disclosing information about the devices and their uses. In doing so, they regularly cited nondisclosure agreements they had signed with the Harris Corporation, maker of the Stingray, and with the FBI, prohibiting them from telling anyone (including other government outfits) about how — or even that — they use the devices.

Sometimes such evasiveness reaches near-comical levels. For example, police in the city of Sunrise, Florida, served with an open-records request, refused to confirm or deny that they had any Stingray records at all. Under cover of a controversial national security court ruling, the CIA and the NSA sometimes resort to just this evasive tactic (known as a “Glomar response”). The Sunrise Police Department, however, is not the CIA, and no provision in Florida law would allow it to take such a tack. When the ACLU pointed out that the department had already posted purchase records for Stingrays on its public website, it generously provided duplicate copies of those very documents and then tried to charge the ACLU $20,000 for additional records.

In a no-less-bizarre incident, the Sarasota Police Department was about to turn some Stingray records over to the ACLU in accordance with Florida’s open-records law, when the U.S. Marshals Service swooped in and seized the records first, claiming ownership because it had deputized one local officer. And excessive efforts at secrecy are not unique to Florida, as those charged with enforcing the law commit themselves to Stingray secrecy in a way that makes them lawbreakers.

And it’s not just the public that’s being denied information about the devices and their uses; so are judges. Often, the police get a judge’s sign-off for surveillance without even bothering to mention that they will be using a Stingray. In fact, officers regularly avoid describing the technology to judges, claiming that they simply can’t violate those FBI nondisclosure agreements.

More often than not, police use Stingrays without bothering to get a warrant, instead seeking a court order on a more permissive legal standard. This is part of the charm of a new technology for the authorities: nothing is settled on how to use it. Appellate judges in Tallahassee, Florida, for instance, revealed that local police had used the tool more than 200 times without a warrant. In Sacramento, California, police admitted in court that they had, in more than 500 investigations, used Stingrays without telling judges or prosecutors. That was “an estimated guess,” since they had no way of knowing the exact number: they had conveniently deleted records of Stingray use after passing evidence discovered by the devices on to detectives.

Much of this blanket of secrecy, spreading nationwide, has indeed been orchestrated by the FBI, which has required local departments eager for the hottest new technology around to sign those nondisclosure agreements. One agreement, unearthed in Oklahoma, explicitly instructs the local police to find “additional and independent investigative means” to corroborate Stingray evidence. In short, they are to cover up the use of Stingrays by pretending their information was obtained some other way — the sort of dangerous constitutional runaround that is known euphemistically in law enforcement circles as a “parallel construction.” Now that information about the widespread use of this new technology is coming out — as in the Shemar Taylor trial in Baltimore — judges are beginning to rule that Stingray use does indeed require a warrant. They are also insisting that police must accurately inform judges when they intend to use a Stingray and disclose its privacy implications.

Garbage In, Garbage Out

And it’s not just the Stingray that’s taking local police forces into new and unknown realms of constitutionally questionable but deeply seductive technology. Consider the hot new trend of “predictive policing.” Its products couldn’t be more high-tech. They go by a variety of names like PredPol (yep, short for predictive policing) and HunchLab (and there’s nothing wrong with a hunch, is there?). What they all promise, however, is the same thing: supposedly bias-free policing built on the latest in computer software and capable of leveraging big data in ways that — so their salesmen will tell you — can coolly determine where crime is most likely to occur next.

Such technology holds out the promise of allowing law enforcement agencies to deploy their resources to areas that need them most without that nasty element of human prejudice getting involved. “Predictive methods allow police to work more proactively with limited resources,” reports the RAND Corporation. But the new software offers something just as potentially alluring as efficient policing — exactly what the president’s task force called for. According to market leader PredPol, its technology “provides officers an opportunity to interact with residents, aiding in relationship building and strengthening community ties.”

How idyllic! In post-Ferguson America, that’s a winning sales pitch for decision-makers in blue. Not so surprisingly, then, PredPol is now used by nearly 60 law enforcement agencies in the United States, and investment capital just keeps pouring into the company. In 2013, SF Weekly reported that over 150 departments across the nation were already using predictive policing software, and those numbers can only have risen as the potential for cashing in on the craze has attracted tech heavy hitters like IBM, Microsoft, and Palantir, co-founded by PayPal co-founder Peter Thiel.

Like the Stingray, the software for predictive policing is yet another spillover from the country’s distant wars. PredPol was, according to SF Weekly, initially designed for “tracking insurgents and forecasting casualties in Iraq,” and was financed by the Pentagon. One of the company’s advisors, Harsh Patel, used to work for In-Q-Tel, the CIA’s venture capital firm.

Civil libertarians and civil rights activists, however, are less than impressed with what’s being hailed as breakthrough police technology. We tend to view it instead as a set of potential new ways for the police to continue a long history of profiling and pre-convicting poor and minority youth. We also question whether the technology even performs as advertised. As we see it, the old saying “garbage in, garbage out” is likely to best describe how the new software will operate, or as the RAND Corporation puts it, “predictions are only as good as the underlying data used to make them.”

If, for instance, the software depends on historical crime data from a racially biased police force, then it’s just going to send a flood of officers into the very same neighborhoods they’ve always over-policed. And if that happens, of course, more personnel will find more crime — and presto, you have the potential for a perfect feedback loop of prejudice, arrests, and high-tech “success.” To understand what that means, keep in mind that, without a computer in sight, nearly four times as many blacks as whites are arrested for marijuana possession, even though usage among the two groups is about the same.
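A toy simulation makes the loop concrete. The numbers and the patrol-allocation rule below are invented for illustration and do not model any real predictive-policing product; the only point is that when patrols follow past arrest counts, an initial disparity reproduces itself even where underlying offense rates are identical.

```python
# Toy simulation of the feedback loop described above. All numbers and the
# patrol-allocation rule are invented; no real predictive-policing product
# is modeled here.
import random

random.seed(0)

# Two neighborhoods with identical underlying offense rates...
TRUE_OFFENSE_RATE = {"Neighborhood A": 0.05, "Neighborhood B": 0.05}
# ...but a biased arrest history, reflecting where police looked in the past.
arrest_history = {"Neighborhood A": 40, "Neighborhood B": 10}
TOTAL_PATROLS = 100

for year in range(1, 6):
    total_past = sum(arrest_history.values())
    for hood in arrest_history:
        # "Predict" crime by sending patrols where past arrests were recorded.
        patrols = round(TOTAL_PATROLS * arrest_history[hood] / total_past)
        # More patrols mean more of the (equally common) offenses get observed.
        encounters = patrols * 10
        new_arrests = sum(random.random() < TRUE_OFFENSE_RATE[hood]
                          for _ in range(encounters))
        arrest_history[hood] += new_arrests
    print(f"Year {year}: {arrest_history}")
# The initial disparity reproduces itself year after year, even though both
# neighborhoods offend at exactly the same rate -- garbage in, garbage out.
```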

If you leave aside issues of bias, there’s still a fundamental question to answer about the new technology: Does the software actually work or, for that matter, reduce crime? Of course, the companies peddling such products insist that it does, but no independent analyses or reviews had yet verified its effectiveness until last year — or so it seemed at first.

In December 2015, the Journal of the American Statistical Association published a study that brought joy to the predictive crime-fighting industry. The study’s researchers concluded that a predictive policing algorithm outperformed human analysts in indicating where crime would occur, which in turn led to real crime reductions after officers were dispatched to the flagged areas. Only one problem: five of the seven authors held PredPol stock, and two were co-founders of the company. On its website, PredPol identifies the research as a “UCLA study,” but only because PredPol co-founder Jeffrey Brantingham is an anthropology professor there.

Predictive policing is a brand-new area where question marks abound. Transparency should be vital in assessing this technology, but the companies generally won’t allow communities targeted by it to examine the code behind it. “We wanted a greater explanation for how this all worked, and we were told it was all proprietary,” Kim Harris, a spokeswoman for Bellingham, Washington’s Racial Justice Coalition, told the Marshall Project after the city purchased such software last August. “We haven’t been comforted by the process.”

The Bellingham Police Department, which bought predictive software made by Bair Analytics with a $21,200 Justice Department grant, didn’t need to go to the city council for approval and didn’t hold community meetings to discuss the development or explain how the software worked. Because the code is proprietary, the public is unable to independently verify that it doesn’t have serious problems.

Even if the data underlying most predictive policing software accurately anticipates where crime will indeed occur — and that’s a gigantic if — questions of fundamental fairness still arise. Innocent people living in or passing through identified high crime areas will have to deal with an increased police presence, which, given recent history, will likely mean more questioning or stopping and frisking — and arrests for things like marijuana possession for which more affluent citizens are rarely brought in.  Moreover, the potential inequality of all this may only worsen as police departments bring online other new technologies like facial recognition.

We’re on the verge of “big data policing,” suggests law professor Andrew Ferguson, which will “turn any unknown suspect into a known suspect,” allowing an officer to “search for information that might justify reasonable suspicion” and lead to stop-and-frisk incidents and aggressive questioning. Just imagine having a decades-old criminal record and facing police armed with such powerful, invasive technology.

This could lead to “the tyranny of the algorithm” and a Faustian bargain in which the public increasingly forfeits its freedoms in certain areas out of fears for its safety. “The Soviet Union had remarkably little street crime when they were at their worst of their totalitarian, authoritarian controls,” MIT sociologist Gary Marx observed. “But, my god, at what price?”

To Record and Serve… Those in Blue

On a June night in 2013, Augustin Reynoso discovered that his bicycle had been stolen from a CVS in the Los Angeles suburb of Gardena. A store security guard called the police while Reynoso’s brother, Ricardo Diaz Zeferino, and two friends tried to find the missing bike in the neighborhood. When the police arrived, they promptly ordered the two friends to put their hands up. Zeferino ran over, protesting that the police had the wrong men. At that point, they told him to raise his hands, too. He then lowered and raised his hands as the police yelled at him. When he removed his baseball hat, lowered his hands, and began to raise them again, he was shot to death.

The police insisted that Zeferino’s actions were “threatening” and that the shooting was therefore justified. They had two videos of it taken by police car cameras — but refused to release them.

Although police departments nationwide have been fighting any spirit of new openness, car and body cameras have at least offered the promise of bringing new transparency to the actions of officers on the beat. That’s why the ACLU and many civil rights groups, as well as President Obama, have spoken out in favor of the technology’s potential to improve police-community relations — but only, of course, if the police are obliged to release videos in situations involving allegations of abuse. And many departments are fighting that fiercely.

In Chicago, for instance, the police notoriously opposed the release of dashcam video in the shooting death of Laquan McDonald, citing the supposed imperative of an “ongoing investigation.” After more than a year of such resistance, a judge finally ordered the video made public. Only then did the scandal of seeing Officer Jason Van Dyke unnecessarily pump 16 bullets into the 17-year-old’s body explode into national consciousness.

In Zeferino’s case, the police settled a lawsuit with his family for $4.7 million and yet continued to refuse to release the videos. It took two years before a judge finally ordered their release, allowing the public to see the shooting for itself.

Despite this, in April 2015 the Los Angeles Board of Police Commissioners approved a body-camera policy that failed to ensure future transparency, while protecting and serving the needs of the Los Angeles Police Department (LAPD). In doing so, it ignored the sort of best practices advocated by the White House, the president’s task force on policing, and even the Police Executive Research Forum, one of the profession’s most respected think tanks.

On the possibility of releasing videos of alleged police misconduct and abuse, the new policy remained silent, but LAPD officials, including Chief Charlie Beck, didn’t. They made it clear that such videos would generally be exempt from California’s public records law and wouldn’t be released without a judge’s orders. Essentially, the police reserved the right to release video when and how they saw fit. This self-serving policy comes from the most lethal large police department in the country, whose officers shot and killed 21 people last year.

Other departments around the country have made similar moves to ensure control over body camera videos. Texas and South Carolina, among other states, have even changed their open-records laws to give the police power over when such footage should (or should not) be released. In other words, when a heroic cop saves a drowning child, you’ll see the video; when that same cop guns down a fleeing suspect, don’t count on it.

Curiously, given the stated positions of the president and his task force, the federal government seems to have no fundamental problem with that. In May 2015, for example, the Justice Department announced competitive grants for the purchase of police body cameras, officially tying funding to good body-cam-use policies. The LAPD applied. Despite letters from groups like the ACLU pointing out just how poor its version of body-cam policy was, the Justice Department awarded it $1 million to purchase approximately 700 cameras — accountability and transparency be damned.

To receive public money for a tool theoretically meant for transparency and accountability and turn it into one of secrecy and impunity, with the feds’ complicity and financial backing, sends an unmistakable message on how new technology is likely to affect America’s future policing practices. Think of it as a door slowly opening onto a potential policing dystopia.

Hello Darkness, Power’s Old Friend

Keep in mind that this article barely scratches the surface when it comes to the increasing numbers of ways in which the police’s use of technology has infiltrated our everyday lives.

In states and cities across America, some public bus and train systems have begun to supplement video surveillance with the surreptitious recording of passengers’ conversations, a potential body blow to the concept of a private conversation in public space. And whether or not the earliest versions of predictive policing actually work, the law enforcement community is already moving toward technology that will try to predict who will commit crimes in the future. In Chicago, the police are using social-network analysis and prediction technology to draw up “heat lists” of those who might perpetrate violent crimes someday and to pay them visits now. You won’t be shocked to learn which side of the tracks such future perpetrators live on. The rationale behind all this, as always, is “public safety.”
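Chicago has never fully disclosed how its list is built, so the following is only a guessed-at sketch of the general kind of social-network scoring such systems are reported to use. The co-arrest records, names, and scoring rule are invented for illustration; none of it reflects the city’s actual methodology.

```python
# A guessed-at sketch of the general kind of social-network scoring a "heat list"
# might use. Chicago's actual methodology has not been fully disclosed; the
# co-arrest records, names, and scoring rule here are invented for illustration.
from collections import defaultdict

# Hypothetical co-arrest records: pairs of people arrested together.
co_arrests = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F")]

graph = defaultdict(set)
for x, y in co_arrests:
    graph[x].add(y)
    graph[y].add(x)

# People already tied to violent incidents (as victims or arrestees).
prior_incidents = {"C", "E"}

# Score everyone by how many of their associates appear in that set.
scores = {person: sum(neighbor in prior_incidents for neighbor in graph[person])
          for person in graph}

heat_list = sorted(scores, key=scores.get, reverse=True)
for person in heat_list:
    print(person, scores[person])   # flagged for a "visit" based on who they know
```

Notice that the top of this toy list is determined entirely by associations, not by anything the flagged person has done, which is exactly the civil-liberties concern.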

Nor can anyone begin to predict how law enforcement will avail itself of science-fiction-like technology in the decade to come, much less decades from now, though cops on patrol may very soon know a lot about you and your past. They will be able to cull such information from a multitude of databases at their fingertips, while you will know little or nothing about them — a striking power imbalance in a situation in which one person can deprive the other of liberty or even life itself.

With little public debate, often in almost total secrecy, increasing numbers of police departments are wielding technology to empower themselves rather than the communities they protect and serve. At a time when trust in law enforcement is dangerously low, police departments should be embracing technology’s democratizing potential rather than its ability to give them almost superhuman powers at the expense of the public trust.

Unfortunately, power loves the dark.

Matthew Harwood is the ACLU’s senior writer/editor.