The rise of data and the death of politics

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.
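
At its core, Corral was a set-membership lookup. Here is a minimal sketch of that workflow in Python, with invented plate numbers standing in for the Univac's hot list of 110,000 flagged vehicles:

```python
# A minimal sketch of the Operation Corral workflow. The plate numbers
# and the plain Python set standing in for the Univac's database are
# invented for illustration.

HOT_LIST = {"1B-2437", "7X-9901", "4K-1182"}  # hypothetical flagged plates

def check_plate(plate):
    """Return True if a radioed-in plate matches a flagged vehicle."""
    return plate in HOT_LIST

def process_bridge_traffic(plates):
    """Simulate the teletypist feeding oncoming plates to the computer."""
    for plate in plates:
        if check_plate(plate):
            print(f"ALERT: stop vehicle {plate} at the far exit")

process_bridge_traffic(["9A-0031", "7X-9901", "2C-5512"])
```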

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobii – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
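
The principle is easy to sketch. What follows is a toy feedback-trained filter, not Google's actual system: it ships with no definition of spam and derives its rules entirely from user reports. All messages and words are invented:

```python
from collections import defaultdict

# A toy feedback loop in the spirit of the paragraph above: the filter
# re-scores words every time a user marks a message, so its "rules" are
# learned, never hand-written.

spam_counts = defaultdict(int)   # times a word appeared in reported spam
ham_counts = defaultdict(int)    # times a word appeared in legitimate mail

def report(message, is_spam):
    """User feedback: update word statistics for this message."""
    counts = spam_counts if is_spam else ham_counts
    for word in message.lower().split():
        counts[word] += 1

def looks_like_spam(message):
    """Score a new message against everything users have taught us so far."""
    score = 0
    for word in message.lower().split():
        score += spam_counts.get(word, 0) - ham_counts.get(word, 0)
    return score > 0

report("cheap pills buy now", is_spam=True)
report("lunch meeting moved to noon", is_spam=False)
print(looks_like_spam("buy cheap pills"))   # True: the users taught the rule
```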

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
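
A few lines of code make the contrast concrete. This is a deliberately simplified simulation of an ultrastable unit (one gain parameter, blind random rewiring), not Ashby's circuitry: it anticipates no specific disturbance and simply redraws its parameters until the output is back within bounds:

```python
import random

# A loose simulation of Ashby's "ultrastability": when a disturbance
# pushes the output out of bounds, the unit consults no rule for that
# disturbance - it rewires (re-draws its gain) until it is stable again.

def output(gain, disturbance):
    return gain * disturbance

def ultrastable_step(disturbance, gain, limit=1.0):
    """Return a gain that keeps the output within +/- limit."""
    while abs(output(gain, disturbance)) > limit:
        gain = random.uniform(-1.0, 1.0)   # blind rewiring, as in the homeostat
    return gain

gain = 0.5
for disturbance in [0.5, 3.0, -8.0]:       # ever larger, unforeseen shocks
    gain = ultrastable_step(disturbance, gain)
    print(f"disturbance={disturbance:+.1f}  adapted gain={gain:+.3f}")
```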

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
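
The redditometro's core check is just arithmetic. Here is a back-of-the-envelope sketch with made-up figures and an assumed tolerance threshold:

```python
# A sketch of a redditometro-style check: flag taxpayers whose recorded
# spending exceeds declared income by more than a tolerance threshold.
# All names, figures and the threshold itself are invented.

TOLERANCE = 1.2  # assumed: spending up to 20% above income goes unflagged

taxpayers = [
    {"name": "Rossi", "declared_income": 30_000, "recorded_spending": 29_000},
    {"name": "Bianchi", "declared_income": 25_000, "recorded_spending": 48_000},
]

for t in taxpayers:
    if t["recorded_spending"] > TOLERANCE * t["declared_income"]:
        print(f"{t['name']}: spending exceeds declared income - audit flagged")
```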

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”
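
The reputation filter O'Reilly admires reduces to a running average and a cutoff. A sketch with invented thresholds follows (Uber's actual criteria are not public):

```python
# A sketch of a reputation filter of the kind described above. The cutoff
# and minimum-trip figures are hypothetical, not Uber's real policy.

MIN_RATING = 4.6   # assumed cutoff below which a driver is "eliminated"
MIN_TRIPS = 20     # assumed minimum sample before judging a driver

def still_active(ratings):
    """Return True while a driver's average rating clears the cutoff."""
    if len(ratings) < MIN_TRIPS:
        return True                      # provisional: too few trips to judge
    return sum(ratings) / len(ratings) >= MIN_RATING

print(still_active([5.0] * 18 + [4.0] * 12))   # average 4.6 -> keeps driving
print(still_active([5.0] * 10 + [3.0] * 15))   # average 3.8 -> deactivated
```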

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”
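
The study's headline claim rests on a familiar statistic: the correlation between regional search volume and body mass index. Here is a sketch of that computation on fabricated toy numbers, not the study's data:

```python
import math

# Pearson correlation between two regional series. The five values in
# each list are invented solely to show the computation.

search_volume = [12.0, 15.0, 9.0, 20.0, 17.0]   # toy regional search volumes
mean_bmi      = [27.1, 28.0, 26.2, 29.5, 28.4]  # toy regional mean BMI

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r = {pearson(search_volume, mean_bmi):.3f}")
```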

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption-obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second – we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state, but it concealed a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

Let’s nationalize Amazon and Google

Publicly funded technology built Big Tech

They’re huge and ruthless and define our lives. They’re close to monopolies. Let’s make them public utilities

Jeff Bezos (Credit: AP/Reed Saxon/Pakhnyushcha via Shutterstock/Salon)

They’re huge, they’re ruthless, and they touch every aspect of our daily lives. Corporations like Amazon and Google keep expanding their reach and their power. Despite a history of abuses, so far the Justice Department has declined to take antitrust actions against them. But there’s another solution.

Is it time to manage and regulate these companies as public utilities?

That argument’s already been made about broadband access. In her book “Captive Audience,” law professor Susan Crawford argues that “high-speed wired Internet access is as basic to innovation, economic growth, social communication, and the country’s competitiveness as electricity was a century ago.”

Broadband as a public utility? If not for corporate corruption of our political process, that would seem like an obvious solution. Instead, our nation’s broadband access is among the slowest and costliest in the developed world.

But why stop there? Policymakers have traditionally considered three elements when evaluating the need for a public utility: production, transmission, and distribution. Broadband is transmission. What about production and distribution?

The Big Tech mega-corporations have developed what Al Gore calls the “Stalker Economy,” manipulating and monitoring as they go. But consider: They were created with publicly funded technologies, and prospered as the result of indulgent policies and lax oversight. They’ve achieved monopoly or near-monopoly status, are spying on us to an extent that’s unprecedented in human history, and have the potential to alter each and every one of our economic, political, social and cultural transactions.

In fact, they’re already doing it.

Public utilities? It’s a thought experiment worth conducting.

Big Tech was created with publicly developed technology.

No matter how they spin it, these corporations were not created in garages or by inventive entrepreneurs. The core technology behind them is the Internet, a publicly funded platform for which they pay no users’ fee. In fact, they do everything they can to avoid paying their taxes.



Big Tech’s use of public technology means that it operates in a technological “commons,” which these companies use solely for their own gain, without regard for the public interest. Meanwhile the United States government devotes considerable taxpayer resources to protecting them – from patent infringement, cyberterrorism and other external threats.

Big Tech’s services have become a necessity in modern society.

Businesses would be unable to participate in the modern economy without access to the services companies like Amazon, Google and Facebook provide. These services have become public marketplaces.

For individuals, these entities have become the public square where social interactions take place, as well as the marketplace where they purchase goods.

They’re at or near monopoly status – and moving fast.

Google has 80 percent of the search market in the United States, and an even larger share of key overseas markets. Google’s browsers have now surpassed Microsoft’s in usage across all devices. It has monopoly-like influence over online news, as William Baker noted in the Nation. Its YouTube subsidiary dominates the U.S. online-video market, with nearly double the views of its closest competitor. (Roughly 83 percent of the Americans who watched a video online in April went to YouTube.)

Even Microsoft’s Steve Ballmer argued that Google is a “monopoly” whose activities were “worthy of discussion with competition authority.” He should know.

As a social platform, Facebook has no real competitors. Amazon’s book business dominates the market. E-books are now 30 percent of the total book market, and Amazon’s Kindle e-books account for 65 percent of e-book sales. Nearly one book in five sold is an Amazon e-book – and that’s not counting Amazon’s sales of physical books. It has become such a behemoth that it is able to command discounts of more than 50 percent from major publishers like Random House.

They abuse their power.

The bluntness with which Big Tech firms abuse their monopoly power is striking. Google has said that it will soon begin blocking YouTube videos from popular artists like Radiohead and Adele unless independent record labels sign deals with its upcoming music streaming service (at what are presumably disadvantageous rates).   Amazon’s war on publishers like Hachette is another sign of Big Tech arrogance.

But what is equally striking about these moves is the corporations’ disregard for basic customer service. Because YouTube’s dominance of the video market is so large, Google is confident that even frustrated music fans have nowhere to go. Amazon is so confident of its dominance that it retaliated against Hachette by removing order buttons when a Hachette book came up (which users must find maddening) and by lying about the availability of Hachette books when customers attempted to order them. It also altered its search process for recommendations to freeze out Hachette books and direct users to non-Hachette authors.

Amazon even suggested its customers use other vendors if they’re unhappy, a move that my Salon colleague Andrew Leonard described as “nothing short of amazing – and troubling.”

David Streitfeld of the New York Times asked, “When does discouragement become misrepresentation?” One logical answer: when you tell customers a product isn’t available, even though it is, or rig your sales mechanism to prevent customers from choosing the item they want.

And now Amazon’s using some of the same tactics against Warner Home Video.

They got there with our help.

As we’ve already noted, Internet companies are using taxpayer-funded technology to make billions of dollars from the taxpayers – without paying a licensing fee. As we reported earlier, Amazon was the beneficiary of tax exemptions that allowed it to reach its current monopolistic size.

Google and the other technology companies have also benefited from tax policies and other forms of government indulgence. Contrary to popular misconception, Big Tech corporations aren’t solely the products of ingenuity and grit. Each has received, and continues to receive, a lot of government largess.

The real “commodity” is us.

Most of Big Tech’s revenues come from the use of our personal information in its advertising business. Social media entries, Web-surfing patterns, purchases, even our private and personal communications add value to these corporations. They don’t make money by selling us a product. We are the product, and we are sold to third parties for profit.

Public utilities are often created when the resource being consumed isn’t a “commodity” in the traditional sense. “We” aren’t an ordinary resource. Like air and water, the value of our information is something that should be publicly shared – or, at a minimum, publicly managed.

Our privacy is dying … or already dead.

“We know where you are,” says Google CEO Eric Schmidt. “We know where you’ve been. We can more or less know what you’re thinking about.”

Facebook tracks your visits to the website of any corporate Facebook “partner,” stores that information, and uses it to track and manipulate the ads you see. Its mobile app also has a new, “creepy” feature that turns on your phone’s microphone, analyzes what you’re listening to or watching, and is capable of posting updates to your status like “Listening to Albert King” or “Watching ‘Orphan Black.’”

Google tracks your search activity, a practice with a number of disturbing implications. (DuckDuckGo, a competing search engine that does not track searches, offers an illustrated guide to its competitors’ practices.) If you use its Chrome browser, Google tracks your website visits too (unless you’re in “private” mode).

Yasha Levine, who is tracking corporate data spying in his “Surveillance Valley” series, notes that “True end-to-end encryption would make our data inaccessible to Google, and grind its intel extraction apparatus to a screeching halt.” As the ACLU’s Christopher Soghoian points out: “It’s very, very difficult to deploy privacy protective policies with the current business model of ad supported services.”

As Levine notes, the widely publicized revelation that Big Data companies track rape victims was just the tip of the iceberg. They also track “anorexia, substance abuse, AIDS and HIV … Bedwetting (Enuresis), Binge Eating Disorder, Depression, Fetal Alcohol Syndrome, Genital Herpes, Genital Warts, Gonorrhea, Homelessness, Infertility, Syphilis … the list goes on and on and on and on.”

Given its recent hardball tactics, here’s a little-known development that should concern more people: Amazon also hosts 37 percent of the nation’s cloud computing services, which means it has access to the inner workings of the software that runs all sorts of businesses – including ones that handle your personal data.

For all its protestations, Microsoft is no different when it comes to privacy. The camera and microphone on its Xbox One devices were initially designed to be left on at all times, and it refused to change that policy until purchasers protested.

Privacy, like water or energy, is a public resource. As the Snowden revelations have taught us, all such resources are at constant risk of government abuse.  The Supreme Court just banned warrantless searches of smartphones – by law enforcement. Will we be granted similar protections from Big Tech corporations?

Freedom of information is at risk.

Google tracks your activity and customizes search results, a process that can filter or distort your perception of the world around you. What’s more, this “personalized search results” feature leads you back to information sources you’ve used before, potentially narrowing your ability to discover new perspectives or resources. Over time this creates an increasingly narrow view of the world.

What’s more, Google’s shopping tools have begun using “paid inclusion,” a pay-for-play search feature it once condemned as “evil.” Its response is to say it prefers not to call this practice “paid inclusion,” even though its practices appear to meet the Federal Trade Commission’s definition of the term.

As for Amazon, it has even manipulated its recommendation searches in order to retaliate against other businesses, as we’ll see in the next section.

The free market could become even less free.

Could Big Tech and its data be used to set user-specific pricing, based on what is known about an individual’s willingness to pay more for the same product? Benjamin Schiller of Brandeis University wrote a working paper last year that showed how Netflix could do exactly that. Grocery stores and other retailers are already implementing technology that offers different pricing to different shoppers based on their data profile.
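
The mechanics of such user-specific pricing are simple to sketch. The base price and willingness-to-pay scores below are invented; Schiller's paper estimates willingness to pay from demographics and web-browsing histories:

```python
# A sketch of first-degree price discrimination: the same product quoted
# at different prices, scaled by an estimated willingness to pay. All
# numbers here are invented for illustration.

BASE_PRICE = 9.99

def personalised_price(willingness_score):
    """willingness_score in [0, 1], assumed to come from a data profile."""
    markup = 1.0 + 0.5 * willingness_score   # up to 50% above the base price
    return round(BASE_PRICE * markup, 2)

print(personalised_price(0.1))   # casual browser:      10.49
print(personalised_price(0.9))   # eager binge-watcher: 14.49
```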

For its part, Amazon is introducing a phone that will also tag the items around you, as well as the music and programs you hear, for you to purchase – from Amazon, of course. Who will be purchasing the data those phones collect about you?

They could hijack the future.

The power and knowledge they have accumulated is frightening. But the Big Tech corporations are just getting started. Google has photographically mapped the entire world. It intends to put the world’s books into a privately owned online library. It’s launching balloons around the globe that will bring Internet access to remote areas – on its terms. It’s attempting to create artificial intelligence and extend the human lifespan.

Amazon hopes to deliver its products by drone within the next few years, an idea that would seem preposterous if not for its undeniable lobbying clout. Each of these Big Tech corporations has the ability to filter – and alter – our very perceptions of the world around us. And each of them has already shown a willingness to abuse it for their own ends.

These aren’t just portraits of futuristic corporations drunk on unchecked power. They’re a sign that things are likely to get worse – perhaps a lot worse – unless something is done. The solution may lie with an old concept. It may be time to declare Big Tech a public utility.

Richard (RJ) Eskow is a writer and policy analyst. He is a Senior Fellow with the Campaign for America’s Future and is host and managing editor of The Zero Hour on We Act Radio.

http://www.salon.com/2014/07/08/lets_nationalize_amazon_and_google_publicly_funded_technology_built_big_tech/?source=newsletter

Net neutrality is dying, Uber is waging a war on regulations, and Amazon grows stronger by the day

Why 2014 could be the year we lose the Internet

Jeff Bezos, Tim Cook (Credit: Reuters/Gus Ruelas/Robert Galbraith/Photo collage by Salon)

Halfway through 2014, and the influence of technology and Silicon Valley on culture, politics and the economy is arguably bigger than ever — and certainly more hotly debated. Here are Salon’s choices for the five biggest stories of the year.

1) Net neutrality is on the ropes.

So far, 2014 has been nothing but grim for the principle known as “net neutrality” — the idea that the suppliers of Internet bandwidth should not give preferential access (so-called fast lanes) to the providers of Internet services who are willing and able to pay for it. In January, the D.C. Court of Appeals struck down the FCC’s preliminary plan to enforce a weak form of net neutrality. Less than a month later, Comcast, the nation’s largest cable company and broadband Internet service provider, announced its plans to buy Time Warner Cable — and inadvertently gave us a compelling explanation for why net neutrality is so important. A single company with a dominant position in broadband will simply have too much power, something that could have enormous implications for our culture.

The situation continued to degenerate from there. Tom Wheeler, President Obama’s new pick to run the FCC and a former top cable industry lobbyist, unveiled a new plan for net neutrality that was immediately slammed as toothless. In May, AT&T announced plans to merge with DirecTV. Consolidation proceeds apace, and our government appears incapable of managing the consequences.

2) Uber takes over.

After completing its most recent round of financing, Uber is now valued at $18.2 billion. Along with Airbnb, the Silicon Valley start-up has become a standard bearer for the Valley’s cherished allegiance to “disruption.” The established taxi industry is under sustained assault, but Uber has made it clear that the company’s ultimate ambitions go far beyond simply connecting people with rides. Uber has designs on becoming the premier logistics connection platform for getting anything to anyone. What Google is to search, Uber wants to be for moving objects from Point A to Point B. And Google, of course, has a significant financial stake in Uber.



Uber’s path has been bumpy. The company is fighting regulatory battles with municipalities across the world, and its own drivers are increasingly angry at fare cuts, and making sporadic attempts to organize. But the smart money sees Uber as one of the major players of the near future. The “sharing” economy is here to stay.

3) The year of the stream.

Apple bought Beats by Dre. Amazon launched its own streaming music service. Google is planning a new paid streaming offering. Spotify claimed 10 million paying customers and Pandora boasts 75 million listeners every month.

We may end up remembering 2014 as the year that streaming established itself as the dominant way people consume music. The numbers are stark. Streaming is surging, while paid downloads are in free fall.

For consumers, all-you-can-eat services like Spotify are generally marvelous. But it remains astonishing that a full 20 years after the Internet threw the music industry into turmoil, it is still completely unclear how artists and songwriters will make a decent living in an era when music is essentially free.

We also face unanswered questions about the potential implications for what kinds of music get made in an environment where every listen is tracked and every tweet or Facebook like observed. What will Big Data mean for music?

4) Amazon shows its true colors.

What a busy six months for Jeff Bezos! Amazon introduced its own set-top box for TV watching, its own smartphone for insta-shopping, anywhere, any time, and started abusing its near monopoly power to win better terms with publishing companies.

For years, consumer adoration of Amazon’s convenience and low prices fueled the company’s rise. It’s hard, at the midpoint of 2014, to avoid the conclusion that we’ve created a monster. This year, Amazon started getting sustained bad press at the very highest levels. And you know what? Jeff Bezos deserves it.

5) The tech culture wars boil over.

In the first six months of 2014, the San Francisco Bay Area witnessed emotional public hearings about Google shuttle buses, direct action by radicals against technology company executives, bar fights centering on Google Glass wearers, and a steady rise in political heat focused on tech economy-driven gentrification.

As I wrote in April:

Just as the Luddites, despite their failure, spurred the creation of working-class consciousness, the current Bay Area tech protests have had a pronounced political effect. While the tactics range from savvy, well-organized protest marches to juvenile acts of violence, the impact is clear. The attention of political leaders and the media has been engaged. Everyone is watching.

Ultimately, maybe this will be the biggest story of 2014. This year, numerous voices started challenging the transformative claims of Silicon Valley hype and began grappling with the nitty-gritty details of how all this “disruption” is changing our economy and culture. Don’t expect the second half of 2014 to be any different.

Facebook: The company is mostly white and male

Facebook releases diversity figures: They look a lot like Google’s and Yahoo’s

Mark Zuckerberg (Credit: Reuters/Robert Galbraith)

For the first time ever, Facebook released its workplace diversity figures. The numbers were made public yesterday, in a blog post written by Global Head of Diversity Maxine Williams. The company is mostly male, white and Asian.

This public disclosure of both gender and ethnicity statistics follows the trend of other large tech companies, spurred by a late May release from Google. Since Google made its numbers public, Chegg, LinkedIn and Yahoo have also released their workforce breakdowns.

Sadly, Facebook’s numbers look a lot like the other four. I’ll let the figures speak for themselves:

Globally the company is 69 percent male, 31 percent female. In terms of ethnicity, the company is 57 percent white, 34 percent Asian, 4 percent Hispanic, 3 percent two or more races, 3 percent black and 0 percent other.

Scrutinized further, Facebook’s tech workforce is 85 percent male and 15 percent female. In terms of ethnicity, the tech division is 53 percent white, 41 percent Asian, 3 percent Hispanic, 2 percent two or more races, 1 percent black and 0 percent other.

Globally, leadership is 77 percent male, and 23 percent female. Facebook’s leadership in the U.S. is also mostly white — 74 percent. Leadership at the company is 19 percent Asian, 4 percent Hispanic, 2 percent black and 1 percent two or more races.

The non-tech workforce is 53 percent male, 47 percent female, 63 percent white, 24 percent Asian, 6 percent Hispanic, 4 percent two or more races, 2 percent black and 1 percent other.

“As these numbers show, we have more work to do – a lot more,” Williams wrote. Yep, that’s for sure.

Williams also stated that the company was working toward building a more diverse workforce. These efforts include partnering with organizations like the Anita Borg Institute, Girls Who Code, the National Society of Black Engineers, the Society of Hispanic Professional Engineers and Yes We Code.



Releasing the figures and working with these groups are both important strides toward building a more inclusive, diverse workforce. Previously, Facebook had been reluctant to release its figures. The New York Times reported that Facebook COO and “Lean In” author Sheryl Sandberg had originally stated that the company would rather share the numbers internally.

Diversity in companies is critical not just to fostering a positive, creative, inclusive workplace. It also matters in terms of representation, role models and mentorship. Sapna Cheryan, an assistant professor of psychology at the University of Washington, explains:

“There’s a really strong image of what a computer scientist is — male, skinny, no social life, eats junk food, plays video games, likes science fiction,” Cheryan told the New York Times. “It makes it hard for people who don’t fit that image to think of it as an option for them.”

http://www.salon.com/2014/06/26/facebook_releases_diversity_figures_they_look_a_lot_like_googles_and_yahoos/?source=newsletter

After you’re gone, what happens to your social media and data?

Web of the dead: When Facebook profiles of the deceased outnumber the living


There’s been chatter — and even an overly hyped study — predicting the eventual demise of Facebook.

But what about the actual death of Facebook users? What happens when a social media presence lives beyond the grave? Where does the data go?

The folks over at WebpageFX looked into what they called “digital demise,” and made a handy infographic to fully explain what happens to your Web presence when you’ve passed.

It was estimated that 30 million Facebook users died in the first eight years of the social media site’s existence, according to the Huffington Post. Facebook even has settings to memorialize a deceased user’s page.

Facebook isn’t the only site with policies in place to handle a user’s passing. Pinterest, Google, LinkedIn and Twitter all handle death and data differently. For instance, to deactivate a Facebook profile you must provide proof that you are an immediate family member; for Twitter, however, you must produce the death certificate and your identification. All of the sites pinpointed by WebpageFX stated that your data belongs to you — some with legal or family exceptions.

Social media sites are, in general, a young Internet phenomenon — Facebook only turned 10 this year. So are a majority of their users. (And according to Mashable, Facebook still has a large number of teen adopters.) Currently, profiles of the living far outnumber those of the dead.



However, according to calculations done by xkcd, that will not always be the case. The webcomic presented two hypothetical scenarios. If Facebook loses its “cool” and market share, dead users will outnumber the living in 2065. If Facebook keeps up its growth, the site won’t be a digital graveyard until the mid-2100s.

Check out the fascinating infographic here.

h/t Mashable

http://www.salon.com/2014/06/24/web_of_the_dead_when_facebook_profiles_of_the_deceased_outnumber_the_living/?source=newsletter

GLENN GREENWALD ON PRIVACY

Glenn Greenwald: ‘What I Tell People Who Say They Don’t Care About Their Privacy’

“Those people don’t believe what they’re saying,” the civil liberties-focused journalist says.

Since he obtained and published Edward Snowden’s leaked National Security Agency documents a little more than a year ago, journalist Glenn Greenwald said people have told him over and over that government surveillance does not concern them.

“Those people don’t believe what they’re saying,” he told a sold-out audience last week at the Nourse Theater in San Francisco.

To illustrate this, every time someone would come up to Greenwald and say they didn’t mind people knowing what they were doing because they had nothing to hide, he would proceed with the same two steps: first, by giving them his email address and then by asking them to send him all their email and social media passwords — just so he could have a look.

“I’ve not had one single person send me them,” he said, as the room swelled with laughter. “And I check my email box constantly!”

The humorous anecdote, Greenwald said, exemplifies how people instinctively understand how privacy is vital to who we are. Just as much as we need to be social, we need a place where we can go to learn and think without others passing judgment on us.

“Privacy is embedded in what it means to be human and always has been across time periods and across cultures,” Greenwald said.

Greenwald recalled prominent figures who have tried to distance themselves from this fundamental need. Eric Schmidt, CEO of Google, said in an interview in 2009, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” But four years before, Schmidt blacklisted CNET after it published an article on privacy concerns that listed where he lives, his salary, his political contributions, and his hobbies — all obtained from a 30-minute Google search.

Another privacy hypocrite Greenwald mentioned is Dianne Feinstein, chairman of the U.S. Senate Intelligence Committee. Feinstein has been a major supporter of the NSA program, and has maintained it’s “not a surveillance program,” but rather a collection of lists of data. And while Greenwald said the program does regularly spy on people by listening to their phone calls or reading their emails, he said lists of your conversations — perhaps with a self-help hotline or medical clinic — are just as insidious. After all, he said, Feinstein never responded to a campaign calling on her to publish a list of all the people she emailed and called on a given day.

“Somebody collecting the list of all the people with whom you’re communicating will know an enormous amount about your most invasive and intimate realm,” he said. “Oftentimes even more than they’ll learn if they’re listening to your telephone calls, which could be cryptic, or your email communications which could be quite stunted.”

Greenwald then listed three well-known media figures who have also claimed that they weren’t worried about being targets of surveillance: MSNBC anchor Lawrence O’Donnell, Washington Post columnist Ruth Marcus and Hendrik Hertzberg of The New Yorker.

“I started thinking about what those people have in common,” Greenwald said, adding that he realized they all more or less defend the government in their reporting. But if you go into American Muslim communities, or the Occupy movement, or groups who challenge the status quo, he said, you’ll find countless people afraid of being targeted.

In addition, this notion implies that if you don’t challenge the government, you won’t have to worry about being spied on. But “the true measure of how free a society is,” Greenwald said, “is how it treats its dissidents.”

He added, “We should not be comfortable or content in a society where the only way to remain free of surveillance and oppression is if we make ourselves as unthreatening and passive and compliant as possible.”

The discontent with the way society operates is starting to ignite change. While the NSA has not closed up shop (because, as Greenwald said, a government isn’t going to limit its own power), there’s no reason to be pessimistic. Greenwald pointed to countries worldwide that are angered by what the United States is doing and are pushing back. In addition, tech giants like Google and Facebook, which enthusiastically assist the NSA, face threats to their bottom line, as people can refuse their services and turn to emerging platforms that don’t put their privacy up for sale.

Ultimately, the lesson of Snowden’s actions signifies that people can spark change. After all, Greenwald said, Snowden was a 29-year-old high school dropout who grew up in a working-class family.

“And yet, through nothing more than a pure act of a conscience, a choice to be fearless in the face of injustice, Edward Snowden literally changed the world,” Greenwald told the audience. “I’ve come infected by that courage.… All of this should be a personal antidote to the temptation of defeatism.”

Glenn Greenwald will be speaking in other cities in the upcoming week about the NSA, privacy, and his new book, No Place to Hide.

Death of a libertarian fantasy: Why dreams of a digital utopia are rapidly fading away

Free-market enthusiasts have long predicted that technology would liberate humanity. It turns out they were wrong

Death of a libertarian fantasy: Why dreams of a digital utopia are rapidly fading away
Ron Paul (Credit: AP/Cliff Owen/objectifphoto via iStock/Salon)

There is no mystery why libertarians love the Internet and all the freedom-enhancing applications, from open source software to Bitcoin, that thrive in its nurturing embrace. The Internet routes around censorship. It enables peer-to-peer connections that scoff at arbitrary geographical boundaries. It provides infinite access to do-it-yourself information. It fuels dreams of liberation from totalitarian oppression. Give everyone a smartphone, and dictators will fall! (Hell, you can even download the code that will let you 3-D print a gun.)

Libertarian nirvana: It’s never more than a mouse-click away.

So, no mystery, sure. But there is a paradox. The same digital infrastructure that was supposed to enable freedom turns out to be remarkably effective at control as well. Privacy is an illusion, surveillance is everywhere, and increasingly, black-box big-data-devouring algorithms guide and shape our behavior and consumption. The instrument of our liberation turns out to be doing double-duty as the facilitator of a Panopticon. 3-D printer or no, you better believe somebody is watching you download your guns.

Facebook delivered a fresh reminder of this unfortunate evolution earlier this week. On Thursday, it announced, with much fanfare and plenty of admiring media coverage, that it was going to allow users to opt out of certain kinds of targeted ads. Stripped of any context, this would normally be considered a good thing. (Come to think of it, are there any two words, excluding “Ayn Rand,” that capture the essence of libertarianism better than “opt out”?)

Of course, the announcement about opting out was just a bait-and-switch designed to distract people from the fact that Facebook was actually vastly increasing the omniscience of its ongoing ad-targeting program. Even as it dangled the opt-out olive branch, Facebook also revealed that it would now start incorporating your entire browsing history, as well as information gathered by your smartphone apps, into its ad targeting database. (Previously, ads served by Facebook limited themselves to targeting users based on their activity on Facebook. Now, everything goes!)



The move was classic Facebook: a design change that, as Jeff Chester, executive director of the Center for Digital Democracy, told the Washington Post, constitutes “a dramatic expansion of its spying on users.”

Of course, even while Facebook is spying on us, we certainly could be using Facebook to organize against dictators, or to follow 3-D gun maestro Cody Wilson, or to topple annoyingly un-libertarian congressional House majority leaders.

It’s confusing, this situation we’re in, where the digital tools of liberation are simultaneously tools of manipulation. It would be foolish to say that there is no utility to our digital infrastructure. But we need to at least ask ourselves the question — is it possible that in some important ways, we are less free now than before the Internet entered our lives? Because it’s not just Facebook who is spying on us; it’s everyone.

* * *

A week or so ago, I received a tweet from someone who had apparently read a story in which I was critical of the “sharing” economy:

I’ll be honest — I’m not exactly sure what “gun-yielding regulator thugs” are. (Maybe he meant gun-wielding?) But I was intrigued by the combination of the constitutionally guaranteed right to “free association” with the right of companies like Airbnb and Uber to operate free of regulatory oversight. The “sharing” economy is often marketed as some kind of hippy-dippy post-capitalist paradise — full of sympathy, with trust abounding — but it is also apparent that the popularity of these services taps into a deep reservoir of libertarian yearning. In the libertopia, we’ll dispense with government and even most corporations. All we’ll need will be convenient platforms that enable us to contract with each other for every essential service!

But what’s missing here is the realization that those ever-so-convenient platforms are actually far more intrusive and potentially oppressive than the incumbent regimes that they are displacing. Operating on a global scale, companies like Airbnb and Uber are amassing vast databases of information about what we do and where we go. They are even figuring out the kind of people that we are, through our social media profiles and the ratings and reputation systems that they deploy to enforce good behavior. They have our credit card numbers and real names and addresses. They’re inside our phones. The cab driver you paid with cash last year was an entirely anonymous transactor. Not so for the ride on Lyft or Uber. The sharing economy, it turns out, is an integral part of the surveillance economy. In our race to let Silicon Valley mediate every consumer experience, we are voluntarily imprisoning ourselves in the Panopticon.

The more data we generate, the more we open ourselves up to manipulation based on how that data is investigated and acted upon by algorithmic rules. Earlier this month, Slate published a fascinating article, titled “Data-Driven Discrimination: How algorithms can perpetuate poverty and inequality.”

It reads:

Unlike the mustache-twiddling racists of yore, conspiring to segregate and exploit particular groups, redlining in the Information Age can happen at the hands of well-meaning coders crafting exceedingly complex algorithms. Because algorithms learn from one another and iterate into new forms, becoming inscrutable even to the coders responsible for creating them, it’s harder for concerned parties to find the smoking gun of wrongdoing.

A potential example of such information redlining:

A transportation agency may pledge to open public transit data to inspire the creation of applications like “Next Bus,” which simplify how we plan trips and save time. But poorer localities often lack the resources to produce or share transit data, meaning some neighborhoods become dead zones—places your smartphone won’t tell you to travel to or through.
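
To make that failure mode concrete, here is a minimal routing sketch; the stops and graph are invented. The planner is not malicious: a neighborhood that never published a feed simply has no nodes, so the algorithm silently returns no route.

    from collections import deque

    # Toy "information redlining": the neighborhood that published no
    # transit feed never enters the graph, so no route ever reaches it.
    # Stop names and edges are invented for this sketch.
    transit_graph = {
        "Downtown": ["Midtown", "University"],
        "Midtown": ["Downtown", "University"],
        "University": ["Downtown", "Midtown"],
        # "Eastside" has no feed, so it is simply absent.
    }

    def find_route(graph, start, goal):
        """Breadth-first search; returns a list of stops, or None."""
        if start not in graph or goal not in graph:
            return None  # the dead zone: no data quietly means no route
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(find_route(transit_graph, "Downtown", "University"))  # a path
    print(find_route(transit_graph, "Downtown", "Eastside"))    # None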

And that’s a well-meaning example of how an algorithm can go awry! What happens when algorithms are designed purposely to discriminate? The most troubling aspect of our new information infrastructure is that the opportunities to manipulate us via our data are greatly expanded in an age of digital intermediation. The recommendations we get from Netflix or Amazon, the ads we see on Facebook, the search results we generate on Google — they’re all connected to and influenced by hard data on what we read and buy and watch and seek. Is this freedom? Or is it a more insidious set of constraints than we could ever possibly have imagined the first time we logged in and started exploring the online universe?

Why Online Tracking Is Getting Creepier

The merger of online and offline data is bringing more intrusive tracking.

The marketers that follow you around the web are getting nosier.

Currently, many companies track where users go on the Web—often through cookies—in order to display customized ads. That’s why if you look at a pair of shoes on one site, ads for those shoes may follow you around the Web.
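
In code, the cookie mechanism is roughly the following sketch; the "adnet_id" name, the domains and the server-side store are all invented, and real ad servers layer consent, expiry and auction logic on top.

    from http.cookies import SimpleCookie
    import uuid

    # Toy third-party-cookie retargeting. "adnet_id" and the domains are
    # invented; this only shows how one cookie links visits across sites.
    profiles = {}  # ad network's server-side store: cookie id -> pages seen

    def ad_request(page, cookie_header=""):
        """Runs each time a page embeds the ad network's tracking pixel."""
        jar = SimpleCookie(cookie_header)
        uid = jar["adnet_id"].value if "adnet_id" in jar else str(uuid.uuid4())
        profiles.setdefault(uid, []).append(page)
        return f"adnet_id={uid}"  # echoed back to the browser as Set-Cookie

    # The same browser visits two unrelated sites carrying the tracker:
    header = ad_request("shoes-shop.example/red-sneakers")
    header = ad_request("news-site.example/front-page", header)
    print(profiles)  # one profile links both visits -> shoe ads follow you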

But online marketers are increasingly seeking to track users offline, as well, by collecting data about people’s offline habits—such as recent purchases, where you live, how many kids you have, and what kind of car you drive.


Here’s how it works, according to some revealing marketing literature we came across from digital marketing firm LiveRamp:

  • A retailer—let’s call it The Pricey Store—collects the e-mail addresses of its high-spending customers. (Ever wonder why stores keep bugging you for your email at the checkout counter these days?)
  • The Pricey Store brings the list to LiveRamp, which locates the customers online when the customers use their email address to log into a website that has a relationship with LiveRamp. (The identity of these websites is a closely guarded secret.) The website that has a relationship with LiveRamp then allows LiveRamp to “tag” the customers’ computer with a tracker.
  • When those high-spending customers arrive at PriceyStore.com, they see a version of the site customized to “show more expensive offerings to them.” (Yes, the marketing documents really say that.)
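
A minimal sketch of the matching step in those three bullets, assuming (as the marketing literature implies) that the join key is a normalized, hashed email address; the emails and cookie IDs below are invented.

    import hashlib

    # Sketch of the email-hash "onboarding" join described above.
    # Emails and cookie IDs are invented for illustration.

    def hash_email(email):
        """Brokers typically match on a normalized, hashed address."""
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    # 1. The Pricey Store uploads hashes of its high spenders' emails.
    retailer_list = {hash_email("big.spender@example.com")}

    # 2. A partner site reports hashed logins plus a browser cookie ID.
    partner_logins = [
        (hash_email("Big.Spender@example.com"), "cookie-7f3a"),
        (hash_email("someone.else@example.com"), "cookie-91bb"),
    ]

    # 3. Matching hashes tie the offline customer to an online browser,
    #    which can then be "tagged" and shown the pricier site.
    tagged = {cookie for h, cookie in partner_logins if h in retailer_list}
    print(tagged)  # {'cookie-7f3a'}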

Tracking people using their real names—often called “onboarding”—is a hot trend in Silicon Valley. In 2012, ProPublica documented how political campaigns used onboarding to bombard voters with ads based on their party affiliation and donor history. Since then, Twitter and Facebook have both started offering onboarding services allowing advertisers to find their customers online.

“The marriage of online and offline is the ad targeting of the last 10 years on steroids,” said Scott Howe, chief executive of broker firm Acxiom at a conference earlier this year.

Last month, Acxiom—one of the country’s largest data brokers, which claims to have 3,000 data points on nearly every U.S. consumer—agreed to pay $310 million to purchase onboarding specialist LiveRamp. Acxiom and LiveRamp declined to comment for this article, citing the need to remain quiet until the acquisition is complete.

Companies that match users’ online and offline identities generally emphasize that the data is still anonymous because users’ actual names aren’t included in the cookie.

But critics worry about the implications of allowing data brokers to profile every person who is connected to the Internet. In May, the Federal Trade Commission issued a report that found that data brokers collected information on sensitive categories such as whether an individual is pregnant, has a “diabetes interest,” is interested in a “Bible Lifestyle” or is “likely to seek a [credit-card] chargeback.”

Previously, data brokers primarily sold this data to marketers who sent direct mail—aka “junk mail”—to your home. Now, they have found a new market: online marketing that can be targeted as precisely as junk mail.

“Will these classifications mean that some consumers will only be shown advertisements for subprime loans while others will see ads for credit cards?” Federal Trade Commission Chairwoman Edith Ramirez said at a press conference. “Will some be routinely shunted to inferior customer service?”

The FTC has called for Congress to pass legislation requiring data brokers to allow consumers to access their information and to opt out of targeted marketing. Currently, many data brokers don’t offer people either one.

The Direct Marketing Association, which represents the data broker industry, doesn’t offer a specific opt-out for onboarding. It does offer a global opt-out from all of its members’ direct mail databases, but it only requires members to remove people’s data for three years after they opt out.

Some companies offer their own opt-outs. Twitter allows users to opt out of onboarding by unchecking the “promoted content” button in their account settings. LiveRamp offers a so-called “permanent opt-out” for users who do not want to be targeted via their e-mail address.

Facebook does not offer a specific opt-out for onboarding. Instead, it suggests users opt out of the data brokers themselves. A Facebook spokesman says that users who don’t like specific targeted ads can avoid seeing them again by clicking an ‘x’ on the top right corner of the ad and following the links to the advertisers’ opt-out page.

Julia Angwin is a senior reporter at ProPublica. From 2000 to 2013, she was a reporter at The Wall Street Journal, where she led a privacy investigative team that was a finalist for a Pulitzer Prize in Explanatory Reporting in 2011 and won a Gerald Loeb Award in 2010.

http://www.propublica.org/article/why-online-tracking-is-getting-creepier?utm_source=et&utm_medium=email&utm_campaign=dailynewsletter

Who talks like FDR but acts like Ayn Rand? Easy: Silicon Valley’s wealthiest and most powerful people

Tech’s toxic political culture: The stealth libertarianism of Silicon Valley bigwigs

Ayn Rand, Marc Andreessen, Franklin D. Roosevelt (Credit: AP/Reuters/Fred Prouser/Salon)

Marc Andreessen is a major architect of our current technologically mediated reality. As the leader of the team that created the Mosaic Web browser in the early ’90s and as co-founder of Netscape, Andreessen, possibly more than any single other person, helped make the Internet accessible to the masses.

In his second act as a Silicon Valley venture capitalist, Andreessen has hardly slackened the pace. The portfolio of companies with investments from his VC firm, Andreessen Horowitz, is a roll-call for tech “disruption.” (Included on the list: Airbnb, Lyft, Box, Oculus VR, Imgur, Pinterest, RapGenius, Skype and, of course, Twitter and Facebook.) Social media, the “sharing” economy, Bitcoin — Andreessen’s dollars are fueling all of it.

So when the man tweets, people listen.

And, good grief, right now the man is tweeting. Since Jan. 1, when Andreessen decided to aggressively reengage with Twitter after staying mostly silent for years, @pmarca has been pumping out so many tweets that one wonders how he finds time to attend to his normal business.

On June 1, Andreessen took his game to a new level. In what seems to be a major bid to establish himself as Silicon Valley’s premier public intellectual, Andreessen has deployed Twitter to deliver a unified theory of tech utopia.

In seven different multi-part tweet streams, adding up to almost 100 tweets, Andreessen argues that we shouldn’t bother our heads about the prospect that robots will steal all our jobs. Technological innovation will end poverty, solve bottlenecks in education and healthcare, and usher in an era of ubiquitous affluence in which all our basic needs are taken care of. We will occupy our time engaged in the creative pursuits of our heart’s desire.



So how do we get there? Easy! All we have to do is just get out of Silicon Valley’s way. (Andreessen is never specific about exactly what he means by this, but it’s easy to guess: Don’t burden tech’s disruptive firms with the safety, health and insurance regulations that the old economy must abide by.)

Oh, and one other little thing: Make sure that we have a social welfare safety net robust enough to take care of the people who fall through the cracks (or are eaten by robots).

The full collection of tweets marks an impressive achievement — a manifesto, you might even call it, although Andreessen has been quick to distinguish his techno-capitalist-created utopia from any kind of Marxist paradise. But there’s a hole in his argument big enough to steer a $500 million round of Series A financing right through. Getting out of the way of Silicon Valley and ensuring a strong safety net add up to a political paradox. Because Silicon Valley doesn’t want to pay for the safety net.

* * *

http://www.salon.com/2014/06/06/techs_toxic_political_culture_the_stealth_libertarianism_of_silicon_valley_bigwigs/

Cut Off Glassholes’ Wi-Fi With This Google Glass Detector

Image: Julian Oliver

Not a fan of Google Glass’s ability to turn ordinary humans into invisibly recording surveillance cyborgs? Now you can create your own “glasshole-free zone.”

Berlin artist Julian Oliver has written a simple program called Glasshole.sh that detects any Glass device attempting to connect to a Wi-Fi network based on a unique character string that he says he’s found in the MAC addresses of Google’s augmented reality headsets. Install Oliver’s program on a Raspberry Pi or Beaglebone mini-computer and plug it into a USB network antenna, and the gadget becomes a Google Glass detector, sniffing the local network for signs of Glass users. When it detects Glass, it uses the program Aircrack-NG to impersonate the network and send a “deauthorization” command, cutting the headset’s Wi-Fi connection. It can also emit a beep to signal the Glass-wearer’s presence to anyone nearby.
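
Here, for flavor, is a minimal sketch of the detection half of such a tool, using Python and scapy. The vendor prefix below is a placeholder rather than the string Oliver found, the script needs root privileges and a wireless card in monitor mode, and the deauthentication step is deliberately omitted.

    # Sketch of the detection half of a Glasshole.sh-style tool.
    # Requires scapy, root privileges, and a card in monitor mode.
    # GLASS_OUI is a placeholder, NOT the string Oliver found; check
    # the IEEE OUI registry for the vendor prefix you care about.
    from scapy.all import sniff, Dot11

    GLASS_OUI = "f8:8f:ca"  # hypothetical vendor prefix

    def check_frame(pkt):
        if not pkt.haslayer(Dot11):
            return
        for mac in (pkt.addr1, pkt.addr2, pkt.addr3):
            if mac and mac.lower().startswith(GLASS_OUI):
                print("Possible Glass device on the air:", mac)
                # Oliver's tool goes further, using Aircrack-NG to send
                # a deauthorization command that drops the device's Wi-Fi.

    sniff(iface="wlan0mon", prn=check_frame, store=False)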

“To say ‘I don’t want to be filmed’ at a restaurant, at a party, or playing with your kids is perfectly OK. But how do you do that when you don’t even know if a device is recording?” Oliver tells WIRED. “This steps up the game. It’s taking a jammer-like approach.”

Oliver came up with the program after hearing that a fellow artist friend was disturbed by guests who showed up to his art exhibit wearing Glass. The device, after all, offered no way for the artist to know if the Glass-wearing visitors were photographing, recording, or even live-streaming his work.


Oliver’s program is still a mostly unproven demonstration, though the 40-year-old New Zealand native has successfully tested it by booting Glass off his own studio’s network. More importantly, it shows how the uneasiness with Glass’s social implications could play out as the device hits the mainstream. Bars in San Francisco and Seattle have already banned Glass-wearers. In January, a Glass-wearing moviegoer was suspected of piracy and questioned by Homeland Security agents after wearing the device in a theater. And the inventor of a Glass-like augmented reality setup claimed to have been violently thrown out of a Paris McDonald’s in 2012 based on the restaurant’s no-recording policy.

A program like Glasshole.sh could make those sorts of no-Glass policies more technically enforceable, though it may have to be adapted as Glass MAC addresses shift in future versions. And Oliver argues that a Glass-booting device is legal so long as the Glasshole.sh user is the owner of the network. He sees it as no different from cell phone jammers, which have been adopted in many schools, libraries, and government buildings.

Oliver warns, though, that the same Glass-ejecting technique could be used more aggressively: He plans to create another version of Glasshole.sh in the near future that’s designed to be a kind of roving Glass-disconnector, capable of knocking Glass off any network or even severing its link to the user’s phone. “That moves it from a territorial statement to ‘you can all go to hell.’ It’s a very different position, politically,” he says. For that version, Oliver says he plans to warn users that the program may be more legally ill-advised, and is only to be used “in extreme circumstances.”

As a long-time Berlin resident, Oliver says he sees Glass as a replay of the events surrounding Google Street View in Germany, where private citizens protested Google’s uninvited photography of their homes and places of work. He sees Glass as another case of Google violating privacy norms first and asking questions later.

“These are cameras, highly surreptitious in nature, with network backup function and no external indication of recording,” says Oliver. “To focus on the device is to dance past a heritage of heartfelt protest against the unconsented video documentation of our public places and spaces.”

http://www.wired.com/2014/06/find-and-ban-glassholes-with-this-artists-google-glass-detector/