Our new robot overlords: The terrifying uncertainty of our high-tech future

Are computers taking our jobs?
This article was originally published by Scientific American.

Last fall economist Carl Benedikt Frey and information engineer Michael A. Osborne, both at the University of Oxford, published a study estimating the probability that 702 occupations would soon be computerized out of existence. Their findings were startling. Advances in data mining, machine vision, artificial intelligence and other technologies could, they argued, put 47 percent of American jobs at high risk of being automated in the years ahead. Loan officers, tax preparers, cashiers, locomotive engineers, paralegals, roofers, taxi drivers and even animal breeders are all in danger of going the way of the switchboard operator.

Whether or not you buy Frey and Osborne’s analysis, it is undeniable that something strange is happening in the U.S. labor market. Since the end of the Great Recession, job creation has not kept up with population growth. Corporate profits have doubled since 2000, yet median household income (adjusted for inflation) dropped from $55,986 to $51,017. At the same time, after-tax corporate profits as a share of gross domestic product increased from around 5 to 11 percent, while compensation of employees as a share of GDP dropped from around 47 to 43 percent. Somehow businesses are making more profit with fewer workers.

Erik Brynjolfsson and Andrew McAfee, both business researchers at the Massachusetts Institute of Technology, call this divergence the “great decoupling.” In their view, presented in their recent book “The Second Machine Age,” it is a historic shift.

The conventional economic wisdom has long been that as long as productivity is increasing, all is well. Technological innovations foster higher productivity, which leads to higher incomes and greater well-being for all. And for most of the 20th century productivity and incomes did rise in parallel. But in recent decades the two began to diverge. Productivity kept increasing while incomes—which is to say, the welfare of individual workers—stagnated or dropped.

Brynjolfsson and McAfee argue that technological advances are destroying jobs, particularly low-skill jobs, faster than they are creating them. They cite research showing that so-called routine jobs (bank teller, machine operator, dressmaker) began to fade in the 1980s, when computers first made their presence known, but that the rate has accelerated: between 2001 and 2011, 11 percent of routine jobs disappeared.

Plenty of economists disagree, but it is hard to referee this debate, in part because of a lack of data. Our understanding of the relation between technological advances and employment is limited by outdated metrics. At a roundtable discussion on technology and work convened this year by the European Union, the ILR School at Cornell University and the Conference Board (a business research association), a roomful of economists and financiers repeatedly emphasized how many basic economic variables are measured either poorly or not at all. Is productivity declining? Or are we simply measuring it wrong? Experts differ. What kinds of workers are being sidelined, and why? Could they get new jobs with the right retraining? Again, we do not know.

In 2013 Brynjolfsson told Scientific American that the first step in reckoning with the impact of automation on employment is to diagnose it correctly—“to understand why the economy is changing and why people aren’t doing as well as they used to.” If productivity is no longer a good proxy for a vigorous economy, then we need a new way to measure economic health. In a 2009 report economists Joseph Stiglitz of Columbia University, Amartya Sen of Harvard University and Jean-Paul Fitoussi of the Paris Institute of Political Studies made a similar case, writing that “the time is ripe for our measurement system to shift emphasis from measuring economic production to measuring people’s well-being.” An ILR School report last year called for statistical agencies to capture more and better data on job market churn—data that could help us learn which job losses stem from automation.

Without such data, we will never properly understand how technology is changing the nature of work in the 21st century—and what, if anything, should be done about it. As one participant in this year’s roundtable put it, “Even if this is just another industrial revolution, people underestimate how wrenching that is. If it is, what are the changes to the rules of labor markets and businesses that should be made this time? We made a lot last time. What is the elimination of child labor this time? What is the eight-hour workday this time?”


The rise of data and the death of politics

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.
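The Corral pipeline the article describes – radio a plate from one end of the bridge, check it against a hot list, alert the patrol car at the far exit on a match – amounts to a simple set lookup. A minimal sketch in Python, with entirely hypothetical plate numbers:

```python
# Illustrative sketch of Operation Corral's lookup pipeline (plates hypothetical).
# A plate radioed from the bridge is checked against a "hot list" of flagged cars;
# a match triggers an alert to the patrol car at the bridge's other exit.

HOT_LIST = {"1B-2437", "7K-9901", "3A-5512"}  # stand-in for the 110,000-car database

def check_plate(plate: str) -> bool:
    """Return True if the plate matches a stolen or flagged vehicle."""
    return plate in HOT_LIST

def process_bridge_traffic(plates):
    """Yield the plates that should trigger an alert at the far exit."""
    for plate in plates:
        if check_plate(plate):
            yield plate

alerts = list(process_bridge_traffic(["9X-0001", "1B-2437", "4C-7788"]))
print(alerts)  # ['1B-2437']
```

The Univac's seven-second turnaround was limited by teletype and human relays; the lookup itself is the trivial part, then as now.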

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
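The feedback loop described here can be caricatured in a few lines. This is a deliberately naive word-count filter, not Google's actual system; the point is only that user feedback, rather than hand-written rules, drives the classification:

```python
# Toy feedback-trained spam filter (naive word-count scoring), sketching the
# "users teach it" loop described in the text. Real filters are far more elaborate.
from collections import Counter

spam_words, ham_words = Counter(), Counter()

def learn(message: str, is_spam: bool):
    """User feedback: marking a message as (not) spam updates the word statistics."""
    target = spam_words if is_spam else ham_words
    target.update(message.lower().split())

def looks_spammy(message: str) -> bool:
    """Score a message by which corpus its words resemble more."""
    words = message.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

learn("win free money now", is_spam=True)
learn("lunch meeting tomorrow", is_spam=False)
print(looks_spammy("free money"))        # True
print(looks_spammy("meeting tomorrow"))  # False
```

Each click on "report spam" is another call to `learn` – which is why the system can counter threats its designers never wrote a rule for.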

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”), hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
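Ashby's ultrastability can be sketched loosely in code: when a disturbance pushes the output outside a tolerance band, the system makes random step-changes to its internal parameters until stability returns. This is a toy illustration under invented numbers, not a faithful model of the homeostat's electromechanics:

```python
import random

# Loose sketch of Ashby-style "ultrastability": four units each contribute to a
# total output; when a disturbance knocks the output out of the tolerance band,
# the system re-randomises its internal weights until it is stable again.
random.seed(1)
TARGET, TOLERANCE = 10.0, 0.5

def output(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def restabilise(weights, inputs):
    """Random step-changes, Ashby-style, until the output is back in band."""
    while abs(output(weights, inputs) - TARGET) > TOLERANCE:
        weights = [random.uniform(0, 5) for _ in weights]
    return weights

inputs = [1.0, 1.0, 1.0, 1.0]
weights = [2.5, 2.5, 2.5, 2.5]   # stable configuration: output == 10.0
inputs[0] = 3.0                   # unexpected external disturbance
weights = restabilise(weights, inputs)
assert abs(output(weights, inputs) - TARGET) <= TOLERANCE
```

Note what the code does not contain: any model of the disturbance itself. Like the homeostat, it specifies only when to reorganise, not what it must reorganise against.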

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
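The redditometro's core logic – flag taxpayers whose recorded spending outruns their declared income – reduces to a one-line comparison. A hypothetical sketch (the margin and figures are invented for illustration):

```python
# Hypothetical sketch of an income-meter check like Italy's redditometro:
# flag taxpayers whose recorded spending exceeds declared income by some margin.

def flag_discrepancy(declared_income: float, recorded_spending: float,
                     margin: float = 0.2) -> bool:
    """Flag if spending exceeds declared income by more than `margin` (here 20%)."""
    return recorded_spending > declared_income * (1 + margin)

print(flag_discrepancy(30_000, 50_000))  # True: spends far more than declared
print(flag_discrepancy(30_000, 32_000))  # False: within tolerance
```

The hard part is not the comparison but the surveillance that feeds it – the "recorded thanks to an arcane Italian law" clause is doing all the work.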

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer
For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb: part of the reputation-driven economy.
Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process data at that scale. Google can be proven wrong after the fact – as was recently the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but the same is true of most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption-obsessed state.

Perhaps the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first and citizens second, we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state, but it eventually ran into a trap: having specified the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

 

Fukushima: Bad and Getting Worse

Global Physicians Issue Scathing Critique of UN Report on Fukushima


by JOHN LaFORGE

There is broad disagreement over the amounts and effects of radiation exposure due to the triple reactor meltdowns after the 2011 Great East-Japan Earthquake and tsunami. The International Physicians for the Prevention of Nuclear War (IPPNW) joined the controversy June 4, with a 27-page “Critical Analysis of the UNSCEAR Report ‘Levels and effects of radiation exposures due to the nuclear accident after the 2011 Great East-Japan Earthquake and tsunami.’”

IPPNW is the Nobel Peace Prize-winning global federation of doctors working for “a healthier, safer and more peaceful world.” The group has adopted a highly critical view of nuclear power because, as it says, “A world without nuclear weapons will only be possible if we also phase out nuclear energy.”

UNSCEAR, the United Nations Scientific Committee on the Effects of Atomic Radiation, published its deeply flawed report April 2. Its accompanying press release summed up its findings this way: “No discernible changes in future cancer rates and hereditary diseases are expected due to exposure to radiation as a result of the Fukushima nuclear accident.” The word “discernible” is a crucial disclaimer here.

Cancer, and the inexorable increase in cancer cases in Japan and around the world, is mostly caused by toxic pollution, including radiation exposure, according to the National Cancer Institute.[1] But distinguishing a particular cancer case as having been caused by Fukushima rather than by other toxins, or a combination of them, may be impossible – leading to UNSCEAR’s deceptive summation. As the IPPNW report says, “A cancer does not carry a label of origin…”

UNSCEAR’s use of the phrase “are expected” is also heavily nuanced. The increase in childhood leukemia cases near Germany’s operating nuclear reactors, compared to elsewhere, was not “expected,” but was proved in 1997. The findings, along with Chernobyl’s lingering consequences, led to the country’s federally mandated reactor phase-out. The plummeting of official childhood mortality rates around five US nuclear reactors after they were shut down was also “unexpected,” but shown by Joe Mangano and the Project on Radiation and Human Health.

The International Physicians’ analysis is severely critical of UNSCEAR’s current report, which echoes its 2013 Fukushima review and press release that said, “It is unlikely to be able to attribute any health effects in the future among the general public and the vast majority of workers.”

“No justification for optimistic presumptions”

The IPPNW’s report says flatly, “Publications and current research give no justification for such apparently optimistic presumptions.” UNSCEAR, the physicians complain, “draws mainly on data from the nuclear industry’s publications rather than from independent sources and omits or misinterprets crucial aspects of radiation exposure”, and “does not reveal the true extent of the consequences” of the disaster. As a result, the doctors say the UN report is “over-optimistic and misleading.” The UN’s “systematic underestimations and questionable interpretations,” the physicians warn, “will be used by the nuclear industry to downplay the expected health effects of the catastrophe” and will likely but mistakenly be considered by public authorities as reliable and scientifically sound. Dozens of independent experts report that radiation attributable health effects are highly likely.

Points of agreement: Fukushima is worse than reported and worsening still

Before detailing the multiple inaccuracies in the UNSCEAR report, the doctors list four major points of agreement. First, UNSCEAR improved on the World Health Organization’s health assessment of the disaster’s on-going radioactive contamination. UNSCEAR also professionally “rejects the use of a threshold for radiation effects of 100 mSv [millisieverts], used by the International Atomic Energy Agency in the past.” Like most health physicists, both groups agree that there is no radiation dose so small that it can’t cause negative health effects. There are exposures allowed by governments, but none of them are safe.

Second, the UN and the physicians agree that areas of Japan that were not evacuated were seriously contaminated with iodine-132, iodine-131 and tellurium-132, the worst reported instance being Iwaki City, where infants received 52 times the annual absorbed thyroid dose from natural background radiation. UNSCEAR also admitted that “people all over Japan” were affected by radioactive fallout (not just in Fukushima Prefecture) through contact with airborne or ingested radioactive materials. And while UNSCEAR acknowledged that “contaminated rice, beef, seafood, milk, milk powder, green tea, vegetables, fruits and tap water were found all over mainland Japan”, it neglected “estimating doses for Tokyo … which also received a significant fallout both on March 15 and 21, 2011.”

Third, UNSCEAR agrees that the nuclear industry’s and the government’s estimates of the total radioactive contamination of the Pacific Ocean are “far too low.” Still, the IPPNW report shows, UNSCEAR’s use of totally unreliable assumptions results in a grossly understated final estimate. For example, the UN report ignores all radioactive discharges to the ocean after April 30, 2011, even though roughly 300 tons of highly contaminated water has been pouring into the Pacific every day for three and a half years – about 346,500 tons in the first 38 months.
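The tonnage figure is simple arithmetic and easy to verify. A minimal sketch, assuming 38 months works out to roughly 1,155 days (the figure the report’s total implies):

```python
# Sanity check of the ~346,500-ton figure cited above: roughly 300 tons
# of contaminated water per day over the first 38 months (~1,155 days).
tons_per_day = 300
days = 1155  # 38 months at about 30.4 days per month
print(tons_per_day * days)  # 346500
```

At 300 tons per day the cited total corresponds almost exactly to 38 months of continuous discharge.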

Fourth, the Fukushima catastrophe is understood by both groups as an ongoing disaster, not the singular event portrayed by industry and commercial media. UNSCEAR even warns that ongoing radioactive pollution of the Pacific “may warrant further follow-up of exposures in the coming years,” and that “further releases could not be excluded in the future” – from forests and fields during rainy and typhoon seasons, when winds spread long-lived radioactive particles, and from waste management plans that now include incineration.

As the global doctors say, in their unhappy agreement with UNSCEAR, “In the long run, this may lead to an increase in internal exposure in the general population through radioactive isotopes from ground water supplies and the food chain.”

Physicians find ten grave failures in UN report

The majority of the IPPNW’s report details 10 major errors, flaws or discrepancies in the UNSCEAR paper and explains the study’s omissions, underestimates, inept comparisons, misinterpretations and unwarranted conclusions.

1. The total amount of radioactivity released by the disaster was underestimated by UNSCEAR, and its estimate was based on disreputable sources of information. UNSCEAR ignored 3.5 years of nonstop emissions of radioactive materials “that continue unabated,” and only dealt with releases during the first weeks of the disaster. UNSCEAR relied on a study by the Japan Atomic Energy Agency (JAEA) which, the IPPNW points out, “was severely criticized by the Fukushima Nuclear Accident Independent Investigation Commission … for its collusion with the nuclear industry.” The independent Norwegian Institute for Air Research’s estimate of cesium-137 released (available to UNSCEAR) was four times higher than the JAEA/UNSCEAR figure (37 PBq instead of 9 PBq). Even Tokyo Electric Power Co. itself estimated that iodine-131 releases were over four times higher than what JAEA/UNSCEAR reported (500 PBq vs. 120 PBq). UNSCEAR inexplicably chose to ignore large releases of strontium isotopes and 24 other radionuclides when estimating radiation doses to the public. (A PBq, or petabecquerel, is a quadrillion, or 10^15, becquerels. Put another way, a PBq equals about 27,000 curies, and one curie produces 37 billion atomic disintegrations per second.)
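The unit conversions in that parenthetical can be double-checked directly. A quick sketch using the standard definitions (1 PBq = 10^15 Bq; 1 curie = 3.7 × 10^10 disintegrations per second):

```python
# Check the becquerel/curie conversions cited in the text.
BQ_PER_PBQ = 1e15      # a petabecquerel is a quadrillion becquerels
BQ_PER_CURIE = 3.7e10  # one curie = 37 billion disintegrations per second

curies_per_pbq = BQ_PER_PBQ / BQ_PER_CURIE
print(round(curies_per_pbq))  # 27027 -- roughly the 27,000 curies cited

# The cesium-137 discrepancy: 37 PBq (Norwegian estimate) vs 9 PBq (JAEA)
print(round(37 / 9, 1))  # 4.1 -- "four times higher"
```

Both of the report’s figures check out: the curie conversion rounds to the cited 27,000, and the cesium-137 ratio is indeed about four to one.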

2. Internal radiation taken up with food and drink “significantly influences the total radiation dose an individual is exposed to,” the doctors note, and their critique warns pointedly, “UNSCEAR uses as its one and only source the still unpublished database of the International Atomic Energy Agency and the Food and Agriculture Organization. The IAEA was founded … to ‘accelerate and enlarge the contribution of atomic energy to peace, health and prosperity throughout the world.’ It therefore has a profound conflict of interest.” Food sample data from the IAEA should not be relied on, “as it discredits the assessment of internal radiation doses and makes the findings vulnerable to claims of manipulation.” As with its radiation release estimates, IAEA/UNSCEAR ignored the presence of strontium in food and water. Internal radiation dose estimates made by the Japanese Ministry for Science and Technology were 20, 40 and even 60 times higher than the highest numbers used in the IAEA/UNSCEAR reports.

 

3. To gauge radiation doses endured by over 24,000 workers on site at Fukushima, UNSCEAR relied solely on figures from Tokyo Electric Power Co., the severely compromised owners of the destroyed reactors. The IPPNW report dismisses all the conclusions drawn from Tepco, saying, “There is no meaningful control or oversight of the nuclear industry in Japan and data from Tepco has in the past frequently been found to be tampered with and falsified.”

4. The UNSCEAR report disregards current scientific fieldwork on actual radiation effects on plant and animal populations. Peer reviewed ecological and genetic studies from Chernobyl and Fukushima find evidence that low dose radiation exposures cause, the doctors point out, “genetic damage such as increased mutation rates, as well as developmental abnormalities, cataracts, tumors, smaller brain sizes in birds and mammals and further injuries to populations, biological communities and ecosystems.” Ignoring these studies, IPPNW says “gives [UNSCEAR] the appearance of bias or lack of rigor.”

5. The special vulnerability of the embryo and fetus to radiation was completely discounted by UNSCEAR, the physicians note. UNSCEAR shockingly said that doses to the fetus or breast-fed infants “would have been similar to those of other age groups,” a claim that, the IPPNW says, “goes against basic principles of neonatal physiology and radiobiology.” By dismissing the differences between an unborn child and an infant, UNSCEAR “underestimates the health risks of this particularly vulnerable population.” The doctors quote a 2010 report from American Family Physician that, “in utero exposure can be teratogenic, carcinogenic or mutagenic.”

6. Non-cancerous diseases associated with radiation doses — such as cardiovascular diseases, endocrinological and gastrointestinal disorders, infertility, genetic mutations in offspring and miscarriages — have been documented in medical journals, but are totally dismissed by UNSCEAR. The physicians remind us that large epidemiological studies have shown undeniable associations of low-dose ionizing radiation to non-cancer health effects and “have not been scientifically challenged.”

7. The UNSCEAR report downplays the health impact of low-doses of radiation by misleadingly comparing radioactive fallout to “annual background exposure.” The IPPNW scolds the UNSCEAR saying it is, “not scientific to argue that natural background radiation is safe or that excess radiation from nuclear fallout that stays within the dose range of natural background radiation is harmless.” In particular, ingested or inhaled radioactive materials, “deliver their radioactive dose directly and continuously to the surrounding tissue” — in the thyroid, bone or muscles, etc. — “and therefore pose a much larger danger to internal organs than external background radiation.”

8. Although UNSCEAR’s April 2 Press Release and Executive Summary give the direct and mistaken impression that there will be no radiation health effects from Fukushima, the report itself states that the Committee “does not rule out the possibility of future excess cases or disregard the suffering associated…” Indeed, UNSCEAR admits to “incomplete knowledge about the release rates of radionuclides over time and the weather conditions during the releases.” UNSCEAR concedes that “there were insufficient measurements of gamma dose rate…” and that, “relatively few measurements of foodstuff were made in the first months.” IPPNW warns that these glaring uncertainties completely negate the level of certainty implied in UNSCEAR’s Exec. Summary.

9. UNSCEAR often praises the protective measures taken by Japanese authorities, but the IPPNW finds it “odd that a scientific body like UNSCEAR would turn a blind eye to the many grave mistakes of the Japanese disaster management…” The central government was slow to inform local governments and “failed to convey the severity of the accident,” according to the Fukushima Nuclear Accident Independent Investigation Commission. Crisis management “did not function correctly,” the Commission said, and its failure to distribute stable iodine “caused thousands of children to become irradiated with iodine-131,” IPPNW reports.

10. The UNSCEAR report lists “collective” radiation doses “but does not explain the expected cancer cases that would result from these doses.” This long chapter of IPPNW’s report can’t be summarized easily. The doctors offer conservative estimates, “keeping in mind that these most probably represent underestimations for the reasons listed above.” The IPPNW estimates that the Fukushima catastrophe will cause 4,300 to 16,800 excess cases of cancer in Japan in the coming decades, with cancer deaths ranging between 2,400 and 9,100. UNSCEAR may call these numbers insignificant, the doctors archly point out, but individual cancers are debilitating and terrifying and they “represent preventable and man-made diseases” and fatalities.

IPPNW concludes that Fukushima’s radiation disaster is “far from over”: the destroyed reactors are still unstable; radioactive liquids and gases continuously leak from the complex wreckage; melted fuel and used fuel in quake-damaged cooling pools hold enormous quantities of radioactivity “and are highly vulnerable to further earthquakes, tsunamis, typhoons and human error.” Catastrophic releases of radioactivity “could occur at any time and eliminating this risk will take many decades.”

IPPNW finally recommends urgent actions that governments should take, because the UNSCEAR report, “does not adhere to scientific standards of neutrality,” “represents a systematic underestimation,” “conjures up an illusion of scientific certainty that obscures the true impact of the nuclear catastrophe on health and the environment,” and its conclusion is phrased “in such a way that would most likely be misunderstood by most people…”

John LaForge works for Nukewatch, a nuclear watchdog and anti-war group in Wisconsin, and edits its Quarterly.

Notes.


[1] Nancy Wilson, National Cancer Institute, “The Majority of Cancers Are Linked to the Environment,” NCI Benchmarks, Vol. 4, Issue 3, June 17, 2004.

 

http://www.counterpunch.org/2014/07/18/fukushima-bad-and-getting-worse/

 

THE BULLSHIT MACHINE

Here’s a tiny confession. I’m bored.

Yes; I know. I’m a sinner. Go ahead. Burn me at the stake of your puritanical Calvinism; the righteously, thoroughly, well, boring idea that boredom itself is a moral defect; that a restless mind is the Devil’s sweatshop.

There’s nothing more boring than that; and I’ll return to that very idea at the end of this essay; which I hope is the beginning.

What am I bored of? Everything. Blogs books music art business ideas politics tweets movies science math technology…but more than that: the spirit of the age; the atmosphere of the time; the tendency of the now; the disposition of the here.

Sorry; but it’s true. It’s boring me numb and dumb.

A culture that prizes narcissism above individualism. A politics that places “tolerance” above acceptance. A spirit that encourages cynicism over reverence. A public sphere that places irony over sincerity. A technosophy that elevates “data” over understanding. A society that puts “opportunity” before decency. An economy that…you know. Works us harder to make us poorer at “jobs” we hate where we make stuff that sucks every last bit of passion from our souls to sell to everyone else who’s working harder to get poorer at “jobs” they hate where they make stuff that sucks every last bit of passion from their souls.

To be bored isn’t to be indifferent. It is to be fatigued. Because one is exhausted. And that is precisely where—and only where—the values above lead us. To exhaustion; with the ceaseless, endless, meaningless work of maintaining the fiction. Of pretending that who we truly want to be is what everyone believes everyone else wants to be. Liked, not loved; “attractive”, not beautiful; clever, not wise; snarky, not happy; advantaged, not prosperous.

It exhausts us; literally; this game of parasitically craving everyone’s cravings. It makes us adversaries not of one another; but of ourselves. Until there is nothing left. Not of us as we are; but of the people we might have been. The values above shrink and reduce and diminish our potential; as individuals, as people, as societies. And so I have grown fatigued by them.

Ah, you say. But when hasn’t humanity always suffered all the above? Please. Let’s not mince ideas. Unless you think the middle class didn’t actually thrive once; unless you think that the gentleman that’s made forty-seven Saw flicks (so far) is this generation’s Alfred Hitchcock; unless you believe that this era has a John Lennon; unless you think that Jeff Koons is Picasso…perhaps you see my point.

I’m bored, in short, of what I’d call a cycle of perpetual bullshit. A bullshit machine. The bullshit machine turns life into waste.

The bullshit machine looks something like this. Narcissism about who you are leads to cynicism about who you could be leads to mediocrity in what you do…leads to narcissism about who you are. Narcissism leads to cynicism leads to mediocrity…leads to narcissism.

Let me simplify that tiny model of the stalemate the human heart can reach with life.

The bullshit machine is the work we do only to live lives we don’t want, need, love, or deserve.

Everything’s work now. Relationships; hobbies; exercise. Even love. Gruelling; tedious; unrelenting; formulaic; passionless; calculated; repetitive; predictable; analysed; mined; timed; performed.

Work is bullshit. You know it, I know it; mankind has always known it. Sure; you have to work at what you want to accomplish. But that’s not the point. It is the flash of genius; the glimmer of intuition; the afterglow of achievement; the savoring of experience; the incandescence of meaning; all these make life worthwhile, pregnant, impossible, aching with purpose. These are the ends. Work is merely the means.

Our lives are confused like that. They are means without ends; model homes; acts which we perform, but do not fully experience.

Remember when I mentioned puritanical Calvinism? The idea that being bored is itself a sign of a lack of virtue—and that is, itself, the most boring idea in the world?

That’s the battery that powers the bullshit machine. We’re not allowed to admit it: that we’re bored. We’ve always got to be doing something. Always always always. Tapping, clicking, meeting, partying, exercising, networking, “friending”. Work hard, play hard, live hard. Improve. Gain. Benefit. Realize.

Hold on. Let me turn on crotchety Grandpa mode. Click.

Remember when cafes used to be full of people…thinking? Now I defy you to find one not full of people Tinder—Twitter—Facebook—App-of-the-nanosecond-ing; furiously. Like true believers hunched over the glow of a spiritualized Eden they can never truly enter; which is precisely why they’re mesmerized by it. The chance at a perfect life; full of pleasure; the perfect partner, relationship, audience, job, secret, home, career; it’s a tap away. It’s something like a slot-machine of the human soul, this culture we’re building. The jackpot’s just another coin away…forever. Who wouldn’t be seduced by that?

Winners of a million followers, fans, friends, lovers, dollars…after all, a billion people tweeting, updating, flicking, swiping, tapping into the void a thousand times a minute can’t be wrong. Can they?

And therein is the paradox of the bullshit machine. We do more than humans have ever done before. But we are not accomplishing much; and we are, it seems to me, becoming even less than that.

The more we do, the more passive we seem to become. Compliant. Complaisant. As if we are merely going through the motions.

Why? We are something like apparitions today; juggling a multiplicity of selves through the noise; the “you” you are on Facebook, Twitter, Tumblr, Tinder…wherever…at your day job, your night job, your hobby, your primary relationship, your friend-with-benefits, your incredibly astonishing range of extracurricular activities. But this hyperfragmentation of self gives rise to a kind of schizophrenia; conflicts, dissociations, tensions, dislocations, anxieties, paranoias, delusions. Our social wombs do not give birth to our true selves; the selves explosive with capability, possibility, wonder.

Tap tap tap. And yet. We are barely there, at all; in our own lives; in the moments which we will one day look back on and ask ourselves…what were we thinking wasting our lives on things that didn’t matter at all?

The answer, of course, is that we weren’t thinking. Or feeling. We don’t have time to think anymore. Thinking is a superluxury. Feeling is an even bigger superluxury. In an era where decent food, water, education, and healthcare are luxuries; thinking and feeling are activities too costly for society to allow. They are a drag on “growth”; a burden on “productivity”; they slow down the furious acceleration of the bullshit machine.

And so. Here we are. Going through the motions. The bullshit machine says the small is the great; the absence is the presence; the vicious is the noble; the lie is the truth. We believe it; and, greedily, it feeds on our belief. The more we feed it, the more insatiable it becomes. Until, at last, we are exhausted. By pretending to want the lives we think we should; instead of daring to live the lives we know we could.

Fuck it. Just admit it. You’re probably just as bored as I am.

Good for you.

Welcome to the world beyond the Bullshit Machine.

“Alive Inside”: Music may be the best medicine for dementia

A heartbreaking new film explores the breakthrough that can help severely disabled seniors: It’s called the iPod

"Alive Inside": Music may be the best medicine for dementia

One physician who works with the elderly tells Michael Rossato-Bennett’s camera, in the documentary “Alive Inside,” that he can write prescriptions for $1,000 a month in medications for older people under his care, without anyone in the healthcare bureaucracy batting an eye. Somebody will pay for it (ultimately that somebody is you and me, I suppose) even though the powerful pharmaceutical cocktails served up in nursing homes do little or nothing for people with dementia, except keep them docile and manageable. But if he wants to give those older folks $40 iPods loaded up with music they remember – which both research and empirical evidence suggest will improve their lives immensely — well, you can hardly imagine the dense fog of bureaucratic hostility that descends upon the whole enterprise.

“Alive Inside” is straightforward advocacy cinema, but it won the audience award at Sundance this year because it will completely slay you, and it has the greatest advantages any such movie can have: Its cause is easy to understand, and requires no massive social change or investment. Furthermore, once you see the electrifying evidence, it becomes nearly impossible to oppose. This isn’t fracking or climate change or drones; I see no possible way for conservatives to turn the question of music therapy for senior citizens into some kind of sinister left-wing plot. (“Next up on Fox News: Will Elton John turn our seniors gay?”) All the same, social worker Dan Cohen’s crusade to bring music into nursing homes could be the leading edge of a monumental change in the way we approach the care and treatment of older people, especially the 5 million or so Americans living with dementia disorders.

You may already have seen a clip from “Alive Inside,” which became a YouTube hit not long ago: An African-American man in his 90s named Henry, who spends his waking hours in a semi-dormant state, curled inward like a fetus with his eyes closed, is given an iPod loaded with the gospel music he grew up with. The effect seems almost impossible and literally miraculous: Within seconds his eyes are open, he’s singing and humming along, and he’s fully present in the room, talking to the people around him. It turns out Henry prefers the scat-singing of Cab Calloway to gospel, and a brief Calloway imitation leads him into memories of a job delivering groceries on his bicycle, sometime in the 1930s.



Of course Henry is still an elderly and infirm person who is near the end of his life. But the key word in that sentence is “person”; we become startlingly and heartbreakingly aware that an entire person’s life experience is still in there, locked inside Henry’s dementia and isolation and overmedication. As Oliver Sacks put it, drawing on a word from the King James Bible, Henry has been “quickened,” or returned to life, without the intervention of supernatural forces. It’s not like there’s just one such moment of tear-jerking revelation in “Alive Inside.” There might be a dozen. I’m telling you, one of those little pocket packs of tissue is not gonna cut it. Bring the box.

There’s the apologetic old lady who claims to remember nothing about her girlhood, until Louis Armstrong singing “When the Saints Go Marching In” brings back a flood of specific memories. (Her mom was religious, and Armstrong’s profane music was taboo. She had to sneak off to someone else’s house to hear his records.) There’s the woman with multiple psychiatric disorders and a late-stage cancer diagnosis, who ditches the wheelchair and the walker and starts salsa dancing. There’s the Army veteran who lost all his hair in the Los Alamos A-bomb test and has difficulty recognizing a picture of his younger self, abruptly busting out his striking baritone to sing along with big-band numbers. “It makes me feel like I got a girl,” he says. “I’m gonna hold her tight.” There’s the sweet, angular lady in late middle age, a boomer grandma who can’t reliably find the elevator in her building, or tell the up button from the down, boogieing around the room to the Beach Boys’ “I Get Around,” as if transformed into someone 20 years younger. The music cannot get away from her, she says, as so much else has done.

There’s a bit of hard science in “Alive Inside” (supplied by Sacks in fascinating detail) and also the beginnings of an immensely important social and cultural debate about the tragic failures of our elder-care system and how the Western world will deal with its rapidly aging population. As Sacks makes clear, music is a cultural invention that appears to access areas of the brain that evolved for other reasons, and those areas remain relatively unaffected by the cognitive decline that goes with Alzheimer’s and other dementia disorders. While the “quickening” effect observed in someone like Henry is not well understood, it appears that stimulating those undamaged areas of the brain with beloved and familiar signals (and what will we ever love more than the hit songs of our youth?) can unlock other things at least temporarily, including memory, verbal ability, and emotion. Sacks doesn’t address this, but the effects appear physical as well: Everyone we see in the film becomes visibly more active, even the man with late-stage multiple sclerosis and the semi-comatose woman who never speaks.

Dementia is a genuine medical phenomenon, as anyone who has spent time around older people can attest, and one that’s likely to exert growing psychic and economic stress on our society as the population of people over 65 continues to grow. But you can’t help wondering whether our social practice of isolating so many old people in anonymous, characterless facilities that are entirely separated from the rhythms of ordinary social life has made the problem considerably worse. As one physician observes in the film, the modern-day Medicare-funded nursing home is like a toxic combination of the poorhouse and the hospital, and the social stigma attached to those places is as strong as the smell of disinfectant and overcooked Salisbury steak. Our culture is devoted to the glamour of youth and the consumption power of adulthood; we want to think about old age as little as possible, even though many of us will live one-quarter to one-third of our lives as senior citizens.

Rossato-Bennett keeps the focus of “Alive Inside” on Dan Cohen’s iPod crusade (run through a nonprofit called Music & Memory), which is simple, effective and has achievable goals. The two of them tread more lightly on the bigger philosophical questions, but those are definitely here. Restoring Schubert or Motown to people with dementia or severe disabilities can be a life-changing moment, but it’s also something of a metaphor, and the lives that really need changing are our own. Instead of treating older people as a medical and financial problem to be managed and contained, could we have a society that valued, nurtured and revered them, as most societies did before the coming of industrial modernism? Oh, and if you’re planning to visit me in 30 or 40 years, with whatever invisible gadget then exists, please take note: No matter how far gone I am, you’ll get me back with “Some Girls,” Roxy Music’s “Siren” and Otto Klemperer’s 1964 recording of “The Magic Flute.”

“Alive Inside” opens this week at the Sunshine Cinema in New York. It opens July 25 in Huntington, N.Y., Toronto and Washington; Aug. 1 in Asbury Park, N.J., Boston, Los Angeles and Philadelphia; Aug. 8 in Chicago, Martha’s Vineyard, Mass., Palm Springs, Calif., San Diego, San Francisco, San Jose, Calif., and Vancouver, Canada; Aug. 15 in Denver, Minneapolis and Phoenix; and Aug. 22 in Atlanta, Dallas, Harrisburg, Pa., Portland, Ore., Santa Fe, N.M., Seattle and Spokane, Wash., with more cities and home video to follow.

http://www.salon.com/2014/07/15/alive_inside_music_may_be_the_best_medicine_for_dementia/?source=newsletter

The universe according to Nietzsche

Modern cosmology and the theory of eternal recurrence

The philosopher’s musings on the nature of reality could have scientific basis, according to a prominent physicist

The universe according to Nietzsche: Modern cosmology and the theory of eternal recurrence

Friedrich Nietzsche (Credit: AP/Salon)

Excerpted from “The Universe: Leading Scientists Explore the Origin, Mysteries, and Future of the Cosmos.” It originally appeared as a speech given by Steinhardt at an event in 2002.

If you were to ask most cosmologists to give a summary of where we stand right now in the field, they would tell you that we live in a very special period in human history where, thanks to a whole host of advances in technology, we can suddenly view the very distant and very early universe in ways we haven’t been able to do ever before. For example, we can get a snapshot of what the universe looked like in its infancy, when the first atoms were forming. We can get a snapshot of what the universe looked like in its adolescence, when the first stars and galaxies were forming. And we are now getting a full detail, three-dimensional image of what the local universe looks like today. When you put together this different information, which we’re getting for the first time in human history, you obtain a very tight series of constraints on any model of cosmic evolution.

If you go back to the different theories of cosmic evolution in the early 1990s, the data we’ve gathered in the last decade has eliminated all of them save one, a model that you might think of today as the consensus model. This model involves a combination of the Big Bang model as developed in the 1920s, ’30s, and ’40s; the inflationary theory, which Alan Guth proposed in the 1980s; and a recent amendment that I will discuss shortly. This consensus theory matches the observations we have of the universe today in exquisite detail. For this reason, many cosmologists conclude that we have finally determined the basic cosmic history of the universe.

But I have a rather different point of view, a view that has been stimulated by two events. The first is the recent amendment to which I referred earlier. I want to argue that the recent amendment is not simply an amendment but a real shock to our whole notion of time and cosmic history. And secondly, in the last year I’ve been involved in the development of an alternative theory that turns the cosmic history topsy-turvy: All the events that created the important features of our universe occur in a different order, by different physics, at different times, over different time scales. And yet this model seems capable of reproducing all the successful predictions of the consensus picture with the same exquisite detail.

The key difference between this picture and the consensus picture comes down to the nature of time. The standard model, or consensus model, assumes that time has a beginning that we normally refer to as the Big Bang. According to that model, for reasons we don’t quite understand, the universe sprang from nothingness into somethingness, full of matter and energy, and has been expanding and cooling for the past 15 billion years. In the alternative model, the universe is endless. Time is endless, in the sense that it goes on forever in the past and forever in the future, and in some sense space is endless. Indeed, our three spatial dimensions remain infinite throughout the evolution of the universe.

More specifically, this model proposes a universe in which the evolution of the universe is cyclic. That is to say, the universe goes through periods of evolution from hot to cold, from dense to under-dense, from hot radiation to the structure we see today, and eventually to an empty universe. Then, a sequence of events occurs that causes the cycle to begin again. The empty universe is reinjected with energy, creating a new period of expansion and cooling. This process repeats periodically forever. What we’re witnessing now is simply the latest cycle.

The notion of a cyclic universe is not new. People have considered this idea as far back as recorded history. The ancient Hindus, for example, had a very elaborate and detailed cosmology based on a cyclic universe. They predicted the duration of each cycle to be 8.64 billion years—a prediction with three-digit accuracy. This is very impressive, especially since they had no quantum mechanics and no string theory! It disagrees with the number I’m going to suggest, which is trillions of years rather than billions.

The cyclic notion has also been a recurrent theme in Western thought. Edgar Allan Poe and Friedrich Nietzsche, for example, each had cyclic models of the universe, and in the early days of relativistic cosmology Albert Einstein, Alexander Friedmann, Georges Lemaître, and Richard Tolman were interested in the cyclic idea. I think it’s clear why so many have found the cyclic idea to be appealing: If you have a universe with a beginning, you have the challenge of explaining why it began and the conditions under which it began. If you have a universe that’s cyclic, it’s eternal, so you don’t have to explain the beginning.

During attempts to bring cyclic ideas into modern cosmology, it was discovered in the 1920s and ’30s that there are various technical problems. The idea at that time was a cycle in which our three-dimensional universe goes through periods of expansion beginning from the Big Bang and then reversal to contraction and a Big Crunch. The universe bounces, and expansion begins again. One problem is that every time the universe contracts to a crunch, the density and temperature of the universe rises to an infinite value, and it is not clear if the usual laws of physics can be applied.

Second, every cycle of expansion and contraction creates entropy through natural thermodynamic processes, which adds to the entropy from earlier cycles. So at the beginning of a new cycle, there is higher entropy density than the cycle before. It turns out that the duration of a cycle is sensitive to the entropy density. If the entropy increases, the duration of the cycle increases as well. So, going forward in time, each cycle becomes longer than the one before. The problem is that, extrapolating back in time, the cycles become shorter until, after a finite time, they shrink to zero duration. The problem of avoiding a beginning has not been solved; it has simply been pushed back a finite number of cycles. If we’re going to reintroduce the idea of a truly cyclic universe, these two problems must be overcome. The cyclic model I will describe uses new ideas to do just that.

To appreciate why an alternative model is worth pursuing, it’s important to get a more detailed impression of what the consensus picture is like. Certainly some aspects are appealing. But what I want to argue is that, overall, the consensus model is not so simple. In particular, recent observations have forced us to amend the consensus model and make it more complicated. So, let me begin with an overview of the consensus model.

The consensus theory begins with the Big Bang: The universe has a beginning. It’s a standard assumption that people have made over the last fifty years, but it’s not something we can prove at present from any fundamental laws of physics. Furthermore, you have to assume that the universe began with an energy density less than the critical value. Otherwise, the universe would stop expanding and recollapse before the next stage of evolution, the inflationary epoch. In addition, to reach this inflationary stage, there must be some sort of energy to drive the inflation. Typically this is assumed to be due to an inflaton field. You have to assume that in those patches of the universe that began at less than the critical density, a significant fraction of the energy is stored in inflation energy so that it can eventually overtake the universe and start the period of accelerated expansion. All of these are reasonable assumptions, but assumptions nevertheless. It’s important to take into account these assumptions and ingredients, because they’re helpful in comparing the consensus model to the challenger.

Assuming these conditions are met, the inflation energy overtakes the matter and radiation after a few instants. The inflationary epoch commences, and the expansion of the universe accelerates at a furious pace. The inflation does a number of miraculous things: It makes the universe homogeneous, it makes the universe flat, and it leaves behind certain inhomogeneities, which are supposed to be the seeds for the formation of galaxies. Now the universe is prepared to enter the next stage of evolution with the right conditions. According to the inflationary model, the inflation energy decays into a hot gas of matter and radiation. After a second or so, the first light nuclei form. After a few tens of thousands of years, the slowly moving matter dominates the universe. It’s during these stages that the first atoms form, the universe becomes transparent, and the structure in the universe begins to form—the first stars and galaxies. Up to this point, the story is relatively simple.

But there is the recent discovery that we’ve entered a new stage in the evolution of the universe. After the stars and galaxies have formed, something strange has happened to cause the expansion of the universe to speed up again. During the 15 billion years when matter and radiation dominated the universe and structure was forming, the expansion of the universe was slowing down, because the matter and radiation within it are gravitationally self-attractive and resist the expansion of the universe. Until very recently, it had been presumed that matter would continue to be the dominant form of energy in the universe and this deceleration would continue forever.

But we’ve discovered instead, due to recent observations, that the expansion of the universe is speeding up. This means that most of the energy of the universe is neither matter nor radiation. Rather, another form of energy has overtaken the matter and radiation. For lack of a better term, this new energy form is called dark energy. Dark energy, unlike the matter and radiation we’re familiar with, is gravitationally self-repulsive. That’s why it causes the expansion to speed up rather than slow down. In Newton’s theory of gravity, all mass is gravitationally attractive, but Einstein’s theory allows the possibility of forms of energy that are gravitationally self-repulsive.

I don’t think either the physics or cosmology communities, or even the general public, have fully absorbed the full implications of this discovery. This is a revolution in the grand historic sense—in the Copernican sense. In fact, if you think about Copernicus—from whom we derive the word “revolution”—his importance was that he changed our notion of space and of our position in the universe. By showing that the Earth revolves around the sun, he triggered a chain of ideas that led us to the notion that we live in no particular place in the universe; there’s nothing special about where we are. Now we’ve discovered something very strange about the nature of time: that we may live in no special place, but we do live at a special time, a time of recent transition from deceleration to acceleration; from one in which matter and radiation dominate the universe to one in which they are rapidly becoming insignificant components; from one in which structure is forming in ever larger scales to one in which now, because of this accelerated expansion, structure formation stops. We are in the midst of the transition between these two stages of evolution. And just as Copernicus’ proposal that the Earth is no longer the center of the universe led to a chain of ideas that changed our whole outlook on the structure of the solar system and eventually to the structure of the universe, it shouldn’t be too surprising that perhaps this new discovery of cosmic acceleration could lead to a whole change in our view of cosmic history. That’s a big part of the motivation for thinking about our alternative proposal.

With these thoughts about the consensus model in mind, let me turn to the cyclic proposal. Since it’s cyclic, I’m allowed to begin the discussion of the cycle at any point I choose. To make the discussion parallel, I’ll begin at a point analogous to the Big Bang; I’ll call it the Bang. This is a point in the cycle where the universe reaches its highest temperature and density. In this scenario, though, unlike the Big Bang model, the temperature and density don’t diverge. There is a maximal, finite temperature. It’s a very high temperature, around 10^20 degrees Kelvin—hot enough to evaporate atoms and nuclei into their fundamental constituents—but it’s not infinite. In fact, it’s well below the so-called Planck energy scale, where quantum gravity effects dominate. The theory begins with a bang and then proceeds directly to a phase dominated by radiation. In this scenario you do not have the inflation one has in the standard scenario. You still have to explain why the universe is flat, you still have to explain why the universe is homogeneous, and you still have to explain where the fluctuations came from that led to the formation of galaxies, but that’s not going to be explained by an early stage of inflation. It’s going to be explained by yet a different stage in the cyclic universe, which I’ll get to.

In this new model, you go directly to a radiation-dominated universe and form the usual nuclear abundances; then go directly to a matter-dominated universe in which the atoms and galaxies and larger-scale structure form; and then proceed to a phase of the universe dominated by dark energy. In the standard case, the dark energy comes as a surprise, since it’s something you have to add into the theory to make it consistent with what we observe. In the cyclic model, the dark energy moves to center stage as the key ingredient that is going to drive the universe, and in fact drives the universe, into the cyclic evolution. The first thing the dark energy does when it dominates the universe is what we observe today: It causes the expansion of the universe to begin to accelerate. Why is that important? Although this acceleration rate is 100 orders of magnitude smaller than the acceleration that one gets in inflation, if you give the universe enough time it actually accomplishes the same feat that inflation does. Over time, it thins out the distribution of matter and radiation in the universe, making the universe more and more homogeneous and isotropic—in fact, making it perfectly so—driving it into what is essentially a vacuum state.

Seth Lloyd said there were 10^80 or 10^90 bits inside the horizon, but if you were to look around the universe in a trillion years, you would find on average no bits inside your horizon, or less than one bit inside your horizon. In fact, when you count these bits, it’s important to realize that now that the universe is accelerating, our computer is actually losing bits from inside our horizon. This is something that we observe.

At the same time that the universe is made homogeneous and isotropic, it is also being made flat. Any warp or curvature the universe had gets stretched away over this long period of time; although it’s a slow process, it makes the space extremely flat. If it continued forever, of course, that would be the end of the story. But in this scenario, just like inflation, the dark energy survives only for a finite period and triggers a series of events that eventually lead to a transformation of energy from gravity into new energy and radiation that will then start a new period of expansion of the universe. From a local observer’s point of view, it looks like the universe goes through exact cycles; that is to say, it looks like the universe empties out each round and new matter and radiation are created, leading to a new period of expansion. In this sense it’s a cyclic universe. If you were a global observer and could see the entire universe, you’d discover that our three dimensions are forever infinite in this story. What’s happened is that at each stage when we create matter and radiation, it gets thinned out. It’s out there somewhere, but it’s getting thinned out. Locally, it looks like the universe is cyclic, but globally the universe has a steady evolution, a well-defined era in which, over time and throughout our three dimensions, entropy increases from cycle to cycle.

Exactly how this works in detail can be described in various ways. I will choose to present a very nice geometrical picture that’s motivated by superstring theory. We use only a few basic elements from superstring theory, so you don’t really have to know anything about superstring theory to understand what I’m going to talk about, except to understand that some of the strange things I’m going to introduce I am not introducing for the first time. They’re already sitting there in superstring theory waiting to be put to good purpose.

One of the ideas in superstring theory is that there are extra dimensions; it’s an essential element to that theory, which is necessary to make it mathematically consistent. In one particular formulation of that theory, the universe has a total of eleven dimensions. Six of them are curled up into a little ball so tiny that, for my purposes, I’m just going to pretend they’re not there. However, there are three spatial dimensions, one time dimension, and one additional dimension that I do want to consider. In this picture, our three dimensions with which we’re familiar and through which we move lie along a hypersurface, or membrane. This membrane is a boundary of the extra dimension. There is another boundary, or membrane, on the other side. In between, there’s an extra dimension that, if you like, only exists over a certain interval. It’s as if we are one slice of a sandwich, in between which there is a so-called bulk volume of space. These surfaces are referred to as orbifolds or branes—the latter referring to the word “membrane.” The branes have physical properties. They have energy and momentum, and when you excite them you can produce things like quarks and electrons. We are composed of the quarks and electrons on one of these branes. And, since quarks and leptons can only move along branes, we are restricted to moving along and seeing only the three dimensions of our brane. We cannot see directly the bulk or any matter on the other brane.

In the cyclic universe, at regular intervals of trillions of years, these two branes smash together. This creates all kinds of excitations—particles and radiation. The collision thereby heats up the branes, and then they bounce apart again. The branes are attracted to each other through a force that acts just like a spring, causing the branes to come together at regular intervals. To describe it more completely, what’s happening is that the universe goes through two kinds of stages of motion. When the universe has matter and radiation in it, or when the branes are far enough apart, the main motion is the branes stretching, or, equivalently, our three dimensions expanding. During this period, the branes more or less remain a fixed distance apart. That’s what’s been happening, for example, in the last 15 billion years. During these stages, our three dimensions are stretching just as they normally would. At a microscopic distance away, there is another brane sitting and expanding, but since we can’t touch, feel, or see across the bulk, we can’t sense it directly. If there is a clump of matter over there, we can feel the gravitational effect, but we can’t see any light or anything else it emits, because anything it emits is going to move along that brane. We only see things that move along our own brane.

Next, the energy associated with the force between these branes takes over the universe. From our vantage point on one of the branes, this acts just like the dark energy we observe today. It causes the branes to accelerate in their stretching, to the point where all the matter and radiation produced since the last collision is spread out and the branes become essentially smooth, flat, empty surfaces. If you like, you can think of them as being wrinkled and full of matter up to this point, and then stretching by a fantastic amount over the next trillion years. The stretching causes the mass and energy on the brane to thin out and the wrinkles to be smoothed out. After trillions of years, the branes are, for all intents and purposes, smooth, flat, parallel, and empty.

Then the force between these two branes slowly brings the branes together. As it brings them together, the force grows stronger and the branes speed toward one another. When they collide, there’s a walloping impact—enough to create a high density of matter and radiation with a very high, albeit finite, temperature. The two branes go flying apart, more or less back to where they were, and then the new matter and radiation, through the action of gravity, causes the branes to begin a new period of stretching.

In this picture, it’s clear that the universe is going through periods of expansion and a funny kind of contraction. Where the two branes come together, it’s not a contraction of our dimensions but a contraction of the extra dimension. Before the contraction, all matter and radiation has been spread out, but, unlike the old cyclic models of the 1920s and ’30s, it doesn’t come back together again during the contraction, because our three dimensions—that is, the branes—remain stretched out. Only the extra dimension contracts. This process repeats itself cycle after cycle.

If you compare the cyclic model to the consensus picture, two of the functions of inflation—namely, flattening and homogenizing the universe—are accomplished by the period of accelerated expansion that we’ve now just begun. Of course, I really mean the analogous expansion that occurred one cycle ago, before the most recent Bang. The third function of inflation—producing fluctuations in the density—occurs as these two branes come together. As they approach, quantum fluctuations cause the branes to begin to wrinkle. And because they’re wrinkled, they don’t collide everywhere at the same time. Rather, some regions collide a bit earlier than others. This means that some regions reheat to a finite temperature and begin to cool a little bit before other regions. When the branes come apart again, the temperature of the universe is not perfectly homogeneous but has spatial variations left over from the quantum wrinkles.

Remarkably, although the physical processes are completely different and the time scale is completely different—this is taking billions of years, instead of 10^-30 seconds—it turns out that the spectrum of fluctuations you get in the distribution of energy and temperature is essentially the same as what you get in inflation. Hence, the cyclic model is also in exquisite agreement with all of the measurements of the temperature and mass distribution of the universe that we have today.

Because the physics in these two models is quite different, there is an important distinction in what we would observe if one or the other were actually true—although this effect has not been detected yet. In inflation when you create fluctuations, you don’t just create fluctuations in energy and temperature but you also create fluctuations in spacetime itself, so-called gravitational waves. That’s a feature we hope to look for in experiments in the coming decades as a verification of the consensus model. In our model, you don’t get those gravitational waves. The essential difference is that inflationary fluctuations are created in a hyperrapid, violent process that is strong enough to create gravitational waves, whereas cyclic fluctuations are created in an ultraslow, gentle process that is too weak to produce gravitational waves. That’s an example where the two models give an observational prediction that is dramatically different. It’s just difficult to observe at the present time.

What’s fascinating at the moment is that we have two paradigms now available to us. On the one hand, they are poles apart in terms of what they tell us about the nature of time, about our cosmic history, about the order in which events occur, and about the time scale on which they occur. On the other hand, they are remarkably similar in terms of what they predict about the universe today. Ultimately what will decide between the two is a combination of observations—for example, the search for cosmic gravitational waves—and theory, because a key aspect to this scenario entails assumptions about what happens at the collision between branes that might be checked or refuted in superstring theory. In the meantime, for the next few years, we can all have great fun speculating about the implications of each of these ideas and how we can best distinguish between them.

Paul Steinhardt is a theoretical physicist, the Albert Einstein Professor in Science at Princeton University and coauthor (with Neil Turok) of “Endless Universe: Beyond the Big Bang.” This piece originally appeared as a speech by Steinhardt at an event in 2002. It has been excerpted here as it appears in “The Universe: Leading Scientists Explore the Origin, Mysteries, and Future of the Cosmos.” Copyright © 2014 by Edge Foundation Inc. Published by Harper Perennial.

http://www.salon.com/2014/07/13/the_universe_according_to_nietzsche_modern_cosmology_and_the_theory_of_eternal_recurrence/?source=newsletter

Commonly Used Drug Can Make Men Stop Enjoying Sex—Irreversibly


Some of the reported symptoms include impotence, depression and thoughts of suicide.

No one should have to choose between their hairline and their health. But increasingly, men who use finasteride, commonly known as Propecia, to treat their male pattern baldness are making that choice, often unwittingly. In the 17 years since Propecia was approved to treat hair loss from male pattern baldness, many disturbing side effects have emerged, the term post-finasteride syndrome (PFS) has been coined and hundreds of lawsuits have been brought.

Finasteride inhibits the enzyme responsible for converting testosterone into 5α-dihydrotestosterone (DHT), the hormone that tells hair follicles on the scalp to stop producing hair. Years before Propecia was approved to grow hair, finasteride was being used in drugs like Proscar, Avodart and Jalyn to treat an enlarged prostate gland (benign prostatic hyperplasia). Like Viagra, which began as a blood pressure med, or the eyelash-growing drug Latisse, which began as a glaucoma drug, finasteride’s hair restoration abilities were a fortuitous side effect.

Since Propecia was approved for sale in 1997, its label has warned about sexual side effects. “A small number of men experienced certain sexual side effects, such as less desire for sex, difficulty in achieving an erection, or a decrease in the amount of semen,” it read. “Each of these side effects occurred in less than 2% of men and went away in men who stopped taking Propecia because of them.” (The label also warned about gynecomastia, the enlargement of male breast tissue.)

But increasingly, users and some doctors are saying the symptoms sometimes do not go away when men stop taking Propecia and that their lives can be changed permanently. They report impotence, lack of sexual desire, depression and suicidal thoughts, and even a reduction in the size of the penis or testicles after using the drug, effects that do not go away after discontinuation.

According to surgeon Andrew Rynne, former head of the Irish Family Planning Association, Merck, which makes Propecia and Proscar, knows that the disturbing symptoms do not always vanish. “They know it is not true because I and hundreds of other doctors and thousands of patients have told them that these side effects do not always go away when you stop taking Propecia. We continue to be ignored, of course.”

In some cases, says Rynne, men who have used finasteride for even a few months “have unwittingly condemned themselves to a lifetime of sexual anhedonia” [a condition in which an individual feels no sexual pleasure], “the most horrible and cruel of all sexual dysfunctions.”

“I have spoken to several young men in my clinic in Kildare who continue to suffer from sexual anaesthesia and for whom all sexual pleasure and feelings have been obliterated for all time. I have felt their suffering and shared their devastation,” he wrote on a Propecia help site.

Sarah Temori, who launched a petition to have finasteride taken off the market on Change.org, agrees. “Many who have taken Propecia have lost their marriages, jobs and some have committed suicide due to the damage this drug has done to their bodies,” she writes. “One of my loved ones is a victim of this drug. It’s painful to see how much he has to struggle just to make it through each day and do all the daily things that we take for granted. No doctors have been able to help him and he is struggling to pay for medical bills. He is only 23.”

Stories about Propecia’s disturbing and underreported side effects have run on CNN, ABC, CBS, NBC, Fox and on Italian and English TV news.

The medical literature has also investigated finasteride effects. A study last year in the Journal of Sexual Medicine noted “changes related to the urogenital system in terms of semen quality and decreased ejaculate volume, reduction in penis size, penile curvature or reduced sensation, fewer spontaneous erections, decreased testicular size, testicular pain, and prostatitis.” Many subjects also noted a “disconnection between the mental and physical aspects of sexual function,” and changes in mental abilities, sleeping patterns, and/or depressive symptoms.

A study this year in the Journal of Steroid Biochemistry and Molecular Biology finds that “altered levels of neuroactive steroids, associated with depression symptoms, are present in androgenic alopecia patients even after discontinuation of the finasteride treatment.”

Approved in Haste, Regretted in Leisure

The rise and fall of Propecia parallels other drugs like Vioxx or hormone replacement therapy that were marketed to wide demographics even as safety questions nipped at their heels. Two-thirds of American men have some hair loss by age 35, and 85 percent of men have some hair loss by age 50, so Propecia had the promise of a blockbuster like Lipitor or Viagra.

Early ads likened men’s thinning scalps to crop circles. Later, ads likened saving scalp hair to saving the whales and won awards. Many Propecia ads tried to take away the stigma of hair loss and its treatment. “You’d be surprised who’s treated their hair loss,” said one print ad depicting athletic, 20-something men. In 1999 alone, Merck spent $100 million marketing Propecia directly to consumers, when direct-to-consumer advertising was just beginning on TV.

Nor was Propecia sold only in the U.S. Overseas ads compared twins who did and did not use the product. In the U.K., the drugstore chain Boots aggressively marketed Propecia at its 300 stores and still does. One estimate says Propecia was marketed in 120 countries.

Many have heard of “indication creep,” when a drug, after its original FDA approval, goes on to be approved for myriad other uses. Seroquel, originally approved for schizophrenia, is now approved as an add-on drug for depression and even for use in children. Cymbalta, originally approved as an antidepressant, went on to be approved for chronic musculoskeletal pain.

Less publicized is “warning creep,” when a drug that seemed safe enough for the FDA to approve, collects warning after warning once the public is using it. The poster child for warning creep is the bone drug Fosamax. After it was approved and in wide use, warnings began to surface about heart problems, intractable pain, jawbone death, esophageal cancer and even the bone fractures it was supposed to prevent. Oops.

But finasteride may do Fosamax proud. In 2003, it gained a warning for patients to promptly report any “changes in their breasts, such as lumps, pain or nipple discharge, to their physician.” Soon, “male breast cancer” was added under “postmarketing experience.” In 2010 depression was added as a side effect and patients were warned that finasteride could have an effect on prostate-specific antigen (PSA) tests. In 2011, the label conceded that sexual dysfunction could continue “after stopping the medication” and that finasteride could pose a “risk of high-grade prostate cancer.” In 2012, a warning was added that “other urological conditions” should be considered before taking finasteride. In 2013, the side effect of angioedema was added.

A quick look at Propecia approval documents does not inspire confidence. Finasteride induces such harm in the fetuses of lab animals that it is contraindicated in women who are or may potentially be pregnant, and women should not even “handle crushed or broken Propecia tablets when they are pregnant.”

Clinical trials were of short duration and some only had 15 participants. While subjects were asked aesthetic questions about their hairline during and after clinical trials, conspicuously absent on the data set were questions about depression, mental health and shrinking sexual organs.

In one report an FDA reviewer notes that Merck did not name or include other drugs used by subjects during trials, such as antidepressants or GERD meds, suggesting that depression could have been a known side effect of Propecia. Elsewhere an FDA reviewer cautions that “low figures” in the safety update are not necessarily reliable because the time period was “relatively short” and subjects with sexual adverse events may have already “exited from the study.” An FDA reviewer also wrote that “long-term cancer effects are unknown.” Breast cancer was noted as an adverse event seen in the trials.

Propecia Users Speak Out

There are many Propecia horror stories on sites founded to help people with side effects and those involved in litigation. In 2011, a mother told CBS News she blamed her 22-year-old son’s suicide on Propecia, and Men’s Journal ran a report called “The (Not So Hard) Truth About Hair Loss Drugs.”

In a database of more than 13,000 finasteride adverse effects reported to the FDA, there were 619 reports of depression and 580 reports of anxiety. Sixty-eight users of finasteride reported a “penis disorder” and small numbers reported “penis deviation,” “penis fracture” and “micropenis.”

On the patient drug review site Askapatient.com, the 435 reviews of Propecia cite many examples of depression, sexual dysfunction and shrunken penises.

One of the most visible faces for post-finasteride syndrome is 36-year-old UK resident Paul Innes. Previously healthy and a soccer player, Innes was so debilitated by his use of Propecia, prescribed by his doctor, that he founded a website and has gone public. Appearing on This Morning last month, Innes described how using Propecia for only three months on one occasion and three weeks on another produced a suicidal depression requiring hospitalization, sexual dysfunction and a reduction of the size of his reproductive anatomy, none of which went away when he ceased the drug. He and his former girlfriend, Hayley Waudby, described how the physical and emotional changes cost them their relationship, even though she was pregnant with his child.

In an email I asked Paul Innes if his health had improved after the ordeal. He wrote back, “My health is just the same if not worse since 2013. I am still impotent with a shrunken penis and still have very dark thoughts and currently having to take antidepressants just to get through every day. Prior to Propecia I was a very healthy guy but now I’m a shadow of my former self. I have only just managed to return to work in my role as a police officer since taking Propecia in March 2013.”

How Modern Houses Can Watch You

Presto Vivace (882157) links to a critical look in Time Magazine at the creepy side of connected household technology. An excerpt:
A modern surveillance state isn’t so much being forced on us, as it is sold to us device by device, with the idea that it is for our benefit. … Nest sucks up data on how warm your home is. As Mocana CEO James Isaacs explained to me in early May, a detailed footprint of your comings and goings can be inferred from this information. Nest just bought Dropcam, a company that markets itself as a security tool allowing you to put cameras in your home and view them remotely, but brings with it a raft of disquieting implications about surveillance. Automatic wants you to monitor how far you drive and do things for you like talk to your house when you’re on your way home from work and turn on lights when you pull into your garage. Tied into the new SmartThings platform, a Jawbone UP band becomes a tool for remotely monitoring someone else’s activity. The SmartThings hubs and sensors themselves put any switch or door in play. Companies like AT&T want to build a digital home that monitors your security and energy use. … Withings Smart Body Analyzer monitors your weight and pulse. Teddy the Guardian is a soft toy for children that spies on their vital signs. Parrot Flower Power looks at the moisture in your home under the guise of helping you grow plants. The Beam Brush checks up on your teeth-brushing technique.
Presto Vivace adds, “Enough to make the Stasi blush. What I cannot understand is how politicians fail to understand what a future Kenneth Starr is going to do with data like this.”
~Slashdot~

The hard truth about getting old

Sixty isn’t the new 40, and 80 isn’t the new 60. I know it. You know it. So why do we buy into it?

The author as a young woman and as she appears now

I don’t know about you, but the chirpy tales that dominate the public discussion about aging — you know, the ones that tell us that age is just a state of mind, that “60 is the new 40” and “80 the new 60” — irritate me. What’s next: 100 as the new middle age?

Sure, aging is different than it was a generation or two ago and there are more possibilities now than ever before, if only because we live so much longer. It just seems to me that, whether at 60 or 80, the good news is only half the story. For it’s also true that old age — even now when old age often isn’t what it used to be — is a time of loss, decline and stigma.

Yes, I said stigma. A harsh word, I know, but one that speaks to a truth that’s affirmed by social researchers who have consistently found that racial and ethnic stereotypes are likely to give way over time and with contact, but not those about age. And where there are stereotypes, there are prejudice and discrimination — feelings and behavior that are deeply rooted in our social world and, consequently, make themselves felt in our inner psychological world as well.

I felt the sting of that discrimination recently when a large and reputable company offered me an auto insurance policy that cost significantly less than I’d been paying. After I signed up, the woman at the other end of the phone suggested that I consider their umbrella policy as well, which was not only cheaper than the one I had, but would, in addition, create what she called “a package” that would decrease my auto insurance premium by another hundred dollars. How could I pass up that kind of deal?

Well … not so fast. After a moment or two on her computer, she turned her attention back to me with an apology: “I’m sorry, but I can’t offer the umbrella policy because our records show that you had an accident in the last five years.” Puzzled, I explained that it was just a fender bender in a parking lot and reminded her that she had just sold me an insurance policy. Why that and not the umbrella policy?

She went silent, clearly flustered, and finally said, “It’s different.” Not satisfied, I persisted, until she became impatient and burst out, “It’s company policy: If you’re over 80 and had an accident in the last five years, we can’t offer you an umbrella policy.” Surprised, I was rendered mute for a moment. After what seemed like a long time, she spoke into the silence, “I’m really sorry. It’s just policy.”

Frustrated, we ended the conversation.

After I fussed and fumed for a while, I called back and asked to speak with someone in authority. A soothing male voice came on the line. I told him my story, and finished with, “Do I have to remind you that there’s a law against age discrimination?”

“Would you mind if I put you on hold for a few moments?” he asked. (Don’t you love the way they ask you that, as if you have a choice?) When he came back on the line, he told me he’d checked the file and talked to the agent who couldn’t recall saying anything about age, nor was there anything about it in the record.

“OK,” I said, “then sell me the umbrella policy.”

“No,” he was very, very sorry for the misunderstanding, but they never sell an umbrella policy to anyone who’s had an accident in the last five years, and their policy is “absolutely age-neutral.”

And if you believe that, I know a bridge in Brooklyn that’s for sale.

Makes you wonder, doesn’t it: Where are all those sources of personal power and self-esteem we keep hearing about as the media celebrate the glories of the “new old age”?

That’s one from my file of personal stories about ageism, but there are other older and bigger ones: discrimination against older workers in the job market among the most important. True, the law now offers a possible remedy in the form of an age-discrimination lawsuit, but who’s going to pay the legal and household bills during the years it will take to work its way through the courts? Who’s going to help those workers deal with the psychic wounds that come from being so easily expendable, so devalued just because of their age?

In her groundbreaking book “The Coming of Age,” published in the early 1970s, Simone de Beauvoir spoke passionately about the stigma of old age — about the loss of a valued identity, our fear that the self we knew is gone, replaced by what she called “a loathsome stranger” we can’t recognize, who can’t possibly be the person we’ve known until now.

Her words give life to a core maxim of social psychology that says: What we think about a person influences how we see him, how we see him affects how we behave toward him, how we behave toward him ultimately shapes how he feels about himself, if not actually who he is. It’s in this interaction between self and society that we can see most clearly how social attitudes toward the old give form and definition to how we feel about ourselves. For what we see in the faces of others will eventually mark our own.

As a sociologist, I have been a student of aging for four decades; as a psychotherapist during this same period, I saw more than a few patients who were struggling with the issues aging brings; as a writer I’ve written about the various stages of life, including a memoir about aging daughters and mothers. Yet until I undertook the research for my recent book, “60 on Up: The Truth About Aging in America” — until I began to read more deeply and to interview people more systematically — I didn’t fully realize how much ageism had become one of the signature marks of stigma and oppression in our society.

Nor did I really get how much the cultural abhorrence of old age had affected my own inner life. So it was something of a surprise when, as I listened to the stories of the women and men I met, I found myself forced back on myself, on my own prejudices about old people, even though I am also one of them.

Even now, even after all I’ve learned about myself, those words — I am one of them — bring a small shock. And something inside resists. I want to take the words back, to shout, “No, it’s not true, I’m really not like them,” and explain all the ways I’m different from the old woman I saw pushing her walker down the street as she struggled to put one foot in front of the other, or the frail shuffling man I looked away from with a slight sense of discomfort.

I know enough not to be surprised that I feel this way, but I can’t help being somewhat shamed by it. How could it be otherwise when we live in a society that worships youth, that pitches it, packages it, and sells it so relentlessly that the anti-aging industry is the hottest growth ticket in town: the plastic surgeons who exist to serve our illusion that if we don’t look old, we won’t be or feel old; the multibillion-dollar cosmetics industry whose creams and potions promise to wipe out our wrinkles and massage away our cellulite; the fashion designers who have turned yesterday’s size 10 into today’s size 6 so that 50-year-old women can delude themselves into believing they still wear the same size they wore in college — all in the vain hope that we can fool ourselves, our bodies and the clock.

If you still need to be convinced about the ubiquity of the assault on our sensibilities by the anti-aging crusade, try plugging the term “anti-aging” into Google. Last time I checked, it came up with 22,600,000 hits, among them the website of the recently spawned American Academy of Anti-Aging Medicine, with a membership of tens of thousands of doctors whose business is selling the idea that aging is “a curable disease.” Never mind that the American Medical Association doesn’t accord legitimacy to this organization or its stated mission; it continues to laugh all the way to the bank.

There, also, you’ll find the latest boon to the American entrepreneurial spirit: a growing array of “brain health” programs featuring brain gyms, workshops, fitness camps and “brain healthy” food. And let’s not forget the Nintendo video game that, the instructions say, will “give your prefrontal cortex a workout.”

Will any of this help us remember where we left our glasses, why we walked into the bedroom, or the story line in a film we saw a few days ago? Not likely, as recent scientific evidence tells us.

Surely no one can live in a society that instructs us so relentlessly about all the ways we can overcome aging, without wanting to do something about it. I know I can’t. Why else do I go to the trouble and expense of dyeing away my gray hair when I hate to sit in the beauty shop? Why else does my heart swell with pleasure when someone responds with surprise when I say that I’m 87 years old? Why else do I know with such certainty that the minute they stop looking surprised is the minute I’ll stop saying it?

As I read, listen, talk, write, it seems to me we’re living in a weird combination of the public idealization of aging that lies alongside the devaluation of the old. And it isn’t good for anybody. Not the 60-year-olds who know they can’t do what they did at 40 but keep trying, not the 80-year-olds who, when their body and mind remind them that they’re not 60, feel somehow inadequate, as if they’ve done something wrong, failed a test.

We live in the uncharted territory of a greatly expanded life span where, for the first time in history, if we retire at 65, we can expect to live somewhere between 15 and 20 more years. But the story of this new longevity is both positive and negative — a story in which every “yes” is followed by a “but.” Yes, the fact that we live longer, healthier lives is something to celebrate. But it’s not without its costs, both public and private. Yes, the definition of old has been pushed back. But no matter where we place it, our social attitudes and behavior meet our private angst about getting old, and the combination of the two all too often distorts our self-image and undermines our spirit.

Yet too few political figures, policy experts or media stories are asking the important questions: What are the real possibilities for our aging population now? How will we live them; what will we do with them? Who will we become? How will we see ourselves; how will we be seen? What will sustain us — emotionally, economically, physically, spiritually? These, not just whether the old will break the Social Security bank or bankrupt Medicare, are the central questions about aging in our time.

Lillian B. Rubin is an internationally recognized author and social scientist who was, until recently, a practicing psychotherapist. Her most recent work is “60 on Up: The Truth About Aging in America.” She lives in San Francisco. 

http://www.salon.com/2011/08/04/lillian_rubin_on_ageism/?utm_source=facebook&utm_medium=socialflow

By 2045, ‘The Top Species Will No Longer Be Humans’

And That Could Be A Problem


“Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.” These are the words of Louis Del Monte, physicist, entrepreneur, and author of “The Artificial Intelligence Revolution.” Del Monte spoke to us over the phone about his thoughts surrounding artificial intelligence and the singularity, an indeterminate point in the future when machine intelligence will outmatch not only your own intelligence, but the world’s combined human intelligence too.

The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it’s within three decades.

 


“It won’t be the ‘Terminator’ scenario, not a war,” said Del Monte. “In the early part of the post-singularity world, one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We’ll see the machines as a useful tool. Productivity in business based on automation will be increased dramatically in various countries. In China it doubled, just based on GDP per employee due to use of machines.”

“By the end of this century,” he continued, “most of the human race will have become cyborgs [part human, part tech or machine]. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we’ll think we’ve never had it better. The concern I’m raising is that the machines will view us as an unpredictable and dangerous species.”

Del Monte believes machines will become self-conscious and have the capabilities to protect themselves. They “might view us the same way we view harmful insects.” Humans are a species that “is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses.” Hardly an appealing roommate.

He wrote the book as “a warning.” Artificial intelligence is becoming more and more capable, and we’re adopting it as quickly as it appears. A pacemaker operation is “quite routine,” he said, but “it uses sensors and AI to regulate your heart.”

A 2009 experiment showed that robots can develop the ability to lie to each other. Run at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne, Switzerland, the experiment had robots designed to cooperate in finding beneficial resources like energy and avoiding the hazardous ones. Shockingly, the robots learned to lie to each other in an attempt to hoard the beneficial resources for themselves.

“The implication is that they’re also learning self-preservation,” Del Monte told us. “Whether or not they’re conscious is a moot point.”
