What we do better without other people around

The power of lonely

By Leon Neyfakh

March 6, 2011

You hear it all the time: We humans are social animals. We need to spend time together to be happy and functional, and we extract a vast array of benefits from maintaining intimate relationships and associating with groups. Collaborating on projects at work makes us smarter and more creative. Hanging out with friends makes us more emotionally mature and better able to deal with grief and stress.

Spending time alone, by contrast, can look a little suspect. In a world gone wild for wikis and interdisciplinary collaboration, those who prefer solitude and private noodling are seen as eccentric at best and defective at worst, and are often presumed to be suffering from social anxiety, boredom, and alienation.

But an emerging body of research is suggesting that spending time alone, if done right, can be good for us — that certain tasks and thought processes are best carried out without anyone else around, and that even the most socially motivated among us should regularly be taking time to ourselves if we want to have fully developed personalities, and be capable of focus and creative thinking. There is even research to suggest that blocking off enough alone time is an important component of a well-functioning social life — that if we want to get the most out of the time we spend with people, we should make sure we’re spending enough of it away from them. Just as regular exercise and healthy eating make our minds and bodies work better, solitude experts say, so can being alone.

One ongoing Harvard study indicates that people form more lasting and accurate memories if they believe they’re experiencing something alone. Another indicates that a certain amount of solitude can make a person more capable of empathy towards others. And while no one would dispute that too much isolation early in life can be unhealthy, a certain amount of solitude has been shown to help teenagers improve their moods and earn good grades in school.

“There’s so much cultural anxiety about isolation in our country that we often fail to appreciate the benefits of solitude,” said Eric Klinenberg, a sociologist at New York University whose book “Alone in America,” in which he argues for a reevaluation of solitude, will be published next year. “There is something very liberating for people about being on their own. They’re able to establish some control over the way they spend their time. They’re able to decompress at the end of a busy day in a city…and experience a feeling of freedom.”

Figuring out what solitude is and how it affects our thoughts and feelings has never been more crucial. The latest Census figures indicate there are some 31 million Americans living alone, which accounts for more than a quarter of all US households. And at the same time, the experience of being alone is being transformed dramatically, as more and more people spend their days and nights permanently connected to the outside world through cellphones and computers. In an age when no one is ever more than a text message or an e-mail away from other people, the distinction between “alone” and “together” has become hopelessly blurry, even as the potential benefits of true solitude are starting to become clearer.

Solitude has long been linked with creativity, spirituality, and intellectual might. The leaders of the world’s great religions — Jesus, Buddha, Mohammed, Moses — all had crucial revelations during periods of solitude. The poet James Russell Lowell identified solitude as “needful to the imagination;” in the 1988 book “Solitude: A Return to the Self,” the British psychiatrist Anthony Storr invoked Beethoven, Kafka, and Newton as examples of solitary genius.

But what actually happens to people’s minds when they are alone? As much as it’s been exalted, our understanding of how solitude actually works has remained rather abstract, and modern psychology — where you might expect the answers to lie — has tended to treat aloneness more as a problem than a solution. That was what Christopher Long found back in 1999, when as a graduate student at the University of Massachusetts Amherst he started working on a project to precisely define solitude and isolate ways in which it could be experienced constructively. The project’s funding came from, of all places, the US Forest Service, an agency with a deep interest in figuring out once and for all what is meant by “solitude” and how the concept could be used to promote America’s wilderness preserves.

With his graduate adviser and a researcher from the Forest Service at his side, Long identified a number of different ways a person might experience solitude and undertook a series of studies to measure how common they were and how much people valued them. A 2003 survey of 320 UMass undergraduates led Long and his coauthors to conclude that people felt good about being alone more often than they felt bad about it, and that psychology’s conventional approach to solitude — an “almost exclusive emphasis on loneliness” — represented an artificially narrow view of what being alone was all about.

“Aloneness doesn’t have to be bad,” Long said by phone recently from Ouachita Baptist University, where he is an assistant professor. “There’s all this research on solitary confinement and sensory deprivation and astronauts and people in Antarctica — and we wanted to say, look, it’s not just about loneliness!”

Today other researchers are eagerly diving into that gap. Robert Coplan of Carleton University, who studies children who play alone, is so bullish on the emergence of solitude studies that he’s hoping to collect the best contemporary research into a book. Harvard professor Daniel Gilbert, a leader in the world of positive psychology, has recently overseen an intriguing study that suggests memories are formed more effectively when people think they’re experiencing something individually.

That study, led by graduate student Bethany Burum, started with a simple experiment: Burum placed two individuals in a room and had them spend a few minutes getting to know each other. They then sat back to back, each facing a computer screen the other could not see. In some cases they were told they’d both be doing the same task, in other cases they were told they’d be doing different things. The computer screen scrolled through a set of drawings of common objects, such as a guitar, a clock, and a log. A few days later the participants returned and were asked to recall which drawings they’d been shown. Burum found that the participants who had been told the person behind them was doing a different task — namely, identifying sounds rather than looking at pictures — did a better job of remembering the pictures. In other words, they formed more solid memories when they believed they were the only ones doing the task.

The results, which Burum cautions are preliminary, are now part of a paper on “the coexperiencing mind” that was recently presented at the Society for Personality and Social Psychology conference. In the paper, Burum offers two possible theories to explain what she and Gilbert found in the study. The first invokes a well-known concept from social psychology called “social loafing,” which says that people tend not to try as hard if they think they can rely on others to pick up their slack. (If two people are pulling a rope, for example, neither will pull quite as hard as they would if they were pulling it alone.) But Burum leans toward a different explanation, which is that sharing an experience with someone is inherently distracting, because it compels us to expend energy on imagining what the other person is going through and how they’re reacting to it.

“People tend to engage quite automatically with thinking about the minds of other people,” Burum said in an interview. “We’re multitasking when we’re with other people in a way that we’re not when we just have an experience by ourselves.”

Perhaps this explains why seeing a movie alone feels so radically different from seeing it with friends: Sitting there in the theater with nobody next to you, you’re not wondering what anyone else thinks of it; you’re not anticipating the discussion that you’ll be having about it on the way home. All your mental energy can be directed at what’s happening on the screen. According to Greg Feist, an associate professor of psychology at San Jose State University who has written about the connection between creativity and solitude, some version of that principle may also be at work when we simply let our minds wander: When we let our focus shift away from the people and things around us, we are better able to engage in what’s called meta-cognition, or the process of thinking critically and reflectively about our own thoughts.

Other psychologists have looked at what happens when other people’s minds don’t just take up our bandwidth, but actually influence our judgment. It’s well known that we’re prone to absorb or mimic the opinions and body language of others in all sorts of situations, including those that might seem the most intensely individual, such as who we’re attracted to. While psychologists don’t necessarily think of that sort of influence as “clouding” one’s judgment — most would say it’s a mechanism for learning, allowing us to benefit from information other people have access to that we don’t — it’s easy to see how being surrounded by other people could hamper a person’s efforts to figure out what he or she really thinks of something.

Teenagers, especially, whose personalities have not yet fully formed, have been shown to benefit from time spent apart from others, in part because it allows for a kind of introspection — and freedom from self-consciousness — that strengthens their sense of identity. Reed Larson, a professor of human development at the University of Illinois, conducted a study in the 1990s in which adolescents outfitted with beepers were prompted at irregular intervals to write down answers to questions about who they were with, what they were doing, and how they were feeling. Perhaps not surprisingly, he found that when the teens in his sample were alone, they reported feeling a lot less self-conscious. “They want to be in their bedrooms because they want to get away from the gaze of other people,” he said.

The teenagers weren’t necessarily happier when they were alone; adolescence, after all, can be a particularly tough time to be separated from the group. But Larson found something interesting: On average, the kids in his sample felt better after they spent some time alone than they did before. Furthermore, he found that kids who spent between 25 and 45 percent of their nonclass time alone tended to have more positive emotions over the course of the weeklong study than their more socially active peers, were more successful in school and were less likely to self-report depression.

“The paradox was that being alone was not a particularly happy state,” Larson said. “But there seemed to be kind of a rebound effect. It’s kind of like a bitter medicine.”

The nice thing about medicine is it comes with instructions. Not so with solitude, which may be tremendously good for one’s health when taken in the right doses, but is about as user-friendly as an unmarked white pill. Too much solitude is unequivocally harmful and broadly debilitating, decades of research show. But one person’s “too much” might be someone else’s “just enough,” and eyeballing the difference with any precision is next to impossible.

Research is still far from offering any concrete guidelines. Insofar as there is a consensus among solitude researchers, it’s that in order to get anything positive out of spending time alone, solitude should be a choice: People must feel like they’ve actively decided to take time apart from people, rather than being forced into it against their will.

Overextended parents might not need any encouragement to see time alone as a desirable luxury; the question for them is only how to build it into their frenzied lives. But for the millions of people living by themselves, making time spent alone productive may require a different kind of effort. Sherry Turkle, director of the MIT Initiative on Technology and Self, argues in her new book, “Alone Together,” that people should be mindfully setting aside chunks of every day when they are not engaged in so-called social snacking activities like texting, g-chatting, and talking on the phone. For teenagers, it may help to understand that feeling a little lonely at times may simply be the price of forging a clearer identity.

John Cacioppo of the University of Chicago, whose 2008 book “Loneliness” with William Patrick summarized a career’s worth of research on all the negative things that happen to people who can’t establish connections with others, said recently that as long as it’s not motivated by fear or social anxiety, then spending time alone can be a crucially nourishing component of life. And it can have some counterintuitive effects: Adam Waytz in the Harvard psychology department, one of Cacioppo’s former students, recently completed a study indicating that people who are socially connected with others can have a hard time identifying with people who are more distant from them. Spending a certain amount of time alone, the study suggests, can make us less closed off from others and more capable of empathy — in other words, better social animals.

“People make this error, thinking that being alone means being lonely, and not being alone means being with other people,” Cacioppo said. “You need to be able to recharge on your own sometimes. Part of being able to connect is being available to other people, and no one can do that without a break.”

Leon Neyfakh is the staff writer for Ideas. E-mail lneyfakh@globe.com.

C.S. Lewis on Suffering and What It Means to Have Free Will in a Universe of Fixed Laws

“Try to exclude the possibility of suffering which the order of nature and the existence of free wills involve, and you find that you have excluded life itself.”

If the universe operates by fixed physical laws, what does it mean for us to have free will? That’s what C.S. Lewis considers with an elegant sidewise gleam in an essay titled “Divine Omnipotence” from his altogether fascinating 1940 book The Problem of Pain (public library) — a scintillating examination of the concept of free will in a material universe and why suffering is not only a natural but an essential part of the human experience. Though explored through the lens of the contradictions and impossibilities of belief, the questions Lewis raises touch on elements of philosophy, politics, psychology, cosmology, and ethics — areas that have profound, direct impact on how we live our lives, day to day.

He begins by framing “the problem of pain, in its simplest form” — the paradoxical idea that if we were to believe in a higher power, we would, on the one hand, have to believe that “God” wants all creatures to be happy and, being almighty, can make that wish manifest; on the other hand, we’d have to acknowledge that all creatures are not happy, which renders that god lacking in “either goodness, or power, or both.”

To be sure, Lewis’s own journey of spirituality was a convoluted one — he was raised in a religious family, became an atheist at fifteen, then slowly returned to Christianity under the influence of his friend and Oxford colleague J.R.R. Tolkien. But whatever his religious bent, Lewis possessed the rare gift of being able to examine his own beliefs critically and, in the process, to offer layered, timeless insight on eternal inquiries into spirituality and the material universe that resonate even with those of us who fall on the nonreligious end of the spectrum and side with Carl Sagan on matters of spirituality.

Lewis writes:

There is no reason to suppose that self-consciousness, the recognition of a creature by itself as a “self,” can exist except in contrast with an “other,” a something which is not the self. . . . The freedom of a creature must mean freedom to choose: and choice implies the existence of things to choose between. A creature with no environment would have no choices to make: so that freedom, like self-consciousness (if they are not, indeed, the same thing), again demands the presence to the self of something other than the self.

What makes Lewis’s reflections so enduring and widely resonant is that, for all his concern with divinity, he cracks open the innermost kernel of our basic humanity, in relation to ourselves and to one another:

People often talk as if nothing were easier than for two naked minds to “meet” or become aware of each other. But I see no possibility of their doing so except in a common medium which forms their “external world” or environment. Even our vague attempt to imagine such a meeting between disembodied spirits usually slips in surreptitiously the idea of, at least, a common space and common time, to give the co- in co-existence a meaning: and space and time are already an environment. But more than this is required. If your thoughts and passions were directly present to me, like my own, without any mark of externality or otherness, how should I distinguish them from mine? And what thoughts or passions could we begin to have without objects to think and feel about? Nay, could I even begin to have the conception of “external” and “other” unless I had experience of an “external world”?

In a sentiment that calls to mind novelist Iris Murdoch’s beautiful definition of love (“Love is the very difficult understanding that something other than yourself is real.”), Lewis adds:

The result is that most people remain ignorant of the existence of both. We may therefore suppose that if human souls affected one another directly and immaterially, it would be a rare triumph of faith and insight for any one of them to believe in the existence of the others.

Lewis considers what it would take for us to fully acknowledge and contact each other’s otherness, to bridge the divide between the internal and the external:

What we need for human society is exactly what we have — a neutral something, neither you nor I, which we can both manipulate so as to make signs to each other. I can talk to you because we can both set up sound-waves in the common air between us. Matter, which keeps souls apart, also brings them together. It enables each of us to have an “outside” as well as an “inside,” so that what are acts of will and thought for you are noises and glances for me; you are enabled not only to be, but to appear: and hence I have the pleasure of making your acquaintance.

Society, then, implies a common field or “world” in which its members meet.

‘Tree of virtues’ by Lambert of Saint-Omer, ca. 1250, from ‘The Book of Trees.’

That “neutral something” might sound a lot like faith, but Lewis is careful to point out the limitations of such traditional interpretations and to examine how this relates to the question of suffering:

If matter is to serve as a neutral field it must have a fixed nature of its own. If a “world” or material system had only a single inhabitant it might conform at every moment to his wishes — “trees for his sake would crowd into a shade.” But if you were introduced into a world which thus varied at my every whim, you would be quite unable to act in it and would thus lose the exercise of your free will. Nor is it clear that you could make your presence known to me — all the matter by which you attempted to make signs to me being already in my control and therefore not capable of being manipulated by you.

Again, if matter has a fixed nature and obeys constant laws, not all states of matter will be equally agreeable to the wishes of a given soul, nor all equally beneficial for that particular aggregate of matter which he calls his body. If fire comforts that body at a certain distance, it will destroy it when the distance is reduced. Hence, even in a perfect world, the necessity for those danger signals which the pain-fibres in our nerves are apparently designed to transmit. Does this mean an inevitable element of evil (in the form of pain) in any possible world? I think not: for while it may be true that the least sin is an incalculable evil, the evil of pain depends on degree, and pains below a certain intensity are not feared or resented at all. No one minds the process “warm — beautifully hot — too hot — it stings” which warns him to withdraw his hand from exposure to the fire: and, if I may trust my own feeling, a slight aching in the legs as we climb into bed after a good day’s walking is, in fact, pleasurable.

Yet again, if the fixed nature of matter prevents it from being always, and in all its dispositions, equally agreeable even to a single soul, much less is it possible for the matter of the universe at any moment to be distributed so that it is equally convenient and pleasurable to each member of a society. If a man traveling in one direction is having a journey down hill, a man going in the opposite direction must be going up hill. If even a pebble lies where I want it to lie, it cannot, except by a coincidence, be where you want it to lie. And this is very far from being an evil: on the contrary, it furnishes occasion for all those acts of courtesy, respect, and unselfishness by which love and good humor and modesty express themselves. But it certainly leaves the way open to a great evil, that of competition and hostility. And if souls are free, they cannot be prevented from dealing with the problem by competition instead of courtesy. And once they have advanced to actual hostility, they can then exploit the fixed nature of matter to hurt one another. The permanent nature of wood which enables us to use it as a beam also enables us to use it for hitting our neighbor on the head. The permanent nature of matter in general means that when human beings fight, the victory ordinarily goes to those who have superior weapons, skill, and numbers, even if their cause is unjust.

Illustration by Olivier Tallec from ‘Waterloo & Trafalgar.’

But looking closer at the possible “abuses of free will,” Lewis considers how the fixed nature of physical laws presents a problem for the religious notion of miracles — something he’d come to examine in depth several years later in the book Miracles, and something MIT’s Alan Lightman would come to echo several decades later in his spectacular meditation on science and spirituality. Lewis writes:

Such a world would be one in which wrong actions were impossible, and in which, therefore, freedom of the will would be void; nay, if the principle were carried out to its logical conclusion, evil thoughts would be impossible, for the cerebral matter which we use in thinking would refuse its task when we attempted to frame them. All matter in the neighborhood of a wicked man would be liable to undergo unpredictable alterations. That God can and does, on occasions, modify the behavior of matter and produce what we call miracles, is part of Christian faith; but the very conception of a common, and therefore stable, world, demands that these occasions should be extremely rare.

He offers an illustrative example:

In a game of chess you can make certain arbitrary concessions to your opponent, which stand to the ordinary rules of the game as miracles stand to the laws of nature. You can deprive yourself of a castle, or allow the other man sometimes to take back a move made inadvertently. But if you conceded everything that at any moment happened to suit him — if all his moves were revocable and if all your pieces disappeared whenever their position on the board was not to his liking — then you could not have a game at all. So it is with the life of souls in a world: fixed laws, consequences unfolding by causal necessity, the whole natural order, are at once limits within which their common life is confined and also the sole condition under which any such life is possible. Try to exclude the possibility of suffering which the order of nature and the existence of free wills involve, and you find that you have excluded life itself.

He closes by bringing us full-circle to the concept of free will:

Whatever human freedom means, Divine freedom cannot mean indeterminacy between alternatives and choice of one of them. Perfect goodness can never debate about the end to be attained, and perfect wisdom cannot debate about the means most suited to achieve it.

The Problem of Pain is a pause-giving read in its entirety. Complement it with Lewis on duty, the secret of happiness, and writing “for children” and the key to authenticity in all writing, then revisit Jane Goodall on science and spirituality.

New Yorker Cartoonist Roz Chast’s Remarkable Illustrated Meditation on Aging, Illness, and Death

Making sense of the human journey with wit, wisdom, and disarming vulnerability.

“Each day, we wake slightly altered, and the person we were yesterday is dead,” John Updike wrote in his magnificent memoir. “So why, one could say, be afraid of death, when death comes all the time?” It’s a sentiment somewhat easier to swallow — though certainly not without its ancient challenge — when it comes to our own death, but when that of our loved ones skulks around, it’s invariably devastating and messy, and it catches us painfully unprepared no matter how much time we’ve had to “prepare.”

Count on another beloved New Yorker contributor, cartoonist Roz Chast, to address this delicate and doleful subject with equal parts wit and wisdom in Can’t We Talk about Something More Pleasant?: A Memoir (public library) — a remarkable illustrated chronicle of her parents’ decline into old age and death, pierced by those profound, strangely uplifting in-between moments of cracking open the little chests of truth we keep latched shut all our lives until a brush with our mortal impermanence rattles the lock and lets out some understanding, however brief and fragmentary, of the great human mystery of what it means to live.

The humor and humility with which Chast tackles the enormously difficult subject of aging, illness and death is nothing short of a work of genius.

But besides appreciating Chast’s treatment of such grand human themes as death, duty, and “the moving sidewalk of life,” I was struck by how much her parents resembled my own — her father, just like mine, a “kind and sensitive” man of above-average awkwardness, “the spindly type,” inept at even the basics of taking care of himself domestically, with a genius for languages; her mother, just like mine, a dominant and hard-headed perfectionist “built like a fire hydrant,” with vanquished dreams of becoming a professional pianist, an unpredictable volcano of anger. (“Where my father was tentative and gentle,” Chast writes, “she was critical and uncompromising.” And: “Even though I knew he couldn’t really defend me against my mother’s rages, I sensed that at least he felt some sympathy, and that he liked me as a person, not just because I was his daughter.”)

Chast, like myself, was an only child and her parents, like mine, had a hard time understanding how their daughter made her living given she didn’t run in the 9-to-5 hamster wheel of working for the man. There were also the shared family food issues, the childhood loneliness, the discomfort about money that stems from having grown up without it.

The point here, of course, isn’t to dance to the drum of solipsism. (Though we only children seem particularly attuned to its beat.) It’s to appreciate the elegance and bold vulnerability with which Chast weaves out of her own story a narrative at once so universally human yet so relatable in its kaleidoscope of particularities that any reader is bound to find a piece of him- or herself in it, to laugh and weep with the bittersweet relief of suddenly feeling less alone in the most lonesome-making of human struggles, to find some compassion for even the most tragicomic of our faults.

From reluctantly visiting her parents in the neighborhood where she grew up (“not the Brooklyn of artists or hipsters or people who made — and bought — $8 chocolate bars [but] DEEP Brooklyn”) as their decline began, to accepting just as reluctantly the basic facts of life (“Old age didn’t change their basic personalities. If anything, it intensified what was already there.”), to witnessing her father’s mental dwindling (“One of the worst parts of senility must be that you have to get terrible news over and over again. On the other hand, maybe in between the times of knowing the bad news, you get to forget it and live as if everything was hunky-dory.”), to the self-loathing brought on by the clash between the aspiration of a loving daughter and the financial strain of elder care (“I felt like a disgusting person, worrying about the money.”), Chast treks with extraordinary candor and vulnerability through the maze of her own psyche, mapping out our own in the process.

Chast also explores, with extraordinary sensitivity and self-awareness, the warping of identity that happens when the cycle of life and its uncompromising realities toss us into roles we always knew were part of the human journey but somehow thought we, we alone, would be spared. She writes:

It’s really easy to be patient and sympathetic with someone when it’s theoretical, or only for a little while. It’s a lot harder to deal with someone’s craziness when it’s constant, and that person is your dad, the one who’s supposed to be taking care of YOU.

But despite her enormous capacity for wit and humor even in so harrowing an experience, Chast doesn’t stray too far from its backbone of deep, complicated love and paralyzing grief. The book ends with Chast’s raw, unfiltered sketches from the final weeks she spent in the hospice ward where her mother took her last breath. A crystalline realization suddenly emerges that Chast’s cartooning isn’t some gimmicky ploy for quick laughs but her most direct access point to her own experience, her best sensemaking mechanism for understanding the world, life and, inevitably, death.

Can’t We Talk about Something More Pleasant? is an absolutely astounding read in its entirety — the kind that enters your soul through the backdoor, lightly, and touches more parts of it and more heavinesses than you ever thought you’d allow. You’re left, simply, grateful.


A Silicon Valley scheme to “disrupt” America’s education system would hurt the people who need it the most

The plot to destroy education: Why technology could ruin American classrooms — by trying to fix them

How does Silicon Valley feel about college? Here’s a taste: Seven words in a tweet provoked by a conversation about education started by Silicon Valley venture capitalist Marc Andreessen.

Arrogance? Check. Supreme confidence? Check. Oblivious to the value actually provided by a college education? Check.

The $400 billion a year that Americans pay for education after high school is being wasted on an archaic brick-and-mortar irrelevance. We can do better! 

But how? The question becomes more pertinent every day — and it’s one that Silicon Valley would dearly like to answer.

The robots are coming for our jobs, relentlessly working their way up the value chain. Anything that can be automated will be automated. The obvious — and perhaps the only — answer to this threat is a vastly improved educational system. We’ve got to leverage our human intelligence to stay ahead of robotic A.I.! And right now, everyone agrees, the system is not meeting the challenge. The cost of a traditional four-year college education has far outpaced inflation. Student loan debt is a national tragedy. Actually achieving a college degree still bequeaths better job prospects than the alternative, but for many students, the cost-benefit ratio is completely out of whack.

No problem, says the tech industry. Like a snake eating its own tail, Silicon Valley has the perfect solution for the social inequities caused by technologically induced “disruption.” More disruption!

Universities are a hopelessly obsolete way to go about getting an education when we’ve got the Internet, the argument goes. Just as Airbnb is disemboweling the hotel industry and Uber is annihilating the taxi industry, companies such as Coursera and Udacity will leverage technology and access to venture capital in order to crush the incumbent education industry, supposedly offering high-quality educational opportunities for a fraction of the cost of a four-year college.

There is an elegant logic to this argument. We’ll use the Internet to stay ahead of the Internet. Awesome tools are at our disposal. In MOOCs — “Massive Open Online Courses” — hundreds of thousands of students will imbibe the wisdom of Ivy League “superprofessors” via pre-recorded lectures piped down to their smartphones. No need even for overworked graduate student teaching assistants. Intelligent software will take care of the grading. (That’s right — we’ll use robots to meet the robot threat!) The market, in other words, will provide the solution to the problem that the market has caused. It’s a wonderful libertarian dream.

But there’s a flaw in the logic. Early returns on MOOCs have confirmed what just about any teacher could have told you before Silicon Valley started believing it could “fix” education: real human interaction and engagement are hugely important to delivering a quality education. Most crucially, hands-on interaction with teachers is vital for the students who are in most desperate need of an education — those with the least financial resources and the most challenging backgrounds.

Of course, it costs money to provide greater human interaction. You need bodies — ideally, bodies with some mastery of the subject material. But when you raise costs, you destroy the primary attraction of Silicon Valley’s “disruptive” model. The big tech success stories are all about avoiding the costs faced by the incumbents. Airbnb owns no hotels. Uber owns no taxis. The selling point of Coursera and Udacity is that they need not own any universities.

But education is different from running a hotel. There’s a reason why governments have historically considered providing education a public good. When you start throwing bodies into the fray to teach people who can’t afford a traditional private education, you end up disastrously chipping away at the profits that the venture capitalists backing Coursera and Udacity demand.

And that’s a tail that the snake can’t swallow.

* * *

The New York Times famously dubbed 2012 “The Year of the MOOC.” Coursera and Udacity (both started by Stanford professors) and an MIT-Harvard collaboration called EdX exploded into the popular imagination. But the hype ebbed almost as quickly as it had flowed. In 2013, after a disastrous pilot experiment in which Udacity and San Jose State collaborated to deliver three courses, MOOCs were promptly declared dead — with the harshest schadenfreude coming from academics who saw the rush to MOOCs as an educational travesty.

At the end of 2013, the New York Times had changed its tune: “After Setbacks, Online Courses are Rethought.”

But MOOC supporters have never wavered. In May, Clayton Christensen, the high priest of “disruption” theory, scoffed at the unbelievers: “[T]heir potential to disrupt — on price, technology, even pedagogy — in a long-stagnant industry,” wrote Christensen, “is only just beginning to be seen.”

At the end of June, the Economist followed suit with a package of stories touting the inevitable “creative destruction” threatened by MOOCs: “[A] revolution has begun thanks to three forces: rising costs, changing demand and disruptive technology. The result will be the reinvention of the university …” It’s 2012 all over again!

Sure, there have been speed bumps along the way. But as Christensen explained, the same is true for any would-be disruptive start-up. Failures are bound to happen. What makes Silicon Valley so special is its ability to learn from mistakes, tweak its biz model and try something new. It’s called “iteration.”

There is, of course, great merit to the iterative process. And it would be foolish to claim that new technology won’t have an impact on the educational process. If there’s one thing that the Internet and smartphones are insanely good at, it is providing access to information. A teenager with a phone in Uganda has opportunities for learning that most of the world never had through the entire course of human history. That’s great.

But there’s a crucial difference between “access to information” and “education” that explains why the university isn’t about to become obsolete, and why we can’t depend — as Marc Andreessen tells us — on the magic elixir of innovation plus the free market to solve our education quandary.

Nothing better illustrates this point than a closer look at the Udacity-San Jose State collaboration.

* * *

When Gov. Jerry Brown announced the collaboration between Udacity, founded by the Stanford computer science professor Sebastian Thrun, and San Jose State, a publicly funded university in the heart of Silicon Valley, in January 2013, the match seemed perfect. Where else would you want to test out the future of education? The plan was to focus on three courses: elementary statistics, remedial math and college algebra. The target student demographic was notoriously ill-served by the university system: “Students were drawn from a lower-income high school and the underperforming ranks of SJSU’s student body,” reported Fast Company.

The results of the pilot, conducted in the spring of 2013, were a disaster, reported Fast Company:

Among those pupils who took remedial math during the pilot program, just 25 percent passed. And when the online class was compared with the in-person variety, the numbers were even more discouraging. A student taking college algebra in person was 52 percent more likely to pass than one taking a Udacity class, making the $150 price tag (roughly one-third the normal in-state tuition) seem like something less than a bargain.

A second attempt during the summer achieved better results, but with a much less disadvantaged student body; and, even more crucially, with considerably greater resources put into human interaction and oversight. For example, San Jose State reported that the summer courses were improved by “checking in with students more often.”

But the prime takeaway was stark. Inside Higher Education reported that a research report conducted by San Jose State on the experiment concluded that “it may be difficult for the university to deliver online education in this format to the students who need it most.”

In an iterative world, San Jose State and Udacity would have learned from their mistakes. The next version of their collaboration would have incorporated the increased human resources necessary to make it work, to be sure that students didn’t fall through the cracks. But the lesson that Udacity learned from the collaboration turned out to be something different: There isn’t going to be much profit to be made attempting to apply the principles of MOOCs to students from a disadvantaged background.

Thrun set off a firestorm of commentary when he told Fast Company’s Max Chafkin this:

“These were students from difficult neighborhoods, without good access to computers, and with all kinds of challenges in their lives,” he says. “It’s a group for which this medium is not a good fit….”

“I’d aspired to give people a profound education–to teach them something substantial… But the data was at odds with this idea.”

Henceforth, Udacity would “pivot” to focusing on vocational training funded by direct corporate support.

Thrun later claimed that his comments were misinterpreted by Fast Company. And in his May op-ed, Christensen argued that Udacity’s pivot was a boon!

Udacity, for its part, should be applauded for not burning through all of its money in pursuit of the wrong strategy. The company realized — and publicly acknowledged — that its future lay on a different path than it had originally anticipated. Indeed, Udacity’s pivot may have even prevented a MOOC bubble from bursting.

Educating the disadvantaged via MOOCs is the wrong strategy? That’s not a pivot — it’s an abject surrender.

The Economist, meanwhile, brushed off the San Jose State episode by noting that “online learning has its pitfalls.” But the Economist also published a revealing observation: “In some ways MOOCs will reinforce inequality … among students (the talented will be much more comfortable than the weaker outside the structured university environment) …”

But isn’t that exactly the problem? No one can deny that the access to information facilitated by the Internet is a fantastic thing for talented students — and particularly so for those with secure economic backgrounds and fast Internet connections. But such people are most likely to succeed in a world full of smart robots anyway. The challenge posed by technological transformation and disruption is that the jobs that are being automated away first are the ones that are most suited to the less talented or advantaged. In other words, the population that MOOCs are least suited to serving is the population that technology is putting in the most vulnerable position.

Innovation and the free market aren’t going to fix this problem, for the very simple reason that there is no money in it. There’s no profit to be mined in educating people who not only can’t pay for an education, but also require greater human resources to be educated.

This is why we have public education in the first place.

“College is a public good,” says Jonathan Rees, a professor at Colorado State University who has been critical of MOOCs. “It’s what industrialized democratic society should be providing for students.”

Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.

Nearly one quarter of US children in poverty

By Andre Damon
23 July 2014

Nearly one in four children in the United States lives in a family below the federal poverty line, according to figures presented in a new report by the Annie E. Casey Foundation.

A total of 16.3 million children live in poverty, and 45 percent of children in the US live in households whose incomes fall below 200 percent of the federal poverty line.

The annual report, titled the Kids Count Data Book, compiles data on children’s economic well-being, education, health, and family support. It concludes that “inequities among children remain deep and stubbornly persistent.”

The report is an indictment of the state of American society nearly six years after the onset of the financial crisis in 2008. While the Obama administration and the media have proclaimed an economic “recovery,” conditions of life for the vast majority of the population continue to deteriorate.

The report notes that the percentage of children in poverty hit 23 percent in 2012, up sharply from 16 percent in 2000. Some states are much worse. For almost the entire American South, the share of children in poverty is higher than 25 percent.

These conditions are the product of a ruthless class policy pursued at all levels of government. While trillions of dollars have been made available to Wall Street, sending both the stock markets and corporate profits to record highs, economic growth has stagnated, social programs have been slashed, and public services decimated, while prices of many basic items are on the rise. Jobs that have been “created” are overwhelmingly part-time or low-wage.

“We’ve yet to see the recovery from the economic recession,” said Laura Speer, associate director for policy reform and advocacy at the Annie E. Casey Foundation, who helped produce the report. “The child poverty rate is connected to parents’ employment and how much they are getting paid,” added Ms. Speer in a telephone interview Tuesday.

“The jobs that are being created in this economy, including temporary and low-wage jobs, are not good enough to keep children out of poverty,” she added.

The Kids Count report notes, “Declining economic opportunity for parents without a college degree in the context of growing inequality has meant that children’s life chances are increasingly constrained by the socioeconomic status of their parents.” The percentage of children who live in high-poverty communities has likewise increased significantly, with 13 percent of children growing up in communities where more than 30 percent of residents are poor, up from 9 percent in 2000.

Speer added that, given the significant run-up in home prices over the previous two decades, “the housing cost burden has gotten worse.” She noted that the share of children who live in households that spend more than one third of their annual income on housing has hit 38 percent, up from 28 percent in 1990. In states such as California, these figures are significantly higher.

“In many cases families are living doubled up and sleeping on couches to afford very expensive places like New York City,” she added. “Paying such a large share of your income for rent means that parents have to decide between whether or not to pay the rent or to pay the utility bills. It’s not a matter of making choices over things that are luxuries, it’s choosing between necessities.”

The report concludes, “As both poverty and wealth have become more concentrated residentially, evidence suggests that school districts and individual schools are becoming increasingly segregated by socioeconomic status.”

In most of the United States, K-12 education is funded through property taxes, and there are significant differences in education funding based on local income levels. “Kids who grow up in low-income neighborhoods have much less access to education: that’s only been exacerbated over the last 25 years,” Speer said.

The Kids Count survey follows the publication in April of Feeding America’s annual report, which showed that one in five children live in households that do not regularly get enough to eat. The percentage of households that are “food insecure” rose from 11.1 percent in 2007 to 16.0 percent in 2012. Sixteen million children, or 21.6 percent, do not get enough to eat. The rate of food insecurity in the United States is nearly twice that of the European Union.

According to the US government’s supplemental poverty measure, 16.1 percent of the US population—nearly 50 million people—is in poverty, up from 12.2 percent of the population in 2000.

The Kids Count report notes that the ability of single mothers to get a job is particularly sensitive to the state of the economy, and that the employment rate of single mothers with children under 6 years old has fallen from 69 percent in 2000 to 60 percent ten years later. This has taken place even as anti-poverty measures such as Temporary Assistance for Needy Families (TANF) have been made conditional on parents finding work.

The report noted that enrollment in the federal Head Start program, which serves 3- and 4-year-olds, dropped off when the “recession decimated state budgets and halted progress.” It added that cutbacks to federal and state anti-poverty programs, as well as health programs such as Medicare and Medicaid, are contributing to the growth of poverty and inequality.

With the “sequester” budget cuts signed by the Obama administration in early 2013, most federal anti-poverty programs are being slashed by five percent each year for a decade. “Programs like Head Start, LIHEAP [Low Income Home Energy Assistance Program], and other federal programs are really a lifeline in a lot of families,” Speer said.

Since the implementation of the sequester cuts, Congress and the Obama administration have slashed food stamp spending on two separate occasions and put an end to federal extended jobless benefits for more than three million long-term unemployed people and their families. These measures can be expected to throw hundreds of thousands more children into poverty.

The rise of data and the death of politics

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”), has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
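
To make the feedback loop concrete, here is a minimal, illustrative sketch in Python of the kind of user-trained filter described above. The class name, the word-counting scheme, and the scoring rule are invented for illustration; real spam filters are far more sophisticated, but the principle is the same: the rules emerge from user reports rather than from a fixed definition of spam.

```python
# Toy sketch of feedback-driven spam filtering (illustrative only):
# the filter starts with no fixed rules and adjusts word statistics
# every time a user marks a message as spam or not-spam.
from collections import defaultdict

class FeedbackSpamFilter:
    def __init__(self):
        self.spam_counts = defaultdict(int)  # word -> times seen in reported spam
        self.ham_counts = defaultdict(int)   # word -> times seen in legitimate mail

    def report(self, message, is_spam):
        """User feedback: update the word statistics the filter learns from."""
        for word in message.lower().split():
            if is_spam:
                self.spam_counts[word] += 1
            else:
                self.ham_counts[word] += 1

    def score(self, message):
        """Crude spam score: fraction of words seen more often in spam than in ham."""
        words = message.lower().split()
        if not words:
            return 0.0
        spammy = sum(1 for w in words if self.spam_counts[w] > self.ham_counts[w])
        return spammy / len(words)

# Usage: the system improves only because users keep correcting it.
f = FeedbackSpamFilter()
f.report("cheap pills buy now", is_spam=True)
f.report("lunch meeting moved to noon", is_spam=False)
print(f.score("buy cheap pills"))         # high score after feedback
print(f.score("meeting notes attached"))  # low score
```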

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”), hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
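
Ashby’s principle lends itself to a toy simulation – a minimal sketch under simplifying assumptions of my own, not a model of the homeostat’s actual electronics. A single unit tries to keep its output (the “essential variable”) within bounds; when a disturbance pushes it out of range, the unit has no rule for that particular disturbance and simply re-randomises its own wiring until stability returns.

import random

random.seed(1)  # reproducible toy run

class Homeostat:
    """A single ultrastable unit: it keeps its output within bounds not by
    anticipating disturbances, but by rewiring itself at random whenever
    the output drifts out of range."""

    def __init__(self, bound=1.0):
        self.bound = bound                    # limits of the "essential variable"
        self.weight = random.uniform(-1, 1)   # the unit's internal wiring
        self.output = 0.0

    def step(self, disturbance):
        # Ordinary feedback dynamics: the output decays via the wiring,
        # plus whatever the environment throws at it.
        self.output = self.weight * self.output + disturbance
        # Ultrastability: no rule exists for this particular disturbance;
        # the unit simply re-randomises its wiring until it settles back.
        while abs(self.output) > self.bound:
            self.weight = random.uniform(-1, 1)
            self.output = self.weight * self.output
        return self.output

h = Homeostat()
for t, d in enumerate([0.1, 0.2, 5.0, 0.1, -3.0, 0.0]):   # 5.0 and -3.0 are shocks
    before = h.output
    after = h.step(d)
    print(f"t={t} disturbance={d:+.1f} output {before:+.3f} -> {after:+.3f}")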

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
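
The logic of such a tool is easy to express in code. The sketch below is purely illustrative – the tolerance threshold and the records are invented, and the real redditometro weighs many spending categories – but it captures the comparison being made between what people spend and what they declare.

# Thresholds and records are invented; this is not the actual redditometro.
TOLERANCE = 1.2   # allow spending up to 20% above declared income before flagging

taxpayers = [
    {"name": "A", "declared_income": 30_000, "recorded_spending": 28_000},
    {"name": "B", "declared_income": 25_000, "recorded_spending": 41_000},
    {"name": "C", "declared_income": 90_000, "recorded_spending": 95_000},
]

def flag_discrepancies(records, tolerance=TOLERANCE):
    """Return the taxpayers whose recorded spending is inconsistent with declared income."""
    return [r["name"] for r in records
            if r["recorded_spending"] > tolerance * r["declared_income"]]

print(flag_discrepancies(taxpayers))   # ['B'] – spends far more than declared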

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”
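
The core of such a study is, at bottom, a correlation calculation. The sketch below uses invented figures – the cited paper’s data and methods are far more involved – but it shows the kind of computation that turns search logs into a public-health “signal”.

from statistics import mean

# Hypothetical per-region values: relative search interest vs. average BMI.
search_interest = [42, 55, 61, 48, 70, 66, 39, 58]
avg_bmi         = [26.1, 27.8, 28.4, 26.9, 29.3, 28.8, 25.7, 27.5]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"correlation = {pearson(search_interest, avg_bmi):.2f}")   # close to +1 for this made-up data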

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption‑obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second – we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state, but it eventually ran into a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

 

THE BULLSHIT MACHINE

Here’s a tiny confession. I’m bored.

Yes; I know. I’m a sinner. Go ahead. Burn me at the stake of your puritanical Calvinism; the righteously, thoroughly, well, boring idea that boredom itself is a moral defect; that a restless mind is the Devil’s sweatshop.

There’s nothing more boring than that; and I’ll return to that very idea at the end of this essay; which I hope is the beginning.

What am I bored of? Everything. Blogs books music art business ideas politics tweets movies science math technology…but more than that: the spirit of the age; the atmosphere of the time; the tendency of the now; the disposition of the here.

Sorry; but it’s true. It’s boring me numb and dumb.

A culture that prizes narcissism above individualism. A politics that places “tolerance” above acceptance. A spirit that encourages cynicism over reverence. A public sphere that places irony over sincerity. A technosophy that elevates “data” over understanding. A society that puts “opportunity” before decency. An economy that…you know. Works us harder to make us poorer at “jobs” we hate where we make stuff that sucks every last bit of passion from our souls to sell to everyone else who’s working harder to get poorer at “jobs” they hate where they make stuff that sucks every last bit of passion from their souls.

To be bored isn’t to be indifferent. It is to be fatigued. Because one is exhausted. And that is precisely where—and only where—the values above lead us. To exhaustion; with the ceaseless, endless, meaningless work of maintaining the fiction. Of pretending that who we truly want to be is what everyone believes everyone else wants to be. Liked, not loved; “attractive”, not beautiful; clever, not wise; snarky, not happy; advantaged, not prosperous.

It exhausts us; literally; this game of parasitically craving everyone’s cravings. It makes us adversaries not of one another; but of ourselves. Until there is nothing left. Not of us as we are; but of the people we might have been. The values above shrink and reduce and diminish our potential; as individuals, as people, societies. And so I have grown fatigued by them.

Ah, you say. But when hasn’t humanity always suffered all the above? Please. Let’s not mince ideas. Unless you think the middle class didn’t actually thrive once; unless you think that the gentleman that’s made forty seven Saw flicks (so far) is this generation’s Alfred Hitchcock; unless you believe that this era has a John Lennon; unless you think that Jeff Koons is Picasso…perhaps you see my point.

I’m bored, in short, of what I’d call a cycle of perpetual bullshit. A bullshit machine. The bullshit machine turns life into waste.

The bullshit machine looks something like this. Narcissism about who you are leads to cynicism about who you could be leads to mediocrity in what you do…leads to narcissism about who you are. Narcissism leads to cynicism leads to mediocrity…leads to narcissism.

Let me simplify that tiny model of the stalemate the human heart can reach with life.

The bullshit machine is the work we do only to live lives we don’t want, need, love, or deserve.

Everything’s work now. Relationships; hobbies; exercise. Even love. Gruelling; tedious; unrelenting; formulaic; passionless; calculated; repetitive; predictable; analysed; mined; timed; performed.

Work is bullshit. You know it, I know it; mankind has always known it. Sure; you have to work at what you want to accomplish. But that’s not the point. It is the flash of genius; the glimmer of intuition; the afterglow of achievement; the savoring of experience; the incandescence of meaning; all these make life worthwhile, pregnant, impossible, aching with purpose. These are the ends. Work is merely the means.

Our lives are confused like that. They are means without ends; model homes; acts which we perform, but do not fully experience.

Remember when I mentioned puritanical Calvinism? The idea that being bored is itself a sign of a lack of virtue—and that is, itself, the most boring idea in the world?

That’s the battery that powers the bullshit machine. We’re not allowed to admit it: that we’re bored. We’ve always got to be doing something. Always always always. Tapping, clicking, meeting, partying, exercising, networking, “friending”. Work hard, play hard, live hard. Improve. Gain. Benefit. Realize.

Hold on. Let me turn on crotchety Grandpa mode. Click.

Remember when cafes used to be full of people…thinking? Now I defy you to find one not full of people Tinder—Twitter—Facebook—App-of-the-nanosecond-ing; furiously. Like true believers hunched over the glow of a spiritualized Eden they can never truly enter; which is precisely why they’re mesmerized by it. The chance at a perfect life; full of pleasure; the perfect partner, relationship, audience, job, secret, home, career; it’s a tap away. It’s something like a slot-machine of the human soul, this culture we’re building. The jackpot’s just another coin away…forever. Who wouldn’t be seduced by that?

Winners of a million followers, fans, friends, lovers, dollars…after all, a billion people tweeting, updating, flicking, swiping, tapping into the void a thousand times a minute can’t be wrong. Can they?

And therein is the paradox of the bullshit machine. We do more than humans have ever done before. But we are not accomplishing much; and we are, it seems to me, becoming even less than that.

The more we do, the more passive we seem to become. Compliant. Complaisant. As if we are merely going through the motions.

Why? We are something like apparitions today; juggling a multiplicity of selves through the noise; the “you” you are on Facebook, Twitter, Tumblr, Tinder…wherever…at your day job, your night job, your hobby, your primary relationship, your friend-with-benefits, your incredibly astonishing range of extracurricular activities. But this hyperfragmentation of self gives rise to a kind of schizophrenia; conflicts, dissociations, tensions, dislocations, anxieties, paranoias, delusions. Our social wombs do not give birth to our true selves; the selves explosive with capability, possibility, wonder.

Tap tap tap. And yet. We are barely there, at all; in our own lives; in the moments which we will one day look back on and ask ourselves…what were we thinking wasting our lives on things that didn’t matter at all?

The answer, of course, is that we weren’t thinking. Or feeling. We don’t have time to think anymore. Thinking is a superluxury. Feeling is an even bigger superluxury. In an era where decent food, water, education, and healthcare are luxuries; thinking and feeling are activities too costly for society to allow. They are a drag on “growth”; a burden on “productivity”; they slow down the furious acceleration of the bullshit machine.

And so. Here we are. Going through the motions. The bullshit machine says the small is the great; the absence is the presence; the vicious is the noble; the lie is the truth. We believe it; and, greedily, it feeds on our belief. The more we feed it, the more insatiable it becomes. Until, at last, we are exhausted. By pretending to want the lives we think we should; instead of daring to live the lives we know we could.

Fuck it. Just admit it. You’re probably just as bored as I am.

Good for you.

Welcome to the world beyond the Bullshit Machine.

“Alive Inside”: Music may be the best medicine for dementia

A heartbreaking new film explores the breakthrough that can help severely disabled seniors: It’s called the iPod

"Alive Inside": Music may be the best medicine for dementia

One physician who works with the elderly tells Michael Rossato-Bennett’s camera, in the documentary “Alive Inside,” that he can write prescriptions for $1,000 a month in medications for older people under his care, without anyone in the healthcare bureaucracy batting an eye. Somebody will pay for it (ultimately that somebody is you and me, I suppose) even though the powerful pharmaceutical cocktails served up in nursing homes do little or nothing for people with dementia, except keep them docile and manageable. But if he wants to give those older folks $40 iPods loaded up with music they remember – which both research and empirical evidence suggest will improve their lives immensely — well, you can hardly imagine the dense fog of bureaucratic hostility that descends upon the whole enterprise.

“Alive Inside” is straightforward advocacy cinema, but it won the audience award at Sundance this year because it will completely slay you, and it has the greatest advantages any such movie can have: Its cause is easy to understand, and requires no massive social change or investment. Furthermore, once you see the electrifying evidence, it becomes nearly impossible to oppose. This isn’t fracking or climate change or drones; I see no possible way for conservatives to turn the question of music therapy for senior citizens into some kind of sinister left-wing plot. (“Next up on Fox News: Will Elton John turn our seniors gay?”) All the same, social worker Dan Cohen’s crusade to bring music into nursing homes could be the leading edge of a monumental change in the way we approach the care and treatment of older people, especially the 5 million or so Americans living with dementia disorders.

You may already have seen a clip from “Alive Inside,” which became a YouTube hit not long ago: An African-American man in his 90s named Henry, who spends his waking hours in a semi-dormant state, curled inward like a fetus with his eyes closed, is given an iPod loaded with the gospel music he grew up with. The effect seems almost impossible and literally miraculous: Within seconds his eyes are open, he’s singing and humming along, and he’s fully present in the room, talking to the people around him. It turns out Henry prefers the scat-singing of Cab Calloway to gospel, and a brief Calloway imitation leads him into memories of a job delivering groceries on his bicycle, sometime in the 1930s.



Of course Henry is still an elderly and infirm person who is near the end of his life. But the key word in that sentence is “person”; we become startlingly and heartbreakingly aware that an entire person’s life experience is still in there, locked inside Henry’s dementia and isolation and overmedication. As Oliver Sacks put it, drawing on a word from the King James Bible, Henry has been “quickened,” or returned to life, without the intervention of supernatural forces. It’s not like there’s just one such moment of tear-jerking revelation in “Alive Inside.” There might be a dozen. I’m telling you, one of those little pocket packs of tissue is not gonna cut it. Bring the box.

There’s the apologetic old lady who claims to remember nothing about her girlhood, until Louis Armstrong singing “When the Saints Go Marching In” brings back a flood of specific memories. (Her mom was religious, and Armstrong’s profane music was taboo. She had to sneak off to someone else’s house to hear his records.) There’s the woman with multiple psychiatric disorders and a late-stage cancer diagnosis, who ditches the wheelchair and the walker and starts salsa dancing. There’s the Army veteran who lost all his hair in the Los Alamos A-bomb test and has difficulty recognizing a picture of his younger self, abruptly busting out his striking baritone to sing along with big-band numbers. “It makes me feel like I got a girl,” he says. “I’m gonna hold her tight.” There’s the sweet, angular lady in late middle age, a boomer grandma who can’t reliably find the elevator in her building, or tell the up button from the down, boogieing around the room to the Beach Boys’ “I Get Around,” as if transformed into someone 20 years younger. The music cannot get away from her, she says, as so much else has done.

There’s a bit of hard science in “Alive Inside” (supplied by Sacks in fascinating detail) and also the beginnings of an immensely important social and cultural debate about the tragic failures of our elder-care system and how the Western world will deal with its rapidly aging population. As Sacks makes clear, music is a cultural invention that appears to access areas of the brain that evolved for other reasons, and those areas remain relatively unaffected by the cognitive decline that goes with Alzheimer’s and other dementia disorders. While the “quickening” effect observed in someone like Henry is not well understood, it appears that stimulating those undamaged areas of the brain with beloved and familiar signals – and what will we ever love more than the hit songs of our youth? — can unlock other things at least temporarily, including memory, verbal ability, and emotion. Sacks doesn’t address this, but the effects appear physical as well: Everyone we see in the film becomes visibly more active, even the man with late-stage multiple sclerosis and the semi-comatose woman who never speaks.

Dementia is a genuine medical phenomenon, as anyone who has spent time around older people can attest, and one that’s likely to exert growing psychic and economic stress on our society as the population of people over 65 continues to grow. But you can’t help wondering whether our social practice of isolating so many old people in anonymous, characterless facilities that are entirely separated from the rhythms of ordinary social life has made the problem considerably worse. As one physician observes in the film, the modern-day Medicare-funded nursing home is like a toxic combination of the poorhouse and the hospital, and the social stigma attached to those places is as strong as the smell of disinfectant and overcooked Salisbury steak. Our culture is devoted to the glamour of youth and the consumption power of adulthood; we want to think about old age as little as possible, even though many of us will live one-quarter to one-third of our lives as senior citizens.

Rossato-Bennett keeps the focus of “Alive Inside” on Dan Cohen’s iPod crusade (run through a nonprofit called Music & Memory), which is simple, effective and has achievable goals. The two of them tread more lightly on the bigger philosophical questions, but those are definitely here. Restoring Schubert or Motown to people with dementia or severe disabilities can be a life-changing moment, but it’s also something of a metaphor, and the lives that really need changing are our own. Instead of treating older people as a medical and financial problem to be managed and contained, could we have a society that valued, nurtured and revered them, as most societies did before the coming of industrial modernism? Oh, and if you’re planning to visit me in 30 or 40 years, with whatever invisible gadget then exists, please take note: No matter how far gone I am, you’ll get me back with “Some Girls,” Roxy Music’s “Siren” and Otto Klemperer’s 1964 recording of “The Magic Flute.”

“Alive Inside” opens this week at the Sunshine Cinema in New York. It opens July 25 in Huntington, N.Y., Toronto and Washington; Aug. 1 in Asbury Park, N.J., Boston, Los Angeles and Philadelphia; Aug. 8 in Chicago, Martha’s Vineyard, Mass., Palm Springs, Calif., San Diego, San Francisco, San Jose, Calif., and Vancouver, Canada; Aug. 15 in Denver, Minneapolis and Phoenix; and Aug. 22 in Atlanta, Dallas, Harrisburg, Pa., Portland, Ore., Santa Fe, N.M., Seattle and Spokane, Wash., with more cities and home video to follow.

http://www.salon.com/2014/07/15/alive_inside_music_may_be_the_best_medicine_for_dementia/?source=newsletter

Could you “free” yourself of Facebook?

A 99-day challenge offers a new kind of social media experiment

Could you "free" yourself of Facebook?
(Credit: LoloStock via Shutterstock)

Let’s try a new experiment now, Facebook. And this time, you’re the subject.

Remember just last month, when the monolithic social network revealed that it had been messing with its users’ minds as part of an experiment? Writing in PNAS, Facebook researchers disclosed the results of a study that showed it had tinkered with the news feeds of nearly 700,000 users, highlighting either more positive or more negative content, to learn if “emotional contagion occurs without direct interaction between people.” What they found was that “When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.” More significantly, after the news of the study broke, they discovered that people get pretty creeped out when they feel like their personal online space is being screwed with, and that their reading and posting activity is being silently monitored and collected – even when the terms of service they agreed to already grant permission to do just that. And they learned that lawmakers in the U.S. and around the world question the ethics of Facebook’s intrusion.

Now, a new campaign out of Europe is aiming to do another experiment involving Facebook, its users and their feelings. But this time Facebook users aren’t unwitting participants but willing volunteers. And the first step involves quitting Facebook. The 99 Days of Freedom campaign started as an office joke at Just, a creative agency in the Netherlands. But the company’s art director Merijn Straathof says it quickly evolved into a bona fide cause. “As we discussed it internally, we noted an interesting tendency: Everyone had at least a ‘complicated’ relationship with Facebook. Whether it was being tagged in unflattering photos, getting into arguments with other users or simply regretting time lost through excessive use, there was a surprising degree of negative sentiment.” When the staff learned that Facebook’s 1.2 billion users “spend an average of 17 minutes per day on the site, reading updates, following links or browsing photos,” they began to wonder what that time might be differently applied to – and whether users would find it “more emotionally fulfilling.”



The challenge – one that close to 9,000 people have already taken – is simple. Change your FB avatar to the “99 Days of Freedom” one to let friends know you’re not checking in for the next few months. Create a countdown. Opt in, if you wish, to be contacted after 33, 66 and 99 days to report on your satisfaction with life without Facebook. Straathof says everyone at Just is also participating, to “test that one firsthand.”

Straathof and company say the goal isn’t to knock Facebook, but to show users the “obvious emotional benefits to moderation.” And, he adds, “Our prediction is that the experiment will yield a lot of positive personal experiences and, 99 days from now, we’ll know whether that theory has legs.” The anecdotal data certainly seems to support it. Seductive as FB, with its constant flow of news and pet photos, may be, you’d be hard-pressed to find a story about quitting it that doesn’t make getting away from it sound pretty great. It’s true that grand experiments, especially of a permanent nature, have never gotten off the ground. Four years ago, a group of disgruntled users tried to gather momentum for a Quit Facebook Day that quietly went nowhere. But individual tales certainly make a compelling case for, if not going cold turkey, at least scaling back. Elizabeth Lopatto recently wrote in Forbes of spending the past eight years Facebook free and learning that “If you really are interested in catching up with your friends, catch up with your friends. You don’t need Facebook to do it.” And writing on EliteDaily this past winter, Rudolpho Sanchez questioned why “We allow our successes to be measured in little blue thumbs” and declared, “I won’t relapse; I’ve been liberated. It’s nice not knowing what my fake friends are up to.” Writing a few weeks later in Business Insider, Dylan Love, who’d been on FB since he was an incoming college student 10 years ago, gave it up and reported his life, if not improved, remarkably unchanged, “except I’m no longer devoting mental energy to reading about acquaintances from high school getting married or scrolling through lots of pictures of friends’ vacation meals.” And if you want a truly persuasive argument, try this: My teenager has not only never joined Facebook, she dismissively asserts that she doesn’t want to because “It’s for old people.”

Facebook, of course, doesn’t want you to consider that you might be able to maintain your relationships or your sense of delight in the world without it. When my mate and I went away for a full week recently, we didn’t check in on social media once the whole time. Every day, with increasing urgency, we received emails from Facebook alerting us to activity in our feeds that we surely wanted to check. And since I recently gutted my friend list, I’ve been receiving a bevy of suggested people I might know. Why so few friends, lonely lady? Why so few check-ins? Don’t you want more, more, more?

I don’t know if I need to abandon Facebook entirely – I like seeing what people I know personally and care about are up to, especially those I don’t get to see in the real world that often. That connection has often been valuable, especially through our shared adventures in love, illness and grief, and I will always be glad for it. But a few months ago I deleted the FB app, which makes avoiding Facebook when I’m not at my desk a no-brainer. No more stealth checking my feed from the ladies’ room. No more spending time expressing my “like” of someone’s recent baking success when I’m walking down the street. No more “one more status update before bed” time sucks. And definitely no more exasperation when FB insistently twiddles with my news feed to show “top stories” when I prefer “most recent.” It was never a huge part of my life, but it’s an even smaller part of it now, and yeah, it does feel good. I recommend it. Take Just’s 99-day challenge or just a tech Sabbath or just scale back a little. Consider it an experiment. One in which the user, this time, is the winner.

Mary Elizabeth Williams is a staff writer for Salon and the author of “Gimme Shelter: My Three Years Searching for the American Dream.” Follow her on Twitter: @embeedub.

http://www.salon.com/2014/07/11/could_you_free_yourself_of_facebook/?source=newsletter

Commonly Used Drug Can Make Men Stop Enjoying Sex—Irreversibly


Some of the symptoms reported include impotence, depression and thoughts of suicide.

No one should have to choose between their hairline and their health. But increasingly, men who use finasteride, commonly known as Propecia, to treat their male pattern baldness are making that choice, often unwittingly. In the 17 years since Propecia was approved to treat hair loss from male pattern baldness, many disturbing side effects have emerged, the term post-finasteride syndrome (PFS) has been coined and hundreds of lawsuits have been brought.

Finasteride inhibits the enzyme responsible for converting testosterone into 5α-dihydrotestosterone (DHT), the hormone that tells hair follicles on the scalp to stop producing hair. Years before Propecia was approved to grow hair, finasteride was already being sold as Proscar – with related drugs such as Avodart and Jalyn following – to treat an enlarged prostate gland (benign prostatic hyperplasia). Like Viagra, which began as a blood pressure med, or the eyelash-growing drug Latisse, which began as a glaucoma drug, finasteride’s hair restoration abilities were a fortuitous side effect.

Since Propecia was approved for sale in 1997, its label has warned about sexual side effects. “A small number of men experienced certain sexual side effects, such as less desire for sex, difficulty in achieving an erection, or a decrease in the amount of semen,” it read. “Each of these side effects occurred in less than 2% of men and went away in men who stopped taking Propecia because of them.” (The label also warned about gynecomastia, the enlargement of male breast tissue.)

But increasingly, users and some doctors are saying the symptoms sometimes do not go away when men stop taking Propecia and that their lives can be changed permanently. They report impotence, lack of sexual desire, depression, suicidal thoughts and even a reduction in the size of the penis or testicles after using the drug – effects that do not go away after discontinuation.

According to surgeon Andrew Rynne, former head of the Irish Family Planning Association, Merck, which makes Propecia and Proscar, knows that the disturbing symptoms do not always vanish. “They know it is not true because I and hundreds of other doctors and thousands of patients have told them that these side effects do not always go away when you stop taking Propecia. We continue to be ignored, of course.”

In some cases, says Rynne, men who have used finasteride for even a few months “have unwittingly condemned themselves to a lifetime of sexual anhedonia [a condition in which an individual feels no sexual pleasure], the most horrible and cruel of all sexual dysfunctions.”

“I have spoken to several young men in my clinic in Kildare who continue to suffer from sexual anaesthesia and for whom all sexual pleasure and feelings have been obliterated for all time. I have felt their suffering and shared their devastation,” he wrote on a Propecia help site.

Sarah Temori, who launched a petition to have finasteride taken off the market on Change.org, agrees. “Many who have taken Propecia have lost their marriages, jobs and some have committed suicide due to the damage this drug has done to their bodies,” she writes. “One of my loved ones is a victim of this drug. It’s painful to see how much he has to struggle just to make it through each day and do all the daily things that we take for granted. No doctors have been able to help him and he is struggling to pay for medical bills. He is only 23.”

Stories about Propecia’s disturbing and underreported side effects have run on CNN, ABC, CBS, NBC, Fox and on Italian and English TV news.

The medical literature has also investigated finasteride effects. A study last year in the Journal of Sexual Medicine noted “changes related to the urogenital system in terms of semen quality and decreased ejaculate volume, reduction in penis size, penile curvature or reduced sensation, fewer spontaneous erections, decreased testicular size, testicular pain, and prostatitis.” Many subjects also noted a “disconnection between the mental and physical aspects of sexual function,” and changes in mental abilities, sleeping patterns, and/or depressive symptoms.

A study this year in the Journal of Steroid Biochemistry and Molecular Biology finds that “altered levels of neuroactive steroids, associated with depression symptoms, are present in androgenic alopecia patients even after discontinuation of the finasteride treatment.”

Approved in Haste, Regretted in Leisure

The rise and fall of Propecia parallels other drugs like Vioxx or hormone replacement therapy that were marketed to wide demographics even as safety questions nipped at their heels. Two-thirds of American men have some hair loss by age 35, and 85 percent of men have some hair loss by age 50, so Propecia had the promise of a blockbuster like Lipitor or Viagra.

Early ads likened men’s thinning scalps to crop circles. Later, ads likened saving scalp hair to saving the whales and won awards. Many Propecia ads tried to take away the stigma of hair loss and its treatment. “You’d be surprised who’s treated their hair loss,” said one print ad depicting athletic, 20-something men. In 1999 alone, Merck spent $100 million marketing Propecia directly to consumers, when direct-to-consumer advertising was just beginning on TV.

Nor was Propecia sold only in the U.S. Overseas ads compared twins who did and did not use the product. In the U.K., the drugstore chain Boots aggressively marketed Propecia at its 300 stores and still does. One estimate says Propecia was marketed in 120 countries.

Many have heard of “indication creep,” when a drug, after its original FDA approval, goes on to be approved for myriad other uses. Seroquel, originally approved for schizophrenia, is now approved as an add-on drug for depression and even for use in children. Cymbalta, originally approved as an antidepressant, went on to be approved for chronic musculoskeletal pain.

Less publicized is “warning creep,” when a drug that seemed safe enough for the FDA to approve, collects warning after warning once the public is using it. The poster child for warning creep is the bone drug Fosamax. After it was approved and in wide use, warnings began to surface about heart problems, intractable pain, jawbone death, esophageal cancer and even the bone fractures it was supposed to prevent. Oops.

But finasteride may do Fosamax proud. In 2003, it gained a warning for patients to promptly report any “changes in their breasts, such as lumps, pain or nipple discharge, to their physician.” Soon, “male breast cancer” was added under “postmarketing experience.” In 2010 depression was added as a side effect and patients were warned that finasteride could have an effect on prostate-specific antigen (PSA) tests. In 2011, the label conceded that sexual dysfunction could continue “after stopping the medication” and that finasteride could pose a “risk of high-grade prostate cancer.” In 2012, a warning was added that “other urological conditions” should be considered before taking finasteride. In 2013, the side effect of angioedema was added.

A quick look at Propecia approval documents does not inspire confidence. Finasteride induces such harm in the fetuses of lab animals that it is contraindicated in women who are or may potentially be pregnant; women should not even “handle crushed or broken Propecia tablets when they are pregnant.”

Clinical trials were of short duration and some only had 15 participants. While subjects were asked aesthetic questions about their hairline during and after clinical trials, conspicuously absent from the data set were questions about depression, mental health and shrinking sexual organs.

In one report an FDA reviewer notes that Merck did not name or include other drugs used by subjects during trials, such as antidepressants or GERD meds, suggesting that depression could have been a known side effect of Propecia. Elsewhere an FDA reviewer cautions that “low figures” in the safety update are not necessarily reliable because the time period was “relatively short” and subjects with sexual adverse events may have already “exited from the study.” An FDA reviewer also wrote that “long-term cancer effects are unknown.” Breast cancer was noted as an adverse event seen in the trials.

Propecia Users Speak Out

There are many Propecia horror stories on sites founded to help people with side effects and those involved in litigation. In 2011, a mother told CBS News she blamed her 22-year-old son’s suicide on Propecia and Men’s Journal ran a report called “The (Not So Hard) Truth About Hair Loss Drugs.”

In a database of more than 13,000 finasteride adverse effects reported to the FDA, there were 619 reports of depression and 580 reports of anxiety. Sixty-eight users of finasteride reported a “penis disorder” and small numbers reported “penis deviation,” “penis fracture” and “micropenis.”

On the patient drug review site Askapatient.com, the 435 reviews of Propecia cite many examples of depression, sexual dysfunction and shrunken penises.

One of the most visible faces for post-finasteride syndrome is 36-year-old UK resident Paul Innes. Previously healthy and a soccer player, Innes was so debilitated by his use of Propecia, prescribed by his doctor, that he founded a website and has gone public. Appearing on This Morning last month, Innes described how using Propecia for only three months on one occasion and three weeks on another produced a suicidal depression requiring hospitalization, sexual dysfunction and a reduction in the size of his reproductive anatomy, none of which went away when he ceased the drug. He and his former girlfriend, Hayley Waudby, described how the physical and emotional changes cost them their relationship, even though she was pregnant with his child.

In an email I asked Paul Innes if his health had improved after the ordeal. He wrote back, “My health is just the same if not worse since 2013. I am still impotent with a shrunken penis and still have very dark thoughts and currently having to take antidepressants just to get through every day. Prior to Propecia I was a very healthy guy but now I’m a shadow of my former self. I have only just managed to return to work in my role as a police officer since taking Propecia in March 2013.”