How the Mindfulness Movement Went Mainstream — And the Backlash That Came With It

Meditation is more complex than many people realize.

In 1979, a 35-year-old American Buddhist and MIT-trained molecular biologist was on a two-week meditation retreat when he had a vision of what his life’s work—his “karmic assignment”—would be. While he sat alone one afternoon, it all came to him at once: he’d bring the ancient Eastern disciplines he’d followed for 13 years—mindfulness meditation and yoga—to chronically sick people right here in modern America. What’s more, he’d bring these practices into the very belly of the Western scientific beast—a big teaching hospital where he happened to be working as a post-doc in cell biology and gross anatomy. Somehow, he’d convince scientifically trained medical professionals and patients—ordinary people, who’d never heard of the Dharma and wouldn’t be caught dead in a zendo or an ashram—that learning to follow the breath and do a few gentle yoga postures would help relieve intractable pain and suffering. In the process, he’d manage to reconcile what was then considered fringy, New Age folderol with empirical biological research, sparking a radical new approach to healing in mainstream medical practice.

Not exactly a modest scheme, and in retrospect, it seems astonishing that this nervy young guy—Jon Kabat-Zinn, the originator of Mindfulness-Based Stress Reduction (MBSR)—would manage to pull it off. And yet, as the now oft-told origin story goes, he convinced the medical bigwigs at the University of Massachusetts (UMass) Medical Center that this idea was worth trying. With a core body of “interns”—anybody on staff who wanted to learn about meditation—he set up shop and began putting patients through an intensive 10-week (now 8-week) program of weekly classes, yoga postures, 45-minute guided home-meditation practice six times a week, and an all-day retreat during the sixth week. The idea was to teach a set of active self-regulation skills that patients could practice by themselves to help them cope with medical conditions—chronic pain foremost—for which standard medical remedies, such as drugs, rehab, and surgery, had proven useless. The program was, Kabat-Zinn recalled later, “just a little pilot on zero dollars.”

There was just one small impediment to this plan: how was he going to persuade mainstream Americans that this approach wasn’t just New Age hokum? From the beginning, to sell his program to the masses, he decided to use what Buddhists call skillful means, teaching Buddhist principles and practices, but disguising their origin in plain American-style talk—promoting a kind of stealth Buddhism, scrubbed clean of bells, chants, prayers, and terms like dharma, karma, and dukkha, not to mention The Four Noble Truths, The Noble Eightfold Path, and so on. As he has said, “I bent over backward to structure it and find ways to speak about it that avoided as much as possible the risk of its being seen as Buddhist, New Age, Eastern Mysticism, or just plain flaky.”

Kabat-Zinn emphasized that this was to be a high-demand program, meaning that patients would be expected to take full responsibility for developing their own inner resources. In other words, they should fully engage themselves, not just go through the motions. They needed to work hard every day but without—paradoxically—striving for any particular goal, like relief from pain. They might, however, hope for healing, but only in the sense that it meant “coming to terms with things as they are.” The idea was that hard-won mental and emotional acceptance could generate an inner shift in experience that often resembled a kind of cure—or as he put it, “As you befriend the pain, it can begin to go away.” At the same time, sounding more like a football coach than a spiritual teacher, he said, “I’m a strong advocate of getting tough with yourself. ‘Kicking butt,’ so to speak, or ‘Getting your ass on the cushion.’ You don’t have to like it. You just have to do it. And at the end of the eight-week clinic, you can tell us whether it was of any use or not.”

The patients, most of whom were demoralized and skeptical when they entered the program, actually did what they were instructed to do. They earnestly practiced watching their breath, following along to Kabat-Zinn’s taped instructions for the body scan, doing yoga at home, and learning to meditate, which—as he’s pointed out innumerable times—simply means “to pay attention on purpose in the present moment nonjudgmentally.” Improbably, most of these patients wound up feeling better, and in 1982, Kabat-Zinn’s first research article on mindfulness in the treatment of pain—the first such study ever in a bona fide academic journal—was published, with the ungainly title “An Outpatient Program in Behavioral Medicine for Chronic Pain Patients Based on the Practice of Mindfulness Meditation: Theoretical Considerations and Preliminary Results.” He wrote that a majority of 51 chronic pain patients reported “great” or “moderate” pain reduction, and even if their pain didn’t disappear, they experienced less depression, tension, anxiety, fatigue, and confusion.

The study was small, without a control group, based on paper-and-pencil self-reports, and rated by the author, rather than a panel of independent judges—a beginning, certainly, but no slam dunk as research papers go, and it didn’t seem destined to smash paradigms. Nor did it inspire many attempts at replication. By 1990, a grand total of 12 papers on the use of mindfulness in medical treatment had been published, with Kabat-Zinn himself producing five. So while the fledgling program could claim a kind of success, it might have seemed a stretch to imagine it having much staying power as an intervention in mainstream medicine, or much of anywhere else for that matter.

A Movement Is Born

Thirty-five years later and my, how that “little pilot” has grown! Today, more than 20,000 patients have participated in the UMass program, which has produced 1,000 certified MBSR instructors and MBSR programs in about 720 medical settings in more than 30 countries. MBSR—or, more generically, mindfulness training—and other forms of meditation are now used for an almost unimaginable range of medical conditions, including cancer, heart disease, diabetes, brain injuries, fibromyalgia, HIV/AIDS, Parkinson’s, organ transplants, psoriasis, irritable bowel syndrome, and tinnitus. Mindfulness has become central to the mental health profession and is commonly used in the treatment of attention-deficit hyperactivity disorder, depression, anxiety, obsessive-compulsive disorder, personality disorders, substance abuse, and autism. In addition, it’s at the heart of psychotherapeutic approaches like mindfulness-based cognitive therapy (MBCT), acceptance and commitment therapy (ACT), dialectical behavior therapy (DBT), mindfulness-based relapse prevention (MBRP), mindfulness-based trauma therapy (MBTT), and mindfulness-based eating awareness training (MB-EAT).

Mindfulness has also spilled (or poured) out of the healthcare/psychotherapy world and into the rest of society. It’s migrated to schools, with training programs and curricula for K-12 teachers and students sprouting up like mushrooms. It’s in universities, where mindfulness research and teaching centers—often, but not always, attached to medical schools or psychology departments—have been established at the University of Illinois, the University of California at Los Angeles, Duke University, the University of Miami, the University of Pennsylvania, the University of Wisconsin, and Brown University.

It’s in prisons, where different meditation styles and traditions are brought to prisoners by way of organizations like the Prison Mindfulness Institute in Rhode Island, which directs prison programs itself, acts as a kind of clearinghouse for groups or individuals providing “mindfulness, meditation, yoga (or other contemplative traditions)” to prisoners, engages in or initiates research, and publishes books under the winsome name Prison Dharma Press.

It’s in the US military, which is training soldiers in what it calls mindfulness-based mind fitness training (MMFT, pronounced “M-fit”), drawn from MBSR, as a form of “mental armor,” a kind of inoculation against post-traumatic stress disorder. In fact, a nonprofit called the Mind Fitness Training Institute provides courses and tutorials not only to military personnel, but to law-enforcement officers, intelligence analysts and agents, firefighters, and emergency responders. Today, even soldiers learning how to fire M-16s are being given mindfulness training to synchronize their breathing with squeezing the trigger.

Finally, meditation has made its way into high-level sports, beginning with Phil Jackson teaching Michael Jordan and his Chicago Bulls teammates to meditate and win NBA championships in the 1990s, and continuing, more recently, to the NFL’s Seattle Seahawks, who capped their 2013 season by winning the Super Bowl in early 2014 after spending training camp focusing on mantras like “Quiet your mind,” “Focus your attention inwardly,” and “Visualize success.”

Of course, when any major social trend looms into view, US corporations are going to muscle their way to the front of it. Corporate culture has taken to mindfulness training avidly: not only are mindfulness programs and courses showing up in business schools (Harvard, New York University–Stern, Georgetown University–McDonough, for example), but quite an impressive sampling of corporations are bringing it into the workplace. Outside of the usual suspects in Silicon Valley—Apple, Facebook, eBay, Google, Twitter, and Yahoo—traditional corporate stalwarts such as Hughes Aircraft, General Mills, Abbott Laboratories, General Motors, Ford Motor Company, AOL Time Warner, Reebok, Xerox, IBM, Safeway, Procter & Gamble, Texas Instruments, and Goldman Sachs (!) are jumping on the bandwagon. Naturally, teaching mindfulness to business leaders—“mind fitness corporate training” it’s sometimes called—is itself a growth industry.

But in America, the real test of whether an idea, system, service, or practice has any popular traction is the marketplace. And here, mindfulness is a boffo bestseller, both as a product and as a source of almost endless product spinoffs. Besides the centers, institutes, training organizations, retreats, workshops, courses, seminars, conferences, resorts, and travel packages, all selling various experiences of mindfulness, there’s a vast bazaar out there of mindfulness stuff. A quick look at Amazon under “meditation, books,” reveals 82,405 titles for sale, but “meditation, all departments” lists 483,672 items, including—besides books, CDs, and DVDs—all the accouterments the well-accessorized meditator could ever want: cushions, mats, chimes, timers, gongs, incense burners, prayer beads, meditation benches, prayer shawls, yoga pants, baby rompers with the Om symbol, oriental-style indirect lighting fixtures, mugs (embossed with Om), aromatherapy kits, prayer banners, statuettes, tabletop fountains, and—pièce de résistance—a Carlos Santana fedora with a pin shaped like a combination guitar and Om symbol ($37.99). Needless to say, lots of meditation apps are available for your devices, including one called Buddhify, which allows you to set your iPhone or Android on any of 16 different meditation opportunities, including eating, feeling stressed, walking around, going to sleep, being unable to sleep, taking a work break, and “just meditating I and II.”

How many people actually meditate in America, or at least claim to? Hard to say—a 2007 national health survey of adults using complementary or alternative medicine indicated that 20 million meditated for health purposes, but these 7-year-old figures appear to be the only ones available for the present and don’t seem remotely big enough to account for the mindfulness/meditation deluge washing over the country.

Finally, remember that little preliminary, imperfect research study that Kabat-Zinn turned out in 1982? For much of the next decade, he almost single-handedly kept the tiny MBSR research flame alive with a gaggle of articles in mainstream journals—on pain reduction, psoriasis, anxiety disorders, and heart disease. Mindfulness research didn’t really take off until after 2000—but then, did it ever! By the end of November 2014, the total number of research articles in the database of the American Mindfulness Research Association was an impressive 3,403—around 1,000 for the last two years alone. And those figures don’t take into account research articles on transcendental meditation (TM), a mantra-based technique first taught by the Maharishi Mahesh Yogi during the 1950s before catching on in the West (thanks partly to the Beatles). TM began accumulating a research base as early as 1970 with an article in Science, and now 350 or 430 or 600 (accounts vary) studies can be found in peer-reviewed journals supporting TM’s psychological, medical, social, and cognitive benefits, in both clinical and nonclinical populations.

Even these databases are undoubtedly a serious undercount, if all published “research” studies (ranging in quality from abysmal to middling to excellent) on various forms of meditation (mindfulness, TM, tai chi, yoga, and qigong) are included. In a meta-analysis of meditation programs, psychological stress, and well-being in the March 2014 issue of JAMA Internal Medicine, Madhav Goyal, a Johns Hopkins assistant professor of medicine, and his colleagues identified a staggering 18,753 citations, before winnowing the number down to a paltry 47 randomized controlled trials with 3,515 participants. As to what all these studies are trying to demonstrate, a better question is what aren’t they trying to demonstrate: they explore mindfulness as a remedy for every issue that has even a passing connection to physical or mental health or general human well-being.

Mindfulness Goes Mainstream

How did this all happen? In the popular mind, about the only people really interested in meditation during the 1970s were New Age hippies, Asian studies scholars, and a small population of home-grown seekers (young middle-class adults, often left-wing Vietnam War dissenters at odds with consumer capitalism and looking for a spiritual lift they weren’t getting from drugs or the rejected Main Street religion of their parents). Mention meditation to Mr. and Mrs. Regular American, and you might just get a blank look, or worse, “Why would any normal person want to get caught up with one of those Eastern cults?” What peculiar constellation of forces and factors was coming into alignment so that one day soon, millions of perfectly normal people wouldn’t just be sitting cross-legged with their eyes closed, watching themselves breathe, but would believe this was the best health intervention since vitamins?

First, to start at the outer ring of the circle, the medical profession badly needed help with its “problem patients.” In a 2010 interview, Kabat-Zinn explained that before opening his clinic, he asked doctors, “What percentage of your patients do you feel like you help?” and was stunned by their answers. At most, they thought they helped only about 10 to 15 percent, while the other 85 to 90 percent either got better on their own or never got better at all, becoming the bane of medical practice: chronic patients with chronic conditions chronically unresponsive to anything the doctors tried. So when Kabat-Zinn offered what was basically a way to take these patients off physicians’ hands, the docs responded enthusiastically, saying things like, “Well, I can think of a hundred people off the top of my head we could send tomorrow.” In short, even though he didn’t really hide the Buddhist and Yogic origins of his plan from other medical professionals, these doctors were desperate enough not to look a gift horse in the mouth, whatever its suspicious origins.

Second, Kabat-Zinn’s idea—to repackage Eastern meditation as a secular health intervention that wouldn’t frighten the locals—had already been road tested. Just four years earlier, in 1975, cardiologist Herbert Benson of Harvard had introduced millions of Americans to a kind of proto-meditation with his bestselling book (its sales aided and abetted by the indefatigable Oprah), The Relaxation Response, in which he described the stress-reducing effects of simply focusing the mind on one thing for a little while. Benson had almost accidentally made his discoveries during the early 1970s, when, much against his better judgment as a self-conscious man of science, he’d been talked by TM practitioners into doing some quick research on what they claimed was their ability to reduce their own blood pressure at will simply by meditating on a mantra. He was astounded by the study results: 20 minutes of meditation caused a decrease in heart rate, blood pressure, breathing rate, stress-hormone production, and so on—in effect, reversing the fight-or-flight response. But he was terrified that the merest taint of Eastern spirituality—particularly if it was associated with a long-haired, bearded, robed, flower-draped Hindu with the foreign name of Maharishi Mahesh Yogi—might spell doom for his professional reputation.

So in his book, he made only the barest of references to TM, substituted the word relaxation for the dreaded M-word (meditation), and argued for the purely physical health benefits of focusing single-mindedly on a “mental device,” which could be almost anything at all—a mantra or a prayer, sure, but also a neutral word, nonsense syllable, or object—or the “device” could consist of concentrating on a process or “muscular activity,” like yoga or qigong, but also walking, jogging, rowing, swimming, or knitting. It all sounded so normal, so ordinary—which is exactly what made the book sell . . . and sell, and sell; it’s sold more than 4 million copies and is in at least its 64th printing.

By 1979, even though the scientific and medical communities still weren’t entirely on board with this meditation thing, the times they were a-changing. That year, the Dalai Lama made his first visit to the United States, amid a media blitz, visiting cities, houses of worship (including a visit to Cardinal Cooke at St. Patrick’s Cathedral in New York), and universities; giving addresses; and generally wowing people with his unexpectedly charming personality. An obvious exotic, with his shaven head and maroon robes, the master of an alien but strangely glamorous religious tradition, he also professed a totally disarming interest in Western science—since he was a boy, he said—and confessed that he’d probably have become an engineer if he hadn’t gotten into the lama biz. He was particularly interested in any scientific linkages between consciousness and matter. Bingo! Scientists with an interest in Buddhism and/or meditation in general were thrilled: here was a revered spiritual leader who was also a scientist! And here was the chance to find the philosopher’s stone—to bridge the so-far unbridgeable gap not only between science and spirituality, but between mind and matter.

One such smitten scientist was Herbert Benson (apparently no longer worried about associating with “spiritual” types), who boldly asked His Holiness for permission to visit India and study the physiology of Tibetan monks. They could, he’d heard, raise their own body temperature in the freezing Himalayan air just through the intensity of their meditation. The Dalai Lama first said no: the monks were meditating for religious reasons, not so they could have their bodies poked, prodded, and stuck with various measuring devices (including rectal thermometers) for some hare-brained Western scheme. Then, mid-sentence almost, he changed his mind and agreed, explaining to his monks later, “For skeptics, you must show something spectacular because, without that, they won’t believe.” So early in the 1980s, Benson and some colleagues made several trips to India, looking for something “spectacular.” On one trip in 1985, described by Anne Harrington in her book The Cure Within: A History of Mind-Body Medicine, the monks duly complied with a demonstration of “g Tum-mo,” or “inner heat” meditation, raising their own body temperature while, in this case, draped with wet sheets (which sent up eddies of steam in the frigid air for added drama).

Four years later, in 1989, the Dalai Lama received the Nobel Peace Prize (for his efforts on behalf of Tibetan liberation from the Chinese, but also to protest the Chinese massacre of Tiananmen Square protesters), which generated a new paroxysm of infatuation for all things Tibetan. In particular, the Dalai Lama was lionized even more as somehow embodying the exotic mystery and ancient wisdom of Tibetan Buddhism and the rational, empirical spirit of modern Western science.

Soon, it became all the rage to study Tibetan monks scientifically to discover how, through long and advanced forms of meditation, they could not only transcend the usual limits of bodily self-control—raise and lower body temperature, control blood pressure, even improve immune function—but also achieve levels of spiritual transcendence hardly known in the West. Over the next two decades, a panoply of high-tech instrumentation—body temperature sensors, calorimeters (for measuring metabolism), EEGs, fMRIs, and so on—was used in Western research centers and lugged up Himalayan mountains directly to monks’ huts for studies of what actually happens in the brains and bodies of these adepts.

Meanwhile, back down in the lowlands of pragmatic healthcare, in 1990, Kabat-Zinn produced a book titled Full Catastrophe Living: Using the Wisdom of Your Body and Mind to Face Stress, Pain, and Illness, based on the Stress Reduction and Relaxation Program (SR-RP) at UMass Medical. A hefty read of more than 500 pages, it recapitulated in print what he’d been teaching for the previous 12 years, now promoting its relevance for everybody, not just sick people. The “catastrophe” is taken from a line in the movie Zorba the Greek, when Zorba is asked if he’s ever been married and replies (in Kabat-Zinn’s paraphrase), “Am I not a man? Of course I’ve been married. Wife, house, kids, everything . . . the full catastrophe!”—shorthand for the poignant enormity of our life experience.

The real big break for mindfulness came in 1993, when Bill Moyers featured Kabat-Zinn’s SR-RP in a 40-minute segment of a five-part PBS television series, Healing and the Mind, which included a look at Chinese traditional medicine and other examples of alternative healing methods. The series won an Emmy Award and turned Kabat-Zinn’s Full Catastrophe Living into a bestseller. While Americans were clearly warming up to Easternism in whatever form—tai chi, yoga, meditations of various kinds—the show substantially boosted the stock of mindfulness. Here were regular people—schoolteachers, truck drivers, carpenters, business executives, stay-at-home mothers—trying to find the inner stillness beneath the turmoil of their lives with bemused tolerance and growing trust. And here was Kabat-Zinn, an intense, good-looking man, whose tough-minded candor and deep kindness to his students made this mindfulness thing look okay. Even better than okay! Who could watch the interaction between patient and teacher—a woman obviously struggling through severe pain to do a simple yoga position and Kabat-Zinn on his knees, speaking softly to her, resting a hand gently on her back, tenderly wiping tears and sweat from her face—and not be moved?

The McMindfulness Backlash

The explosive growth of mindfulness in America has inevitably triggered a backlash—a low, rumbling protest, particularly from Buddhists claiming that mindfulness has increasingly become yet another banal, commercialized self-help consumer product, hawked mostly to rich and upper-middle class white people who still wouldn’t be caught dead in a real zendo. While few critics quarrel with using MBSR as a way to alleviate suffering in mind or body, they’re disturbed by how much meditation in America appears to have been individualized, monetized, corporatized, therapized, taken over, flattened, and generally coopted out of all resemblance to its noble origins in an ancient spiritual and moral tradition.

In a 2013 blog for The Huffington Post titled “Beyond McMindfulness,” Ron Purser and David Loy—American academics and well-known Buddhist teachers—declared that enough was enough. The effort to “commodify mindfulness into a marketable technique,” they wrote, required engaging in a kind of bait and switch, branding mindfulness programs “‘Buddhist inspired’ to give them a certain hip cachet,” but leaving out the heart and soul of the original practice. As a result, what was once a powerful philosophical and ethical discipline intended to help free people from greed, ill will, and delusion becomes just another mass-marketed self-fulfillment tool, which can reinforce the same negative qualities. People can, in effect, use mindfulness to become better at being worse. “According to the Pali Canon (the earliest recorded teachings of the Buddha),” the authors reminded readers, “even a person committing a premeditated and heinous crime can be exercising mindfulness, albeit wrong mindfulness.” A terrorist, an assassin, and a white-collar criminal can be mindful, Purser and Loy tell us, but not exactly in the same way as the Dalai Lama.

Further, mindfulness in America is so relentlessly marketed as a form of personal stress reduction that it tends to blind adherents to the larger conditions that create and perpetuate widespread emotional and physical stress in the first place. Stress originating in social and economic arrangements is, the authors wrote, “framed as a personal problem, and mindfulness is offered as just the right medicine to help employees work more efficiently and calmly within toxic environments . . . re-fashioned into a safety valve, as a way to let off steam—a technique for coping with and adapting to the stresses and strains of corporate life.”

Google, for example, has developed a now famous mindfulness/emotional intelligence training program, the Search Inside Yourself Leadership Institute (SIYLI)—led by Chade-Meng Tan, who has the supercute nickname Google’s Jolly Good Fellow—which it offers to employees and scores of corporate and institutional clients. According to the SIYLI media kit, “We help professionals at all levels adapt, management teams evolve, and leaders optimize their impact and influence.”

A peculiar example of a corporate leader’s “optimizing” what might have been an embarrassing situation occurred early in 2014 at the Wisdom 2.0 conference in San Francisco, a huge yearly Silicon Valley glamorfest, where stars from high-tech firms, academia, science, and show biz come to be inspired by each other and listen to “mindful living” luminaries, including Kabat-Zinn, Roshi Joan Halifax, Jack Kornfield, and Eckhart Tolle. During a panel discussion (with Jolly himself on the panel) titled “3 Steps to Corporate Mindfulness the Google Way,” a group of activists infiltrated a hall packed with attendees, mounted the stage in front of the speakers, unrolled a banner reading “Eviction-Free San Francisco,” and handed out leaflets to protest the frequent evictions of low-income local residents from the city so landlords could jack up the rents for well-heeled employees of Google and other area corporations.

After the security guards had managed to disrupt the disruption, a Google senior manager on the panel solemnly asked the audience to “check in with your body [to] see what it’s like to be around conflict with people who have heartfelt ideas that may be different from what we’re thinking. . . . Take a second to see what it’s like.” The crowd dutifully closed their eyes and settled right down. Critics pointed out that rather than even consider what and why the activists were protesting, Google tossed the attendees some vintage mindfulness pablum, exposing what Christopher Titmuss, a retired Buddhist monk and teacher, called “the shallowness and absurdity of corporate mindfulness.”

Recently, even the scientific foundations of mindfulness have been the subject of increased critical scrutiny. After all, the science of mindfulness is what got it in the door of the healthcare system; the science that impressed academics, therapists, educators, prison administrators, and corporate honchos; the science that made millions of Americans not embarrassed to say they were taking up meditation. Science is what makes meditation different from Eastern faith healing techniques and New Age woo. So how solid is all that science, anyway—those thousands of studies attesting to the empirical evidence of its power to help heal or relieve just about any physical or mental ailment to which human flesh is heir? There’s little doubt that mindfulness and other meditative disciplines are genuinely useful to many people in many ways for many conditions. The question is what discipline, for what conditions, under what circumstances, how helpful, for whom, and when? At this point, the issue gets a little gnarly.

There’s a reason why Madhav Goyal and colleagues, the authors of the massive 2014 review and meta-analysis, found it necessary to winnow nearly 19,000 citations down to 47 trials—barely a quarter of one percent of the total. Mindfulness/meditation-based interventions are inherently difficult to study and compare, like trying to pin down clouds in a gale. The variables of different approaches—TM, MBSR, tai chi, yoga, qigong—resist direct comparison. Simply defining what meditation is or what it’s supposed to do tends to devolve into a word-salad of mix-n-match terms, like awareness, nonreactivity, openness, curiosity, describing, acceptance, nonjudging, and self-transcendence. The thousands of trials represented a mishmash of programs, which varied widely by type of meditation, length of meditation practice (three weeks to six years), training duration (roughly 12 to 39 hours), experience and training of the teacher, whether or not physical techniques (tai chi, yoga, qigong) were included, and population (how experienced participants were in meditation, what their complaints or ailments were). Most trials have been small before-and-after snapshots—uncontrolled and unrandomized, with no predetermined criteria, usually no long-term follow-up, and high dropout rates.
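To make that winnowing concrete, here is a minimal sketch of the kind of inclusion filter a meta-analysis applies. The criteria, field names, and thresholds below are illustrative stand-ins, not the actual screening protocol Goyal and colleagues used:

```python
# Hypothetical screening sketch; the criteria below are illustrative
# stand-ins, not Goyal et al.'s actual protocol.
from dataclasses import dataclass

@dataclass
class Trial:
    randomized: bool              # were participants randomly assigned?
    controlled: bool              # was there a comparison/control group?
    prespecified_outcomes: bool   # were outcomes declared before data collection?
    followup_months: float        # length of post-intervention follow-up
    dropout_rate: float           # fraction of participants lost along the way

def meets_inclusion_criteria(t: Trial) -> bool:
    """A trial survives only if it clears every methodological bar at once."""
    return (t.randomized
            and t.controlled
            and t.prespecified_outcomes
            and t.followup_months >= 2
            and t.dropout_rate <= 0.3)

# A rigorous RCT passes; the typical small before-and-after snapshot
# fails several bars simultaneously.
rigorous = Trial(True, True, True, followup_months=6, dropout_rate=0.10)
snapshot = Trial(False, False, False, followup_months=0, dropout_rate=0.40)
print(meets_inclusion_criteria(rigorous))   # True
print(meets_inclusion_criteria(snapshot))   # False
```

Any one bar removes only a slice of the literature; it’s the conjunction of all of them that collapses nearly 19,000 citations into a few dozen usable trials.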

From the saving remnant of the 47 good (or at least okay) randomized, controlled studies, the authors focused narrowly on research showing that meditation alleviated psychological stress (including pain) associated with medical problems. They came up with some mildly encouraging results, finding “moderately strong evidence” that mindfulness/meditation had a “small but consistent benefit” in relieving anxiety, depression, and pain (though what kind of pain—chronic, acute, or both—couldn’t be determined). The depressive symptoms were improved by roughly 10 to 20 percent, similar to the effect of antidepressants. As for the rest of the literature extolling the healing properties of mindfulness for all kinds of other specific conditions or its ability to improve the overall quality of life, the authors don’t reject these claims; they simply explain that the scientific evidence is insufficient to draw firm conclusions about them one way or the other.

These authors suggest that one reason for the low-to-middling results (compared to the hype, that is), and for the difficulties of doing this sort of research at all, may reflect a profound division between Eastern and Western attitudes toward meditation in the first place. The West, and particularly the research world, views meditation largely as a pragmatic, expedient, short-term intervention, comprising specific behavioral steps, intended to achieve clear, observable goals—such as relieving anxiety, pain, and depression. But historically, Goyal and his colleagues remind us, meditation was conceived as a lifelong practice, a hard-won skill within a rich spiritual, ethical, and social framework. It was never intended to be a quick-acting mental ibuprofen/Xanax, but a long-term discipline that increased awareness, resulting in deep insight into the subtleties of existence itself—something that perhaps can never be measured or quantified without access to some wizardry for measuring the “subtle energy body” the Tibetan Buddhists describe. As the authors put it in a laconic understatement, “The translation of these traditions into [Western] research studies remains challenging.”

Even if the research doesn’t—can’t—live up to its popular media hype, the hype keeps on expanding. In an interview with Tricycle magazine, Catherine Kerr, assistant professor of medicine at Brown University, who directs translational neuroscience for Brown’s Contemplative Studies Initiative, points out that the problem is twofold. First, the media cherry-pick newsworthy scientific results and ride roughshod over cautiously worded research findings, typically reducing them to a one-sentence factoid, “a circulating meme that people put up on their Facebook pages and that becomes ‘true’ through repetition alone.”

Kerr knows whereof she speaks: she was herself a coauthor, with Sara Lazar, on the famous 2005 paper “Meditation Experience Is Associated with Increased Cortical Thickness,” which led to countless articles suggesting that you can engineer a complete brain makeover with just a few weeks of mindfulness. As she recalls, “The typical headline in the popular press was ‘Mindfulness Makes Your Brain Grow.’” The problem was that all this took place in the absence of other replicating studies showing genuine evidence for some kind of structural brain change following mindfulness training and without any explanation of what physiologically was causing the apparent change and how it was affecting people’s experience and behavior.

Carefully parsing the results of her own study in the cautious language of science, Kerr will say only, “There are some clues from brain science that meditation might help enhance brain function. That is an evidence-based statement. The mistake is investing 100 percent in a result and not holding a probabilistic view of scientific truth or risk and benefit.”

The second problem is that the science of mindfulness itself isn’t immune to hype and distortion. Researchers need to generate a certain amount of buzz about their early research in order to get grants to carry it on. Observes Kerr, “To get things going, get collaborators, and gain NIH interest, you need to be a little entrepreneurial. . . . Researchers have to strike a tricky balance between expressing genuine enthusiasm and cautioning about limitations.” Scientists, like everybody else with something to sell, increasingly need to advertise, to do the kind of PR that will get them noticed.

And meditating scientists aren’t necessarily less inclined toward bias and overstatement than their non-meditating colleagues—possibly the reverse. “When we first started research on meditation, there was this principle that the scientists should be meditators because they understood it,” says Willoughby Britton, an assistant professor of psychiatry and human behavior at Brown. “But we are all also incredibly biased! Meditation is not just a practice we do, like ‘I like to run.’ It’s an entire worldview and religion. I worry about this kind of bias in meditation research.”

Finally, meditation isn’t without risks of its own. Marketed as a kind of warm bath for the psyche, Britton says, meditation has a shadow side, familiar to experienced meditation teachers but almost never mentioned in the popular media—that is, the not uncommon tendency of some people when they begin practicing in earnest to freak out (lose ego boundaries, hallucinate, relive old wounds and traumas, experience intense fear, and even have psychotic breaks, as well as exhibit strange physical symptoms, like spasms, involuntary movements, hot flashes, burning sensations, and hypersensitivity). These effects are well documented in Buddhist texts as stages along the long, hard path to inner wisdom, but they haven’t been studied in the West and aren’t featured in mindfulness/meditation brochures. Britton is one of the first to begin researching these phenomena seriously, in an undertaking she at first called the Dark Night Project. But since that name wasn’t attractive to funding sources, she renamed it The Varieties of Contemplative Experience Project. Along with the mass enthusiasm for meditation, Britton says, has come “an epidemic of casualties,” which needs to be recognized and incorporated into the promotion and study of these disciplines.

In short, while meditation has been acclaimed and sold as a quick, no-risk, easily mastered technique to achieve just about any conceivable desired goal—health, happiness, freedom from physical or mental pain, relaxation, self-confidence, career success, sexual success, inner peace, world peace!—it’s, in fact, a far deeper, more complex, and less well-understood process than many people realize. For one thing, whatever the measurable effects of meditation on behavior or physiology, the cognitions and feelings it arouses inside are entirely subjective—which makes it inherently unfriendly to the necessarily objective methods of empirical science. But these conscious experiences are as real as the people having them, and, quantifiable or not, they’re as much a part of mindfulness meditation as the mind and brain that produced them. The fact that meditation is fundamentally a matter of consciousness is exactly the problem: what, exactly, consciousness is or why we have it has been called “the hard problem” by many scientists and philosophers precisely because it resists cogent scientific explanation.

All of which doesn’t mean that scientific research into meditation isn’t improving and producing more genuinely valid studies, which will help us better understand what meditation is and how it changes our brains, experience, and behavior. It’s simply that, as both Kerr and Britton argue, a little caution would be advised. After all, we still know precious little about the neurophysiology—and not so much more about the psychology—of love, sex, anger, fear, learning, sleep, feeling, emotion, and thought. How much less do we know about that quieting of the mind, so unusual for Americans, called mindfulness?

People still aren’t entirely clear about what mindfulness is, Britton argues, or what distinguishes the different practices, or “which practices are best or worst suited to which types of people. When is it skillful to stop meditating and do something else? I think this is the most logical direction to follow because nothing is good for everything. Mindfulness is not going to be an exception to that. . . . If we think anything is going to fix everything, we should probably take a moment and meditate on that.”

What Purity?

To the outpouring of complaints by some Buddhist practitioners that secular mindfulness is basically a fraud dressed in bodhisattva clothing, the response of many others is essentially “Chill out, people.” Meditation, these critics of the critics say in effect, is a good discipline that’s helped suffering people all over the world. And if it isn’t always done in a perfect spirit of selfless “right mindfulness,” or doesn’t always produce better, more compassionate, wiser human beings, well, this is Planet Earth, inhabited by the same imperfect human race that lived here 2,500-plus years ago, when Siddhartha Gautama wandered around India preaching the Dharma. To the accusation that Buddhism has lost its purity to the crass ravages of modern corporate America, for example, antipurist critics respond cheerfully, “What purity?”

Jeff Wilson, author of several books on Buddhism in the West, including Mindful America: The Mutual Transformation of Buddhist Meditation and American Culture, has pointed out that there has never been just one Buddhism, but a welter of Buddhist practices, organizations, ways of life, and opinions in a never-very-centralized tradition that’s moved from India to China to Burma, Japan, and finally the West, picking up accretions along the way. Whatever anybody said about Buddhism, somebody else could say the opposite. What you had, Wilson said, was just “a great big mess called ‘Buddhism,’” which adapted itself to an astonishing variety of social and political circumstances everywhere it landed.

As to the question of whether poor, innocent, little Buddhism can withstand the withering pressures of the marketplace, there never was a time when it wasn’t deeply connected to the political and economic realities of the world. “The truth of the matter,” says Wilson, “is that Buddhism has not ever at any point from its very beginning, or at any stage of its evolution, been apart from economic matters.” The ideal of the master sitting alone in his cave or high on a mountain, isolated from the nonspiritual hoi polloi, is essentially a myth. Buddhism has long been deeply embedded in the larger political economy. Monks have often exchanged spiritual goods (chanting to produce merit for a donor or a donor’s family) for economic support by the community.

Furthermore, the benefits people hoped to achieve by supporting the sangha (but not meditating; that was the prerogative of the monks) were often less spiritual and more worldly, practical, personal, and even selfish: success in love and business, good health, relief from pain, protection from evil, safe childbirth, better karma for the next go-around. This makes what most lay Buddhists in historical times wanted from their religion no different from what most people in most eras have always wanted, including people today: protection from disaster and harm, hope for the next life (however conceived), and a sense of peace in the security of knowing that there was some greater meaning to the unpredictable, often frightening, frequently miserable ebb and flow of mortal existence.

In a deeply stressed and stressful society, the basic, general mood du jour seems to be some variable mix of depression, anxiety, fear, rage, self-loathing, loneliness, alienation, yearning, envy, and other items in a much longer catalog. So what are we to make of the purist critics of the mainstreaming of mindfulness practices? In a blog last year, Seth Zuiho Segall, science writer for Mindfulness Research Monthly and editor of the blog “The Existential Buddhist,” asked himself the question that seems addressed to the purist critics’ concerns: “Is mindfulness guilty of making people happier without making them enlightened?” His answer? “You bet. Guilty as charged.” And then he went on, “Down through the ages, most nominal Buddhists have chosen to pursue better karma and rebirth rather than aiming for enlightenment. If mindfulness only results in happier human beings, then . . . so be it. Those of us who choose to pursue awakening and transformation can still do so, happily untroubled by the sight of all those cheerful, mindful people milling about in our vicinity.”

Now if that’s the worst-case scenario for the vast majority of those who take up mindfulness training (with appropriate psychiatric attention to the freaker-outers, of course), wouldn’t most people be more than willing to make a deal? Mindfulness? Bring it on!

 

http://www.alternet.org/personal-health/how-mindfulness-movement-went-mainstream-and-backlash-came-it?akid=12743.265072.UI53-e&rd=1&src=newsletter1031195&t=7

The sharing economy is a lie

Uber, Ayn Rand and the truth about tech and libertarians

Disruptive companies talk a good game about sharing. Uber’s really just an under-regulated company making riches

 


Horror stories about the increasingly unpopular taxi service Uber have been commonplace in recent months, but there is still much to be learned from its handling of the recent hostage drama in downtown Sydney, Australia. We’re told that we reveal our true character in moments of crisis, and apparently that’s as true for companies as it is for individuals.

A number of experts have challenged the idea that the horrific explosion of violence in a Sydney café was “terrorism,” since the attacker was mentally unbalanced and acted alone. But, terror or not, the ordeal was certainly terrifying. Amid the chaos and uncertainty, the city believed itself to be under a coordinated and deadly attack.

Uber had an interesting, if predictable, response to the panic and mayhem: It raised prices. A lot.

In case you missed the story, the facts are these: Someone named Man Haron Monis, who was considered mentally unstable and had been investigated for murdering his ex-wife, seized hostages in a café that was located in Sydney’s Central Business District or “CBD.” In the process he put up an Islamic flag – “igniting,” as Reuters reported, “fears of a jihadist attack in the heart of the country’s biggest city.”

In the midst of the fear, Uber stepped in and tweeted this announcement: “We are all concerned with events in CBD. Fares have increased to encourage more drivers to come online & pick up passengers in the area.”

As Mashable reports, the company announced that it would charge a minimum of $100 Australian to take passengers from the area immediately surrounding the ongoing crisis, and prices increased by as much as four times the standard amount. A firestorm of criticism quickly erupted – “@Uber_Sydney stop being assholes,” one Twitter response began – and Uber soon found itself offering free rides out of the troubled area instead.

What can we learn from this incident? Let’s start by parsing that tweet:

“We are all concerned with events in CBD …”

That opener suggests that Uber, as part of a community under siege, is preparing to respond in a civic manner.

“… Fares have increased to encourage more drivers to come online & pick up passengers in the area.”



But, despite the expression of shared concern, there is no sense of civitas to be found in the statement that follows. There is only a transaction, executed at what the corporation believes to be market value. Lesson #1 about Uber is, therefore, that in its view there is no heroism, only self-interest. This is Ayn Rand’s brutal, irrational and primitive philosophy in its purest form: altruism is evil, and self-interest is the only true heroism.

There was once a time when we might have read of “hero cabdrivers” or “hero bus drivers” placing themselves in harm’s way to rescue their fellow citizens. For its part, Uber might have suggested that it would use its network of drivers and its scheduling software to recruit volunteer drivers for a rescue mission.

Instead, we are told that Uber’s pricing surge was its expression of concern. Uber’s way to address a human crisis is apparently by letting the market govern human behavior, as if there were (in libertarian economist Tyler Cowen’s phrase) “markets in everything” – including the lives of a city’s beleaguered citizens (and its Uber drivers).

Where would this kind of market-driven practice leave poor or middle-income citizens in a time of crisis? If they can’t afford the “surged” price, apparently it would leave them squarely in the line of fire. And come to think of it, why would Uber drivers value their lives so cheaply, unless they’re underpaid?

One of the lessons of Sydney is this: Uber’s philosophy, whether consciously expressed or not, is that life belongs to the highest bidder – and therefore, by implication, the highest bidder’s life has the greatest value. Society, on the other hand, may choose to believe that every life has equal value – or that lifesaving services should be available at affordable prices.

If nothing else, the Sydney experience should prove once and for all that there is no such thing as “the sharing economy.” Uber is a taxi company, albeit an under-regulated one, and nothing more. It’s certainly not a “ride sharing” service, where someone who happens to be going in the same direction is willing to take along an extra passenger and split gas costs. A ride-sharing service wouldn’t find itself “increasing fares to encourage more drivers” to come into Sydney’s terrorized Central Business District.

A “sharing economy,” by definition, is lateral in structure. It is a peer-to-peer economy. But Uber, as its name suggests, is hierarchical in structure. It monitors and controls its drivers, demanding that they purchase services from it while guiding their movements and determining their level of earnings. And its pricing mechanisms impose unpredictable costs on its customers, extracting greater amounts whenever the data suggests customers can be compelled to pay them.

This is a top-down economy, not a “shared” one.

A number of Uber’s fans and supporters defended the company on the grounds that its “surge prices,” including those seen during the Sydney crisis, are determined by an algorithm. But an algorithm can be an ideological statement, and is always a cultural artifact. As human creations, algorithms reflect their creators.
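To see how a value judgment hides inside a formula, consider a deliberately simplified, hypothetical sketch of a demand-responsive fare multiplier. The function, its parameters, and its numbers are invented for illustration; Uber’s actual pricing code is proprietary:

```python
# Hypothetical surge-pricing sketch; nothing here reflects Uber's real,
# proprietary algorithm. The point is that every constant is a human choice.

def surge_multiplier(ride_requests: int, available_drivers: int,
                     cap: float = 4.0) -> float:
    """Scale fares by the demand/supply ratio, clamped between 1.0 and cap."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    return min(max(1.0, ratio), cap)

# During a crisis, requests spike while drivers flee the area:
print(surge_multiplier(ride_requests=400, available_drivers=100))  # 4.0

# Someone chose cap=4.0 rather than cap=1.0 (no surge at all), and someone
# chose to keep the formula running during an emergency. A regulator-imposed
# cap, like the one Uber later promised New York, is just a different human
# setting of the same parameter.
```

Every branch and constant in a function like this is a policy decision made by people, which is the sense in which an algorithm can be an ideological statement.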

Uber’s tweet during the Sydney crisis made it sound as if human intervention, rather than algorithmic processes, caused prices to soar that day. But it doesn’t really matter if that surge was manually or algorithmically driven. Either way the prices were Uber’s doing – and its moral choice.

Uber has been strenuously defending its surge pricing in the wake of accusations (apparently justified) that the company enjoyed windfall profits during Hurricane Sandy. It has now promised the state of New York that it will cap its surge prices (at three times the highest rate on two non-emergency days). But if Uber has its way, it will soon enjoy a monopolistic stranglehold on car service rates in most major markets. And it has demonstrated its willingness to ignore rules and regulations. That means predictable and affordable taxi fares could become a thing of the past.

In practice, surge pricing could become a new, privatized form of taxation on middle-class taxi customers.

Even without surge pricing, Uber and its supporters are hiding its full costs. When middle-class workers are underpaid or deprived of benefits and full working rights, as Uber’s reportedly are, the entire middle-class economy suffers. Overall wages and benefits are suppressed for the majority, while the wealthy few are made even richer. The invisible costs of ventures like Uber are extracted over time, far surpassing whatever short-term savings they may occasionally offer.

Like Walmart, Uber underpays its employees – many of its drivers are employees, in everything but name – and then drains the social safety net to make up the difference. While Uber preaches libertarianism, it practices a form of corporate welfare. It’s reportedly celebrating Obamacare, for example, since the Affordable Care Act allows it to avoid providing health insurance to its workforce. But the ACA’s subsidies, together with Uber’s often woefully insufficient wages, mean that the rest of us are paying its tab instead. And the lack of income security among Uber’s drivers creates another social cost for Americans – in lost tax revenue, and possibly in increased use of social services.

The company’s war on regulation will also carry a social price. Uber and its supporters don’t seem to understand that regulations exist for a reason. It’s true that nobody likes excessive bureaucracy, but not all regulations are excessive or onerous. And when they are, it’s a flaw in execution rather than principle.

Regulations were created because they serve a social purpose, ensuring the free and fair exchange of services and resources among all segments of society. Some services, such as transportation, are of such importance that the public has a vested interest in ensuring they will be readily available at reasonably affordable prices. That’s not unreasonable for taxi services, especially given the fact that they profit from publicly maintained roads and bridges.

Uber has presented itself as a modernized, efficient alternative to government oversight. But it’s an evasion of regulation, not its replacement. As Alexis Madrigal reports, Uber has deliberately ignored city regulators and used customer demand to force its model of inadequate self-governance (my conclusion, not his) onto one city after another.

Uber presented itself as a refreshing alternative to the over-bureaucratized world of urban transportation. But that’s a false choice. We can streamline sclerotic city regulators, upgrade taxi fleets and even provide users with fancy apps that make it easier to call a cab. The company’s binary presentation – us, or City Hall – frames the debate in artificial terms.

Uber claims that its driver rating system is a more efficient way to monitor drivers, but that’s an entirely unproven assumption. While taxi drivers have been known to misbehave, the worldwide litany of complaints against Uber drivers—for everything from dirty cars and spider bites to assault with a hammer, fondling and rape—suggests that Uber’s system may not work as well as old-fashioned regulation. It’s certainly not noticeably superior.

In fact, prosecutors in San Francisco and Los Angeles say Uber has been lying to its customers about the level and quality of its background checks. The company now promises it will do a better job at screening drivers. But it won’t tell us what measures it’s taking to improve its safety record, and it’s fighting the kind of driver scrutiny that taxicab companies have been required to enforce for many decades.

Many reports suggest that beleaguered drivers don’t feel much better about the company than victimized passengers do. They tell horror stories about the company’s hiring and management practices. Uber unilaterally slashes drivers’ rates, while claiming they don’t need to unionize. (The Teamsters disagree.)

The company also pushes sketchy, substandard loans onto its drivers – but hey, what could go wrong?

Uber has many libertarian defenders. And yet, it deceives the press and threatens to spy on journalists, lies to its own employees, keeps its practices a secret and routinely invades the privacy of civilians – sometimes merely for entertainment. (It has a tool, with the Orwellian name the “God View,” that it can use for monitoring customers’ personal movements.)

Aren’t those the kinds of things libertarians say they hate about government?

This isn’t a “gotcha” exercise. It matters. Uber is the poster child for the pro-privatization, anti-regulatory ideology that ascribes magical powers to technology and the private sector. It is a deeply political entity, from its Nietzschean name to its recent hiring of White House veteran David Plouffe. Uber is built around a relatively simple app (which relies on government-created technology), but it’s not really a tech company. Above all else Uber is an ideological campaign, a neoliberal project whose real products are deregulation and the dismantling of the social contract.

Or maybe, as that tweeter in Sydney suggested, they’re just assholes.

Either way, it’s important that Uber’s worldview and business practices not be allowed to “disrupt” our economy or our social fabric. People who work hard deserve to make a decent living. Society at large deserves access to safe and affordable transportation. And government, as the collective expression of a democratic society, has a role to play in protecting its citizens.

And then there’s the matter of our collective psyche. In her book “A Paradise Built in Hell: The Extraordinary Communities that Arise in Disaster,” Rebecca Solnit wrote of the purpose, meaning and deep satisfaction people find when they pull together to help one another in the face of adversity.  But in the world Uber seeks to create, those surges of the spirit would be replaced by surge pricing.

You don’t need a “God view” to see what happens next. When heroism is reduced to a transaction, the soul of a society is sold cheap.

 

http://www.salon.com/2015/02/01/the_sharing_economy_is_a_lie_uber_ayn_rand_and_the_truth_about_tech_and_libertarians/?source=newsletter

WikiLeaks considers legal action over Google’s compliance with US search orders


By Evan Blake
29 January 2015

On Monday, lawyers for WikiLeaks announced at a press conference that they may pursue legal action against Google and the US government following revelations that the Internet company complied with Justice Department demands that it hand over communications and documents of WikiLeaks journalists.

More than two and a half years after complying with the surveillance orders, Google sent notifications to three victims of these unconstitutional searches—WikiLeaks investigations editor Sarah Harrison, organization spokesman Kristinn Hrafnsson and senior editor Joseph Farrell. The company informed WikiLeaks that it had complied fully with “search and seizure” orders to turn over digital data, including all sent, received, draft and deleted emails, IP addresses, photographs, calendars and other personal information.

The government investigation ostensibly relates to claims of espionage, conspiracy to commit espionage, the theft or conversion of property belonging to the United States government, violation of the Computer Fraud and Abuse Act, and conspiracy, charges that together carry up to 45 years in prison. The ongoing investigation into WikiLeaks, first launched in 2010 by the Obama administration, has so far led to the 35-year sentence for Chelsea (Bradley) Manning.

At the press conference, Hrafnsson stated, “I believe this is an attack on me as a journalist. I think this is an attack on journalism. I think this is a very serious issue that should concern all of you in here, and everybody who is working on, especially, sensitive security stories, as we have been doing as a media organization.”

Baltasar Garzon, the Legal Director for Julian Assange’s legal team, told reporters at the event, “We believe the way the documents were taken is illegal.”

On Sunday, prior to the press conference, Michael Ratner, the lead lawyer of the counsel for WikiLeaks and president emeritus at the Center for Constitutional Rights, penned a letter to Eric Schmidt, the executive chairman of Google, stating, “We are astonished and disturbed that Google waited over two and a half years to notify its subscribers that a search warrant was issued for their records.”

Google claims that it withheld this information from the three journalists due to a court-imposed gag order. A Google spokesperson told the Guardian, “Our policy is to tell people about government requests for their data, except in limited cases, like when we are gagged by a court order, which sadly happens quite frequently.”

In his letter, Ratner reminds Schmidt of a conversation he had with Julian Assange on April 19, 2011, in which Schmidt allegedly agreed to recommend that Google’s general counsel contest such a gag order were it to arise.

The letter requests that Google provide the counsel for WikiLeaks with “a list of all materials Google disclosed or provided to law enforcement in response to these search warrants,” as well as all other information relevant to the case, including whether Google challenged the warrants before relinquishing its clients’ data and whether it attempted at any point to have the gag order lifted after receiving the orders on March 22, 2012.

At the Monday press conference, Harrison noted that the government was not “going after specific things they thought could help them. What they were actually doing was blanketly going after a journalist’s personal and private email account, in the hopes that this fishing expedition would get them something to use to attack the organization and our editor-in-chief Julian Assange.”

The case, Harrison said, pointed to the “breakdown of legal processes within the US government, when it comes to dealing with WikiLeaks.”

Harrison assisted Edward Snowden for four months in 2013, shortly after his initial revelations of NSA spying, helping him leave Hong Kong. She is one of Assange’s closest collaborators, which makes her personal email correspondence especially valuable to investigators. Through her and her colleagues’ email accounts and other personal information, the Justice Department is seeking to manufacture a case against Assange.

Assange currently faces trumped-up accusations of sexual assault in Sweden, along with the threat of extradition to the US. He has been forced to take refuge in the Ecuadorian embassy in London for over two and a half years, under round-the-clock guard by British police ready to arrest him if he steps out of the embassy.

In various media accounts, Google has postured as a crusader for democratic rights. A Google attorney, Albert Gidari, told the Washington Post that ever since a parallel 2010 order for the data of WikiLeaks’ volunteer and security researcher Jacob Appelbaum, “Google litigated up and down through the courts trying to get the orders modified so that notice could be given.”

In reality, the company serves as an integral component of, and is heavily invested in, the military-intelligence apparatus. In its 2014 “transparency report,” Google admitted to complying with 66 percent of the 32,000 data requests it received from governments worldwide during the first six months of 2014 alone, including 84 percent of those submitted by the US government, by far the largest requester.

In his book When Google Met WikiLeaks, published in September 2014, Assange detailed the company’s ties to Washington and its wide-ranging influence on geopolitics.

In a statement published by WikiLeaks, the organization noted that “The US government is claiming universal jurisdiction to apply the Espionage Act, general Conspiracy statute and the Computer Fraud and Abuse Act to journalists and publishers—a horrifying precedent for press freedoms around the world. Once an offence is alleged in relation to a journalist or their source, the whole media organisation, by the nature of its work flow, can be targeted as alleged ‘conspiracy.’”

 

http://www.wsws.org/en/articles/2015/01/29/wiki-j29.html

The Killing of America’s Creative Class


A review of Scott Timberg’s fascinating new book, ‘Culture Crash.’

Some of my friends became artists, writers, and musicians to rebel against their practical parents. I went into a creative field with encouragement from my folks. It’s not too rare for Millennials to have their bohemian dreams blessed by their parents, because, as progeny of the Boomers, we were mentored by aging rebels who idolized rogue poets, iconoclast cartoonists, and scrappy musicians.

The problem, warns Scott Timberg in his new book Culture Crash: The Killing of the Creative Class, is that if parents are basing their advice on how the economy used to support creativity – record deals for musicians, book contracts for writers, staff positions for journalists – then they might be surprised when their YouTube-famous daughter still needs help paying off her student loans. A mix of economic, cultural, and technological changes emanating from a neoliberal agenda, writes Timberg, “have undermined the way that culture has been produced for the past two centuries, crippling the economic prospects of not only artists but also the many people who supported and spread their work, and nothing yet has taken its place.”

 

Tech vs. the Creative Class

Timberg isn’t the first to notice. The supposed economic recovery that followed the recession of 2008 did nothing to repair the damage that had been done to the middle class. Only a wealthy few bounced back, and bounced higher than ever before, many of them the elites of Silicon Valley who found a way to harvest much of the wealth generated by new technologies. In Culture Crash, however, Timberg frames the struggle of the working artist to make a living on his talents.

Besides the overall stagnation of the economy, Timberg shows how information technology has destabilized the creative class and deprofessionalized their labor, leading to an oligopoly of the mega corporations Apple, Google, and Facebook, where success is measured (and often paid) in webpage hits.

What Timberg glosses over is that if this new system is an oligopoly of tech companies, then what it replaced – or is still in the process of replacing – was a feudal system of newspapers, publishing houses, record labels, operas, and art galleries. The book is full of enough discouraging data and painful portraits of artists, though, to make this point moot. Things are definitely getting worse.

Why should these worldly worries make the Muse stutter when she is expected to sing from outside of history and without health insurance? Timberg proposes that if we are to save the “creative class” – the often young, often middle-class sector of society that generates cultural content – we need to shake this old myth. The Muse can inspire but not sustain. Members of the creative class, argues Timberg, depend not just on that original inspiration, but on an infrastructure that moves creations into the larger culture and somehow provides material support for those who make, distribute, and assess them. Today, that indispensable infrastructure is at risk…

Artists may never entirely disappear, but they are certainly vulnerable to the economic and cultural zeitgeist. Remember the Dark Ages? Timberg does, and drapes this shroud over every chapter. It comes off as alarmist at times. Culture is obviously no longer smothered by an authoritarian Catholic church.

 

Art as the Province of the Young and Independently Wealthy

But Timberg suggests that contemporary artists have signed away their rights in a new contract with the market. Cultural producers, no matter how important their output is to the rest of us, are expected to exhaust themselves without compensation because their work is, by definition, worthless until it’s profitable. Art is an act of passion – why not produce it for free, never mind that Apple, Google, and Facebook have the right to generate revenue from your production? “According to this way of thinking,” wrote Miya Tokumitsu, describing the do-what-you-love mantra that rode out of Silicon Valley on the back of TED Talks, “labor is not something one does for compensation, but an act of self-love. If profit doesn’t happen to follow, it is because the worker’s passion and determination were insufficient.”

The fact is, when creativity becomes financially unsustainable, less is created, and that which does emerge is the product of trust-fund kids in their spare time. “If working in culture becomes something only for the wealthy, or those supported by corporate patronage, we lose the independent perspective that artistry is necessarily built on,” writes Timberg.

It would seem to be a position with many proponents except that artists have few loyal advocates on either side of the political spectrum. “A working artist is seen neither as the salt of the earth by the left, nor as a ‘job creator’ by the right – but as a kind of self-indulgent parasite by both sides,” writes Timberg.

That’s with respect to unsuccessful artists – in other words, the creative class’s 99 percent. But, as Timberg laments, “everyone loves a winner.” In their own way, both conservatives and liberals have stumbled into Voltaire’s Candide, accepting that all is for the best in the best of all possible worlds. If artists cannot make money, it’s because they are either untalented or esoteric elitists. It is the giants of pop music who are taking all the spoils, both financially and morally, in this new climate.

Timberg blames this winner-take-all attitude on the postmodernists who, beginning in the 1960s with film critic Pauline Kael, dismantled the idea that creative genius must be rescued from underneath the boots of mass appeal and replaced it with the concept of genius-as-mass-appeal. “Instead of coverage of, say, the lost recordings of pioneering bebop guitarist Charlie Christian,” writes Timberg, “we read pieces ‘in defense’ of blockbuster acts like the Eagles (the bestselling rock band in history), Billy Joel, Rush – groups whose songs…it was once impossible to get away from.”

Timberg doesn’t give enough weight to the fact that the same rebellion at the university liberated an enormous swath of art, literature, and music from the shadow of an exclusive (which is not to say unworthy) canon made up mostly of white men. In fact, many postmodernists have taken it upon themselves to look neither to the pop charts nor the Western canon for genius but, with the help of the Internet, to the broad creative class that Timberg wants to defend.

 

Creating in the Age of Poptimism

This doesn’t mean that today’s discovered geniuses can pay their bills, though, and Timberg is right to be shocked that, for the first time in history, pop culture is untouchable, off-limits to critics and laypeople alike, whether on grounds of taste or principle. If you can’t stand pop music because of the hackneyed rhythms and indiscernible voices, you’ve failed to appreciate the wonders of crowdsourced culture – the same mystery that propels the market.

Sadly, Timberg puts himself in checkmate early on by repeatedly pitting black mega-stars like Kanye West against white indie-rockers like the Decemberists, whose ascent to the pop charts he characterizes as a rare triumph of mass taste.

But beyond his anti-hip-hop bias is an important argument: With ideological immunity, the pop charts are mimicking the stratification of our society. Under the guise of a popular carnival where a home-made YouTube video can bring a talented nobody the absurd fame of a celebrity, creative industries have nevertheless become more monotonous and inaccessible to new and disparate voices. In 1986, thirty-one chart-toppers came from twenty-nine different artists. Between 2008 and mid-2012, half of the number-one songs were property of only six stars. “Of course, it’s never been easy to land a hit record,” writes Timberg. “But recession-era rock has brought rewards to a smaller fraction of the artists than it did previously. Call it the music industry’s one percent.”

The same thing is happening with the written word. In the first decade of the new millennium, points out Timberg, citing Wired magazine, the market share of page views for the Internet’s top ten websites rose from 31 percent to 75 percent.

Timberg doesn’t mention that none of the six artists dominating the pop charts for those four years was a white man, but maybe that’s beside the point. In Borges’s “The Lottery in Babylon,” every citizen has the chance to be a sovereign. That doesn’t mean they were living in a democracy. Superstars are coming up from poverty, without the help of white male privilege, like never before, at the same time that poverty – for artists and for everyone else – is getting worse.

Essayists are often guilted into proposing solutions to the problems they perceive, but in many cases they should have left it alone. Timberg wisely avoids laying out a ten-point plan to clean up the mess, but even his initial thrust toward justice – identifying the roots of the crisis – is a pastiche of sometimes contradictory liberal biases that looks to the past for temporary fixes.

Timberg puts the kibosh on corporate patronage of the arts, but pines for the days of newspapers run by wealthy families. When information technology is his target – because it forces artists to distribute their work for free, removes the record-store and bookstore clerks from the scene, and feeds consumer dollars to only a few Silicon Valley tsars – Timberg’s answer is to retrace our steps twenty years to the days of big record companies and Borders bookstores, since that model was slightly more compensatory to the creative class.

When his target is postmodern intellectuals who slander “middle-brow” culture as elitist, only to expend their breath in defense of super-rich pop stars, Timberg retreats fifty years to when intellectuals like Marshall McLuhan and Norman Mailer debated on network television and the word “philharmonic” excited the uncultured with awe rather than tickling them with anti-elitist mockery. Maybe television back then was more tolerable, but Timberg hardly even tries to sound uplifting. “At some point, someone will come up with a conception better than middlebrow,” he writes. “But until then, it beats the alternatives.”

 

The Fallacy of the Good Old Days

Timberg’s biggest mistake is that he tries to find a point in history when things were better for artists and then reroute us back there for fear of continued decline. What this translates to is a program of bipartisan moderation – a little bit more public funding here, a little more philanthropy there. Something everyone can agree on, but no one would ever get excited about.

Why not boldly state that a society is dysfunctional if there is enough food, shelter, and clothing to go around and yet an individual is forced to sacrifice these things in order to produce, out of humanistic virtue, the very thing which society has never demanded more of – culture? And if skeptics ask for a solution, why not suggest something big, a reorganization of society, from top to bottom, not just a vintage flotation device for the middle class? Rather than blame technological innovation for the poverty of artists, why not point the finger at those who own the technology and call for a system whereby efficiency doesn’t put people out of work, but allows them to work fewer hours for the same salary; whereby information is free not because an unpaid intern wrote content in a race for employment, but because we collectively pick up the tab?

This might not satisfy the TED Talk connoisseur’s taste for a clever and apolitical fix, but it definitely trumps championing a middle ground littered with the casualties of cronyism, colonialism, racism, patriarchy, and all their siblings. And change must come soon because, if Timberg is right, “the price we ultimately pay” for allowing our creative class to remain on its crash course “is in the decline of art itself, diminishing understanding of ourselves, one another, and the eternal human spirit.”

 

http://www.alternet.org/news-amp-politics/killing-americas-creative-class?akid=12719.265072.45wrwl&rd=1&src=newsletter1030855&t=9

How the CIA made Google


Inside the secret network behind mass surveillance, endless war, and Skynet—part 1

By Nafeez Ahmed

INSURGE INTELLIGENCE, a new crowd-funded investigative journalism project, breaks the exclusive story of how the United States intelligence community funded, nurtured and incubated Google as part of a drive to dominate the world through control of information. Seed-funded by the NSA and CIA, Google was merely the first among a plethora of private sector start-ups co-opted by US intelligence to retain ‘information superiority.’

The origins of this ingenious strategy trace back to a secret Pentagon-sponsored group that, for the last two decades, has functioned as a bridge between the US government and elites across the business, industry, finance, corporate, and media sectors. The group has allowed some of the most powerful special interests in corporate America to systematically circumvent democratic accountability and the rule of law to influence government policies, as well as public opinion in the US and around the world. The results have been catastrophic: NSA mass surveillance, a permanent state of global war, and a new initiative to transform the US military into Skynet.



This exclusive is being released for free in the public interest, and was enabled by crowdfunding. I’d like to thank my amazing community of patrons for their support, which gave me the opportunity to work on this in-depth investigation. Please support independent, investigative journalism for the global commons.


In the wake of the Charlie Hebdo attacks in Paris, western governments are moving fast to legitimize expanded powers of mass surveillance and controls on the internet, all in the name of fighting terrorism.

US and European politicians have called for the protection of NSA-style snooping and for expanded powers to intrude on internet privacy by outlawing encryption. One idea is to establish a telecoms partnership that would unilaterally delete content deemed to “fuel hatred and violence” in situations considered “appropriate.” Heated discussions are going on at the government and parliamentary levels about cracking down on lawyer-client confidentiality.

What any of this would have done to prevent the Charlie Hebdo attacks remains a mystery, especially given that we already know the terrorists were on the radar of French intelligence for up to a decade.

There is little new in this story. The 9/11 atrocity was the first of many terrorist attacks, each followed by the dramatic extension of draconian state powers at the expense of civil liberties, backed up by the projection of military force in regions identified as hotspots harbouring terrorists. Yet there is little indication that this tried-and-tested formula has done anything to reduce the danger. If anything, we appear to be locked into a deepening cycle of violence with no clear end in sight.

As our governments push to increase their powers, INSURGE INTELLIGENCE can now reveal the vast extent to which the US intelligence community is implicated in nurturing the web platforms we know today, for the precise purpose of utilizing the technology as a mechanism to fight global ‘information war’ — a war to legitimize the power of the few over the rest of us. The lynchpin of this story is the corporation that in many ways defines the 21st century with its unobtrusive omnipresence: Google.

Google styles itself as a friendly, funky, user-friendly tech firm that rose to prominence through a combination of skill, luck, and genuine innovation. This is true. But it is a mere fragment of the story. In reality, Google is a smokescreen behind which lurks the US military-industrial complex.

The inside story of Google’s rise, revealed here for the first time, opens a can of worms that goes far beyond Google, unexpectedly shining a light on the existence of a parasitical network driving the evolution of the US national security apparatus, and profiting obscenely from its operation.

The shadow network

For the last two decades, US foreign and intelligence strategies have resulted in a global ‘war on terror’ consisting of prolonged military invasions in the Muslim world and comprehensive surveillance of civilian populations. These strategies have been incubated, if not dictated, by a secret network inside and beyond the Pentagon.

Established under the Clinton administration, consolidated under Bush, and firmly entrenched under Obama, this bipartisan network of mostly neoconservative ideologues sealed its dominion inside the US Department of Defense (DoD) by the dawn of 2015, through the operation of an obscure corporate entity outside the Pentagon, but run by the Pentagon.

In 1999, the CIA created its own venture capital investment firm, In-Q-Tel, to fund promising start-ups that might create technologies useful for intelligence agencies. But the inspiration for In-Q-Tel came earlier, when the Pentagon set up its own private sector outfit.

Known as the ‘Highlands Forum,’ this private network has operated as a bridge between the Pentagon and powerful American elites outside the military since the mid-1990s. Despite changes in civilian administrations, the network around the Highlands Forum has become increasingly successful in dominating US defense policy.

Giant defense contractors like Booz Allen Hamilton and Science Applications International Corporation are sometimes referred to as the ‘shadow intelligence community’ due to the revolving doors between them and government, and their capacity to simultaneously influence and profit from defense policy. But while these contractors compete for power and money, they also collaborate where it counts. The Highlands Forum has for 20 years provided an off-the-record space for some of the most prominent members of the shadow intelligence community to convene with senior US government officials, alongside other leaders in relevant industries.

I first stumbled upon the existence of this network in November 2014, when I reported for VICE’s Motherboard that US defense secretary Chuck Hagel’s newly announced ‘Defense Innovation Initiative’ was really about building Skynet — or something like it, essentially to dominate an emerging era of automated robotic warfare.

That story was based on a little-known Pentagon-funded ‘white paper’ published two months earlier by the National Defense University (NDU) in Washington DC, a leading US military-run institution that, among other things, generates research to develop US defense policy at the highest levels. The white paper clarified the thinking behind the new initiative, and the revolutionary scientific and technological developments it hoped to capitalize on.

The Highlands Forum

The co-author of that NDU white paper is Linton Wells, a 51-year veteran US defense official who served in the Bush administration as the Pentagon’s chief information officer, overseeing the National Security Agency (NSA) and other spy agencies. He still holds active top-secret security clearances, and according to a 2006 report by Government Executive magazine, he chaired the ‘Highlands Forum,’ founded by the Pentagon in 1994.

Linton Wells II (right) former Pentagon chief information officer and assistant secretary of defense for networks, at a recent Pentagon Highlands Forum session. Rosemary Wenchel, a senior official in the US Department of Homeland Security, is sitting next to him

New Scientist magazine (paywall) has compared the Highlands Forum to elite meetings like “Davos, Ditchley and Aspen,” describing it as “far less well known, yet… arguably just as influential a talking shop.” Regular Forum meetings bring together “innovative people to consider interactions between policy and technology. Its biggest successes have been in the development of high-tech network-based warfare.”

Given Wells’ role in such a Forum, perhaps it was not surprising that his defense transformation white paper was able to have such a profound impact on actual Pentagon policy. But if that was the case, why had no one noticed?

Despite the Pentagon’s sponsorship, I could find no official page on the DoD website about the Forum. Active and former US military and intelligence sources had never heard of it, and neither had national security journalists. I was baffled.

The Pentagon’s intellectual capital venture firm

In the prologue to his 2007 book, A Crowd of One: The Future of Individual Identity, John Clippinger, an MIT scientist in the Media Lab’s Human Dynamics Group, described how he participated in a “Highlands Forum” gathering, an “invitation-only meeting funded by the Department of Defense and chaired by the assistant for networks and information integration.” This was a senior DoD post overseeing operations and policies for the Pentagon’s most powerful spy agencies, including the NSA and the Defense Intelligence Agency (DIA). Starting in 2003, the position transitioned into what is now the undersecretary of defense for intelligence. The Highlands Forum, Clippinger wrote, was founded by a retired US Navy captain named Dick O’Neill. Delegates include senior US military officials across numerous agencies and divisions — “captains, rear admirals, generals, colonels, majors and commanders” as well as “members of the DoD leadership.”

What at first appeared to be the Forum’s main website describes Highlands as “an informal cross-disciplinary network sponsored by Federal Government,” focusing on “information, science and technology.” Explanation is sparse, beyond a single ‘Department of Defense’ logo.

But Highlands also has another website describing itself as an “intellectual capital venture firm” with “extensive experience assisting corporations, organizations, and government leaders.” The firm provides a “wide range of services, including: strategic planning, scenario creation and gaming for expanding global markets,” as well as “working with clients to build strategies for execution.” ‘The Highlands Group Inc.,’ the website says, organizes a whole range of Forums on these issues.

For instance, in addition to the Highlands Forum, since 9/11 the Group runs the ‘Island Forum,’ an international event held in association with Singapore’s Ministry of Defense, which O’Neill oversees as “lead consultant.” The Singapore Ministry of Defense website describes the Island Forum as “patterned after the Highlands Forum organized for the US Department of Defense.” Documents leaked by NSA whistleblower Edward Snowden confirmed that Singapore played a key role in permitting the US and Australia to tap undersea cables to spy on Asian powers like Indonesia and Malaysia.

The Highlands Group website also reveals that Highlands is partnered with one of the most powerful defense contractors in the United States. Highlands is “supported by a network of companies and independent researchers,” including “our Highlands Forum partners for the past ten years at SAIC; and the vast Highlands network of participants in the Highlands Forum.”

SAIC stands for Science Applications International Corporation, a US defense firm that changed its name to Leidos in 2013 and now operates SAIC as a subsidiary. SAIC/Leidos is among the top 10 largest defense contractors in the US, and works closely with the US intelligence community, especially the NSA. According to investigative journalist Tim Shorrock, the first to disclose the vast extent of the privatization of US intelligence with his seminal book Spies for Hire, SAIC has a “symbiotic relationship with the NSA: the agency is the company’s largest single customer and SAIC is the NSA’s largest contractor.”

CONTINUED:  https://medium.com/@NafeezAhmed/how-the-cia-made-google-e836451a959e

What Makes You You?

When you say the word “me,” you probably feel pretty clear about what that means. It’s one of the things you’re clearest on in the whole world—something you’ve understood since you were a year old. You might be working on the question, “Who am I?” but what you’re figuring out is the who am part of the question—the me part is obvious. It’s just you. Easy.

But when you stop and actually think about it for a minute—about what “me” really boils down to at its core—things start to get pretty weird. Let’s give it a try.

The Body Theory

We’ll start with the first thing most people equate with what a person is—the physical body itself. The Body Theory says that that’s what makes you you. And that would make sense. It doesn’t matter what’s happening in your life—if your body stops working, you die. If Mark goes through something traumatic and his family says, “It really changed him—he’s just not the same person anymore,” they don’t literally mean Mark isn’t the same person—he’s changed, but he’s still Mark, because Mark’s body is Mark, no matter what he’s acting like. Humans believe they’re so much more than a hunk of flesh and bone, but in the end, an ant’s body is the ant, a squirrel’s body is the squirrel, and a human is its body. This is the Body Theory—let’s test it:

So what happens when you cut your fingernails? You’re changing your body, severing some of its atoms from the whole. Does that mean you’re not you anymore? Definitely not—you’re still you.

How about if you get a liver transplant? Bigger deal, but definitely still you, right?

What if you get a terrible disease and need to replace your liver, kidney, heart, lungs, blood, and facial tissue with synthetic parts, but after all the surgery, you’re fine and can live your life normally? Would your family say that you had died, because most of your physical body was gone? No, they wouldn’t. You’d still be you. None of that is needed for you to be you.

Well maybe it’s your DNA? Maybe that’s the core thing that makes you you, and none of these organ transplants matter because your remaining cells all still contain your DNA, and they’re what maintains “you.” One major problem—identical twins have identical DNA, and they’re not the same person. You are you, and your identical twin is most certainly not you. DNA isn’t the answer.

So far, the Body Theory isn’t looking too good. We keep changing major parts of the body, and you keep being you.

But how about your brain?

The Brain Theory

Let’s say a mad scientist captures both you and Bill Clinton and locks the two of you up in a room.


The scientist then performs an operation on both of you, whereby he safely removes each of your brains and switches them into the other’s head. Then he seals up your skulls and wakes you both up. You look down and you’re in a totally different body—Bill Clinton’s body. And across the room, you see your body—with Bill Clinton’s personality.


Now, are you still you? Well, my intuition says that you’re you—you still have your exact personality and all your memories—you’re just in Bill Clinton’s body now. You’d go find your family to explain what happened:


So unlike your other organs, which could be transplanted without changing your identity, when you swapped brains, it wasn’t a brain transplant—it was a body transplant. You’d still feel like you, just with a different body. Meanwhile, your old body would not be you—it would be Bill Clinton. So what makes you you must be your brain. The Brain Theory says that wherever the brain goes, you go—even if it goes into someone else’s skull.

The Data Theory

Consider this—

What if the mad scientist, after capturing you and Bill Clinton, instead of swapping your physical brains, just hooks up a computer to each of your brains, copies every single bit of data in each one, then wipes both of your brains completely clean, and then copies each of your brain data onto the other person’s physical brain? So you both wake up, both with your own physical brains in your head, but you’re not in your body—you’re in Bill Clinton’s body. After all, Bill Clinton’s brain now has all of your thoughts, memories, fears, hopes, dreams, emotions, and personality. The body and brain of Bill Clinton would still run out and go freak out about this to your family. And again, after a significant amount of convincing, they would indeed accept that you were alive, just in Bill Clinton’s body.

Philosopher John Locke’s memory theory of personal identity suggests that what makes you you is your memory of your experiences. Under Locke’s definition of you, the new Bill Clinton in this latest example is you, despite not containing any part of your physical body, not even your brain. 

This suggests a new theory we’ll call The Data Theory, which says that you’re not your physical body at all. Maybe what makes you you is your brain’s data—your memories and your personality.

We seem to be homing in on something, but the best way to get to concrete answers is by testing these theories in hypothetical scenarios. Here’s an interesting one, conceived by British philosopher Bernard Williams:

The Torture Test

Situation 1: The mad scientist kidnaps you and Clinton, switches your brain data with Clinton’s, as in the latest example, wakes you both up, and then walks over to the body of Clinton, where you supposedly reside, and says, “I’m now going to horribly torture one of you—which one should I torture?”

What’s your instinct? Mine is to point at my old body, where I no longer reside, and say, “Him.” And if I believe in the Data Theory, then I’ve made a good choice. My brain data is in Clinton’s body, so I’m now in Clinton’s body, so who cares about my body anymore? Sure, it sucks for anyone to be tortured, but if it’s between me and Bill Clinton, I’m choosing him.

Situation 2: The mad scientist captures you and Clinton, except he doesn’t do anything to your brains yet. He comes over to you—normal you with your normal brain and body—and asks you a series of questions. Here’s how I think it would play out:

Mad Scientist: Okay so here’s what’s happening. I’m gonna torture one of you. Who should I torture?

You: [pointing at Clinton] Him.

MS: Okay but there’s something else—before I torture whoever I torture, I’m going to wipe both of your brains of all memories, so when the torture is happening, neither of you will remember who you were before this. Does that change your choice?

You: Nope. Torture him.

MS: One more thing—before the torture happens, not only am I going to wipe your brains clean, I’m going to build new circuitry into your brain that will convince you that you’re Bill Clinton. By the time I’m done, you’ll think you’re Bill Clinton and you’ll have all of his memories and his full personality and anything else that he thinks or feels or knows. I’ll do the same thing to him, convincing him he’s you. Does that change your choice?

You: Um, no. Regardless of any delusion I’m going through and no matter who I think I am, I don’t want to go through the horrible pain of being tortured. Insane people still feel pain. Torture him.

So in the first situation, I think you’d choose to have your own body tortured. But in the second, I think you’d choose Bill Clinton’s body—at least I would. But the thing is—they’re the exact same example. In both cases, before any torture happens, Clinton’s brain ends up with all of your data and your brain has his—the difference is just at which point in the process you were asked to decide. In both cases, your goal is for you to not be tortured, but in the first situation, you felt that after the brain data swap, you were in Clinton’s body, with all of your personality and memories there with you—while in the second situation, if you’re like me, you didn’t care what was going to happen with the two brains’ data, you believed that you would remain with your physical brain, and body, either way.

Choosing your body to be the one tortured in the first situation is an argument for the Data Theory—you believe that where your data goes, you go. Choosing Clinton’s body to be tortured in the second situation is an argument for the Brain Theory, because you believe that regardless of what he does with your brain’s data, you will continue to be in your own body, because that’s where your physical brain is. Some might even take it a step further: if the mad scientist told you he was even going to switch your physical brains, you’d still choose Clinton’s body, with your brain in it, to be tortured. Those who would torture a body with their own brain in it over torturing their own body believe in the Body Theory.

Not sure about you, but I’m finishing this experiment still divided. Let’s try another. Here’s my version of modern philosopher Derek Parfit’s teletransporter thought experiment, which he first described in his book Reasons and Persons:

The Teletransporter Thought Experiment

It’s the year 2700. The human race has invented all kinds of technology unimaginable in today’s world. One of these technologies is teleportation—the ability to transport yourself to distant places at the speed of light. Here’s how it works—

You go into a Departure Chamber—a little room the size of a small cubicle.


You set your location—let’s say you’re in Boston and your destination is London—and when you’re ready to go, you press the button on the wall. The chamber walls then scan your entire body, uploading the exact molecular makeup of your body—every atom that makes up every part of you and its precise location—and as it scans, it destroys, so every cell in your body is destroyed by the scanner as it goes.


When it’s finished (the Departure Chamber is now empty after destroying all of your cells), it beams your body’s information to an Arrival Chamber in London, which has all the necessary atoms waiting there ready to go. The Arrival Chamber uses the data to re-form your entire body with its storage of atoms, and when it’s finished you walk out of the chamber in London looking and feeling exactly how you did back in Boston—you’re in the same mood, you’re hungry just like you were before, you even have the same paper cut on your thumb you got that morning.

The whole process, from the time you hit the button in the Departure Chamber to when you walk out of the Arrival Chamber in London, takes five minutes—but to you it feels instantaneous. You hit the button, things go black for a blink, and now you’re standing in London. Cool, right?

In 2700, this is common technology. Everyone you know travels by teleportation. In addition to the convenience of speed, it’s incredibly safe—no one has ever gotten hurt doing it.

But then one day, you head into the Departure Chamber in Boston for your normal morning commute to your job in London, you press the big button on the wall, and you hear the scanner turn on, but it doesn’t work.


The normal split-second blackout never happens, and when you walk out of the chamber, sure enough, you’re still in Boston. You head to the check-in counter and tell the woman working there that the Departure Chamber is broken, and you ask her if there’s another one you can use, since you have an early meeting and don’t want to be late.

She looks down at her records and says, “Hm—it looks like the scanner worked and collected its data just fine, but the cell destroyer that usually works in conjunction with the scanner has malfunctioned.”

“No,” you explain, “it couldn’t have worked, because I’m still here. And I’m late for this meeting—can you please set me up with a new Departure Chamber?”

She pulls up a video screen and says, “No, it did work—see? There you are in London—it looks like you’re gonna be right on time for your meeting.” She shows you the screen, and you see yourself walking on the street in London.

“But that can’t be me,” you say, “because I’m still here.”

At that point, her supervisor comes into the room and explains that she’s correct—the scanner worked as normal and you’re in London as planned. The only thing that didn’t work was the cell destroyer in the Departure Chamber here in Boston. “It’s not a problem, though,” he tells you, “we can just set you up in another chamber and activate its cell destroyer and finish the job.”

And even though this isn’t anything that wasn’t going to happen before—in fact, you have your cells destroyed twice every day—suddenly, you’re horrified at the prospect.

“Wait—no—I don’t want to do that—I’ll die.”

The supervisor explains, “You won’t die sir. You just saw yourself in London—you’re alive and well.”

“But that’s not me. That’s a replica of me—an imposter. I’m the real me—you can’t destroy my cells!”

The supervisor and the woman glance awkwardly at each other. “I’m really sorry sir—but we’re obligated by law to destroy your cells. We’re not allowed to form the body of a person in an Arrival Chamber without destroying the body’s cells in a Departure Chamber.”

You stare at them in disbelief and then run for the door. Two security guards come out and grab you. They drag you toward a chamber that will destroy your cells, as you kick and scream…

__________

If you’re like me, in the first part of that story, you were pretty into the idea of teletransportation, and by the end, you were not.

The question the story poses is, “Is teletransportation, as described in this experiment, a form of traveling? Or a form of dying?”

This question might have been ambiguous when I first described it—it might have even felt like a perfectly safe way of traveling—but by the end, it felt much more like a form of dying. Which means that every day when you commute to work from Boston to London, you’re killed by the cell destroyer, and a replica of you is created. To the people who know you, you survive teletransportation just fine, the same way your wife seems just fine when she arrives home to you after her own teletransportation, talking about her day and discussing plans for next week. But is it possible that your wife was actually killed that day, and the person you’re kissing now was just created a few minutes ago?

Well again, it depends on what you are. Someone who believes in the Data Theory would posit that London you is you as much as Boston you, and that teletransportation is perfectly survivable. But we all related to Boston you’s terror at the end there—could anyone really believe that he should be fine with being obliterated just because his data is safe and alive over in London? Further, if the teletransporter could beam your data to London for reassembly, couldn’t it also beam it to 50 other cities and create 50 new versions of you? You’d be hard-pressed to argue that those were all you. To me, the teletransporter experiment is a big strike against the Data Theory.

Similarly, if there were an Ego Theory that suggests that you are simply your ego, the teletransporter does away nicely with that. Thinking about London Tim, I realize that “Tim Urban” surviving means nothing to me. The fact that my replica in London will stay friends with my friends, keep Wait But Why going with his Tuesday-ish posts, and live out the whole life I was planning for myself—the fact that no one will miss me or even realize that I’m dead, the same way in the story you never felt like you lost your wife—does almost nothing for me. I don’t care about Tim Urban surviving. I care about me surviving.

All of this seems like very good news for Body Theory and Brain Theory. But let’s not judge things yet. Here’s another experiment:

The Split Brain Experiment

A cool fact about the human brain is that the left and right hemispheres function as their own little worlds, each with their own things to worry about, but if you remove one half of someone’s brain, they can sometimes not only survive, but their remaining brain half can learn to do many of the other half’s previous jobs, allowing the person to live a normal life. That’s right—you could lose half of your brain and potentially function normally.

So say you have an identical twin sibling named Bob who develops a fatal brain defect. You decide to save him by giving him half of your brain. Doctors operate on both of you, discarding his brain and replacing it with half of yours. When you wake up, you feel normal and like yourself. Your twin (who already has your identical DNA because you’re twins) wakes up with your exact personality and memories.


When you realize this, you panic for a minute that your twin now knows all of your innermost thoughts and feelings on absolutely everything, and you’re about to make him promise not to tell anyone, when it hits you that you of course don’t have to tell him. He’s not your twin—he’s you. He’s just as intent on your privacy as you are, because it’s his privacy too.

As you look over at the guy who used to be Bob and watch him freak out that he’s in Bob’s body now instead of his own, you wonder, “Why did I stay in my body and not wake up in Bob’s? Both brain halves are me, so why am I distinctly in my body and not seeing and thinking in dual split-screen right now, from both of our points of view? And whatever part of me is in Bob’s head, why did I lose touch with it? Who is the me in Bob’s head, and how did he end up over there while I stayed here?”

Brain Theory is shitting his pants right now—it makes no sense. If people are supposed to go wherever their brains go, what happens when a brain is in two places at once? Data Theory, who was badly embarrassed by the teletransporter experiment, is doing no better in this one.

But Body Theory—who was shot down at the very beginning of the post—is suddenly all smug and thrilled with himself. Body Theory says “Of course you woke up in your own body—your body is what makes you you. Your brain is just the tool your body uses to think. Bob isn’t you—he’s Bob. He’s just now a Bob who has your thoughts and personality. There’s nothing Bob’s body can ever do to not be Bob.” This would help explain why you stayed in your body.

So a nice boost for Body Theory, but let’s take a look at a couple more things—

What we learned in the teletransporter experiment is that if your brain data is transferred to someone else’s brain, even if that person is molecularly identical to you, all it does is create a replica of you—a total stranger who happens to be just like you. There’s something distinct about Boston you that was important. When you were recreated out of different atoms in London, something critical was lost—something that made you you.

Body Theory (and Brain Theory) would point out that the only difference between Boston you and London you was that London you was made out of different atoms. London you’s body was like your body, but it was still made of different material. So is that it? Could Body Theory explain this too?

Let’s put it through two tests:

The Cell Replacement Test

Imagine I replace a cell in your arm with an identical, but foreign, replica cell. Are you not you anymore? Of course you are. But how about if, one at a time, I replace 1% of your cells with replicas? How about 10%? 30%? 60%? The London you was composed of 100% replacement cells, and we decided that that was not you—so when does the “crossover” happen? How many of your cells do we need to swap out for replicas before you “die” and what’s remaining becomes your replica?

Something feels off with this, right? Considering that the cells we’re replacing are molecularly identical to those we’re removing, and someone watching this all happen wouldn’t even notice anything change about you, it seems implausible that you’d ever die during this process, even if we eventually replaced 100% of your cells with replicas. But if your cells are eventually all replicas, how are you any different from London you?

The Body Scattering Test 

Imagine going into an Atom Scattering Chamber that completely disassembles your body’s atoms so that all that’s left in the room is a light gas of floating atoms—and then a few minutes later, it perfectly reassembles the atoms into you, and you walk out feeling totally normal.


Is that still you? Or did you die when you were disassembled and what has been reassembled is a replica of you? It doesn’t really make sense that this reassembled you would be the real you and London you would be a replica, when the only difference between the two cases is that the scattering room preserves your exact atoms and the London chamber assembles you out of different atoms. At their most basic level, atoms are identical—a hydrogen atom from your body is identical in every way to a hydrogen atom in London. Given that, I’d say that if we’re deciding London you is not you, then reassembled you is probably not you either.

The first thing these two tests illustrate is that the key distinction between Boston you and London you isn’t about the presence or absence of your actual, physical cells. The Cell Replacement Test suggests that you can gradually replace much or all of your body with replica material and still be you, and the Body Scattering Test suggests that you can go through a scatter and a reassembly, even with all of your original physical material, and be no more you than the you in London. Not looking great for Body Theory anymore.

The second thing these tests reveal is that the difference between Boston and London you might not be the nature of the particular atoms or cells involved, but about continuity. The Cell Replacement Test might have left you intact because it changed you gradually, one cell at a time. And if the Body Scattering Test were the end of you, maybe it’s because it happened all at the same time, breaking the continuity of you. This could also explain why the teletransporter might be a murder machine—London you has no continuity with your previous life.

So could it be that we’ve been off the whole time pitting the brain, the body, and the personality and memories against each other? Could it be that anytime you relocate your brain, or disassemble your atoms all at once, transfer your brain data onto a new brain, etc., you lose you because maybe, you’re not defined by any of these things on their own, but rather by a long and unbroken string of continuous existence?

Continuity

A few years ago, my late grandfather, in his 90s and suffering from dementia, pointed at a picture on the wall of himself as a six-year-old. “That’s me!” he explained.

He was right. But come on. It seems ridiculous that the six-year-old in the picture and the extremely old man standing next to me could be the same person. Those two people had nothing in common. Physically, they were vastly different—almost every cell in the six-year-old’s body died decades ago. As far as their personalities—we can agree that they wouldn’t have been friends. And they shared almost no common brain data at all. Any 90-year-old man on the street is much more similar to my grandfather than that six-year-old.

But remember—maybe it’s not about similarity, but about continuity. If similarity were enough to define you, Boston you and London you, who are identical, would be the same person. The thing that my grandfather shared with the six-year-old in the picture is something he shared with no one else on Earth—they were connected to each other by a long, unbroken string of continuous existence. As an old man, he may not have known anything about that six-year-old boy, but he knew something about himself as an 89-year-old, and that 89-year-old knew a bunch about himself as an 85-year-old. As a 50-year-old, he knew a ton about himself as a 43-year-old, and when he was seven, he was a pro on himself as a six-year-old. It’s a long chain of overlapping memories, personality traits, and physical characteristics.

It’s like having an old wooden boat. You may have repaired it hundreds of times over the years, replacing wood chip after wood chip, until one day, you realize that not one piece of material from the original boat is still part of it. So is that still your boat? If you named your boat Polly the day you bought it, would you change the name now? It would still be Polly, right?

In this way, what you are is not really a thing as much as a story, or a progression, or one particular theme of person. You’re a bit like a room with a bunch of things in it—some old, some new, some you’re aware of, some you aren’t—but the room is always changing, never exactly the same from week to week.

Likewise, you’re not a set of brain data, you’re a particular database whose contents are constantly changing, growing, and being updated. And you’re not a physical body of atoms, you’re a set of instructions on how to deal with and organize the atoms that bump into you.

People always say the word soul and I never really know what they’re talking about. To me, the word soul has always seemed like a poetic euphemism for a part of the brain that feels very inner to us; or an attempt to give humans more dignity than just being primal biological organisms; or a way to declare that we’re eternal. But maybe when people say the word soul what they’re talking about is whatever it is that connects my 90-year-old grandfather to the boy in the picture. As his cells and memories come and go, as every wood chip in his canoe changes again and again, maybe the single common thread that ties it all together is his soul. After examining a human from every physical and mental angle throughout the post, maybe the answer this whole time has been the much less tangible Soul Theory.

______

It would have been pleasant to end the post there, but I just can’t do it, because I can’t quite believe in souls.

The way I actually feel right now is completely off-balance. Spending a week thinking about clones of yourself, imagining sharing your brain or merging yours with someone else’s, and wondering whether you secretly die every time you sleep and wake up as a replica will do that to you. If you’re looking for a satisfying conclusion, I’ll direct you to the sources below since I don’t even know who I am right now.

The only thing I’ll say is that I told someone about the topic I was posting on for this week, and their question was, “That’s cool, but what’s the point of trying to figure this out?” While researching, I came across this quote by Parfit: “The early Buddhist view is that much or most of the misery of human life resulted from the false view of self.” I think that’s probably very true, and that’s the point of thinking about this topic.

___________

Here’s how I’m working on this false view of self thing.

Sources
Very few of the ideas or thought experiments in this post are my original thinking. I read and listened to a bunch of personal identity philosophy this week and gathered my favorite parts together for the post. The two sources I drew from the most were philosopher Derek Parfit’s book Reasons and Persons and Yale professor Shelly Kagan’s fascinating philosophy course on death—the lectures are all watchable online for free.

Other Sources:
David Hume: Hume on Identity Over Time and Persons
Derek Parfit: We Are Not Human Beings
Peter Van Inwagen: Materialism and the Psychological-Continuity Account of Personal Identity
Bernard Williams: The Self and the Future
John Locke: An Essay Concerning Human Understanding (Chapter: Of Identity and Diversity)
Douglas Hofstadter: Gödel, Escher, Bach
Patrick Bailey: Concerning Theories of Personal Identity

 

http://waitbutwhy.com/2014/12/what-makes-you-you.html

More Cowbells: new NSA leaks reveal extent of spying tactics

By Associated Whistleblowing Press On January 24, 2015

New leaks from the NSA archive, seen exclusively by ROAR, reveal that even the Internet’s most basic architecture — the DNS database — is compromised.
In the last few years we have been living through a critical moment in the history of the Internet. The good old days, in which optimism was widespread among engineers and new technologies were considered a solution to the great problems of humanity, seem to have disappeared. Nowadays, the Internet has become a very lucrative spying machine, and many of those same engineers are fighting to preserve the most basic rights to privacy.

It is mostly thanks to Edward Snowden and Wikileaks that we have caught a glimpse of the most obscure practices in the world of industrial-level spying, carried out by the National Security Agency (NSA) and its allies.

Under the pretense of fighting terrorism, these agencies now have direct access to the servers of the largest internet platforms like Facebook, Google, Apple and Yahoo; they are able to access, in their own words, “nearly everything a user does on the internet,” including email and social networking contents; and they pay tech companies to install back-doors that give them access to encrypted communications. All of this happens without any legal restriction or judicial order.

The fact that this technology has been used to spy not only on millions of unaware citizens, but also on the presidents of Mexico, Germany and Brazil, on foreign embassies, on state companies such as Petrobras and on United Nations delegates, sends a clear message: privacy in the virtual space we have come to live in has become an illusion. Nothing is private on the Internet, and there are powerful interests bent on keeping it that way.

Today, new leaks by Le Monde and the Associated Whistleblowing Press, seen exclusively by ROAR, demonstrate that even the fundamental architecture of the internet — the so-called Domain Name System or DNS — is compromised by the NSA and its allies.

DNS: from problem solver to problem

When you do something on the Internet, almost everything begins with a request to the Domain Name System. It is thanks to this fundamental protocol that users with no technical knowledge can access different services on the web by looking up names (such as www.example.com) instead of complex IP addresses (like 2001:DB8:4145::4242).

DNS was invented to solve a very basic usability issue of the newly born internet. With time it has become so widespread that it is used by virtually everybody to carry out their daily activities on the web. The system, built during the early 1980s, was never intended to preserve the user’s privacy: DNS data is public, and queries, their answers and users’ sensitive metadata (such as the duration, time and place of access) are all handled without any kind of encryption.
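To see concretely what is exposed, here is a minimal sketch in Python of an ordinary lookup; the domain name is just a placeholder. The query this triggers, and the answer that comes back, travel in cleartext (typically over UDP port 53), so any on-path observer learns which name was asked for, what the answer was, and when.

    # Minimal sketch of a plain, unencrypted DNS lookup using only the
    # Python standard library. getaddrinfo delegates to the system
    # resolver, which sends the query in cleartext to the configured
    # DNS server; the name, the answer and the timing are all visible
    # to anyone watching the wire.
    import socket

    def resolve(name):
        results = socket.getaddrinfo(name, None)
        # Each result is a 5-tuple; the last element holds the address.
        return sorted({info[4][0] for info in results})

    print(resolve("www.example.com"))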

Given these critical vulnerabilities, it is only natural that the big spy agencies, namely the NSA and its allies in the UK, Australia, Canada and New Zealand, are ahead of the pack in exploiting them for their own benefit. Thanks to the new leaks, we now know exactly how.

MORECOWBELL

DNS has always been an open book and MORECOWBELL is the program the NSA has developed exclusively to read it. As the leaked slides show, the system allows the agency to monitor the availability of sites and web services, changes in content and a wide array of metadata that can help it build complete profiles for targeted users. If necessary, it can even be used to find weak points for launching direct attacks.

Given the widespread use of DNS on the public internet, the implications of this program are huge: it affects users on a global level. To achieve its goals, MORECOWBELL uses dedicated infrastructure camouflaged in various locations, including Malaysia, Germany and Denmark, as well as 13 other countries that host its network of servers.

This distributed and secret network gives the NSA two strategic advantages: it provides a global overview of DNS resolutions and the availability of services, and it makes it practically impossible to attribute the operation to the US government.

Monitoring the battlefield

This last point is particularly important, as MORECOWBELL can have a very practical application in war operations, particularly in what the NSA calls “Battle Damage Indicators.”

In wartime, communications, energy and computer networks are frequent targets. Thanks to MORECOWBELL, the US government can obtain a real-time estimate of an attack’s effectiveness by monitoring the availability of services in the region.

This makes for a cheap, efficient and easily deployable way to optimize aerial attacks in zones that are hard to access or observe.
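The leaked slides do not include code, but the underlying technique is easy to picture: resolve a target name over and over and log whether it still answers. The sketch below is a toy illustration of such availability polling; the target name, the interval and the output format are assumptions made for the example, not details from the documents.

    # Toy sketch of DNS-based availability polling, in the spirit of the
    # "Battle Damage Indicators" use described above. All names and
    # parameters here are illustrative assumptions.
    import socket
    import time

    def poll_availability(name, rounds=3, interval_s=60.0):
        for _ in range(rounds):
            start = time.time()
            try:
                socket.getaddrinfo(name, None)  # plain, unencrypted lookup
                status = "up"
            except socket.gaierror:
                status = "down"
            elapsed = time.time() - start
            print(f"{time.strftime('%H:%M:%S')}  {name}  {status}  ({elapsed:.3f}s)")
            time.sleep(interval_s)

    poll_availability("www.example.com", rounds=3, interval_s=5.0)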

From the Internet to the internets

Even though the DNS community is aware of the privacy issues described above, conflicting interests make it virtually impossible to reach consensus on a solution. On the one hand, modifying a system as widely deployed as DNS could disrupt the daily Internet use of billions of people. On the other, a change that solves these problems could clash head-on with business models and powerful national interests.

For now, there are many technical proposals, though none is completely satisfactory. Among them, without going into detail, are query minimization and more or less radical projects like Confidential DNS, DNSCurve, the GNU Name System and Namecoin, each with its own strengths and weaknesses.
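Of these, query minimization is the simplest to illustrate. The idea (pursued at the time as an IETF draft and later standardized as QNAME minimisation in RFC 7816) is that a resolver reveals only one more label of the name to each server in the delegation chain, instead of repeating the full question at every step. The following is a toy illustration of that label-walking logic, not a resolver implementation:

    # Toy illustration of query minimization: instead of asking every
    # server in the chain for the full name, reveal one label at a time,
    # from the TLD down to the complete name.
    def minimized_queries(full_name):
        labels = full_name.rstrip(".").split(".")
        for i in range(1, len(labels) + 1):
            yield ".".join(labels[-i:])

    print(list(minimized_queries("www.example.com")))
    # -> ['com', 'example.com', 'www.example.com']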

In the end, however, any real solution has to clear a high-level political barrier. We must understand that the Internet is not a truly decentralized system; it has a clear owner: the United States. The Domain Name System and the global registry of IP addresses, the two major databases that hold the global Internet together, are both controlled by US institutions.

Because of this, and thanks to the reckless exploitation of the network as a spying machine, the trend towards an Internet divided according to national interests is accelerating. In the future there might not be one Internet, but many strategically separated internets.

Something similar is already a reality in countries like China and Iran, which have isolated their networks in order to control the flow of information and exercise censorship according to their own specific interests.

Towards greater decentralization

However, since the Snowden revelations caused a huge stir in international politics, the debate has opened up completely. “Brazil is in favor of greater decentralization: Internet governance must be multilateral and multisectoral with a broader participation,” Communications Minister Paulo Bernardo told a congressional panel in 2013, and other BRICS countries such as Russia have openly declared that they will start laying their own fiber optic cables.

At the same time, Germany has proposed a closed system that protects European communications roughly along the lines of the Schengen agreement. Their argument is very logical: why does an email sent from Berlin to Paris have to pass through New York or London?

This trend towards greater decentralization clearly goes against the interests of the United States, which is why the US government is fighting hard and on many fronts to oppose it. In this sense, the future remains unclear.

What we do know, however, is that as long as the Internet keeps running on the same outdated architecture and protocols, without a movement to decentralize it and guarantee user privacy, it will continue to be used as a tool of surveillance and indiscriminate control for political, economic and military dominance.

The Associated Whistleblowing Press is a not-for-profit information agency dedicated to releasing and analyzing leaked content. This article was developed with technical analysis by Christian Grothoff, Matthias Wachs, Monika Ermer and Jacob Appelbaum.

 

http://roarmag.org/2015/01/nsa-leak-domain-name-system/