Your Cellphone Could Be a Major Health Risk

…and the Industry Could Be a Lot More Upfront About It

The science is becoming clearer: Sustained EMF exposure is dangerous.


The following is an excerpt from “Overpowered: What Science Tells Us About the Dangers of Cell Phones and Other Wifi-age Devices” by Martin Blank, PhD. Published by Seven Stories Press, March 2014. ISBN 978-1-60980-509-8. All rights reserved.

This excerpt was originally published by Salon.com.

You may not realize it, but you are participating in an unauthorized experiment—“the largest biological experiment ever,” in the words of Swedish neuro-oncologist Leif Salford. For the first time, many of us are holding high-powered microwave transmitters—in the form of cell phones—directly against our heads on a daily basis.

Cell phones generate electromagnetic fields (EMF), and emit electromagnetic radiation (EMR). They share this feature with all modern electronics that run on alternating current (AC) power (from the power grid and the outlets in your walls) or that utilize wireless communication. Different devices radiate different levels of EMF, with different characteristics.

What health effects do these exposures have?

Therein lies the experiment.

The many potential negative health effects from EMF exposure (including many cancers and Alzheimer’s disease) can take decades to develop. So we won’t know the results of this experiment for many years—possibly decades. But by then, it may be too late for billions of people.

Today, while we wait for the results, a debate rages about the potential dangers of EMF. The science of EMF is not easily taught, and as a result, the debate over the health effects of EMF exposure can get quite complicated. To put it simply, the debate has two sides. On the one hand, there are those who urge the adoption of a precautionary approach to the public risk as we continue to investigate the health effects of EMF exposure. This group includes many scientists, myself included, who see many danger signs that call out strongly for precaution. On the other side are those who feel that we should wait for definitive proof of harm before taking any action. The most vocal of this group include representatives of industries who undoubtedly perceive threats to their profits and would prefer that we continue buying and using more and more connected electronic devices.

This industry effort has been phenomenally successful, with widespread adoption of many EMF-generating technologies throughout the world. But EMF has many other sources as well. Most notably, the entire power grid is an EMF-generation network that reaches almost every individual in America and 75% of the global population. Today, early in the 21st century, we find ourselves fully immersed in a soup of electromagnetic radiation on a nearly continuous basis.

What we know

The science to date about the bioeffects (biological and health outcomes) resulting from exposure to EM radiation is still in its early stages. We cannot yet predict that a specific type of EMF exposure (such as 20 minutes of cell phone use each day for 10 years) will lead to a specific health outcome (such as cancer). Nor are scientists able to define what constitutes a “safe” level of EMF exposure.

However, while science has not yet answered all of our questions, it has determined one fact very clearly—all electromagnetic radiation impacts living beings. As I will discuss, science demonstrates a wide range of bioeffects linked to EMF exposure. For instance, numerous studies have found that EMF damages and causes mutations in DNA—the genetic material that defines us as individuals and collectively as a species. Mutations in DNA are believed to be the initiating steps in the development of cancers, and it is the association of cancers with exposure to EMF that has led to calls for revising safety standards. This type of DNA damage is seen at levels of EMF exposure equivalent to those resulting from typical cell phone use.

The damage to DNA caused by EMF exposure is believed to be one of the mechanisms by which EMF exposure leads to negative health effects. Multiple separate studies indicate significantly increased risk (up to two to three times the normal risk) of developing certain types of brain tumors following EMF exposure from cell phones over a period of many years. One review that averaged the data across 16 studies found that the risk of developing a tumor on the same side of the head on which the cell phone is used is elevated 240% for those who regularly use cell phones for 10 years or more. An Israeli study found that people who use cell phones at least 22 hours a month are 50% more likely to develop cancers of the salivary gland (and there has been a four-fold increase in the incidence of these types of tumors in Israel between 1970 and 2006). And individuals who lived within 400 meters of a cell phone transmission tower for 10 years or more were found to have a rate of cancer three times higher than those living at a greater distance. Indeed, the World Health Organization (WHO) designated EMF—including power frequencies and radio frequencies—as a possible cause of cancer.
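
For readers parsing these figures, “times the normal risk” and “percent elevation” are two ways of expressing the same quantity. As a generic illustration (not a restatement of any particular study cited above):

\[
\text{relative risk} = \frac{\text{incidence in the exposed group}}{\text{incidence in the unexposed group}},
\qquad
\text{percent elevation} = (\text{relative risk} - 1) \times 100\%
\]

so a doubling of risk is a 100% elevation, and three times the normal risk corresponds to a 200% elevation over the baseline rate.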

While cancer is one of the primary classes of negative health effects studied by researchers, EMF exposure has been shown to increase risk for many other types of negative health outcomes. In fact, levels of EMF thousands of times lower than current safety standards have been shown to significantly increase risk for neurodegenerative diseases (such as Alzheimer’s and Lou Gehrig’s disease) and male infertility associated with damaged sperm cells. In one study, those who lived within 50 meters of a high voltage power line were significantly more likely to develop Alzheimer’s disease when compared to those living 600 meters or more away. The increased risk was 24% after one year, 50% after 5 years, and 100% after 10 years. Other research demonstrates that using a cell phone between two and four hours a day leads to 40% lower sperm counts than found in men who do not use cell phones, and the surviving sperm cells demonstrate lower levels of motility and viability.

EMF exposure (as with many environmental pollutants) affects not only people but all of nature. In fact, negative effects have been demonstrated across a wide variety of plant and animal life. EMF, even at very low levels, can interrupt the ability of birds and bees to navigate. Numerous studies link this effect with the phenomenon of avian tower fatalities (in which birds die from collisions with power-line and communications towers). These same navigational effects have been linked to colony collapse disorder (CCD), which is devastating the global population of honey bees (in one study, placement of a single active cell phone in front of a hive led to the rapid and complete demise of the entire colony). And a mystery illness affecting trees around Europe has been linked to WiFi radiation in the environment.

There is a lot of science—high-quality, peer-reviewed science—demonstrating these and other very troubling outcomes from exposure to electromagnetic radiation. These effects are seen at levels of EMF that, according to regulatory agencies like the Federal Communications Commission (FCC), which regulates cell phone EMF emissions in the United States, are completely safe.

An unlikely activist

I have worked at Columbia University since the 1960s, but I was not always focused on electromagnetic fields. My PhDs in physical chemistry from Columbia University and colloid science from the University of Cambridge provided me with a strong, interdisciplinary academic background in biology, chemistry, and physics. Much of my early career was spent investigating the properties of surfaces and very thin films, such as those found in a soap bubble, which then led me to explore the biological membranes that encase living cells.

I studied the biochemistry of infant respiratory distress syndrome (IRDS), which causes the lungs of newborns to collapse (also called hyaline membrane disease). Through this research, I found that the substance on the surface of healthy lungs could form a network that prevented collapse in healthy babies (the absence of which causes the problem for IRDS sufferers).

A food company subsequently hired me to study how the same surface support mechanism could be used to prevent the collapse of the air bubbles added to their ice cream. As ice cream is sold by volume and not by weight, this enabled the company to reduce the actual amount of ice cream sold in each package. (My children gave me a lot of grief about that job, but they enjoyed the ice cream samples I brought home.)

I also performed research exploring how electrical forces interact with the proteins and other components found in nerve and muscle membranes. In 1987, I was studying the effects of electric fields on membranes when I read a paper by Dr. Reba Goodman demonstrating some unusual effects of EMF on living cells. She had found that even relatively weak power fields from common sources (such as those found near power lines and electrical appliances) could alter the ability of living cells to make proteins. I had long understood the importance of electrical forces on the function of cells, but this paper indicated that magnetic forces (which are a key aspect of electromagnetic fields) also had significant impact on living cells.

Like most of my colleagues, I did not think this was possible. By way of background, there are some types of EMF that everyone had long acknowledged are harmful to humans. For example, X-rays and ultraviolet radiation are both recognized carcinogens. But these are ionizing forms of radiation. Dr. Goodman, however, had shown that even non-ionizing radiation, which has much less energy than X-rays, was affecting a very basic property of cells—the ability to stimulate protein synthesis.

Because non-ionizing forms of EMF have so much less energy than ionizing radiation, it had long been believed that non-ionizing electromagnetic fields were harmless to humans and other biological systems. And while it was acknowledged that a high enough exposure to non-ionizing EMF could cause a rise in body temperature—and that this temperature increase could cause cell damage and lead to health problems—it was thought that low levels of non-ionizing EMF that did not cause this rise in temperature were benign.

In over 20 years of experience at some of the world’s top academic institutions, this is what I’d been taught and this is what I’d been teaching. In fact, my department at Columbia University (like every other comparable department at other universities around the world) taught an entire course in human physiology without even mentioning magnetic fields, except when they were used diagnostically to detect the effects of the electric currents in the heart or brain. Sure, magnets and magnetic fields can affect pieces of metal and other magnets, but magnetic fields were assumed to be inert, or essentially powerless, when it came to human physiology.

As you can imagine, I found the research in Dr. Goodman’s paper intriguing. When it turned out that she was a colleague of mine at Columbia, with an office just around the block, I decided to follow up with her, face-to-face. It didn’t take me long to realize that her data and arguments were very convincing. So convincing, in fact, that I not only changed my opinion on the potential health effects of magnetism, but I also began a long collaboration with her that has been highly productive and personally rewarding.

During our years of research collaboration, Dr. Goodman and I published many of our results in respected scientific journals. Our research was focused on the cellular level—how EMF permeate the surfaces of cells and affect cells and DNA—and we demonstrated several observable, repeatable health effects from EMF on living cells. As with all findings published in such journals, our data and conclusions were peer reviewed. In other words, our findings were reviewed prior to publication to ensure that our techniques and conclusions, which were based on our measurements, were appropriate. Our results were subsequently confirmed by other scientists, working in other laboratories around the world, independent from our own.

A change in tone

Over the roughly 25 years Dr. Goodman and I have been studying the EMF issue, our work has been referenced by numerous scientists, activists, and experts in support of public health initiatives including the BioInitiative Report, which was cited by the European Parliament when it called for stronger EMF regulations. Of course, our work was criticized in some circles, as well. This was to be expected, and we welcomed it—discussion and criticism is how science advances. But in the late 1990s, the criticism assumed a different character, both angrier and more derisive than past critiques.

On one occasion, I presented our findings at a US Department of Energy annual review of research on EMF. As soon as I finished my talk, a well-known Ivy League professor said (without any substantiation) that the data I presented were “impossible.” He was followed by another respected academic, who stated (again without any substantiation) that I had most likely made some “dreadful error.” Not only were these men wrong, but they delivered their comments with an intense and obvious hostility.

I later discovered that both men were paid consultants of the power industry—one of the largest generators of EMF. To me, this explained the source of their strong and unsubstantiated assertions about our research. I was witnessing firsthand the impact of private, profit-driven industrial efforts to confuse and obfuscate the science of EMF bioeffects.

Not the first time

I knew that this was not the first time industry opposed scientific research that threatened their business models. I’d seen it before many times with tobacco, asbestos, pesticides, hydraulic fracturing (or “fracking”), and other industries that paid scientists to generate “science” that would support their claims of product safety.

That, of course, is not the course of sound science. Science involves generating and testing hypotheses. One draws conclusions from the available, observable evidence that results from rigorous and reproducible experimentation. Science is not sculpting evidence to support your existing beliefs. That’s propaganda. As Dr. Henry Lai (who, along with Dr. Narendra Singh, performed the groundbreaking research demonstrating DNA damage from EMF exposure) explains, “a lot of the studies that are done right now are done purely as PR tools for the industry.”

An irreversible trend

Of course, EMF exposure—including radiation from smart phones, the power lines that you use to recharge them, and the wide variety of other EMF-generating technologies—is not equivalent to cigarette smoking. Exposure to carcinogens and other harmful forces from tobacco results from the purely voluntary, recreational activity of smoking. If tobacco disappeared from the world tomorrow, a lot of people would be very annoyed, tobacco farmers would have to plant other crops, and a few firms might go out of business, but there would be no additional impact.

In stark contrast, modern technology (the source of the human-made electromagnetic fields discussed here) has fueled a remarkable degree of innovation, productivity, and improvement in the quality of life. If tomorrow the power grid went down, all cell phone networks would cease operation, millions of computers around the world wouldn’t turn on, and the night would be illuminated only by candlelight and the moon—we’d have a lot less EMF exposure, but at the cost of the complete collapse of modern society.

EMF isn’t just a by-product of modern society. EMF, and our ability to harness it for technological purposes, is the cornerstone of modern society. Sanitation, food production and storage, health care—these are just some of the essential social systems that rely on power and wireless communication. We have evolved a society that is fundamentally reliant upon a set of technologies that generate forms and levels of electromagnetic radiation not seen on this planet prior to the 19th century.

As a result of the central role these devices play in modern life, individuals are understandably predisposed to resist information that may challenge the safety of activities that result in EMF exposures. People simply cannot bear the thought of restricting their time with—much less giving up—these beloved gadgets. This gives industry a huge advantage because there is a large segment of the public that would rather not know.

Precaution

My message is not to abandon gadgets—like most people, I too love and utilize EMF-generating gadgets. Instead, I want you to realize that EMF poses a real risk to living creatures and that industrial and product safety standards must and can be reconsidered. The solutions I suggest are not prohibitive. I recommend that as individuals we adopt the notion of “prudent avoidance,” minimizing our personal EMF exposure and maximizing the distance between us and EMF sources when those devices are in use. Just as you use a car with seat belts and air bags to increase the safety of the inherently dangerous activity of driving your car at a relatively high speed, you should consider similar risk-mitigating techniques for your personal EMF exposure.

On a broader social level, adoption of the Precautionary Principle in establishing new, biologically based safety standards for EMF exposure for the general public would be, I believe, the best approach. Just as the United States became the first nation in the world to regulate the production of chlorofluorocarbons (CFCs) when science indicated the threat to earth’s ozone layer—long before there was definitive proof of such a link—our governments should respond to the significant public health threat of EMF exposure. If EMF levels were regulated just as automobile carbon emissions are regulated, this would force manufacturers to design, create, and sell devices that generate much lower levels of EMF.

No one wants to return to the dark ages, but there are smarter and safer ways to approach our relationship—as individuals and across society—with the technology that exposes us to electromagnetic radiation.

Dr. Martin Blank is an expert on the health-related effects of electromagnetic fields and has been studying the subject for more than thirty years. He earned his first PhD from Columbia University in physical chemistry and his second from the University of Cambridge in colloid science. From 1968 to 2011, he taught as an associate professor at Columbia University, where he now acts as a special lecturer. Dr. Blank has served as an invited expert regarding EMF safety for Canadian Parliament, for the House Committee on Natural Resources and Energy (HNRE) in Vermont, and for Brazil’s Supreme Federal Court.

 

http://www.alternet.org/books/your-cellphone-could-be-major-health-risk-and-industry-could-be-lot-more-upfront-about-it?akid=11734.265072.RMLVql&rd=1&src=newsletter983753&t=7&paging=off&current_page=1#bookmark

Welcome to the permanent dusk: Sunlight in cities is an endangered species

As cities grow taller, light has become a precious commodity. Is it time for it to be regulated like one?

What would you pay for more natural light in your apartment? $10,000 per sunlit window, in TriBeCa? A 15 percent surcharge for an apartment that faced south, in London? An annual levy of 60 pounds for 20 windows, as the English monarchy demanded during a 150-year period beginning in 1696, under the so-called Window Tax?

Would you support a municipal effort to install a giant mirror to reflect winter sunshine into the town square? The Norwegian mountain town of Rjukan spent $800,000 to do just that. In Islamic Cairo, researchers have developed a sheet of corrugated plastic that can double the amount of light that trickles into the narrow alleyways.

The importance of light to great architecture is no secret. But in cities, where natural light is instrumental to urban design and property values, sunlight is a fickle friend. It can account for the prices of apartments and the popularity of parks, and even influence commercial rents on big avenues. Its holistic properties are obvious, but its economic benefits are no less important, including the effect of solar radiation on heating costs and the burgeoning potential for urban solar panel use. But sunlight can be taken away in an instant, from a backyard, a kitchen window or a treasured park, with neither notice nor consequence.

As American cities grow taller and denser — and most everyone agrees that they must — natural light becomes a more precious commodity. Does that mean it should be regulated like one? Or would preserving current sun patterns — so-called “solar rights” — grind real estate development to a halt? Put simply: Should Americans, in their homes and in their cities, have a right to light?

Planners, lawyers and homeowners have been arguing about this for two millennia. The Greeks incorporated the sun in their city planning; the Roman emperor Justinian ensured that no neighbor could block light “previously enjoyed for heat, light or sundial operation.” In desert climes, the same consideration was incorporated into city planning with even greater verve, for opposite results. In the Mozabite enclave of Ghardaia, Algeria, streets wind and curve so that the Saharan sun cannot penetrate.



In England, as the first throes of the Industrial Revolution wrought their transformations, Parliament attempted to legislate this concept with more objectivity. The so-called “Ancient Lights” law, passed in 1832, prevents new constructions from blocking light that has continuously reached the interior of a building for 20 years. The amount of light protected is determined by “the grumble line,” the point at which one might begin to complain about the lack of light.

It’s a law that British homeowners can still invoke today, though with only partial success. On the BBC reality show “The Planners,” an 87-year-old homeowner failed to prevent the construction of a neighbor’s light-hogging extension… and the law is even less likely to order the demolition of a larger, more expensive construction.

In Japan, where tall buildings are more common, a similar law, called “nissho-ken,” is more frequently cited. As skyscrapers proliferated in Japanese cities alongside small homes during the 1960s, sunshine suits exploded, from six in 1968 to 83 in 1972. More than 300 cities adopted “sunshine hour codes,” specifying penalties that developers must pay for casting shadows. Tokyo adopted a stricter zoning code for residential areas. In 1976, the Tokyo District Court awarded $7,000 in sunshine damages to residents at the foot of a new office tower. “Sunshine is essential to a comfortable life,” the court opined, “and therefore a citizen’s right to enjoy sunshine at his home should be duly protected by law.” Such awards are not common, though in theory a developer can be forced to pay as much as $10,000 to each shaded homeowner.

But in America, the concept of property has never been so expansive. In the second half of the 19th century, the subject was a hot legal issue, but never overcame opposition from pro-growth circles. As the New York Times wrote in 1878, “Courts have rendered decisions that the law of ancient lights is inappropriate and inapplicable in America… Our sparsely settled country, they say, has not required such a law; encouragement of building is more needed than restrictions upon it.” That same logic still fuels opposition to zoning measures today.

Quality-of-life concerns struck back. In a series of tenement laws, New York City required habitable domiciles to include features like external windows in every room. It was the seven-acre shadow of the Equitable Building, completed in 1915, that inspired the nation’s first comprehensive zoning resolution. New York’s setback laws required buildings to taper as they rose, and shaped the city’s skyline and its streets for the next half-century. Many cities followed suit.

But there are few direct protections on the books, and the issue has again come to the forefront as a rash of super-tall buildings rise in Midtown Manhattan, casting half-mile-long shadows on Central Park. A quarter-century ago, activists led by Jackie Kennedy Onassis and the Municipal Arts Society successfully obtained architectural concessions from developer Mort Zuckerman when his plans threatened to devour sunlit stretches of Central Park. Today the issue has spawned scattered complaints but no results.

If New York had a law like San Francisco’s, that would be different. Voters in the famously sun-starved city passed a ballot ordinance in 1984 that prohibited new buildings from casting significant shadows on public parks. It has since required hundreds of real estate projects to be altered, and is regularly targeted by developers for repeal.

It’s a microcosm of a much larger debate about the wisdom of zoning, and the balance between regulation and development. High-rent cities like New York and San Francisco desperately need new units of housing. Which quality-of-life requirements represent basic human rights, and which are not-in-my-backyard claims to stymie new construction in a crowded city? Some proponents of maximizing the potential for new housing in American cities have proposed repealing some of the Progressive-era stipulations for proper apartments.

The rise of solar power has further complicated the debate, even as it neatly quantifies the pecuniary value of sunlight. Even decades ago, American legal ambivalence toward sunlight was called “the single most important legal issue concerning solar energy.” These days, many U.S. states and a handful of U.S. cities have introduced “solar permits,” through which an owner can ensure that his or her solar access cannot be disrupted. In Portland, Ore., existing vegetation (i.e., a tree that grows taller) is exempted. In Ashland, solar collectors are protected from encroaching vegetation but not from new construction. Boulder, Colo., has some of the most extensive solar rights in the U.S.

Sometimes this leads to odd conflicts. In Sunnyvale, Calif., one neighbor sued another over a crop of redwood trees that were casting shadows on his solar panels. Under the state’s 1978 solar rights law, he won — the neighbors had to trim their trees to let more sun through to his panels.

Developers contend that such regulations can amount to extortion: solar panels could be used to extract limitless concessions from nearby properties. Then again, without the assurance of continuing sunlight, what homeowner could invest in solar power?

 

http://www.salon.com/2014/04/20/welcome_to_the_permanent_dusk_sunlight_in_cities_is_an_endangered_species/?source=newsletter

Artists’ brains are ‘structurally different’ claims new study

 

Limited study found more grey and white matter in artists’ brains connected to visual imagination and fine motor control

It’s a truism to say that artists see the world differently from the rest of us, but new research suggests that their brains are structurally different as well.

The small study, published in the journal NeuroImage, looked at the brain scans of 21 art students and 23 non-artists using a scanning method known as voxel-based morphometry.

Comparisons between the two groups showed that the artists had more neural matter in the parts of the brain relating to visual imagery and fine motor control.

Although this is certainly a physical difference, it does not mean that artists’ talents are innate. The balance between the influences of nature and nurture is never easy to divine, and the authors say that training and upbringing also play a large role in ability.

The brain scans were accompanied by various drawing tasks, with the researchers finding that those who performed best at these tests routinely had more grey and white matter in the motor areas of the brain.

“The people who are better at drawing really seem to have more developed structures in regions of the brain that control for fine motor performance and what we call procedural memory,” lead author Rebecca Chamberlain, from KU Leuven in Belgium, told the BBC.

The artists also showed significantly more grey matter in the part of the brain called the parietal lobe, a region involved with a range of activities that include the capacity to imagine, deconstruct and combine visual imagery.

The scientists also suggest that the study could help put to rest the idea that artists predominantly use the right side of their brain, as the increased grey and white matter was found to be distributed equally across both hemispheres.

Despite this, previous research has suggested that there are some hard-wired structural differences between individuals’ brains, with some of the divides falling across gender lines.

A ‘pioneering study’ published in December last year found that male brains had more neural connections running front to back, while female brains had more connections between the right and left hemispheres. Scientists suggested that this could explain why men are ‘better at reading maps’ and women are ‘better at remembering a conversation’.

 

http://www.independent.co.uk/news/science/artists-brains-are-structurally-different-claims-new-study-9267513.html

The 2,000-Year History of GPS Tracking

Tue Apr. 15, 2014 3:00 AM PDT
Egyptian geographer Claudius Ptolemy and Hiawatha Bray’s “You Are Here”

Boston Globe technology writer Hiawatha Bray recalls the moment that inspired him to write his new book, You Are Here: From the Compass to GPS, the History and Future of How We Find Ourselves. “I got a phone around 2003 or so,” he says. “And when you turned the phone on—it was a Verizon dumb phone, it wasn’t anything fancy—it said, ‘GPS’. And I said, ‘GPS? There’s GPS in my phone?’” He asked around and discovered that yes, there was GPS in his phone, due to a 1994 FCC ruling. At the time, cellphone usage was increasing rapidly, but 911 and other emergency responders could only accurately track the location of land line callers. So the FCC decided that cellphone providers like Verizon must be able to give emergency responders a more accurate location of cellphone users calling 911. After discovering this, “It hit me,” Bray says. “We were about to enter a world in which…everybody had a cellphone, and that would also mean that we would know where everybody was. Somebody ought to write about that!”

So he began researching transformative events that led to our new ability to navigate (almost) anywhere. In addition, he discovered the military-led GPS and government-led mapping technologies that helped create new digital industries. The result of his curiosity is You Are Here, an entertaining, detailed history of how we evolved from primitive navigation tools to our current state of instant digital mapping—and, of course, governments’ subsequent ability to track us. The book was finished prior to the recent disappearance of Malaysia Airlines flight 370, but Bray says gaps in navigation and communication like that are now “few and far between.”

Here are 13 pivotal moments in the history of GPS tracking and digital mapping that Bray points out in You Are Here:

1st century: The Chinese begin writing about mysterious ladles made of lodestone. The ladle handles always point south when used during future-telling rituals. In the following centuries, lodestone’s magnetic abilities lead to the development of the first compasses.

Model of a Han Dynasty south-indicating ladle (Wikimedia Commons)

2nd century: Ptolemy’s Geography is published and sets the standard for maps that use latitude and longitude.

Ptolemy’s 2nd-century world map, redrawn in the 15th century (Wikimedia Commons)

1473: Abraham Zacuto begins working on solar declination tables. They take him five years, but once finished, the tables allow sailors to determine their latitude on any ocean.

The Great Composition by Abraham Zacuto, a 17th-century copy of the manuscript originally written by Zacuto in 1491 (courtesy of The Library of The Jewish Theological Seminary)

1887: German physicist Heinrich Hertz creates electromagnetic waves, proof that electricity, magnetism, and light are related. His discovery inspires other inventors to experiment with radio and wireless transmissions.

The Hertz resonator (John Jenkins, Sparkmuseum.com)

1895: Italian inventor Guglielmo Marconi, one of those inventors inspired by Hertz’s experiment, attaches his radio transmitter antennae to the earth and sends telegraph messages miles away. Bray notes that there were many people before Marconi who had developed means of wireless communication. “Saying that Marconi invented the radio is like saying that Columbus discovered America,” he writes. But sending messages over long distances was Marconi’s great breakthrough.

Inventor Guglielmo Marconi in 1901, operating an apparatus similar to the one he used to transmit the first wireless signal across the Atlantic (Wikimedia Commons)

1958: Approximately six months after the Soviets launched Sputnik, Frank McClure, the research director at the Johns Hopkins Applied Physics Laboratory, calls physicists William Guier and George Weiffenbach into his office. Guier and Weiffenbach used radio receivers to listen to Sputnik’s consistent electronic beeping and calculate the Soviet satellite’s location; McClure wants to know if the process could work in reverse, allowing a satellite’s signal to be used to locate a listener’s position on Earth. The foundation for GPS tracking is born.
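
As a concrete aside (not taken from Bray’s book), the “reverse” problem McClure posed is, in its modern form, what every GPS receiver solves: given the known positions of several satellites and measured ranges to them, estimate your own position. The Python sketch below illustrates the idea in two dimensions with invented coordinates and a simple Gauss-Newton least-squares solver; all names and numbers are hypothetical.

import numpy as np

def locate(sat_positions, ranges, guess, iterations=20):
    """Estimate a receiver position from known satellite positions and
    measured ranges, via Gauss-Newton least squares (illustration only)."""
    x = np.array(guess, dtype=float)
    for _ in range(iterations):
        diffs = x - sat_positions              # vectors from each satellite to the guess
        dists = np.linalg.norm(diffs, axis=1)  # ranges predicted at the current guess
        residuals = dists - ranges             # mismatch with the measured ranges
        jacobian = diffs / dists[:, None]      # derivative of each range w.r.t. position
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        x -= step                              # Gauss-Newton update
    return x

# Hypothetical 2-D example: three "satellites" at known positions, and ranges
# measured to a receiver that actually sits at (3, 4).
sats = np.array([[0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
true_position = np.array([3.0, 4.0])
measured = np.linalg.norm(sats - true_position, axis=1)

print(locate(sats, measured, guess=[0.0, 0.0]))  # prints approximately [3. 4.]

Real receivers work in three dimensions, derive ranges from signal travel time, and also solve for their own clock error, but the least-squares core is the same.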

1969: A pair of Bell Labs scientists named William Boyle and George Smith create a silicon chip that records light and converts it into digital data. It is called a charge-coupled device, or CCD, and serves as the basis for digital photography used in spy and mapping satellites.

1976: The top-secret, school-bus-size KH-11 satellite is launched. It uses Boyle and Smith’s CCD technology to take the first digital spy photographs. Prior to this digital technology, actual film was used for making spy photographs. It was a risky and dangerous venture for pilots like Francis Gary Powers, who was shot down while flying a U-2 spy plane and taking film photographs over the Soviet Union in 1960.

KH-11 satellite photo showing construction of a Kiev-class aircraft carrier (Wikimedia Commons)

1983: Korean Air Lines flight 007 is shot down after leaving Anchorage, Alaska, and veering into Soviet airspace. All 269 people aboard are killed, including Georgia Democratic Rep. Larry McDonald. Two weeks after the attack, President Ronald Reagan directs the military’s GPS technology to be made available for civilian use so that similar tragedies would not be repeated. Bray notes, however, that GPS technology had always been intended to be made public eventually.

1989: The US Census Bureau releases TIGER (Topologically Integrated Geographic Encoding and Referencing) into the public domain. The digital map data allows any individual or company to create virtual maps.

1994: The FCC declares that wireless carriers must find ways for emergency services to locate mobile 911 callers. Cellphone companies choose to use their cellphone towers to comply. However, entrepreneurs begin to see the potential for GPS-integrated phones, as well. Bray highlights SnapTrack, a company that figures out early on how to squeeze GPS systems into phones—and is purchased by Qualcomm in 2000 for $1 billion.

1996: GeoSystems launches an internet-based mapping service called MapQuest, which uses the Census Bureau’s public-domain mapping data. It attracts hundreds of thousands of users and is purchased by AOL four years later for $1.1 billion.

2004: Google buys Australian mapping startup Where 2 Technologies and American satellite photography company Keyhole for undisclosed amounts. The next year, they launch Google Maps, which is now the most-used mobile app in the world.

2012: The Supreme Court ruling in United States v. Jones restricts police usage of GPS to track suspected criminals. Bray tells the story of Antoine Jones, who was convicted of dealing cocaine after police placed a GPS device on his wife’s Jeep to track his movements. The court’s decision in his case is unanimous: The GPS device had been placed without a valid search warrant. Despite the unanimous decision, just five justices signed off on the majority opinion. Others wanted further privacy protections in such cases—a mixed decision that leaves future battles for privacy open to interpretation.

 

http://www.motherjones.com/mixed-media/2014/04/you-are-here-book-hiawatha-bray-gps-navigation

New study finds US to be ruled by oligarchic elite

by Jerome Roos on April 17, 2014


Political scientists show that the average American has “near-zero” influence on policy outcomes, but their groundbreaking study is not without problems.

 

It’s not every day that an academic article in the arcane world of American political science makes headlines around the world, but then again, these aren’t normal days either. On Wednesday, various mainstream media outlets — including even the conservative British daily The Telegraph — ran a series of articles with essentially the same title: “Study finds that US is an oligarchy.” Or, as the Washington Post summed up: “Rich people rule!” The paper, according to the review in the Post, “should reshape how we think about American democracy.”

The conclusion sounds like it could have come straight out of a general assembly or drum circle at Zuccotti Park, but the authors of the paper in question — two Professors of Politics at Princeton and Northwestern University — aren’t quite of the radical dreadlocked variety. No, like Piketty’s book, this article is real “science”. It’s even got numbers in it! Martin Gilens of Princeton and Benjamin Page of Northwestern University took a dataset of 1,779 policy issues, ran a bunch of regressions, and basically found that the United States is not a democracy after all:

Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence. The results provide substantial support for theories of Economic Elite Domination and for theories of Biased Pluralism, but not for theories of Majoritarian Electoral Democracy or Majoritarian Pluralism.

The findings, of course, are both very interesting and very obvious. What Gilens and Page claim to have empirically demonstrated is that policy outcomes by and large favor the interests of business and the wealthiest segment of the population, while the preferences of the vast majority of Americans are of little to no consequence for policy outcomes. As the authors show, this new data backs up the conclusions of a number of long-forgotten studies from the 1950s and 1960s — not least the landmark contributions by C.W. Mills and Ralph Miliband — that tried to debunk the assertion of mainstream pluralist scholars that no single interest group dominates US policymaking.
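
To see what “substantial independent impacts” versus “little or no independent influence” means mechanically, here is a minimal Python sketch of the same style of multivariate analysis, run on simulated data rather than on the authors’ 1,779-case dataset or their actual code; the variable names and coefficients below are invented for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1779  # same number of policy cases as the study, but the data here are simulated

# Hypothetical preference measures for each policy case
avg_citizen = rng.normal(size=n)                 # support among median-income citizens
elites = 0.6 * avg_citizen + rng.normal(size=n)  # elite support, partly correlated with it
interest_groups = rng.normal(size=n)             # net alignment of organized interest groups

# Simulate a world in which only elites and interest groups actually drive outcomes
latent = 1.2 * elites + 0.8 * interest_groups + rng.normal(size=n)
adopted = (latent > 0).astype(int)               # 1 if the policy was adopted

# Regress adoption on all three preference measures at once
X = sm.add_constant(np.column_stack([avg_citizen, elites, interest_groups]))
result = sm.Logit(adopted, X).fit(disp=False)

for name, coef, p in zip(["const", "avg_citizen", "elites", "interest_groups"],
                         result.params, result.pvalues):
    print(f"{name:16s} coef={coef:+.2f}  p={p:.3f}")
# In this toy setup the avg_citizen coefficient comes out near zero once elite and
# interest-group preferences are controlled for -- the pattern the paper reports.

Gilens and Page’s actual model is more elaborate (their preference measures come from decades of survey questions, compared across income levels), but the logic of estimating each group’s independent effect while controlling for the others is the same.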

But while Gilens and Page’s study will undoubtedly be considered a milestone in the study of business power, there’s also a risk in focusing too narrowly on the elites and their interest groups themselves; namely the risk of losing sight of the broader set of social relations and institutional arrangements in which they are embedded. What I am referring to, of course, is the dreaded C-word: capitalism — a term that appears only once in the main body of Gilens and Page’s text, in a superficial reference to The Communist Manifesto, whose claims are quickly dismissed as empirically untestable. How can you talk about oligarchy and economic elites without talking about capitalism?

What’s missing from the analysis is therefore precisely what was missing from C.W. Mills’ and Miliband’s studies: an account of the nature of the capitalist state as such. By branding the US political system an “oligarchy”, the authors conveniently sidestep an even thornier question: what if oligarchy, as opposed to democracy, is actually the natural political form in capitalist society? What if the capitalist state is by its very definition an oligarchic form of domination? If that’s the case, the authors have merely proved the obvious: that the United States is a thoroughly capitalist society. Congratulations for figuring that one out! They should have just called a spade a spade.

That, of course, wouldn’t have raised many eyebrows. But it’s worth noting that this was precisely the critique that Nicos Poulantzas leveled at Ralph Miliband in the New Left Review in the early 1970s — and it doesn’t take an Althusserian structuralist to see that he had a point. Miliband’s study of capitalist elites, Poulantzas showed, was very useful for debunking pluralist illusions about the democratic nature of US politics, but by focusing narrowly on elite preferences and the “instrumental” use of political and economic resources to influence policy, Miliband’s empiricism ceded way too much methodological ground to “bourgeois” political science. By trying to painstakingly prove the existence of a causal relationship between instrumental elite behavior and policy outcomes, Miliband ended up missing the bigger picture: the class-bias inherent in the capitalist state itself, irrespective of who occupies it.

These methodological and theoretical limitations have consequences that extend far beyond the academic debate: at the end of the day, these are political questions. The way we perceive business power and define the capitalist state will inevitably have serious implications for our political strategies. The danger with empirical studies that narrowly emphasize the role of elites at the expense of the deeper structural sources of capitalist power is that they will end up reinforcing the illusion that simply replacing the elites and “taking money out of politics” would be sufficient to restore democracy to its past glory. That, of course, would be profoundly misleading. If we are serious about unseating the oligarchs from power, let’s make sure not to get carried away by the numbers and not to lose sight of the bigger picture.

Jerome Roos is a PhD candidate in International Political Economy at the European University Institute, and founding editor of ROAR Magazine.

Oligarchy, not democracy: Americans have ‘near-zero’ input on policy – report


The first-ever scientific study that analyzes whether the US is a democracy, rather than an oligarchy, found the majority of the American public has a “minuscule, near-zero, statistically non-significant impact upon public policy” compared to the wealthy.

The study, due out in the Fall 2014 issue of the academic journal Perspectives on Politics, sets out to answer elusive questions about who really rules in the United States. The researchers measured key variables for 1,779 policy issues within a single statistical model in an unprecedented attempt “to test these contrasting theoretical predictions” – i.e. whether the US sets policy democratically or the process is dominated by economic elites, or some combination of both.

“Despite the seemingly strong empirical support in previous studies for theories of majoritarian democracy, our analyses suggest that majorities of the American public actually have little influence over the policies our government adopts,” the researchers from Princeton University and Northwestern University wrote.

While “Americans do enjoy many features central to democratic governance, such as regular elections, freedom of speech and association,” the authors say the data implicate “the nearly total failure of ‘median voter’ and other Majoritarian Electoral Democracy theories [of America]. When the preferences of economic elites and the stands of organized interest groups are controlled for, the preferences of the average American appear to have only a minuscule, near-zero, statistically non-significant impact upon public policy.”

The authors of “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens” say that even as their model tilts heavily toward indications that the US is, in fact, run by the most wealthy and powerful, it actually doesn’t go far enough in describing the stranglehold connected elites have on the policymaking process.

“Our measure of the preferences of wealthy or elite Americans – though useful, and the best we could generate for a large set of policy cases – is probably less consistent with the relevant preferences than are our measures of the views of ordinary citizens or the alignments of engaged interest groups,” the researchers said.

“Yet we found substantial estimated effects even when using this imperfect measure. The real-world impact of elites upon public policy may be still greater.”

They add that the “failure of theories of Majoritarian Electoral Democracy is all the more striking because it goes against the likely effects of the limitations of our data. The preferences of ordinary citizens were measured more directly than our other independent variables, yet they are estimated to have the least effect.”

Despite the inexact nature of the data, the authors say with confidence that “the majority does not rule — at least not in the causal sense of actually determining policy outcomes.”

“We believe that if policymaking is dominated by powerful business organizations and a small number of affluent Americans, then America’s claims to being a democratic society are seriously threatened,” they concluded.

http://www.trueskool.com/forum/topics/oligarchy-not-democracy-americans-have-near-zero-input-on-policy-

To Reduce the Health Risk of Barbecuing Meat, Just Add Beer


 

“Grilling meat gives it great flavour. This taste, though, comes at a price, since the process creates molecules called polycyclic aromatic hydrocarbons (PAHs) which damage DNA and thus increase the eater’s chances of developing colon cancer. But a group of researchers led by Isabel Ferreira of the University of Porto, in Portugal, think they have found a way around the problem. When barbecuing meat, they suggest, you should add beer. The PAHs created by grilling form from molecules called free radicals which, in turn, form from fat and protein in the intense heat of this type of cooking. One way of stopping PAH-formation, then, might be to apply chemicals called antioxidants that mop up free radicals. And beer is rich in these, in the shape of melanoidins, which form when barley is roasted.”

~Slashdot~

Why Atheists Like Dawkins and Hitchens Are Dead Wrong



Acolytes of Dawkins & Hitchens pretend that ignorant evangelicals represent all of religion. Here’s what they miss.


I’m supposed to hate science. Or so I’m told.

I spent my childhood with my nose firmly placed between the pages of books on reptiles, dinosaurs, marine life and mammals. When I wasn’t busy wondering if I wanted to be more like Barbara Walters or Nancy Drew, I was busy digging holes in my parents’ backyard hoping to find lost bones of some great prehistoric mystery. I spent hours sifting through rocks that could possibly connect me to the past or, maybe, a hidden crystalline adventure inside. Potatoes were both a part of a delicious dinner and batteries for those ‘I got this’ moments; magnets repelling one another were a sorcery I needed to, somehow, defeat. The greatest teachers I ever had were Miss Frizzle and Bill Nye the Science Guy.

I also spent my childhood reciting verses from the Qur’an and a long prayer for everyone — in my family and the world — every night before going to bed. I spoke to my late grandfather, asking him to save me a spot in heaven. I went to the mosque and stepped on the shoes resting outside a prayer hall filled with worshippers. I tried fasting so I could be cool like my parents; played with prayer beads and always begged my mother to tell me more stories from the lives of the Abrahamic prophets.

With age, my wonder with religion and science did not cease. Both were, to me, extraordinary portals into the life around me that left me constantly bewildered, breathless and amazed.

Science would come to dominate my adolescent and early teenage years: papier mache cigarettes highlighting the most dangerous carcinogens, science fair projects on the virtues of chocolate consumption during menstruation; lamb lung and eye dissections, color coded notes, litmus tests on pretty papers, and disturbingly thorough study guides for five-question quizzes. My faith, too, remained operational in my day-to-day life: longer conversations with my late grandfather and all 30 Ramadan fasts, albeit with begrudging pre-dawn prayers. I attended Qur’anic recitation classes where I could not, for the life of me, recite anything that was not in English. I still read and listened to the stories of the prophets, with perhaps a greater sense of historical wonder and on occasion I would perform some of the daily prayers. Unsupervised access to the internet also led to the inevitable debates in Yahoo chat rooms about how Islam did not subjugate me as a woman. At the age of 16, I was busting out Quranic verses and references from the traditions of the Prophet Muhammad to shut up internet dwellers like Crusade563 and PopSmurf1967.

It never once occurred to me during those years, and later, that there could be any sort of a conflict between my faith and science; to me both were part of the same things: This universe and my existence within it.

And yet, here we are today being told that the two are irreconcilable; that religion begets an anti-science crusade and science pushes anti-religion valor. When did this become the only conversation on religion and science that we’re allowed to have?

This current discourse that pits faith and science against one another like Nero’s lions versus Christians — inappropriate analogy intended — borrows directly from the conflation of all religious traditions with the history and experience of Euro-American Christianity, specifically of the evangelical variety.

In my own religious tradition, Islam, there is a vibrant history of religion and science not just co-existing but informing one another intimately. Astrophysicists, chemists, biologists, alchemists, surgeons, psychologists, geographers, logicians, mathematicians – amongst so many others – would often function as theologians, saints, spiritual masters, jurists and poets as much as they would as scientists. Indeed, a quick survey of some of the most well known Muslim intellectuals of the past 1,400 years illustrates their masterful polymathy, their ability to reach across fields of expertise without blinking at any supposed “dissonance.” And, of course, this is not something exclusive to Islam; across the religious terrain we can find countless polymaths who delved into the worlds of God and science.

Despite the history of the intellectual output of, well, the whole rest of the world, contemporary discussions in this country on the relationship between science and religion take religion to consist solely, again, of Euro-American Evangelical Christianity.  Thus “religious perspectives on human origins” are not really all that encompassing. Muslims, for instance, do not believe in Christian creationism and, actually, have differences on the nature of human origin. The Muslim creationism movement, headed by Turkish author and creationist activist Adnan Oktar (known popularly by the pseudonym Harun Yahya), is actually relatively recent and borrows much from Christian creationism – including even directly copied passages and arguments from anti-evolution Christian literature.

The absence of a centralized religious clergy and authority in Sunni Islam allows for individual and scholarly theological negotiation – meaning that there is not, necessarily, a “right” answer embedded in Divine Truth to social and political questions. Some of the most influential and fundamental Islamic legal texts are filled with arguments and counter-arguments which all come from the same source (divine revelation), just different approaches to it.

In other words: There’s plenty of wiggle room and then some. On anything that is not established as theological Truth (e.g. God’s existence, the finality of Prophethood, pillars and articles of faith), there is ample room for examination, debate and disagreement, because it does not undercut the fabric of faith itself.

Muslims, generally, accept evolution as a fundamental part of the natural process; they differ, however, on human evolution – specifically the idea that humans and apes share an ancestor in common.  In the 13th century, Shi’i Persian polymath Nasir al-din al-Tusi discussed biological evolution in his book “Akhlaq-i-Nasri” (Nasirean Ethics). While al-Tusi’s theory of evolution differs from the one put forward by Charles Darwin 600 years later and the theory of evolution that we have today, he argued that the elemental source of all living things was one. From this single elemental source came four attributes of nature: water, air, soil and fire – all of which would evolve into different living species through hereditary variability. Hierarchy would emerge through differences in learning how to adapt and survive. Al-Tusi’s discussion on biological evolution and the relationship of synchronicity between animate and inanimate (how they emerge from the same source and work in tandem with one another) objects is stunning in its observational precision as well as its fusion with theistic considerations. Yet it is, at best, unacknowledged today in the Euro-centric conversation on religion and science. Why?

My point here in this conversation about religion and science’s falsely created incommensurability isn’t about the existence of God – I would like to think that ultimately there is space for belief and disbelief. I would like to also believe, however, that the conversation on belief and disbelief can move beyond the Dawkinsean vitriol that disguises bigotry as a self-righteous claim to the sanctity of science; a claim that makes science the proudly held property of the Euro-American civilization and experience.

Hoisted into popular culture by the Holy Trinity of Dawkins-Hitchens-Harris, New Atheism mirrors the very religious zealotry it claims is at the root of so much moral, political and social decay. In particular, these authors and their posse of followers have – as Nathan Lean characterized it in this publication back in March of last year – taken a particular penchant for “flirting with Islamophobia.” Instead of engaging with Islamic theology, New Atheists – the most prominent figurehead being Richard Dawkins – are more interested in ridiculing Muslims and Islam by employing the use of the same tired, racist talking points and images that situate Muslims in need of ‘enlightenment’ – or, salvation.

The Evangelical Christian Right is a formidable force to be reckoned with in American national politics; there are legitimate fears by believing, non-believing and non-caring Americans that the course of the nation, from women’s rights to education, can and will be significantly set back because of the whims of a loud and large group of citizens who refuse to acknowledge certain facts and changing realities and want the lives of all citizens to be subservient to their own will. This segment of the world’s religious topography, however, does not represent Religion or, in particular, Religion’s relationship with science.

Religion is a vast historical experience between human communities, its individual parts, the environment and something Sacred that acts as that elemental glue between everything. Science and religion are not incommensurable – and it’s time we stop treating them like they are.

 

Sana Saeed is a writer on politics with an interest in minority politics, media critique and religion in the public sphere. Follow her on Twitter @SanaSaeed.

http://www.alternet.org/belief/why-atheists-dawkins-and-hitchens-are-dead-wrong?akid=11690.265072.L-s5s2&rd=1&src=newsletter978792&t=11&paging=off&current_page=1

A surprising new warning on robots and jobs

When even the Economist starts hemming and hawing about automation and labor markets, it’s time to get worried

(Credit: josefkubes via Shutterstock)

When the Economist magazine starts warning about the threat of robots, it’s high time to grab your survival gear and light out for the back country. Journalism’s preeminent defender of the market wisdom of Adam Smith’s invisible hand rarely questions the forward march of innovation. But in a special report on our fast-arriving robot future published in the print edition this week, the Economist does just that. Kind of.

As headlines go, the warning is hardly definitive: “Job destruction by robots could outweigh creation.” The story itself is hedged so thickly one can barely see the central thesis: Robots might be threatening our jobs, but we’re not really sure. Globalization is also a problem — as is the arrival of women in the workforce. Pick your poison, men: women or robots!

But just the fact that the Economist is even asking the question of whether robots could conceivably have a negative impact on labor markets is worth taking notice of. It’s a reflection of a shift in opinion on the part of people the Economist takes seriously — credentialed economists.

Nick Bloom, an economics professor at Stanford, has seen a big change of heart about such technological unemployment in his discipline recently. The received wisdom used to be that although new technologies put some workers out of jobs, the extra wealth they generated increased consumption and thus created jobs elsewhere. Now many economists are taking the short- to medium-term risk to jobs far more seriously, and some think the potential scale of change may be huge.

The Economist also mentions the work of MIT’s Erik Brynjolfsson and Andrew McAfee, who argue that “technological dislocation may create great problems for moderately skilled workers in the coming decades.” (Salon’s interview with Brynjolfsson and McAfee can be found here.)

But the magazine doesn’t go much further than pointing out that there are some new concerns. Far more ink is lavished in this special report on the wonders of the new technology coming down the pike than the potential dangers. There’s even a wave of the hand at everyone’s favorite utopian technological dream: Once robots are doing all the work, our main problem will be figuring out what to do with our abundance of leisure time.



It is even conceivable that the fruits of greater productivity will be distributed so as to allow people to work less and spend more time doing other things. After all, the humor in the double meaning of the message that “Our robots put people to work” depends on understanding that people do not necessarily want to work, if they have better things to do.

Wouldn’t that be nice! The real question is: distributed by whom? The benefits of a potential robot utopia are unlikely to be widely distributed without strong political leadership. Unfortunately, so far, there’s very little evidence that governments, anywhere, have a clue how to steer us through a robot future.

Climate change: Apocalyptish

“THE four horsemen of the apocalypse”: that was the disparaging appraisal by Richard Tol of the University of Sussex of a report published in Yokohama on March 31st by the Intergovernmental Panel on Climate Change (IPCC), a group of scientists (including Dr Tol) who provide governments round the world with mainstream scientific guidance on the climate. Every six or so years, the IPCC produces a monster three-part encyclopedia; the first instalment of its most recent assessment came out last September and argued that climate change was speeding up, even if global surface temperatures were flat. The new tranche looks at the even more pertinent matter of how the climate is affecting the Earth’s ecosystems, the economy and people’s livelihoods.

Profoundly, is the headline answer, even though temperatures have warmed by only 0.8°C since 1800. They are likely to warm by at least twice that amount (and probably much more) by 2100. The report—the first since the collapse in 2009 of attempts to draw up a global treaty to reduce greenhouse-gas emissions—argues that climate change is having an impact in every ecosystem, from equator to pole and from ocean to mountain. It says that while there are a few benefits to a warmer climate, the overwhelming effects are negative and will get worse. It talks of “extreme weather events leading to breakdown of…critical services such as electricity, water supply and health and emergency services”; about a “risk of severe ill-health and disrupted livelihoods for large urban populations due to inland flooding”; and sounds the alarm about “the breakdown of food systems, linked to warming”.

Behind such headline scares, though, lies a subtler story, in which the effects of global warming vary a lot, in which climate change is one risk among many, and in which the damage—and the possibility of reducing it—depends as much on the other factors, such as health systems or rural development, as it does on global warming itself.

Compared with previous IPCC reports—the last was in 2007—the new one is confident about its assessment of damages, and more willing to attribute the harm to human influence on the climate. Take the rise in sea levels, which (pushed up by thermal expansion) has been increasing more in the 14 years of this century than it did from 1971 to 2000. The report reckons that, at current rates, average sea levels could rise by another half a metre or more by the end of the century, if greenhouse gases are not significantly curtailed. That is nothing but bad news for the people living in cities vulnerable to flooding from the sea: they now number 271m, and may increase to 345m by 2050, says the IPCC (some estimates put the figures higher). Nor are there any benefits from ocean acidification (caused by the absorption of carbon dioxide in seawater) which the report calls “a fundamental challenge to marine organisms and ecosystems”.

Equally stressed are terrestrial ecosystems facing sudden and irreversible change: climate “tipping points”. In the Arctic, for example, which is warming faster than any other large environment on earth, new shrubs and plants are invading formerly inhospitable areas. The vast boreal forests of Siberia and Canada are dying back faster than was expected in 2007, and may be more sensitive to warming than was then thought.

On the other hand, there are substantial areas where the influence of the climate is modest compared with other factors. Health is one. Pollution from factories adds to global warming and causes health problems directly: a new report by the World Health Organisation linked around 7m deaths to air pollution. In a warmer world, some diseases, such as malaria, will spread their range. Heat-related deaths will rise but cold-related ones will fall. In parts of the world where there are more cold-related than heat-related deaths, such as northern Europe, warmer temperatures could actually reduce the number of early deaths. By and large, the report says, the negative impacts will outweigh the positive ones but, for good or ill, the climate is not the dominant influence on mortality and morbidity. Public health and nutrition matter more.

Something similar is true for civil conflicts. Poverty and economic shocks help cause conflicts and are themselves influenced by climate change. Global warming can bring about changes in land use, reduce water supplies, and push up food prices, all of which contribute to riots (arguably, all this happened in Darfur). But it is hard to show that climate change has had a direct impact on levels or patterns of violence. If anything, it is the other way around: conflict reduces people’s ability to cope with climate change by, for example, laying mines in farmland. Surprisingly, global warming does not seem to be the culprit in most extinctions, either. With the exception of some frog species in Central America, no recent extinctions have been attributed to climate change.

So climate change has been powerful (in the oceans) and secondary (in health). But there is a third category: areas where the climate has had a large distributional effect, which may be good or bad, but usually appears to be negative. Fish are the most mobile of creatures and as the seas warm, marine animals and plants follow the cooler waters, migrating from the tropics to temperate latitudes. Benthic (bottom-dwelling) algae are moving polewards at a rate of 10km per decade; phytoplankton are moving at over 400km a decade. The result, says the report, is that by 2055, fish yields in temperate latitudes could be 30-70% higher than they were in 2005 (assuming there are any fish left by then) whereas the tropical fish yield will fall by 40-60%. A similar distributional change, the scientists argue, is affecting the hydrological cycle: the rate at which groundwater is recharged is likely to increase in temperate climes and fall in tropical ones, leading to further drying of the soil in the dry tropics.

The most important distributional change, the IPCC reckons, concerns food, especially cereal crops. A warmer climate, in principle, should lengthen the growing season, since it becomes warm enough to plant seeds earlier. More carbon in the atmosphere should increase the rate of photosynthesis. Both these influences should mean that some plants will do better in a world with higher temperatures and more carbon dioxide. The previous IPCC assessment thought the world’s main cereals—wheat, rice, maize and soya—would see improved yields in temperate climates, offsetting yield declines elsewhere. Some climate sceptics have used this to argue that, at least until the middle of the century, a modest amount of global warming might be good for the world.

The new report pours cold water on that. It confirms that tropical cereals suffer declining yields when temperatures rise 2°C but finds that the benefits to temperate-climate crops are smaller than was thought. Rainfed or water-stressed crops, which were once thought to respond well to higher levels of carbon dioxide, now seem not to. Plants—especially maize—may like a long growing season but they hate temperature spikes more: even one day above 30°C may be enough to damage them. And it turns out that rates of photosynthesis in maize, sorghum and sugarcane are not responsive to changes in concentrations of CO2, so the effect of more carbon on temperate crops is patchy. Whether more heat and carbon produce yield increases seems to depend mostly on local conditions.

Meanwhile, the impact of other negative influences is more important than was thought. Weeds seem to benefit more than cereals at temperate latitudes, so they provide more competition to food crops for water, sunlight and nutrients. Greater concentrations of ozone are more damaging than was thought: the new report reckons high ozone levels cause an 8-15% reduction in yields compared with normal crops. Perhaps most important, higher CO2 concentrations reduce the quality of cereals, that is, their protein and starch content, taste and mineral components (and hence nutritional value). This is particularly significant for forage crops: with poorer quality grains, animals are smaller and less healthy. Cattle are suffering anyway because they are being bred for meat yield alone, which, in practice, has made them more heat-sensitive: a double burden.

At the moment, the report concludes, wheat yields are being pushed down by 2% a decade compared with what would have happened without climate change; maize is down 1% a decade; rice and soyabeans are unaffected. Over time, though, this could worsen. If you look at studies of likely cereal yield in the next decade, roughly half of them forecast an increase and half a decline. But for the 2030s, twice as many studies are forecasting a fall as a rise.

So how much might all these influences affect the world economy? The IPCC’s surprising answer to that is: hardly at all. A 2°C rise in temperature, it says, could result in worldwide economic losses of only 0.2% to 2% of GDP a year. The trouble is, as the IPCC also says, this figure is misleading. GDP is a bad measure of climate impacts and the economic models used are hopeless (“completely made up”, said one recent critic). GDP does not account for catastrophic losses, which may be the most important kind. As an income measure, it gives less weight to the poor—but the poor are more vulnerable to climate change than the rich. That is true both between countries (Bangladesh is more vulnerable to floods than the Netherlands) and within them (richer Bangladeshis live in safer areas). The models do not take account of things like “tipping points”; do not care if carbon concentrations go sky-high and assume that if an economy were ravaged by drought or floods, it would suddenly have lots of “spare capacity” that could be redeployed.

But in some ways, the IPCC’s new assessment also explains why all this does not really matter. Models are useful for calculating costs and benefits: you invest this much in new capacity and earn that much as a result. But, as the report implies, climate change is not a problem just because its costs outweigh its benefits. Rather, it matters because it increases risk, causes unpredictable interactions between climate and social or economic factors, and because it manifests itself as extreme events (floods, heat waves) which inflict huge damage in a flash. Previous IPCC reports looked at particular parts of this picture. The new assessment for the first time looks at climate change not just as a problem in its own right but as something that is merely part of an even bigger context.

http://www.economist.com/blogs/newsbook/2014/03/climate-change-0?fsrc=nlw|newe|3-31-2014|8191566|37449986|
