Stephen Hawking: Automation and AI are going to decimate middle class jobs


British scientist Prof. Stephen Hawking gives his ‘The Origin of the Universe’ lecture to a packed hall on December 14, 2006, at the Hebrew University of Jerusalem, Israel. Hawking suffers from ALS (amyotrophic lateral sclerosis, or Lou Gehrig’s disease), which has rendered him quadriplegic; he speaks only via a computerized voice synthesizer operated by batting his eyelids. David Silverman/Getty Images

Artificial intelligence and increasing automation are going to decimate middle class jobs, worsening inequality and risking significant political upheaval, Stephen Hawking has warned.

In a column in The Guardian, the world-famous physicist wrote that “the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”

He adds his voice to a growing chorus of experts concerned about the effects that technology will have on the workforce in the coming years and decades. The fear is that while artificial intelligence will bring radical increases in efficiency in industry, for ordinary people this will translate into unemployment and uncertainty, as their human jobs are replaced by machines.

Technology has already gutted many traditional manufacturing and working class jobs — but now it may be poised to wreak similar havoc with the middle classes.

A report put out in February 2016 by Citibank in partnership with the University of Oxford predicted that 47% of US jobs are at risk of automation. In the UK, 35% are. In China, it’s a whopping 77% — while across the OECD it’s an average of 57%.

And three of the world’s 10 largest employers are now replacing their workers with robots.

Automation will, “in turn, accelerate the already widening economic inequality around the world,” Hawking wrote. “The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.”

He frames this economic anxiety as a reason for the rise in right-wing, populist politics in the West: “We are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent.”

Combined with other issues — overpopulation, climate change, disease — we are, Hawking warns ominously, at “the most dangerous moment in the development of humanity.” Humanity must come together if we are to overcome these challenges, he says.

Stephen Hawking has previously expressed concerns about artificial intelligence for a different reason — that it might overtake and replace humans. “The development of artificial intelligence could spell the end of the human race,” he said in late 2014. “It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”


http://www.businessinsider.com/stephen-hawking-ai-automation-middle-class-jobs-most-dangerous-moment-humanity-2016-12?r=UK&IR=T

Quit Social Media. Your Career May Depend on It.

Preoccupations
By CAL NEWPORT

I’m a millennial computer scientist who also writes books and runs a blog. Demographically speaking, I should be a heavy social media user, but that is not the case. I’ve never had a social media account.

At the moment, this makes me an outlier, but I think many more people should follow my lead and quit these services. There are many issues with social media, from its corrosion of civic life to its cultural shallowness, but the argument I want to make here is more pragmatic: You should quit social media because it can hurt your career.

This claim, of course, runs counter to our current understanding of social media’s role in the professional sphere. We’ve been told that it’s important to tend to your so-called social media brand, as this provides you access to opportunities you might otherwise miss and supports the diverse contact network you need to get ahead. Many people in my generation fear that without a social media presence, they would be invisible to the job market.

In a recent New York magazine essay, Andrew Sullivan recalled when he started to feel obligated to update his blog every half-hour or so. It seemed as if everyone with a Facebook account and a smartphone now felt pressured to run their own high-stress, one-person media operation, and “the once-unimaginable pace of the professional blogger was now the default for everyone,” he wrote.

I think this behavior is misguided. In a capitalist economy, the market rewards things that are rare and valuable. Social media use is decidedly not rare or valuable. Any 16-year-old with a smartphone can invent a hashtag or repost a viral article. The idea that if you engage in enough of this low-value activity, it will somehow add up to something of high value in your career is the same dubious alchemy that forms the core of most snake oil and flimflam in business.

Professional success is hard, but it’s not complicated. The foundation to achievement and fulfillment, almost without exception, requires that you hone a useful craft and then apply it to things that people care about. This is a philosophy perhaps best summarized by the advice Steve Martin used to give aspiring entertainers: “Be so good they can’t ignore you.” If you do that, the rest will work itself out, regardless of the size of your Instagram following.

A common response to my social media skepticism is the idea that using these services “can’t hurt.” In addition to honing skills and producing things that are valuable, my critics note, why not also expose yourself to the opportunities and connections that social media can generate? I have two objections to this line of thinking.

First, interesting opportunities and useful connections are not as scarce as social media proponents claim. In my own professional life, for example, as I improved my standing as an academic and a writer, I began receiving more interesting opportunities than I could handle. I currently have filters on my website aimed at reducing, not increasing, the number of offers and introductions I receive.

My research on successful professionals underscores that this experience is common: As you become more valuable to the marketplace, good things will find you. To be clear, I’m not arguing that new opportunities and connections are unimportant. I’m instead arguing that you don’t need social media’s help to attract them.

My second objection concerns the idea that social media is harmless. Consider that the ability to concentrate without distraction on hard tasks is becoming increasingly valuable in an increasingly complicated economy. Social media weakens this skill because it’s engineered to be addictive. The more you use social media in the way it’s designed to be used — persistently throughout your waking hours — the more your brain learns to crave a quick hit of stimulus at the slightest hint of boredom.

Once this Pavlovian connection is solidified, it becomes hard to give difficult tasks the unbroken concentration they require, and your brain simply won’t tolerate such a long period without a fix. Indeed, part of my own rejection of social media comes from this fear that these services will diminish my ability to concentrate — the skill on which I make my living.

The idea of purposefully introducing into my life a service designed to fragment my attention is as scary to me as the idea of smoking would be to an endurance athlete, and it should be to you if you’re serious about creating things that matter.

Perhaps more important, however, than my specific objections to the idea that social media is a harmless lift to your career, is my general unease with the mind-set this belief fosters. A dedication to cultivating your social media brand is a fundamentally passive approach to professional advancement. It diverts your time and attention away from producing work that matters and toward convincing the world that you matter. The latter activity is seductive, especially for many members of my generation who were raised on this message, but it can be disastrously counterproductive.

Most social media is best described as a collection of somewhat trivial entertainment services that are currently having a good run. These networks are fun, but you’re deluding yourself if you think that Twitter messages, posts and likes are a productive use of your time.

If you’re serious about making an impact in the world, power down your smartphone, close your browser tabs, roll up your sleeves and get to work.

Internet of Things isn’t fun anymore

IoT’s growing faster than the ability to defend it

The recent DDoS attack was a wake-up call for the IoT, which will get a whole lot bigger this holiday season


This article was originally published by Scientific American.

With this year’s approaching holiday gift season, the rapidly growing “Internet of Things” or IoT — which was exploited to help shut down parts of the Web very recently — is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other internet-connected gadgets that upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

The recent distributed denial of service (DDoS) attacks — in which tens of millions of hacked devices were exploited to jam and take down internet computer servers — are an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same website at the same time, creating data traffic bottlenecks that cut off access to the site. In this case, the attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
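The password-guessing described above is not sophisticated: Mirai-style malware simply tries a short dictionary of factory-default username/password pairs against each device it finds. A defensive version of the same check can be sketched in a few lines of Python; the credential list here is a small illustrative sample, not Mirai's actual dictionary.

```python
# Known factory-default credentials that botnets commonly try.
# This set is an illustrative sample for demonstration only.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("root", "12345"),
}

def is_default_credential(user: str, pw: str) -> bool:
    """Return True if this username/password pair is a known default
    that automated scanners would guess immediately."""
    return (user, pw) in KNOWN_DEFAULTS

# A device still using "admin"/"admin" is an easy botnet recruit.
print(is_default_credential("admin", "admin"))   # True: change it
print(is_default_credential("admin", "x9#Lq2"))  # False
```

Owners who change a default pair to anything outside such a dictionary remove their device from the easiest class of targets.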

The IoT is a vast and growing virtual universe that includes automobiles, medical devices, industrial systems and a growing number of consumer electronics devices. These include video game consoles, smart speakers such as the Amazon Echo and connected thermostats like the Nest, not to mention the smart home hubs and network routers that connect those devices to the internet and one another. Technology items have accounted for more than 73 percent of holiday gift spending in the United States each year for the past 15 years, according to the Consumer Technology Association. This year the CTA expects about 170 million people to buy presents that contribute to the IoT, and research and consulting firm Gartner predicts these networks will grow to encompass 50 billion devices worldwide by 2020. With Black Friday less than one month away, it is unlikely makers of these devices will be able to patch the security flaws that opened the door to the DDoS attack.

Before the IoT attack that temporarily paralyzed the internet across much of the Northeast and other broad patches of the United States, there had been hints that such a large assault was imminent. In September a network, or “botnet,” of Mirai-infected IoT devices launched a DDoS that took down the KrebsOnSecurity website run by investigative cybersecurity journalist Brian Krebs. A few weeks later someone published the source code for Mirai openly on the Internet for anyone to use. Within days Mirai was at the heart of the latest attacks against Dynamic Network Services, or Dyn, a U.S. domain name system (DNS) service provider. Dyn’s computer servers act like an internet switchboard by translating a website address into its corresponding internet protocol (IP) address. A browser needs that IP address to find and connect to the server hosting that site’s content.
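The name-to-address translation a DNS provider performs is the same lookup any networked program can request. A minimal Python sketch, using the standard library's resolver (here resolving "localhost" so no network access is needed; a real site name would be answered by a provider such as Dyn):

```python
import socket

# DNS translates a human-readable name into the numeric IP address
# a browser needs to connect. "localhost" resolves locally, without
# consulting an external DNS server.
ip = socket.gethostbyname("localhost")
print(ip)  # typically "127.0.0.1"
```

When Dyn's servers were jammed by the DDoS, lookups like this for the sites Dyn served simply failed, which is why the sites appeared to be "down" even though their own servers were untouched.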

The attacks kept the Sony PlayStation Network, Twitter, GitHub and Spotify’s web teams busy most of the day but had little impact on the owners of the devices hijacked to launch the attacks. Most of the people whose cameras and other digital devices were involved will never know, said Matthew Cook, a co-founder of Panopticon Laboratories, a company that specializes in developing cybersecurity for online games. Cook was speaking on a panel at a cybersecurity conference in New York last week.

But consumers will likely start paying more attention when they realize that someone could spy on them by hacking into their home’s web cameras, said another conference speaker, Andrew Lee, CEO of security software maker ESET North America. An attacker could use a Web camera to learn occupants’ daily routines — and thus know when no one is home — or even to record passwords as they are typed into computers or mobile devices, Lee added.

The IoT is expanding faster than device makers’ interest in cybersecurity. In a report released last week by the National Cyber Security Alliance and ESET, only half of the 15,527 consumers surveyed said that concerns about the cybersecurity of an IoT device have discouraged them from buying one. Slightly more than half of those surveyed said they own up to three devices — in addition to their computers and smartphones — that connect to their home routers, with another 22 percent having between four and 10 additional connected devices. Yet 43 percent of respondents reported either not having changed their default router passwords or not being sure if they had. Also, some devices’ passwords are difficult to change and others have permanent passwords coded in.

With little time for makers of connected devices to fix security problems before the holidays, numerous cybersecurity researchers recommend consumers at the very least make sure their home internet routers are protected by a secure password.

SALON

How the Internet Is Loosening Our Grip on the Truth

Next week, if all goes well, someone will win the presidency. What happens after that is anyone’s guess. Will the losing side believe the results? Will the bulk of Americans recognize the legitimacy of the new president? And will we all be able to clean up the piles of lies, hoaxes and other dung that have been hurled so freely in this hyper-charged, fact-free election?

Much of that remains unclear, because the internet is distorting our collective grasp on the truth. Polls show that many of us have burrowed into our own echo chambers of information. In a recent Pew Research Center survey, 81 percent of respondents said that partisans not only differed about policies, but also about “basic facts.”

For years, technologists and other utopians have argued that online news would be a boon to democracy. That has not been the case.

More than a decade ago, as a young reporter covering the intersection of technology and politics, I noticed the opposite. The internet was filled with 9/11 truthers, and partisans who believed against all evidence that George W. Bush stole the 2004 election from John Kerry, or that Barack Obama was a foreign-born Muslim. (He was born in Hawaii and is a practicing Christian.)

Of course, America has long been entranced by conspiracy theories. But the online hoaxes and fringe theories appeared more virulent than their offline predecessors. They were also more numerous and more persistent. During Mr. Obama’s 2008 presidential campaign, every attempt to debunk the birther rumor seemed to raise its prevalence online.

In a 2008 book, I argued that the internet would usher in a “post-fact” age. Eight years later, in the death throes of an election that features a candidate who once led the campaign to lie about President Obama’s birth, there is more reason to despair about truth in the online age.

Why? Because if you study the dynamics of how information moves online today, pretty much everything conspires against truth.

You’re Not Rational

The root of the problem with online news is something that initially sounds great: We have a lot more media to choose from.

In the last 20 years, the internet has overrun your morning paper and evening newscast with a smorgasbord of information sources, from well-funded online magazines to muckraking fact-checkers to the three guys in your country club whose Facebook group claims proof that Hillary Clinton and Donald J. Trump are really the same person.

A wider variety of news sources was supposed to be the bulwark of a rational age — “the marketplace of ideas,” the boosters called it.

But that’s not how any of this works. Psychologists and other social scientists have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest — we gorge on information that confirms our ideas, and we shun what does not.

This dynamic becomes especially problematic in a news landscape of near-infinite choice. Whether navigating Facebook, Google or The New York Times’s smartphone app, you are given ultimate control — if you see something you don’t like, you can easily tap away to something more pleasing. Then we all share what we found with our like-minded social networks, creating closed-off, shoulder-patting circles online.

That’s the theory, at least. The empirical research on so-called echo chambers is mixed. Facebook’s data scientists have run large studies on the idea and found it wanting. The social networking company says that by exposing you to more people, Facebook adds diversity to your news diet.

Others disagree. A study published last year by researchers at the IMT School for Advanced Studies Lucca, in Italy, found that homogeneous online networks help conspiracy theories persist and grow online.

“This creates an ecosystem in which the truth value of the information doesn’t matter,” said Walter Quattrociocchi, one of the study’s authors. “All that matters is whether the information fits in your narrative.”

No Power in Proof

Digital technology has blessed us with better ways to capture and disseminate news. There are cameras and audio recorders everywhere, and as soon as something happens, you can find primary proof of it online.

You would think that greater primary documentation would lead to a better cultural agreement about the “truth.” In fact, the opposite has happened.

Consider the difference in the examples of the John F. Kennedy assassination and 9/11. While you’ve probably seen only a single film clip of the scene from Dealey Plaza in 1963 when President Kennedy was shot, hundreds of television and amateur cameras were pointed at the scene on 9/11. Yet neither issue is settled for Americans; in one recent survey, about as many people said the government was concealing the truth about 9/11 as those who said the same about the Kennedy assassination.

Documentary proof seems to have lost its power. If the Kennedy conspiracies were rooted in an absence of documentary evidence, the 9/11 theories benefited from a surfeit of it. So many pictures from 9/11 flooded the internet, often without much context about what was being shown, that conspiracy theorists could pick and choose among them to show off exactly the narrative they preferred. There is also the looming specter of Photoshop: Now, because any digital image can be doctored, people can freely dismiss any bit of inconvenient documentary evidence as having been somehow altered.

This gets to the deeper problem: We all tend to filter documentary evidence through our own biases. Researchers have shown that two people with differing points of view can look at the same picture, video or document and come away with strikingly different ideas about what it shows.

That dynamic has played out repeatedly this year. Some people look at the WikiLeaks revelations about Mrs. Clinton’s campaign and see a smoking gun, while others say it’s no big deal, and that besides, it’s been doctored or stolen or taken out of context. Surveys show that people who liked Mr. Trump saw the Access Hollywood tape where he casually referenced groping women as mere “locker room talk”; those who didn’t like him considered it the worst thing in the world.

Lies as an Institution

One of the apparent advantages of online news is persistent fact-checking. Now when someone says something false, journalists can show they’re lying. And if the fact-checking sites do their jobs well, they’re likely to show up in online searches and social networks, providing a ready reference for people who want to correct the record.

But that hasn’t quite happened. Today dozens of news outlets routinely fact-check the candidates and much else online, but the endeavor has proved largely ineffective against a tide of fakery.

That’s because the lies have also become institutionalized. There are now entire sites whose only mission is to publish outrageous, completely fake news online (like real news, fake news has become a business). Partisan Facebook pages have gotten into the act; a recent BuzzFeed analysis of top political pages on Facebook showed that right-wing sites published false or misleading information 38 percent of the time, and lefty sites did so 20 percent of the time.

“Where hoaxes before were shared by your great-aunt who didn’t understand the internet, the misinformation that circulates online is now being reinforced by political campaigns, by political candidates or by amorphous groups of tweeters working around the campaigns,” said Caitlin Dewey, a reporter at The Washington Post who once wrote a column called “What Was Fake on the Internet This Week.”

Ms. Dewey’s column began in 2014, but by the end of last year, she decided to hang up her fact-checking hat because she had doubts that she was convincing anyone.

“In many ways the debunking just reinforced the sense of alienation or outrage that people feel about the topic, and ultimately you’ve done more harm than good,” she said.

Other fact-checkers are more sanguine, recognizing the limits of exposing online hoaxes, but also standing by the utility of the effort.

“There’s always more work to be done,” said Brooke Binkowski, the managing editor of Snopes.com, one of the internet’s oldest rumor-checking sites. “There’s always more. It’s Sisyphean — we’re all pushing that boulder up the hill, only to see it roll back down.”

Yeah. Though soon, I suspect, that boulder is going to squash us all.

AT&T, Time Warner and the Death of Privacy


OCTOBER 27, 2016

By Amy Goodman and Denis Moynihan

It has been 140 years since Alexander Graham Bell uttered the first words through his experimental telephone, to his lab assistant: “Mr. Watson—come here—I want to see you.” His invention transformed human communication, and the world. The company he started grew into a massive monopoly, AT&T. The federal government eventually deemed it too powerful, and broke up the telecom giant in 1982. Well, AT&T is back, and some would say it is on track to become bigger and more powerful than before, having announced plans to acquire the media company Time Warner to create one of the largest entertainment and communications conglomerates on the planet. Beyond the threat to competition, the proposed merger—which still must pass regulatory scrutiny—poses significant threats to privacy and the basic freedom to communicate.

AT&T is currently No. 10 on the Forbes 500 list of the U.S.’s highest-grossing companies. If it is allowed to buy Time Warner, No. 99 on the list, it will form an enormous, “vertically integrated” company that controls a vast pool of content and how people access that content.

Free Press, the national media policy and activism group, is mobilizing the public to oppose the deal. “This merger would create a media powerhouse unlike anything we’ve ever seen before. AT&T would control mobile and wired internet access, cable channels, movie franchises, a film studio and more,” Candace Clement of Free Press wrote. “That means AT&T would control internet access for hundreds of millions of people and the content they view, enabling it to prioritize its own offerings and use sneaky tricks to undermine net neutrality.”

Net neutrality is that essential quality of the internet that makes it so powerful. Columbia University law professor Tim Wu coined the term “net neutrality.” After the Federal Communications Commission approved strong net neutrality rules last year, Wu told us on the Democracy Now! News hour, “There need to be basic rules of the road for the internet, and we’re not going to trust cable and telephone companies to respect freedom of speech or respect new innovators, because of their poor track record.”

Millions of citizens weighed in with public comments to the FCC in support of net neutrality, along with groups like Free Press and the Electronic Frontier Foundation. They were joined by titans of the internet like Google, Amazon and Microsoft. Arrayed against this coalition were the telecom and cable companies, the oligopoly of internet service providers that sell internet access to hundreds of millions of Americans. It remains to be seen whether AT&T will in practice break net neutrality rules, creating a fast lane for its own content while slowing down content from its competitors, including the noncommercial sector.

Another problem that AT&T presents, that would only be exacerbated by the merger, is the potential to invade the privacy of its millions of customers. In 2006, AT&T whistleblower Mark Klein revealed that the company was secretly sharing all of its customers’ metadata with the National Security Agency. Klein, who installed the fiber-splitting hardware in a secret room at the main AT&T facility in San Francisco, had his whistleblowing allegations confirmed several years later by Edward Snowden’s NSA leaks. While that dragnet surveillance program was supposedly shut down in 2011, a similar surveillance program still exists. It’s called “Project Hemisphere.” It was exposed by The New York Times in 2013, with substantiating documents just revealed this week in The Daily Beast.

In “Project Hemisphere,” AT&T sells metadata to law enforcement, under the aegis of the so-called war on drugs. A police agency sends in a request for all the data related to a particular person or telephone number, and, for a major fee and without a subpoena, AT&T delivers a sophisticated data set that can, according to The Daily Beast, “determine where a target is located, with whom he speaks, and potentially why.”

Where you go, what you watch, text and share, with whom you speak, all your internet searches and preferences, all gathered and “vertically integrated,” sold to police and perhaps, in the future, to any number of AT&T’s corporate customers. We can’t know if Alexander Graham Bell envisioned this brave new digital world when he invented the telephone. But this is the future that is fast approaching, unless people rise up and stop this merger.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

AT&T-Time Warner merger to expand corporate, state control of media


By Barry Grey
24 October 2016

AT&T, the telecommunications and cable TV colossus, announced Saturday that it has struck a deal to acquire the pay TV and entertainment giant Time Warner. The merger, if approved by the Justice Department and US regulatory agencies under the next administration, will create a corporate entity with unprecedented control over both the distribution and content of news and entertainment. It will also mark an even more direct integration of the media and the telecom industry with the state.

AT&T, the largest US telecom group by market value, already controls huge segments of the telephone, pay-TV and wireless markets. Its $48.5 billion purchase of the satellite provider DirecTV last year made it the biggest pay-TV provider in the country, ahead of Comcast. It is the second-largest wireless provider, behind Verizon.

Time Warner is the parent company of such cable TV staples as HBO, Cinemax, CNN and the other Turner System channels: TBS, TNT and Turner Sports. It also owns the Warner Brothers film and TV studio.

The Washington Post on Sunday characterized the deal as a “seismic shift” in the “media and technology world,” one that “could turn the legacy carrier [AT&T] into a media titan the likes of which the United States has never seen.” The newspaper cited Craig Moffett, an industry analyst at Moffett-Nathanson, as saying there was no precedent for a telecom company the size of AT&T seeking to acquire a content company such as Time Warner.

“A [telecom company] owning content is something that was expressly prohibited for a century” by the government, Moffett told the Post.

Republican presidential candidate Donald Trump, in keeping with his anti-establishment pose, said Saturday that the merger would lead to “too much concentration of power in the hands of too few,” and that, if elected, he would block it.

The Clinton campaign declined to comment on Saturday. Democratic vice-presidential candidate Tim Kaine, speaking on the NBC News program “Meet the Press” on Sunday, said he had “concerns” about the merger, but he declined to take a clear position, saying he had not seen the details.

AT&T, like the other major telecom and Internet companies, has collaborated with the National Security Agency (NSA) in its blanket, illegal surveillance of telephone and electronic communications. NSA documents released last year by Edward Snowden show that AT&T has played a particularly reactionary role.

As the New York Times put it in an August 15, 2015 article reporting the Snowden leaks: “The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T.”

The article went on to cite an NSA document describing the relationship between AT&T and the spy agency as “highly collaborative,” and quoted other documents praising the company’s “extreme willingness to help” and calling their mutual dealings “a partnership, not a contractual relationship.”

The Times noted that AT&T installed surveillance equipment in at least 17 of its Internet hubs based in the US, provided technical assistance enabling the NSA to wiretap all Internet communications at the United Nations headquarters, a client of AT&T, and gave the NSA access to billions of emails.

If the merger goes through, this quasi-state entity will be in a position to directly control the content of much of the news and entertainment accessed by the public via television, the movies and smart phones. The announcement of the merger agreement is itself an intensification of a process of telecom and media convergence and consolidation that has been underway for years, and has accelerated under the Obama administration.

In 2009, the cable provider Comcast announced its acquisition for $30 billion of the entertainment conglomerate NBCUniversal, which owns both the National Broadcasting Company network and Universal Studios. The Obama Justice Department and Federal Communications Commission ultimately approved the merger.

Other recent mergers involving telecoms and content producers include, in addition to AT&T’s 2015 purchase of DirecTV: Verizon Communications’ acquisition of the Huffington Post, Yahoo and AOL; Lionsgate’s deal to buy the pay-TV channel Starz; Verizon’s agreement announced in the spring to buy DreamWorks Animation; and Charter Communications’ acquisition of the cable provider Time Warner Cable, approved this year.

The AT&T-Time Warner announcement will itself trigger a further restructuring and consolidation of the industry, as rival corporate giants scramble to compete within a changing environment that has seen the growth of digital and streaming companies such as Netflix and Hulu at the expense of the traditional cable and satellite providers.

The Financial Times wrote on Saturday that “the mooted deal could fire the starting gun on a round of media and technology consolidation.” Referring to a new series of mergers and acquisitions, the Wall Street Journal on Sunday quoted a “top media executive” as saying that an AT&T-Time Warner deal would “certainly kick off the dance.”

The scale of the buyout agreed unanimously by the boards of both companies is massive. AT&T is to pay Time Warner a reported $85.4 billion in cash and stocks, at a price of $107.50 per Time Warner share. This is significantly higher than the current market price of Time Warner shares, which rose 8 percent to more than $89 Friday on rumors of the merger deal.

In addition, AT&T is to take on Time Warner’s debt, pushing the actual cost of the deal to more than $107 billion. The merged company would have a total debt of $150 billion, making inevitable a campaign of cost-cutting and job reduction.

The unprecedented degree of monopolization of the telecom and media industries is the outcome of the policy of deregulation, launched in the late 1970s by the Democratic Carter administration and intensified by every administration, Republican or Democratic, since then. In 1982, the original AT&T, colloquially known as “Ma Bell,” was broken up into seven separate and competing regional “Baby Bell” companies.

This was sold to the public as a means of ending the tightly regulated AT&T monopoly over telephone service and unleashing the “competitive forces” of the market, where increased competition would supposedly lower consumer prices and improve service. What ensued was a protracted process of mergers and disinvestments involving the destruction of hundreds of thousands of jobs, which drove up stock prices at the expense of both employees and the consuming public.

Dallas-based Southwestern Bell was among the most aggressive of the “Baby Bells” in expanding by means of acquisitions and ruthless cost-cutting, eventually evolving into the new AT&T. Now, the outcome of deregulation has revealed itself to be a degree of monopolization and concentrated economic power beyond anything previously seen.

http://www.wsws.org/en/articles/2016/10/24/merg-o24.html

Secrets of the Ghent Altarpiece

Everything you thought you knew about this work of art might be wrong

One of the most famous — and most frequently stolen — works of Western art reveals new truths about its past

A detail of the Ghent Altarpiece in the Saint Bavo Cathedral, post-restoration. (Credit: Dominique Provost)

When, in 1994, the Sistine Chapel reopened to visitors after a decade of restoration, the world drew a collective gasp. Michelangelo’s painting, the most famous fresco in the world, looked nothing like it had for the past few centuries. The figures appeared clad in Day-Glo spandex, skin blazed an uproarious pink, and the background shone as if back-lit. Was this some awful mistake, an explosion of colors perhaps engineered by the sponsor, Kodak? Of course not. This was how the work that would launch the Mannerist movement, and inspire passionate followers of Michelangelo’s revolutionary painting style, originally looked, before centuries of dirt, smog, and candle and lantern smoke clogged the ceiling with a skin of dark shadow. The restoration required a reexamination on the part of everyone who had ever written about the Sistine Chapel and Michelangelo.

After four years of restoration by the Royal Institute of Cultural Heritage (KIK-IRPA, Brussels), an equally important work of art was revealed on Oct. 12, with similarly reverberant consequences. The painting looks gorgeous: centuries of dirt and varnish have been peeled away to restore the electric radiance of the work as it was originally seen, some six centuries ago. But the restoration does more than reveal new facts about what has been called “the most influential painting ever made”; it also solves several lasting mysteries about the painting’s physical history. For it has also been called “the most coveted masterpiece in history,” and it is certainly the most frequently stolen.

On Oct. 12, I broke the story of the discoveries of the recent restoration of the painting. But there are many more details to tell, some of which have not yet made print.

***

“The Adoration of the Mystic Lamb,” often referred to as the Ghent Altarpiece, is an elaborate polyptych consisting of 12 panels painted in oils, which is displayed in the cathedral of St. Bavo in Ghent, Belgium. It was probably begun by Hubert van Eyck around 1426, but he died that year, so early in the painting process that it is unlikely that any of his work is visible. But it was certainly completed by his younger brother, Jan van Eyck, likely in 1432. It is among the most famous artworks in the world, a point of pilgrimage for educated tourists and artists from its completion to today. It is a hugely complex work of Catholic iconography, featuring an Annunciation scene on the exterior wing panels (viewed when the altarpiece is closed, as it would be on all but holidays), as well as portraits of the donors, grisaille (grey-scale) representations of Saints John the Baptist and John the Evangelist, and Old Testament prophets and sibyls. These exterior panels on the wings of the altarpiece are what have been restored so far, and what have revealed such rich discoveries.

The complex iconography is something of a pantheon of Catholicism. Adam and Eve represent the start: Adam’s Original Sin is what required the creation of Christ in the Annunciation, and Christ’s ultimate sacrifice is what reversed Original Sin. But the visual puzzle of the painting is just one of its mysteries. For the physical painting itself, and its component panels, have had adventures of their own. The painting, in whole or in part, was stolen six times, and was the object of some 13 crimes and mysteries, several of which are as yet unsolved. But the discoveries made by conservators have peeled away not just varnish, but the veils on several of those mysteries, as well.

***

After the 2010 study of the painting, it was determined that the altarpiece needed conservation treatment and the removal of several layers of synthetic Keton varnishes, as well as thinning down the older varnishes added by past conservators, while adjusting the colors of older retouches. Bart Devolder, the young, dynamic on-site coordinator of the conservation work, explains, “Once we began the project, and the extent of over-painting became clear, the breadth of the work increased, as a committee of international experts decided that the conservators should peel away later additions and resuscitate, therefore, as much of the original work of van Eyck as possible.”

A 1.3 million EUR grant (80 percent of which came from the Flemish government, with 20 percent from the private sponsor, the Baillet Latour Fund) and four years later, only one-third of the altarpiece has been restored (the exterior wing panels of the polyptych), but the discoveries are astonishing, and tell a story of fraternal love and admiration as beautiful as any in history.

Surprise discoveries included silver leaf painted onto the frames themselves, which produces a three-dimensional effect and makes the overall painting look very different. The inscription stating that Jan was “second in art,” and that Hubert was the really great one, was proven to have been part of the original painting, almost certainly added by Jan’s own hand as a humble homage to his late brother. The restoration also revealed that many different “hands” were involved in the painting.

Computer analysis of the paint, carried out by a team from the University of Ghent, clearly demonstrates that different hands were involved. Just as linguistic analysis programs can spot authorial styles, and so claim that at least five different people “wrote” the Pentateuch of the Old Testament, computers can also differentiate painterly techniques, even subtle ones (one man’s cross-hatching differs enough from another’s in the same studio, just as handwriting differs, even though we have all learned cursive). That different “hands” were involved is not a surprise, as van Eyck, like most artists of his time, ran a studio, and works “by” him were, in fact, collaborative products of that studio. The analysis simply confirms this. But no work certain to have been painted by Hubert is known, so it is impossible yet to tell whether his brushstrokes are among those visible in the altarpiece today. If another work could be firmly linked to Hubert’s hand, it could be compared via the same software to the Ghent Altarpiece to see whether his technique appears there. Some mysteries remain for future art detectives to solve.

“Damage was apparent in x-rays of the two painted donor figures,” explains Devolder, “and we assumed that, in cleaning away overpainting and varnish layers, they would expose the damaged layer.” It was first thought that the damage had taken place during the initial painting phase, perhaps in Hubert’s studio, with Jan then “fixing it” by painting over it, thereby also repairing his brother’s legacy. But the overpaint later proved to date from the 16th or early 17th century.

The conventional dating of the painting was likewise confirmed through dendrochronology (the panels came from the same tree), likely disproving a recent theory that the work was finished many years later than the 1432 date most scholars accept. “During the recent conservation campaign, two additional panels, one from the painting of Eve and one plank from the panel of the hermits, were dendrochronologically tested by KIK-IRPA and shown to have come from the same tree trunk,” Devolder notes. “In an earlier study, a different pair of panels likewise matched.”

It is unlikely that different panels would come from the same tree and remain in van Eyck’s studio for a decade before being used in different sections of the same painting, so it is safe to let the current estimation hold: that the altarpiece was completed in 1432 and installed as a backdrop for the baptism of the son of Duke Philip the Good of Burgundy (van Eyck’s patron; the painter also acted as godfather to his son). It also suggests that Jan immediately took up the project of his late brother, aware of its importance to his brother’s legacy and to his own burgeoning career, rather than setting it aside and only “getting to it” later on.

The biggest discovery is that up to 70 percent of the work was found to contain over-painting, or later painters adding their own touch to the original, whether for restoration or editorial reasons. If, for centuries, scholars have based their interpretation on a careful analysis of every detail, and it now turns out that some of those details were never part of the original conception of the work, then the reading of the work must be reexamined.

The current round of funding (which was already increased once) allowed for a complete exploration and restoration only of the exterior of the wing panels. Yet the one-third that has been fully restored has revealed such a wealth of information, requiring every chapter and article on the painting to be rewritten, that it raises the question of what might be revealed if, in the future, the rest of the work can be similarly explored. While art historians are already primed to rework their van Eyck publications, there may be more discoveries to come.

Noah Charney is a Salon arts columnist and professor specializing in art crime, and author of “The Art of Forgery” (Phaidon).