AT&T-Time Warner merger to expand corporate, state control of media


By Barry Grey
24 October 2016

AT&T, the telecommunications and cable TV colossus, announced Saturday that it has struck a deal to acquire the pay-TV and entertainment giant Time Warner. The merger, if approved by the Justice Department and US regulatory agencies under the next administration, will create a corporate entity with unprecedented control over both the distribution and content of news and entertainment. It will also mark an even more direct integration of the media and telecom industries with the state.

AT&T, the largest US telecom group by market value, already controls huge segments of the telephone, pay-TV and wireless markets. Its $48.5 billion purchase of the satellite provider DirecTV last year made it the biggest pay-TV provider in the country, ahead of Comcast. It is the second-largest wireless provider, behind Verizon.

Time Warner is the parent company of such cable TV staples as HBO, Cinemax, CNN and the other Turner System channels: TBS, TNT and Turner Sports. It also owns the Warner Brothers film and TV studio.

The Washington Post on Sunday characterized the deal as a “seismic shift” in the “media and technology world,” one that “could turn the legacy carrier [AT&T] into a media titan the likes of which the United States has never seen.” The newspaper cited Craig Moffett, an industry analyst at Moffett-Nathanson, as saying there was no precedent for a telecom company the size of AT&T seeking to acquire a content company such as Time Warner.

“A [telecom company] owning content is something that was expressly prohibited for a century” by the government, Moffett told the Post.

Republican presidential candidate Donald Trump, in keeping with his anti-establishment pose, said Saturday that the merger would lead to “too much concentration of power in the hands of too few,” and that, if elected, he would block it.

The Clinton campaign declined to comment on Saturday. Democratic vice-presidential candidate Tim Kaine, speaking on the NBC News program “Meet the Press” on Sunday, said he had “concerns” about the merger, but he declined to take a clear position, saying he had not seen the details.

AT&T, like the other major telecom and Internet companies, has collaborated with the National Security Agency (NSA) in its blanket, illegal surveillance of telephone and electronic communications. NSA documents released last year by Edward Snowden show that AT&T has played a particularly reactionary role.

As the New York Times put it in an August 15, 2015 article reporting the Snowden leaks: “The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T.”

The article went on to cite an NSA document describing the relationship between AT&T and the spy agency as “highly collaborative,” and quoted other documents praising the company’s “extreme willingness to help” and calling their mutual dealings “a partnership, not a contractual relationship.”

The Times noted that AT&T installed surveillance equipment in at least 17 of its Internet hubs based in the US, provided technical assistance enabling the NSA to wiretap all Internet communications at the United Nations headquarters, a client of AT&T, and gave the NSA access to billions of emails.

If the merger goes through, this quasi-state entity will be in a position to directly control the content of much of the news and entertainment accessed by the public via television, the movies and smart phones. The announcement of the merger agreement is itself an intensification of a process of telecom and media convergence and consolidation that has been underway for years, and has accelerated under the Obama administration.

In 2009, the cable provider Comcast announced its acquisition for $30 billion of the entertainment conglomerate NBCUniversal, which owns both the National Broadcasting Company network and Universal Studios. The Obama Justice Department and Federal Communications Commission ultimately approved the merger.

Other recent mergers involving telecoms and content producers include, in addition to AT&T’s 2015 purchase of DirecTV: Verizon Communications’ acquisition of the Huffington Post, Yahoo and AOL; Lionsgate’s deal to buy the pay-TV channel Starz; Verizon’s agreement announced in the spring to buy DreamWorks Animation; and Charter Communications’ acquisition of the cable provider Time Warner Cable, approved this year.

The AT&T-Time Warner announcement will itself trigger a further restructuring and consolidation of the industry, as rival corporate giants scramble to compete within a changing environment that has seen the growth of digital and streaming companies such as Netflix and Hulu at the expense of the traditional cable and satellite providers.

The Financial Times wrote on Saturday that “the mooted deal could fire the starting gun on a round of media and technology consolidation.” Referring to a new series of mergers and acquisitions, the Wall Street Journal on Sunday quoted a “top media executive” as saying that an AT&T-Time Warner deal would “certainly kick off the dance.”

The scale of the buyout, agreed unanimously by the boards of both companies, is massive. AT&T is to pay a reported $85.4 billion in cash and stock, or $107.50 per Time Warner share. This is significantly higher than the current market price of Time Warner shares, which rose 8 percent to more than $89 on Friday on rumors of the merger deal.

In addition, AT&T is to take on Time Warner’s debt, pushing the actual cost of the deal to more than $107 billion. The merged company would have a total debt of $150 billion, making inevitable a campaign of cost-cutting and job reduction.

The unprecedented degree of monopolization of the telecom and media industries is the outcome of the policy of deregulation, launched in the late 1970s by the Democratic Carter administration and intensified by every administration, Republican or Democratic, since then. Under a 1982 consent decree, the original AT&T, colloquially known as “Ma Bell,” was broken up into seven separate and competing regional “Baby Bell” companies.

This was sold to the public as a means of ending the tightly regulated AT&T monopoly over telephone service and unleashing the “competitive forces” of the market, where increased competition would supposedly lower consumer prices and improve service. What ensued was a protracted process of mergers and disinvestments involving the destruction of hundreds of thousands of jobs, which drove up stock prices at the expense of both employees and the consuming public.

Dallas-based Southwestern Bell was among the most aggressive of the “Baby Bells” in expanding by means of acquisitions and ruthless cost-cutting, eventually evolving into the new AT&T. Now, the outcome of deregulation has revealed itself to be a degree of monopolization and concentrated economic power beyond anything previously seen.

Secrets of the Ghent Altarpiece

Everything you thought you knew about this work of art might be wrong

One of the most famous — and most frequently stolen — works of Western art reveals new truths about its past

A detail of the Ghent Altarpiece in the Saint Bavo Cathedral, post-restoration. (Credit: Dominique Provost)

When, in 1994, the Sistine Chapel reopened to visitors after a decade of restoration, the world drew a collective gasp. Michelangelo’s painting, the most famous fresco in the world, looked nothing like it had for the past few centuries. The figures appeared clad in Day-Glo spandex, skin blazed an uproarious pink, and the background shone as if back-lit. Was this some awful mistake, an explosion of colors perhaps engineered by the sponsor, Kodak? Of course not. This was how the work that launched the Mannerist movement, and inspired passionate followers of Michelangelo’s revolutionary painting style, had originally looked, before centuries of dirt, smog, and candle and lantern smoke clogged the ceiling with a skin of dark shadow. The restoration required a reexamination on the part of everyone who had ever written about the Sistine Chapel and Michelangelo.

After four years of restoration by the Royal Institute for Cultural Heritage (KIK-IRPA, Brussels), an equally important work of art was revealed on Oct. 12, with similarly reverberant consequences. The painting looks gorgeous, with centuries of dirt and varnish peeled away to restore the electric radiance of the work as it was originally seen, some six centuries ago. But this restoration not only reveals new facts about what has been called “the most influential painting ever made”; it also solves several lasting mysteries about the work’s physical history. The altarpiece has also been called “the most coveted masterpiece in history,” and it is certainly the most frequently stolen.

On Oct. 12, I broke the story of the discoveries of the recent restoration of the painting. But there are many more details to tell, some of which have not yet appeared in print.


“The Adoration of the Mystic Lamb,” often referred to as the Ghent Altarpiece, is an elaborate polyptych consisting of 12 panels painted in oils, displayed in the cathedral of St. Bavo in Ghent, Belgium. It was probably begun by Hubert van Eyck around 1426, but he died that year, so early in the painting process that it is unlikely that any of his work is visible. It was certainly completed by his younger brother, Jan van Eyck, likely in 1432. It is among the most famous artworks in the world, a point of pilgrimage for educated tourists and artists from its completion to today. It is a hugely complex work of Catholic iconography, featuring an Annunciation scene on the exterior wing panels (viewed when the altarpiece is closed, as it would be on all but holidays), as well as portraits of the donors, grisaille (grey-scale) representations of Saints John the Baptist and John the Evangelist, and Old Testament prophets and sibyls. These exterior panels on the wings of the altarpiece are what have been restored so far, and what have revealed such rich discoveries.

The complex iconography is something of a pantheon of Catholicism. Adam and Eve represent the start: Adam’s Original Sin is what required the coming of Christ, heralded in the Annunciation, and Christ’s ultimate sacrifice is what reversed Original Sin. But the visual puzzle of the painting is just one of its mysteries, for the physical painting itself, and its component panels, have had adventures of their own. The painting, all or in part, has been stolen six times, and has been the object of some 13 crimes and mysteries, several of which remain unsolved. The discoveries made by conservators have peeled away not just varnish, but the veils over several of those mysteries as well.


After the 2010 study of the painting, it was determined that the altarpiece needed conservation treatment: the removal of several layers of synthetic ketone varnish, the thinning of older varnishes added by past conservators, and the adjustment of the colors of older retouches. Bart Devolder, the young, dynamic on-site coordinator of the conservation work, explains: “Once we began the project, and the extent of over-painting became clear, the breadth of the work increased, as a committee of international experts decided that the conservators should peel away later additions and resuscitate, therefore, as much of the original work of van Eyck as possible.”

A €1.3 million grant (80 percent of which came from the Flemish government, with 20 percent from the private sponsor, the Baillet Latour Fund) and four years later, only one-third of the altarpiece has been restored (the exterior wing panels of the polyptych). But the discoveries are astonishing, and they tell a story of fraternal love and admiration as beautiful as any in history.

Surprise discoveries included silver leaf painted onto the frames themselves, which produces a three-dimensional effect and makes the overall painting look very different. The inscription declaring that Jan was “second in art” and that Hubert was the really great one was proven to have been part of the original painting, almost certainly added by Jan’s own hand as a humble homage to his late brother. The analysis also showed that many different “hands” were involved in the painting.

Computer analysis of the paint, carried out by a team from the University of Ghent, clearly demonstrates that different hands were involved. Just as linguistic analysis programs can spot authorial styles, and so claim that at least five different people “wrote” the Pentateuch of the Old Testament, computers can differentiate painterly techniques, even subtle ones (one man’s cross-hatching differs enough from another’s from the same studio, just as handwriting differs, even though we have all learned cursive). That different “hands” were involved is no surprise, since van Eyck, like most artists of his time, ran a studio, and works “by” him were in fact collaborative products of that studio. The analysis simply proves the point. But no works certain to have been painted by Hubert are known, so it is not yet possible to tell whether his brushstrokes are among those visible today. If another work could be firmly linked to Hubert’s hand, it could be compared via the same software to the Ghent Altarpiece to see if his touch appears there. Some mysteries remain for future art detectives to solve.
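The authorship-attribution analogy above can be sketched in miniature. The toy Python example below is purely illustrative and has nothing to do with the Ghent team’s actual software: it invents two “hands” whose cross-hatching differs in stroke angle and spacing, then attributes an unseen stroke to the nearest hand’s average — the simplest version of the idea that painterly technique, like handwriting, is statistically distinguishable. All numbers and names are made up.

```python
import random

random.seed(42)

# Two hypothetical "hands": each painter's cross-hatching is modeled
# by a characteristic stroke angle (degrees) and spacing (mm).
# The values are invented purely for illustration.
def strokes(angle_mean, spacing_mean, n=200):
    return [(random.gauss(angle_mean, 3.0), random.gauss(spacing_mean, 0.2))
            for _ in range(n)]

hand_a = strokes(42.0, 1.5)   # e.g. the master's habitual hatching
hand_b = strokes(55.0, 2.3)   # an assistant in the same studio

def centroid(samples):
    n = len(samples)
    return (sum(s[0] for s in samples) / n, sum(s[1] for s in samples) / n)

CENTROIDS = {"A": centroid(hand_a), "B": centroid(hand_b)}

def attribute(stroke):
    """Assign a stroke to the hand whose average habits it most resembles."""
    def sq_dist(c):
        return (stroke[0] - c[0]) ** 2 + (stroke[1] - c[1]) ** 2
    return min(CENTROIDS, key=lambda h: sq_dist(CENTROIDS[h]))

# An unseen stroke close to hand A's habits is attributed to A.
print(attribute((43.1, 1.6)))  # A
print(attribute((54.0, 2.2)))  # B
```

Real stylometric systems use far richer features and classifiers, but the principle, comparing measurable habits against known exemplars, is the same, which is why the lack of any secure Hubert exemplar blocks the attribution.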

“Damage was apparent in x-rays of the two painted donor figures,” explains Devolder, “and we assumed that, in cleaning away overpainting and varnish layers, they would expose the damaged layer.” It was first thought that the damage had taken place during the initial painting phase, perhaps in Hubert’s studio, and that Jan then “fixed it” by painting over it, thereby also repairing his brother’s legacy. But it later proved to be a 16th- or early 17th-century overpaint.

The conventional dating of the painting was likewise confirmed through dendrochronology (the panels came from the same tree), likely disproving a recent theory that the work was finished many years later than the 1432 date most scholars accept. “During the recent conservation campaign, two additional panels, one from the painting of Eve and one plank from the panel of the hermits, were dendrochronologically tested by KIK-IRPA and shown to have come from the same tree trunk,” Devolder notes. “In an earlier study, a different pair of panels likewise matched.”

It is unlikely that panels from the same tree would remain in van Eyck’s studio for a decade before being used in different sections of the same painting, so it is safe to let the current estimate stand: the work was completed in 1432 and installed as a backdrop for the baptism of the son of Duke Philip the Good of Burgundy (van Eyck’s patron; the painter also acted as godfather to his son). This also suggests that Jan immediately took up the project of his late brother, aware of its importance to his brother’s legacy and to his own burgeoning career, rather than setting it aside and only “getting to it” later on.

The biggest discovery is that up to 70 percent of the work was found to contain over-painting, or later painters adding their own touch to the original, whether for restoration or editorial reasons. If, for centuries, scholars have based their interpretation on a careful analysis of every detail, and it now turns out that some of those details were never part of the original conception of the work, then the reading of the work must be reexamined.

The current round of funding (which was already increased once) allowed for a complete exploration and restoration only of the exterior of the wing panels. Yet the one-third that has been fully restored has revealed such a wealth of information, requiring every chapter and article on the painting to be rewritten, that it raises the question of what might be revealed if, in the future, the rest of the work can be similarly explored. While art historians are already primed to rework their van Eyck publications, there may be more discoveries to come.

Noah Charney is a Salon arts columnist and professor specializing in art crime, and author of “The Art of Forgery” (Phaidon).

Clinton: The Silicon Valley Candidate

By refusing to release the transcripts of her paid speeches to Wall Street bankers, Democratic presidential candidate Hillary Clinton cast doubt on her independence from the crooks who run the financial system.  By contrast, Clinton’s program for “technology and innovation policy” has been an open book since June 2016.  What she publicized is as revealing – and as disturbing – as what she tried to keep secret.

Clinton paints her tech agenda in appealing terms. She says it is about reducing social and economic inequality, creating good jobs, and bridging the digital divide. The real goals – and beneficiaries – are different. The document was described as “a love letter to Silicon Valley” by one journalist,[1] and as a “Silicon Valley wish list” by the Washington Post.[2]

On the domestic side, Clinton promises to invest in STEM education and to pursue immigration reform that would expand the STEM workforce by granting green cards to foreign workers who have earned STEM degrees in the US. The internet industry has been lobbying Congress for years to reform US immigration policy in order to gain flexibility in hiring, ease access to a global pool of skilled labor, and weaken employees’ bargaining power.[3]

Clinton’s blanket endorsement of online education opens new room for an odious private industry.  With buzzwords like “entrepreneurship,” “competitive,” and “bootstrap,” Clinton wants to “leverage technology”: by “delivering high-speed broadband to all Americans” she declares it will be feasible to provide “wrap-around learning for our students in the home and in our schools.”[4] Absent an overt commitment to public education, this is an encouragement to online vendors to renew their attack on the U.S. education system – despite a track record of failure and flagrant corruption. Still more deceitful is Hillary’s lack of acknowledgment of a personal conflict of interest.  According to a Financial Times analysis, after stepping down as Secretary of State in 2013, Hillary accepted hundreds of thousands of dollars for speeches to private education providers; her husband Bill has “earned” something like $21 million from for-profit education companies since 2010.[5]

Clinton’s proposal for access to high-speed Internet for all by 2020 would further relax regulation to help the Internet industry build new networks, tap into existing public infrastructure, and encourage “public and private” partnerships. These are euphemisms for corporate welfare, after the fashion of the Google Fiber project – which is substantially subsidized by taxpayers, as cities lease land to the giant company for its broadband project at far below market value and offer city services for free or below cost.[6] Clinton’s policy program also backs the 5G wireless network initiative and the release of unlicensed spectrum to fuel the “Internet of Things” (IoT). 5G wireless and the IoT are a solution in search of a problem – unless you are a corporate supplier or a business user of networks. This is an unacknowledged program to accelerate and expand digital commodification.

Clinton’s international plans are equally manipulative. She will press for “an open Internet abroad,” that is, for “internet freedom” and “free flow of information across borders.” Despite the powerful appeal of this rhetoric, which she exploited systematically when she was Secretary of State, Clinton actually is pushing to bulwark U.S. big capital in general, and U.S. internet and media industries in particular. Secretary Clinton’s major speech on Internet freedom[7] in 2010 came mere days after Google’s exit from China, supposedly on grounds of principle, making it plain that the two interventions – one private, one public – were coordinated elements of a single campaign. Outside the United States, especially since the disclosures by Edward Snowden in 2013, it is increasingly well-understood that the rhetoric of human rights is a smokescreen for furthering U.S. business interests.[8] Reviving this approach is cynical electioneering rather than an endeavor to advance human rights or, indeed, more just international relations.

This in turn provides the context in which to understand Clinton’s vow to support the “multi-stakeholder” approach to Internet governance.  “Multi-stakeholderism” endows private corporations with public responsibilities, while it downgrades the ability of governments to influence Internet policy – as they have tried to do, notably, in the United Nations.  By shifting the domain in this way, the multi-stakeholder model actually reduces the institutional room available to challenge U.S. power over the global Internet.  It was for this very reason that the Obama Administration recently elevated multi-stakeholderism into the reigning principle for global Internet governance:  On 1 October, the U.S. Commerce Department preempted (other) governments from exercising a formal role.

This is, once again, the preferred agenda of Silicon Valley.[9] Aaron Cooper, vice president of strategic initiatives for the Software Alliance, a Washington trade group representing software developers, crowed in a Washington Post interview, “A lot of the proposals that are in the Clinton initiative are consistent with the broad themes that [we] and other tech associations have been talking about, so we’re very pleased.”[10]

To build up her policy platform in this vital field, Clinton has assembled a network of more than 100 tech and telecom advisors.[11] The members of this shadowy group have not been named, but they are said to include former advisors and officials, affiliates of think-tanks and trade groups, and executives at media corporations.  Apparently, just as with respect to Wall Street, the public has no right to know who is shaping Clinton’s program for technology.  Equally clearly, however, it is meant to resonate with Apple’s Tim Cook, Tesla CEO Elon Musk, and Facebook co-founder Dustin Moskovitz – all of whom have publicly rallied to her campaign.[12]

Some might choose to emphasize that the Republican candidate, Donald Trump, has not even bothered to hint to voters about his tech and information policy. Fair enough. Clinton’s program, though, is both surreptitious and plutocratic. It’s not that she’s not good enough – it’s that she’s in the wrong camp. Britain’s Labour Party leader Jeremy Corbyn’s “Digital Democracy” program offers a better entry point for thinking about democratic information policy, as it includes publicly financed universal internet access, fair wages for cultural workers, release of publicly funded software and hardware as open source, cooperative ownership of digital platforms and more. That would be a start.


[1] Noah Kulwin, “Hillary Clinton’s tech policy proposal sounds like a love letter to Silicon Valley,” recode, June 28, 2016.

[2] Brian Fung, “Hillary Clinton’s tech agenda is really a huge economic plan in disguise,” Washington Post, June 28, 2016.

[3] D. Schiller and S. Yeo, “Science and Engineering Into Digital Capitalism,” in D. Tyfield, R. Lave, S. Randalls and C. Thorpe, eds., Routledge Handbook of the Political Economy of Science (forthcoming, fall 2016).

[4] “Hillary Clinton’s Initiative on Technology and Innovation,” The Briefing, June 27, 2016.

[5] Gary Silverman, “Hillary and Bill Clinton: The For-Profit Partnership,” Financial Times, July 21, 2016.

[6] Kenric Ward, “Taxpayers subsidize Google Fiber in this city with bargain land leases,” August 16, 2016; Timothy B. Lee, “How Kansas City taxpayers support Google Fiber,” Ars Technica, September 7, 2012.

[7] Hillary Rodham Clinton, Secretary of State, “Remarks on Internet Freedom,” January 21, 2010, The Newseum, Washington, DC.

[8] Dan Schiller, Digital Depression.  Urbana: University of Illinois Press, 2014: 161-69.

[9] Heather Greenfield, “CCIA Applauds Hillary Clinton’s Tech Agenda,” Computer & Communications Industry Association, June 28, 2016.

[10] Brian Fung, “Hillary Clinton’s tech agenda is really a huge economic plan in disguise,” Washington Post, June 28, 2016.

[11] Margaret Harding McGill & Nancy Scola, “Clinton quietly amasses tech policy corps,” Politico, August 24, 2016; Steven Levy, “How Hillary Clinton Adopted the Wonkiest Tech Policy Ever,” Backchannel, August 29, 2016; Tony Romm, “Inside Clinton’s tech policy circle,” Politico, June 7, 2016.

[12] Levy Sumagaysay, “Facebook co-founder pledges $20 million to help Hillary Clinton defeat Donald Trump,” The Mercury News, September 9, 2016; Russell Brandom, “Tim Cook is hosting a fundraiser for Hillary Clinton,” The Verge, July 29, 2016.

This article originally appeared on Information Observatory.

Dan Schiller is a historian of information and communications at the University of Illinois. His most recent book is Digital Depression: Information Technology and Economic Crisis. Shinjoung Yeo is an assistant professor at Loughborough University in London.


Werner Herzog’s Lo and Behold: Reveries of The Connected World

Exploring the origins and impact of the Internet

By Kevin Reed
8 October 2016

German filmmaker Werner Herzog’s new documentary Lo and Behold: Reveries of The Connected World was released in August at select theatres across the US and for home viewing from various on-demand services. The movie—which examines the origins and implications of the Internet and related technologies such as artificial intelligence, robotics, the Internet of Things and space travel—has received generally favorable reviews following its premiere at the Sundance Film Festival in late January.


The work is divided into ten segments with titles like “The Early Days,” “The Glory of the Net” and “The Future,” with Herzog serving as narrator. Through a series of interviews, the director stitches his disparate topics together to explain something about how the Internet and World Wide Web were created and then to paint a troubling picture of the globally interconnected landscape.

The movie begins with a visit to the campus of the University of California, Los Angeles (UCLA), the birthplace—along with the Stanford Research Institute—of the Internet. The first interviewee is Leonard Kleinrock, one of the research scientists responsible for the development of the precursor of the Internet called ARPANET (Advanced Research Projects Agency Network of the US Defense Department). At age 82, Kleinrock is obviously thrilled at the opportunity to describe how the first-ever electronic message was transmitted between two points on the network.

As he opens a cabinet of early Internet hardware called a “packet switch,” Kleinrock describes in detail the events of October 29, 1969 at 10:30 pm. As the UCLA sender began typing the word “login”—and checking by telephone with his counterpart at Stanford University—only the first two characters of the message were successfully transmitted before his computer crashed. Despite this seemingly failed communication attempt, Kleinrock explains that “Lo” was an entirely appropriate word for the accomplishment. “It was from here,” he says, “that a revolution began.”

With Herzog occasionally interjecting off-camera during the interviews, the director’s goal seems clear enough. He wants the audience to share his sense of wonder and amazement at the transformative impact of the Internet. This is reinforced by equally intriguing interviews with several others who participated in the birth of the Net. The enthusiasm—and clarity on complex topics—expressed by these pioneers leaves one with a desire to hear more of their stories of discovery and progress.

As the film goes on, however, it emerges that Herzog has another plan; he abandons any historically logical accounting of the Internet and begins eclectically focusing on its various byproducts and offshoots, limitations and negative consequences. Herzog’s interview with Ted Nelson—a philosopher and sociologist credited with theoretically anticipating the World Wide Web and coining the terms “hypertext” and “hypermedia”—becomes the starting point for these wanderings.

Werner Herzog in 2007 (Photo: Erinc Salor)

As a student at Harvard University, Ted Nelson began working in 1960 on a computer system called Project Xanadu that he conceived of as “a digital repository scheme for world-wide electronic publishing.” Nelson also wrote an important book in 1974 entitled Computer Lib/Dream Machines, a kind of manifesto for hobbyists on the social and revolutionary implications of the personal computer.

Although it is left unexplained in the film, the Internet is the technical infrastructure upon which the World Wide Web was developed beginning in 1989. Ever since the widespread adoption of the World Wide Web, Nelson has been a public critic of its structure and implementation, especially HTML (Hypertext Markup Language). He has called HTML a gross oversimplification of his pioneering ideas and said that it “trivializes our original hypertext model with one-way, ever-breaking links and no management of version or contents.”
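Nelson’s complaint about “one-way, ever-breaking links” can be made concrete with a toy model. The Python sketch below is hypothetical — the class names and data structures are invented, and Xanadu’s real design is far richer — but it shows the structural difference: an HTML-style link knows only its target, so deleting the target silently leaves the link dangling, while a two-way registry lets the system repair every inbound link when a page is removed.

```python
class OneWayWeb:
    """HTML-style links: each page knows only what it points to."""
    def __init__(self):
        self.links = {}  # page -> set of targets
    def link(self, src, dst):
        self.links.setdefault(src, set()).add(dst)
    def broken(self, existing_pages):
        # Nothing updates a link when its target vanishes,
        # so broken links can only be found by scanning.
        return [(s, d) for s, ds in self.links.items()
                for d in ds if d not in existing_pages]

class TwoWayWeb:
    """Xanadu-style links: the target also records who points at it,
    so removing a page can clean up every inbound link."""
    def __init__(self):
        self.out = {}
        self.inbound = {}
    def link(self, src, dst):
        self.out.setdefault(src, set()).add(dst)
        self.inbound.setdefault(dst, set()).add(src)
    def remove_page(self, page):
        # Every page linking here is known and can be repaired.
        for src in self.inbound.pop(page, set()):
            self.out[src].discard(page)

one_way = OneWayWeb()
one_way.link("essay.html", "source.html")
pages = {"essay.html"}                      # source.html was deleted
print(one_way.broken(pages))                # [('essay.html', 'source.html')]

two_way = TwoWayWeb()
two_way.link("essay.html", "source.html")
two_way.remove_page("source.html")
print(two_way.out["essay.html"])            # set(): no dangling link
```

The cost of the two-way model, of course, is a global registry of inbound links, which is one reason the simpler one-way design spread so easily across an uncoordinated network.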

Why did HTML and the World Wide Web emerge as the dominant graphical layer of the Internet, as opposed to a competing set of ideas? Is it possible that a more comprehensive solution, one that expressed the potential of the technology more completely and proved more effective and useful, could have been adopted instead?

One aspect of the rapid global adoption of the World Wide Web—originally created by Tim Berners-Lee in 1989 at CERN in Switzerland—was the open access policy of its inventor. As Berners-Lee, who is also interviewed in the film, has explained, “Had the technology been proprietary, and in my total control, it would probably not have taken off. You can’t propose that something be a universal space and at the same time keep control of it.” However, while the non-proprietary nature of Berners-Lee’s creation was a significant factor in its success, it does not automatically follow that the core technology of the World Wide Web represented an advance over the ideas represented by others such as Ted Nelson.

These are important and complex questions, repeated again and again in the evolution of the information revolution over the past half-century. Exploring them further would point to a fundamental problem of modern technology: the contradiction between “what is possible” and “what is required” within the economic and political framework of global capitalist society.

Showing little interest in exploring these matters more deeply, Lo and Behold goes on to present Nelson—a gifted but socially awkward man—as something of a high-tech Don Quixote. Herzog concludes the interview with the quip, “To us you appear to be the only one around who is clinically sane.”


Having made nearly forty documentaries in his five-decade career, Herzog is accomplished at gaining access to people with compelling stories to tell. The interview with Elon Musk, founder of Tesla Motors and SpaceX, raises important points. A consistently outspoken opponent of artificial intelligence, Musk makes the following warning: “[I]f you were running a hedge fund or private equity fund and all I want my AI to do is maximize the value of my portfolio, then AI could decide to short consumer stocks, go long on defense stocks, and start a war. Ah, and that obviously would be quite bad.”

This possible scenario under capitalism is not explored any further. While the US military is never specifically mentioned, it is remarkable that the only reference to war in the course of a 98-minute critical look at modern technology comes from a billionaire entrepreneur. Above all, Musk’s comments show that the new technologies by themselves bring no fundamental change to the class relations within capitalist society; indeed the Internet and artificial intelligence in the hands of the ruling elite enable a further and accelerated integration of financial parasitism and imperialist war.

Given that Lo and Behold is sponsored by Netscout Systems, a major corporate supplier of networking hardware and software, it is possible that such topics were off limits. However, the lack of a broader or coherent critical perspective is not something new for Werner Herzog.

While he made some interesting and disturbing fiction films in the 1970s (The Enigma of Kaspar Hauser, Aguirre: The Wrath of God and Stroszek in particular), the end of the period of radicalization had an impact on Herzog, as it did on other New German Cinema directors like R. W. Fassbinder, Wim Wenders and Volker Schlöndorff. There was always an overwrought element in Herzog’s work and an emphasis on physical or spiritual excess, without much reference to the content of the action.

In media interviews about his latest film, Herzog has been careful to explain that he does not blame technology itself for the aberrations depicted. “The Internet is not good or evil, dark or light hearted,” he says, “it is human beings” that are the problem. Following the advice of experts, Herzog suggests that people need some kind of “filter” to help them use the technology appropriately.

Leaving things so very much at the level of the individual does not begin to get at the source of the contradiction between the positive and destructive potential of modern technology. This contradiction, so clearly demonstrated during World War II with nuclear technology, is itself an expression of the alternatives facing mankind of socialism versus barbarism.

Lack of understanding about—or refusal to acknowledge—the deeper social and class interests embedded in the forms of human technology leads to only two possible conclusions: (1) the utopian idea that technology develops automatically, without wars and crises, toward the improvement of mankind, or (2) the dystopian belief that technological advancement inevitably moves toward an existential threat to humanity, without any hope of a revolutionary transformation of society. While Herzog and his producers believe they have provided a balanced perspective between these two, in the end Lo and Behold comes down on the latter side.


The Ugly Truth Behind Apple’s New iPhone 7

Posted on Sep 20, 2016

New product releases from Apple are often a time for analysis, comparison and celebration. But the arrival of the iPhone 7 has brought unwanted attention to the darker side of the company: globalization, oppression and greed.

In a report from The Guardian, Aditya Chakrabortty says that Apple oppresses Chinese workers, does not pay its fair share of taxes and deprives Americans of high-paying jobs while making enormous profits.

Apple’s iPhones are assembled by three firms in China: Foxconn, Wistron and Pegatron. While Apple CEO Tim Cook says the company cares about all its workers—calling any claims to the contrary “patently false and offensive”—the facts on the ground show the opposite.

In 2010, Foxconn employees were killing themselves in alarming numbers—an estimated 18 attempted suicide, and 14 of them died. The company responded by putting up suicide-prevention netting to catch workers before they fell to their deaths. Apple vowed to improve worker conditions at the plant, yet in August, after reports surfaced that changes in overtime policies had caused great stress among workers, two more employees killed themselves.

At the Wistron factory, a Danish human-rights organization found that thousands of students are forced to work the same hours as adults, for less pay. Students were told they had to work if they wanted to receive their diplomas. The use of young workers is not a new revelation about Apple. In 2010, the company admitted that 15-year-old children were working in factories supplying Apple products. At a plant run by Wintek in Suzhou, China, workers reportedly were being poisoned by n-hexane, a toxic chemical that causes muscular atrophy and blurred eyesight.

At Pegatron—the other iPhone assembler—U.S.-based China Labor Watch found staff members work 12-hour days, six days a week. They are forced to work overtime, and 1½ hours of it are unpaid. One researcher working there had to stand during his entire 10½-hour shift. When the local government raised the minimum wage, Pegatron cut subsidies for medical insurance.

The Guardian reports:

While iPhone workers for Pegatron saw their hourly pay drop to just $1.60 an hour, Apple remained the most profitable big company in America, pulling in over $47bn in profit in 2015 alone.

What does this add up to? At $231bn, Apple has a bigger cash pile than the US government, but apparently won’t spend even a sliver on improving conditions for those who actually make its money. Nor will it make those iPhones in America, which would create jobs and still leave it as the most profitable smartphone in the world.

It would rather accrue more profits, to go to those who hold Apple stock—such as company boss Tim Cook, whose hoard of company shares is worth $785m. Friends of Cook point to his philanthropy, but while he’s happy to spend on pet projects, he rejects a €13bn tax bill from the EU as “political crap”—while boasting about how he won’t bring Apple’s billions back to the US “until there’s a fair rate … . It doesn’t go that the more you pay, the more patriotic you are.” The tech oligarch seems to think he knows better than 300 million Americans what tax rates their elected government should set.

When the historians of globalisation ask why it died, they will surely find that companies such as Apple form a large part of the answer. Faced with a binary choice between an economic model that lavishly rewarded a few and a populism that makes lavish promises to many, between Cook on the one hand and [Nigel] Farage on the other, the voters went for the one who at least didn’t bang on about “courage”.

According to a new report from Global Justice Now, a group based in the United Kingdom, 69 of the top 100 economies in the world are corporate entities (an increase from 63 a year ago). Apple is one of those corporate entities. With $234 billion in revenue in 2015, Apple is the ninth-largest company in the world and is wealthier than most countries.

I Came to San Francisco to Change My Life: I Found a Tribe of Depressed Workaholics Living on Top of One Another

Hacker House Blues: my life with 12 programmers, 2 rooms and one 21st-century dream.
By David Garczynski / Salon September 18, 2016

I might have been trespassing up there, but I would often go to the 19th-floor business lounge to work and study. Located on the top floor of a luxury high-rise in the SOMA district of San Francisco, the lounge was only accessible to residents of the building. Yet for a while I found myself there almost every day.

Seventeen floors below, I lived in an illegal Airbnb with 12 roommates split between two rooms. There were six people packed into my bedroom alone — seven, if you included the guy who lived in the closet. Three bunk beds adorned the walls, and I was fortunate enough to score a bottom bunk. Unfortunately, though, it was not the one by the window, which, with the exception of one dim lamp, was the only source of light in the room. Even at midday, the room never lit up much more than a shadowed cave. At most hours of the day, you could find someone sleeping in there. Getting in and out of bed was a precarious dance in the darkness to avoid stepping into the suitcases on the floor, out of which most of us lived.

In the shared kitchen, the sink more often than not held a giant pile of dishes, and the fridge, packed with everyone’s groceries and leftovers, emanated a slightly moldy aroma. Mixed in there were the half-eaten meals and unfinished condiment jars of tenants who had long since moved out — all left to rot, but often too far buried in the mass of food to be located.

Let’s just say the room was not as advertised.

The Airbnb posting did boast of access to a 24-hour gym, roof deck and bocce courts. The building has an indoor basketball court, an outdoor hot tub and even a rock climbing wall. The 19th-floor business lounge alone comes with a pool table, a porch, several flat-screen TVs and an enviable view of much of San Francisco. For $1,200 a month, it all seemed worth it. The post did say it was a four-person apartment, not 13, and included a picture of a sunny room with a pair of bunk beds, but I figured for a short sub-lease while I attended coding school, it wouldn’t be so bad. The reviews, after all, were pretty positive, too: mostly 5-stars. However, none of them mentioned the fact that I wouldn’t even be given a front door key.

I’d have to sneak into the building every night. The only way I entered the building was by waiting until someone exited or entered, and then I’d slip through the door before it closed. From there I’d walk straight past the front desk guard and head to the bank of elevators. Despite my nerves, that part was surprisingly easy. The building caters to the young tech elite, so a backwards hat and a collegiate T-shirt practically made me invisible. When I got to my floor, I’d make sure none of the neighbors were watching, and if no one was around, I’d stand on my tiptoes and grab the communal key hidden atop the exit sign. Once the door was unlocked, I’d return the key to its perch for the next tenant to use.

I had moved to San Francisco to break into the tech world after being accepted into one of those ubiquitous 12-week coding boot camps. I had dreams of becoming a programmer, hoping one day I could land a remote contracting gig — a job where I could work from wherever and make a good living. My life would be part ski bum and part professional.

In my mid-20s uncertainty, the coding route seemed to have the most promise — high paychecks in companies that prized work-life balance, or so it seemed from afar. I knew the road wouldn’t be easy, but any time I’d mention my ambitions to family and friends, they responded with resounding positivity, affirming my belief that it was a well-worn path to an obtainable goal.

All of the people in that Airbnb were programmers. Some were trying to break into the industry through boot camps, but most were already full-time professional coders. They headed out early in the morning to their jobs at start-ups in the neighborhood. A lot of them hailed from some of the top schools in the country: Stanford, MIT, Dartmouth. If I was going to get through my program, I needed to rely on them, academically and emotionally. Once the program started up, I would find myself coding 15 hours a day during the week, with that number mercifully dropping to 10-12 hours on the weekends. Late at night, when my stressed-out thoughts would form an ever-intensifying feedback loop of questioning despair — What am I doing? Is this really worth it? — I would need to be able to look to the people around me as living reminders of the possibility of my goals.

Every night, the people whose jobs I coveted would come home from 10- to 12-hour shifts in front of a computer and proceed to the couch, where they’d open up their laptops and spend the remaining hours of the night in silence, sifting through more and more lines of code. Beyond preternatural math abilities and a penchant for problem solving, it seemed most didn’t have much in the way of life skills. They weren’t who I thought they would be — a community of intelligent and inspiring men and women bouncing ideas back and forth. Rather they were boys and girls, coddled by day in the security of companies that fed them, entertained them and nursed them. At home, they could barely take care of themselves.

Take for example the programmer who lived in my closet: Every night he’d come home around 9 p.m. He’d sit on the couch, pour himself a bowl of cereal and eat in silence. Then he would grab his laptop and head directly into the closet — a so-called “private room” listed on Airbnb for $1,400 a month. It was the only time I’d ever see him. The only way I could tell he was home was by the glow of his laptop seeping out from under the closet door. Hours later, deep into the night, the light would go out, and I would know he had gone to sleep. By the time I arrived, he had been living there for 16 months, in a windowless closet with a thin mattress placed right on the floor. During the day he coded for Pinterest. Yeah . . . that Pinterest.

Maybe there were people working in this city who were living out the tech dreams of everyone else, but I’ve realized the number of people who dream about it far outnumbers the number who obtain it. Everyone I spoke to in this town seemed doe-eyed about the future, even while living in illegal Airbnbs and working at failing startups across the city.

The odds weren’t in my favor. Most likely I’d find myself in the 92 percent of start-ups that go under in three years, trapped like some of my friends — much smarter and better programmers than I’ll ever be — bouncing from failing company to failing company.

Or maybe not. Maybe I would make it, only to become like my friends who earn six-figure paychecks and still lament that they’ll never be able to buy a home here. What illusions could I continue to maintain then?

There was a good chance I’d find myself in a situation like another roommate’s. During salary negotiations for a job at a start-up, he was encouraged to accept the pay tier with a lower salary but higher equity stake. Now he works 12-hour days just to try to keep the company (and his potential payout) afloat on a paycheck not much higher than some entry-level, non-programming jobs.

The most likely scenario, however, was that I’d become like the mid-30s man who slept in the bunk above me. The reality of his situation slowly slipped him into a depressive state, until he was sleeping most hours of the day. The rest of his waking hours were spent walking around slumped and gloomy.

Programming for me was never supposed to be more than a means to an end, but that end started to feel farther and farther away. The longer I lived in that Airbnb, the more I realized my dreams would never be met. In all likelihood I would be swept up in an economy here that trades on the hopes and dreams of people clamoring to break in. The illegal Airbnbs that dot the city can charge what they do because there is no shortage of people wanting in. There is always another smart kid around the corner who believes that, despite the working and living conditions, this is just the first step to striking it big. Never tell them the odds.

I had hinged my happiness on an illusion and naively fought to get into a community that wouldn’t help me advance in the direction of my dreams. Maybe in the end I would get everything I needed or at least a nice paycheck, but I’d lose all of myself in the process. I’d be churned and beaten by the underbelly of the tech world here long before I could ever make it out.

If you are interested, it’s not that hard to sneak up to the 19th-floor lounge. I still do sometimes, despite having long since moved out and given up programming. From up there the view of San Francisco takes on the artificial quality of a miniature model. To the north, you’ll see a sea of tech start-ups, their signs and symbols a wild mash of colors. From this distance, it can all look so peaceful. Just know that somewhere in that view is another “hacker house” with bright kids living in almost migrant-worker conditions. Somewhere out there is a coding boot camp with slightly inflated numbers, selling a dream. Their fluorescent halls and cramped bedrooms are filled with the perennially hopeful looking to take the place of those who have already realized this dream isn’t all it’s cracked up to be.

It is a beautiful view, though. Just one I no longer want for myself.

David Garczynski has lived in the Bay Area for one year now. In that time, he’s lived in an illegal Airbnb, on his cousin’s couch, in two short-term subleases, and has been evicted once. He just signed an official (and legal) lease last week.

Standing Up to Apple

Posted on Sep 4, 2016

By Robert Reich /

For years, Washington lawmakers on both sides of the aisle have attacked big corporations for avoiding taxes by parking their profits overseas. Last week the European Union did something about it.

The European Union’s executive commission ordered Ireland to collect $14.5 billion in back taxes from Apple.

But rather than congratulate Europe for standing up to Apple, official Washington is outraged.

Republican House Speaker Paul Ryan calls it an “awful” decision. Democratic Senator Charles Schumer, who’s likely to become Senate Majority Leader next year, says it’s “a cheap money grab by the European Commission.” Republican Orrin Hatch, chairman of the Senate Finance Committee, accuses Europe of “targeting” American businesses. Democratic Senator Ron Wyden says it “undermines our tax treaties and paints a target on American firms in the eyes of foreign governments.”


These are taxes America should have required Apple to pay to the U.S. Treasury. But we didn’t – because Ryan, Schumer, Hatch, Wyden, and other inhabitants of Capitol Hill haven’t been able to agree on how to close the loophole that has allowed Apple, and many other global American corporations, to avoid paying the corporate income taxes they owe.

Let’s be clear. The products Apple sells abroad are designed and developed in the United States. So the foreign royalties Apple collects on them logically should be treated as corporate income to Apple here in America.

But Apple and other Big Tech corporations like Google and Amazon – along with much of Big Pharma, and even Starbucks – have avoided paying hundreds of billions of dollars in taxes on their worldwide earnings because they don’t really sell things like cars or refrigerators or television sets that they make here and ship abroad.

Their major assets are designs, software, and patented ideas.

Although most of this intellectual capital originates here, it can be transferred instantly around the world – finding its way into a vast array of products and services abroad.

Intellectual capital is hard to see, measure, value, and track. So it’s a perfect vehicle for tax avoidance.

Apple transfers its intellectual capital to an Apple subsidiary in Ireland, which then “sells” Apple products all over Europe. And it keeps most of the money there. Ireland has been more than happy to oblige by imposing on Apple a tax rate that’s laughably low – 0.005 percent in 2014, for example.

Apple is America’s most profitable high-tech company and also one of America’s biggest tax cheats. It maintains a worldwide network of tax havens to park its global profits, some of which don’t even have any employees.

Sitting atop this network is “Apple Operations International,” incorporated in Ireland. Never mind that Apple Operations International keeps its bank accounts and records in the United States and holds board meetings in California. It’s still considered Irish. And its main job is allocating Apple’s earnings among its international subsidiaries in order to keep taxes as low as possible.

As a result, over the last decade alone Apple has amassed a stunning $231.5 billion cash pile abroad, subject to little or no tax.

This hasn’t stopped Apple from richly rewarding its American shareholders with fat dividends and stock buybacks that raise share prices. But rather than use its overseas cash to fund these, Apple has taken on billions of dollars of additional debt.

It’s a scam, at the expense of American taxpayers.

Add in the worldwide sales of America’s Big Tech, Big Pharma, and Big Franchise operations, and the scam is sizeable. Over $2 trillion in U.S. corporate profits is now parked abroad – all of it escaping the U.S. corporate income tax.

To make up the difference, you and I and millions of other Americans have to pay more in income taxes and payroll taxes to finance the U.S. government.

Why can’t this loophole be closed? In fact, what’s stopping the Internal Revenue Service from doing what the European Commission just did – telling Apple it owes tens of billions of dollars, but to America rather than to Ireland?

The dirty little secret is the loophole could be closed, and the IRS could probably do what Europe just did even under existing law. But neither will happen because Big Tech, Big Pharma, and Big Franchise have enough political clout to stop them from happening.

Ironically, the European Commission’s ruling is having the opposite effect in the United States. It’s adding fuel to the demand Apple and other giant U.S. global corporations have been making, that the United States slash taxes on corporations that move their overseas earnings back to the United States.

In other words, they want another tax amnesty.

Congress’s last tax amnesty occurred in 2004, when global U.S. corporations brought back about $300 billion from overseas and paid a tax rate of just 5.25 percent rather than the regular 35 percent U.S. corporate rate.

Corporate executives argued then – as they argue now – that the amnesty would allow them to reinvest those earnings in America.

The argument was baloney then and it’s baloney now. A study by the National Bureau of Economic Research found that 92 percent of the repatriated cash was used to pay for dividends, share buybacks or executive bonuses.

“Repatriations did not lead to an increase in domestic investment, employment or R.&D., even for the firms that lobbied for the tax holiday stating these intentions,” the study concluded.

The political establishment in Washington is preparing for another tax amnesty nonetheless. In a white paper published last week, the Treasury Department warned that an American corporation like Apple, ordered by the European Commission to make tax repayments, might eventually use such payments to offset its U.S. tax bill “when its offshore earnings are repatriated or treated as repatriated as part of possible U.S. tax reform.”

Rather than another tax amnesty, we need a crackdown on corporate tax avoidance.

Instead of criticizing the European Commission for forcing Apple to pay up, American politicians ought to be thanking Europe for standing up to Apple.

At least someone has.