Monolithic corporations aren’t our saviors — they’re the central part of the problem.

Tech Companies Are Peddling a Phony Version of Security, Using the Govt. as the Bogeyman

http://kielarowski.files.wordpress.com/2014/11/b4817-tech.png?w=399&h=337

This week the USA Freedom Act was blocked in the Senate as it failed to garner the 60 votes required to move forward. Presumably the bill would have imposed limits on NSA surveillance. Careful scrutiny of the bill’s text, however, reveals yet another mere gesture of reform, one that would codify and entrench existing surveillance capabilities rather than eliminate them.

Glenn Greenwald, commenting from his perch at the Intercept, opined:

“All of that illustrates what is, to me, the most important point from all of this: the last place one should look to impose limits on the powers of the U.S. government is . . . the U.S. government. Governments don’t walk around trying to figure out how to limit their own power, and that’s particularly true of empires.”

Anyone who followed the sweeping deregulation of the financial industry during the Clinton era (the Gramm–Leach–Bliley Act of 1999, which effectively repealed Glass-Steagall, and the Commodity Futures Modernization Act of 2000) immediately sees through Greenwald’s impromptu dogma. Let’s not forget the energy market deregulation in California and the subsequent manipulation that resulted in blackouts throughout the state. Ditto for the latest rollback of arms export controls that opened up markets for the defense industry. And never mind all those hi-tech companies that want to loosen H-1B restrictions.

The truth is that the government is more than happy to cede power and authority… just as long as doing so serves the corporate factions that have achieved state capture. The “empire” Greenwald speaks of is a corporate empire. In concrete analytic results that affirm Thomas Ferguson’s Investment Theory of Party Competition, researchers from Princeton and Northwestern University conclude that:

“Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.”

Glenn’s stance reveals a broader libertarian theme, one that the Koch brothers would no doubt find amenable: the government is suspect, and efforts to rein in mass interception must therefore arise from corporate entities. Greenwald appears to believe that the market will solve everything. Specifically, he postulates that consumer demand for security will drive companies to offer products that protect user privacy, adopt “strong” encryption, etc.

The Primacy of Security Theater

Certainly large hi-tech companies care about quarterly earnings. That definitely explains all of the tax evasion, wage ceilings, and the slave labor. But these same companies would be hard-pressed to actually protect user privacy because spying on users is a fundamental part of their business model. Like government spies, corporate spies collect and monetize oceans of data.

Furthermore, hi-tech players don’t need to actually bullet-proof their products to win back customers. It’s far more cost-effective to simply manufacture the perception of better security: slap on some crypto, flood the news with public relations pieces, and get some government officials (e.g. James Comey, Robert Hannigan, and Stewart Baker) to whine visibly about the purported enhancements in order to lend the marketing campaign credibility. The techno-libertarians of Silicon Valley are masters of Security Theater.

Witness, if you will, Microsoft’s litany of assurances about security over the years, followed predictably by an endless train of critical zero-day bugs. Faced with such dissonance, it becomes clear that “security” in high-tech is viewed as a public relations issue, a branding mechanism to boost profits. Greenwald underestimates the contempt that CEOs have for the credulity of their user base, to say nothing of their own workers.

Does allegedly “strong” cryptography offer salvation? Cryptome’s John Young thinks otherwise:

“Encryption is a citizen fraud, bastard progeny of national security, which offers malware insecurity requiring endless ‘improvements’ to correct the innately incorrigible. Its advocates presume it will empower users rather than subject them to ever more vulnerability to shady digital coders complicit with dark coders of law in exploiting fear, uncertainty and doubt.”

Business interests, having lured customers in droves with a myriad of false promises, will go back to secretly cooperating with government spies as they always have: introducing subtle weaknesses into cryptographic protocols, designing backdoors that double as accidental zero-day bugs, building rootkits which hide in plain sight, and handing over user data. In other words, all of the behavior described in Edward Snowden’s documents. Like a jilted lover, consumers will be pacified with a clever sales pitch that conceals deeper corporate subterfuge.

Ultimately it’s a matter of shared class interest. The private sector almost always cooperates with the intelligence services because American spies pursue the long-term prerogatives of neoliberal capitalism: open markets and access to resources the world over. Or perhaps someone has forgotten the taped phone call of Victoria Nuland selecting the next prime minister of Ukraine as the IMF salivated over austerity measures? POTUS caters to his constituents, the corporate ruling class, which in turn conveys its wishes to clandestine services like the CIA. Recall Ed Snowden’s open letter to Brazil:

“These programs were never about terrorism: they’re about economic spying, social control, and diplomatic manipulation. They’re about power.”

To confront the Deep State, Greenwald is essentially advocating that we effect change by acting like consumers instead of constitutionally endowed citizens. This is a grave mistake because profits can be decoupled from genuine security in a society defined by secrecy, propaganda, and state capture. Large monolithic corporations aren’t our saviors. They’re the central part of the problem. We shouldn’t run to the corporate elite to protect us. We should engage politically to retake and remake our republic.

 

Bill Blunden is an independent investigator whose current areas of inquiry include information security, anti-forensics, and institutional analysis.

http://www.alternet.org/tech-companies-are-peddling-phony-version-security-using-govt-bogeyman?akid=12501.265072.yCLOb-&rd=1&src=newsletter1027620&t=29&paging=off&current_page=1#bookmark

You should actually blame America for everything you hate about internet culture

November 21

The tastes of American Internet-users are both well-known and much-derided: Cat videos. Personality quizzes. Lists of things that only people from your generation/alma mater/exact geographic area “understand.”

But in France, it turns out, even viral-content fiends are a bit more … sophistiqués.

“In France, articles about cats do not work,” Buzzfeed’s Scott Lamb told Le Figaro, a leading Parisian paper. Instead, he explained, Buzzfeed’s first year in the country has shown it that “the French love sharing news and politics on social networks – in short, pretty serious stuff.”

This is interesting for two reasons: first, as conclusive proof that the French are irredeemable snobs; second, as a crack in the glossy, understudied facade of what we commonly call “Internet culture.”

When the New York Times’s David Pogue tried to define the term in 2009, he ended up with a series of memes: the “Star Wars” kid, the dancing baby, rickrolling, the exploding whale. Likewise, if you look to anyone who claims to cover the Internet culture space — not only Buzzfeed, but Mashable, Gawker and, yeah, yours truly — their coverage frequently plays on what Lamb calls the “cute and positive” theme. They’re boys who work at Target and have swoopy hair, videos of babies acting like “tiny drunk adults,” hamsters eating burritos and birthday cakes.

That is the meaning we’ve assigned to “Internet culture,” itself an ambiguous term: It’s the fluff and the froth of the global Web.

But Lamb’s observations on Buzzfeed’s international growth would actually seem to suggest something different. Cat memes and other frivolities aren’t the work of an Internet culture. They’re the work of an American one.

American audiences love animals and “light content,” Lamb said, but readers in other countries have reacted differently. Germans were skeptical of the site’s feel-good frivolity, he said, and some Australians were outright “hostile.” Meanwhile, in France — land of la mode and le Michelin — critics immediately complained, right at Buzzfeed’s French launch, that the articles were too fluffy and poorly translated. Instead, Buzzfeed quickly found that readers were more likely to share articles about news, politics and regional identity, particularly in relation to the loved/hated Paris, than they were to share the site’s other fare.

A glance at Buzzfeed’s French page would appear to bear that out. Right now, its top stories “Ça fait le buzz” — that’s making the buzz, for you Americaines — are “21 photos that will make you laugh every time” and “26 images that will make you rethink your whole life.” They’re not making much buzz, though. Neither has earned more than 40,000 clicks — a pittance for the reigning king of virality, particularly in comparison to Buzzfeed’s versions on the English site.

All this goes to show that the things we term “Internet culture” are not necessarily born of the Internet, itself — the Internet is everywhere, but the insatiable thirst for cat videos is not. If you want to complain about dumb memes or clickbait or other apparent instances of socially sanctioned vapidity, blame America: We started it, not the Internet.

Appelons un chat un chat.

Caitlin Dewey runs The Intersect blog, writing about digital and Internet culture. Before joining the Post, she was an associate online editor at Kiplinger’s Personal Finance.
http://www.washingtonpost.com/news/the-intersect/wp/2014/11/21/you-should-actually-blame-america-for-everything-you-hate-about-internet-culture/

Amnesty International Releases Tool To Combat Government Spyware

https://www.eff.org/files/2014/11/20/detekt-1d.png

Human rights charity Amnesty International has released Detekt, a tool that finds and removes known government spyware programs. Describing the free software as the first of its kind, Amnesty commissioned the tool from prominent German computer security researcher and open source advocate Claudio Guarnieri, aka ‘nex’. While acknowledging that the only sure way to prevent dragnet government surveillance is legislation, Marek Marczynski of Amnesty nevertheless called the tool (downloadable here) a useful countermeasure against spooks. According to the app’s instructions, it operates similarly to popular malware or virus removal suites, though systems must be disconnected from the Internet before scanning.
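For readers unfamiliar with how such scanners work, here is a minimal, hypothetical sketch of a signature-based scan of the general sort the article alludes to. It is not Detekt's actual detection logic (Detekt inspects the memory of running processes for known spyware patterns); every name and hash below is an invented placeholder.

```python
# Hypothetical sketch of a signature-based scan; NOT Detekt's real logic.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder; a real tool ships a curated, regularly updated list
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path) -> list[Path]:
    """Return files under `root` whose SHA-256 matches a known-bad signature."""
    hits = []
    for p in root.rglob("*"):
        if p.is_file():
            try:
                if sha256_of(p) in KNOWN_BAD_SHA256:
                    hits.append(p)
            except OSError:
                pass  # skip unreadable files, as most scanners do
    return hits

if __name__ == "__main__":
    for hit in scan(Path.home()):
        print("possible spyware artifact:", hit)
```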

DIGITAL MUSIC NEWS

Federal Judge Rejects Sirius XM’s Call For Summary Judgment In Pre-1972 Case

 

     The Turtles keep on rolling to copyright victory, as a federal judge in New York has ruled against Sirius XM in the ongoing battle to collect royalties on recordings made before 1972. Last Friday (Nov. 14) Judge Colleen McMahon of United States District Court in Manhattan rejected Sirius XM’s motion for summary judgment, saying the Turtles have performing rights to their recordings under New York State law. She gave Sirius XM until Dec. 5 to dispute the remaining facts in the case; otherwise Sirius XM will be ruled liable for infringement.

“General principles of common copyright law dictate that public performance rights in pre-1972 sound recordings do exist,” Judge McMahon wrote in her decision. The ruling comes after a separate win for the Turtles in September, when a federal judge in California found Sirius XM liable for infringement under state laws there. According to The New York Times, that decision was viewed as a major victory for artists and record companies, although its wider impact was unclear because it applied only to that state.

Judge McMahon’s decision strengthened the music industry’s position that pre-1972 recordings are covered under state laws. Still, recording and broadcast industry executives say the potential for widespread confusion over music licensing – for example, it may mean that thousands of AM-FM radio stations, as well as restaurants or sports arenas where music is performed, may have been infringing on recording rights for decades – likely will require clarification from Congress. 

YouTube Launches Music Key In Already-Crowded Streaming Field

 

     After years of false starts and seemingly endless label negotiations, YouTube’s Music Key launched earlier this week to the ultimate question: will users actually pay $9.99 for something  they previously received free of charge? That’s the monthly rate Google set for its ad-free service that also offers offline functionality, with a company spokesperson telling Billboard, “The goal is more ways to play music on YouTube, giving artists more ways to reach fans and make money.”

So why create a subscription service, especially given the competitive landscape? As Billboard notes, Apple is certain to grow its share of the streaming market, Amazon is going after middle-of-the-road listeners with Music Prime, and Spotify has a head start of 12.5 million U.S. subscribers (28 million worldwide in 2013, according to IFPI).

Still, many industry executives hope Music Key will help YouTube clean up the metadata that often gets lost in uploads of master recordings and drives users to the original composer and purchase links. This has been a core asset of YouTube’s Content ID system, which has disbursed more than $1 billion in revenue to labels and content creators since 2007. 

YouTube Refuses To Remove Songs By Artists Represented By Azoff’s GMR

 

     YouTube apparently has refused to remove songs composed by artists represented by Irving Azoff’s Global Music Rights (GMR), forcing a showdown between the 42 artists the music icon represents and the Google-owned video site. The dispute stems from YouTube’s claim that it has licensing deals in place with the record labels, while Azoff contends that in order to publicly perform those 42 artists’ songs, the company also has to pay the songwriters, which Azoff says are “massively underpaid” when it comes to digital services.

According to The Hollywood Reporter, the primary question here is whether YouTube has a right to perform these songs until proven otherwise. GMR thinks the burden of proving a valid license is on YouTube, which reportedly says it has a multiyear license for the public performance of works represented by GMR. The licensors aren’t identified, but it’s possible that YouTube believes it’s covered by prior deals made with ASCAP, BMI, SESAC, or a foreign performing rights organization.

Howard King, an attorney representing GMR, says YouTube has failed to comply with demands to stop performing those 20,000 songs. “Obviously, if YouTube contends it has properly licensed any of the songs for public broadcast, a contention we believe to be untrue, demand is hereby made that we be furnished with documentation of such licenses,” he says.

By contrast, a spokesperson for YouTube told THR, “We’ve done deals with labels, publishers, collection societies, and more to bring artists’ music into YouTube Music Key. To achieve our goal of enabling this service’s features on all the music on YouTube, we’ll keep working with both the music community and with the music fans invited to our beta phase.” 

Music Key Could Thwart Apple’s Move To Reduce Monthly Subscription Fee

 

     It’s no secret that Apple has been engaged in heated discussions with the major record labels to lower the price of on-demand music to $5 per month from the standard $9.99 currently charged by such subscription services as Spotify, Rhapsody, Google, Rdio, and its own Beats Music. According to Forbes, Apple is telling record labels that $5/month for all-you-can-hear on-demand music is the right price because the best iTunes customers spend about $60 per year on music downloads. The obvious thinking here is that this $60 annual revenue per user (ARPU) could be the best record companies can hope to get from those consumers who still actually pay for music.

This may be a convenient talking point for Apple’s negotiators, but – as Forbes points out – two important factors strongly counter that argument. First, for all the talk about monthly subscription fees (and Taylor Swift, below), the vast majority of users of on-demand music services don’t pay for them at all. Second, in 2007 Google introduced a technology called Content ID that enables copyright owners to make money, if they choose, when users upload content to YouTube. The system detects users’ uploads of copyrighted works and gives copyright owners several options, including to block the uploads or monetize them through ad revenue sharing. By 2011, the major labels had opted to allow many user uploads of their content for monetization, and they also supply their own “official” music videos.
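A minimal, hypothetical sketch of the “match, then apply the owner’s policy” flow described above. The names, data structures, and fingerprinting shortcut are invented for illustration; this is not YouTube’s actual implementation or API.

```python
# Toy Content ID-style flow: fingerprint an upload, look for a claim, apply policy.
import hashlib
from dataclasses import dataclass

@dataclass
class Claim:
    work_id: str
    policy: str  # "block", "monetize", or "track"

# Invented reference database: fingerprint -> claimed work and the owner's policy.
REFERENCE_DB = {
    "fp_3da541": Claim(work_id="song-001", policy="monetize"),
    "fp_77de68": Claim(work_id="song-002", policy="block"),
}

def fingerprint(upload: bytes) -> str:
    # Real systems use robust audio/video fingerprints; a hash stands in here.
    return "fp_" + hashlib.sha1(upload).hexdigest()[:6]

def handle_upload(upload: bytes) -> str:
    claim = REFERENCE_DB.get(fingerprint(upload))
    if claim is None:
        return "publish"                      # no match: the upload goes live as usual
    if claim.policy == "block":
        return "blocked"                      # owner chose to block matching uploads
    if claim.policy == "monetize":
        return "published with owner's ads"   # ad revenue is shared with the owner
    return "published and tracked"            # owner only monitors viewing statistics

print(handle_upload(b"some uploaded media bytes"))
```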

As a result, YouTube is a de facto on-demand music service and, as noted by Forbes, possibly is the biggest one in the game. Market research from Edison Research and Triton Digital suggests that, strictly as a music service, YouTube has about four times the U.S. user base of Spotify, Rhapsody, and Google Play Music All Access combined. No one pays for YouTube, although some may pay for its Music Key service, which will hit that same $10 monthly price point when it comes out of beta. 

Big Machine’s Scott Borchetta: Spotify Paid Less Than $500,000 To TS Last Year

 

     The verbal fisticuffs between Spotify and Taylor Swift have not let up, with the streaming music service’s Daniel Ek insisting the pop music icon was on track to earn over $6 million in royalties this year. This claim came after a Spotify spokesperson said Swift had been paid a total of $2 million over the last 12 months for the global streaming of her songs. But Scott Borchetta, CEO of Swift’s label Big Machine Records, says that’s nowhere near the truth, maintaining Swift earned less than $500,000 from Spotify streams over the last 12 months.

“The facts show that the music industry was much better off before Spotify hit these shores,” Borchetta told The New York Times. Noting that the amount Spotify paid out over the last year was “the equivalent of less than 50,000 albums sold,” he said Swift actually earns more from her videos on Vevo than she did from having her music on Spotify.

While half a million dollars will cause few people to weep, it should be noted that Swift’s most recent album, 1989, became the first this year to sell more than a million copies in a week – a feat only equaled by 18 albums in history. Unlike most performers, she can make millions of dollars from traditional album sales, but by keeping her music away from Spotify even as it begs for her to come back, she and Borchetta say they’re trying to make the larger point that the service doesn’t pay its artists a reasonable fee. “[Taylor Swift] is the most successful artist in music today,” Borchetta says. “What about the rest of the artists out there struggling to make a career?” 

Sony Music Wary Of Ad-Supported Streaming After Taylor Swift Move

 

     Taylor Swift’s claim that subscription streaming services hurt music sales has caused Sony Music to reconsider its own digital music plans. PC World reports that, during a recent company briefing, Sony Music CFO Kevin Kelleher questioned whether or not the free, ad-supported services are taking away from how quickly, and to what extent, the company can grow those paid services. “That’s something we’re paying attention to… It’s an area that’s gotten everyone’s attention,” he observed.

This is important because, as CD sales and digital music downloads continue to shrink, streaming services offer a potential ray of sunshine for the recorded music industry. Such companies as Pandora and Spotify routinely lose money due to a combination of high royalty fees and low revenue, an imbalance that appears to be due to poor ROI on ad-supported tiers and not enough premium subscribers to sustain a business model.

While Sony says the move by Taylor Swift (not a Sony artist) to pull her music from Spotify made the company sit up and take notice, it isn’t enough to make anyone want to change the dynamics of the digital music business. In fact, Sony says it’s “very encouraged with the pay streaming model.” Approximately 27 million people worldwide pay for streaming subscriptions, Sony says, and the company is focused on helping that number grow.

 

A publication of Bunzel Media Resources © 2014

The Interregnum: Why the Future is so chaotic

“The old is dying, and the new cannot be born; in this interregnum there arises a diversity of morbid symptoms” – Antonio Gramsci

The morbid symptoms began to appear in the spring of 2003. The Department of Homeland Security was officially formed and, despite the street protests of millions around the world, the United States invaded Iraq on the pretext of capturing Saddam’s “weapons of mass destruction”. By summer it was obvious that there were no such weapons and that we had been tricked into a war from which there was no easy exit. Pollsters began to notice that a majority of Americans felt we were “on the wrong track”, and the distrust of our leadership has gotten worse every year.

So while the citizens exhibit historic levels of anger with the country’s drift, neither the political nor the economic leaders have put forth an alternative vision of our future. We are in an Interregnum: the often painful uprooting of old traditions and the hard-fought emergence of the new. The traditional notion of an interregnum refers to the time when a king had died and a new king had not yet been crowned. But for our purposes, the notion of interregnum refers to those hinges in time when the old order is dead, but the new direction has not been determined. Quite often, the general populace does not understand that the transition is taking place, and so a great deal of tumult arises as the birth pangs of a new social and political order. We are in such a time in America.

For those of us who work in the field of media and communications, the signs of the Interregnum are everywhere. Internet services decimate the traditional businesses of music and journalism. For individual journalists or musicians, the old order is clearly dying, but a new way to make a living cannot seem to be birthed. Those who work in the fields of film and television can only hope a similar fate does not await their careers. In the world of politics a similar dynamic is destroying traditional political parties, and the insurgent bottom-up, networked campaigns pioneered by Barack Obama have become the standard. And yet we realize that for all its insurgency, the Obama campaign really did not usher in a new era. It is clear that there is an American Establishment that seems to stay in power no matter which party controls the White House. And the recent election only makes this more obvious. This top-down establishment order is clearly dying, but it clings to its privileges, and the networked, bottom-up society is not yet empowered.

Since 1953, when two senior partners of a Wall Street law firm, the brothers John Foster and Allen Dulles, began running American foreign (and often domestic) policy, an establishment view, through Democratic and Republican presidencies alike, has been the norm. As Stephen Kinzer (in his book The Brothers) has written about the Dulles brothers, “Their life’s work was turning American money and power into global money and power. They deeply believed, or made themselves believe, that what benefited them and their clients would benefit everyone.” They created a world in which the Wall Street elites at first set our foreign policy and eventually (under Ronald Reagan) came to dominate domestic and tax policy — all to the benefit of themselves and their clients.

In 1969 the median salary for a male worker was $35,567 (in 2012 dollars). Today it is $33,904. So for 44 years, while wages for the top 10% have continued to climb, most Americans have been caught in a “Great Stagnation”, bringing into question the whole purpose of the American capitalist economy. The notion that what benefited the establishment would benefit everyone has been thoroughly discredited.

Seen through this lens, the savage partisanship of the current moment makes an odd kind of sense. What were the establishment priorities that moved inexorably forward in both Republican and Democratic administrations? The first was a robust and aggressive foreign policy. As Kinzer writes of the Dulles brothers, “Exceptionalism — the view that the United States has a right to impose its will because it knows more, sees farther, and lives on a higher moral plane than other nations — was to them not a platitude, but the organizing principle of daily life and global politics.” From Eisenhower to Obama, this principle has been the guiding light of our foreign policy, bringing with it annual defense expenditures that dwarf those of all the world’s major powers combined and drive us deeper into debt. The second principle of the establishment was, “what is good for Wall Street is good for America.” Despite Democrats’ efforts to paint the GOP as the party of Wall Street, one would only have to look at the efforts of Clinton’s Treasury secretaries Rubin and Summers to kill the Glass-Steagall Act and deregulate the big banks to see that the establishment rules no matter who is in power. Was it any surprise that Obama then appointed the architects of bank deregulation, Summers and Geithner, to clean up the mess their policies had caused?

So when we observe politicians as diverse as Elizabeth Warren and Rand Paul railing against the twin poles of establishment orthodoxy, can we really be surprised? Is there not a new consensus that the era of America as global policeman is over? Is there not agreement from the Tea Party to Occupy Wall Street that the domination of domestic policy by financial elites is over? But here is our Interregnum dilemma. It is one thing to forecast a kind of liberal-libertarian coalition around the issues of defense spending, corporate welfare and even the privacy rights of citizens in a national security state. It is a much more intractable problem to find consensus on the causes and cures of the Great Stagnation. To understand the nature of the current stagnation, we need to look back to the late sixties, when the economy was very different from what it is today. In 1966, net investment as a percentage of GDP peaked at 14%, and it has been on a steady decline ever since, despite the computer revolution, which was only getting started in the early 1970s.

Economic growth only comes from three sources: consumption, investment, or foreign earnings from trade (the Current Account). We have been living so long with a negative current account balance and falling investment that economic growth is almost totally dependent on the third leg of the stool, consumer spending. But with the average worker unable to get a raise since 1969, consumption can only come from loosened credit standards. As long as the average family could use their home equity as an ATM, the party could continue, driven by the increasing sophistication of advertising and “branded entertainment” to induce mall fever in a strapped consumer. And by the late 1990s consumer preferences began to drive a winner-take-all digital economy in which one to three firms dominated each sector: Apple and Google; Verizon and AT&T; Comcast and Time Warner Cable; Disney, Fox, Viacom and NBC Universal; Facebook and Twitter. All of this was unleashed by the establishment meme of deregulation — a world in which anti-trust regulators had little influence and laissez-faire ruled. These oligopolies began making so much money they didn’t have enough places to invest it, so corporate cash as a percentage of assets rose to an all-time high.
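For reference, the standard national-accounts expenditure identity that this three-part framing draws on is sketched below; mapping the essay’s “three sources” onto these terms is my gloss, and government purchases (G) are left implicit in the essay’s framing.

```latex
Y = C + I + G + (X - M)
% Y: output (GDP), C: consumption, I: investment,
% G: government purchases, X - M: net exports (the trade component of the current account)
```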

Here is my fear: that our current version of capitalism is not working. Apple holds on to $158 billion in cash because it can’t find a profitable investment. And because U.S. worker participation rates are only 64%, a huge number of people can never afford an iPhone, so domestic demand is flat (though very profitable) and the real growth in the digital economy will be in Asia, Africa and South America. There is not much the Fed lowering interest rates can do to alter this picture. What is needed is not more easy-money loans; it is more decent jobs.

But unlike our left-right consensus on military spending, there is a fierce debate raging between economists about the causes and solutions to this stagnation. Though both left and right agree the economy has stagnated, there are huge differences in the prospects for emerging from this condition. On the right, the political economist Tyler Cowen’s new book is called Average is Over: Powering America Beyond the Age of the Great Stagnation. Here is how Cowen sees the next twenty years.

The rise of intelligent machines will spawn new ideologies along with the new economy it is creating. Think of it as a kind of digital social Darwinism, with clear winners and losers: Those with the talent and skills to work seamlessly with technology and compete in the global marketplace are increasingly rewarded, while those whose jobs can just as easily be done by foreigners, robots or a few thousand lines of code suffer accordingly. This split is already evident in the data: The median male salary in the United States was higher in 1969 than it is today. Middle-class manufacturing jobs have been going away due to a mix of automation and trade, and they are not being replaced. The most lucrative college majors are in the technical fields, such as engineering. The winners are doing much better than ever before, but many others are standing still or even seeing wage declines.

On the left, Paul Krugman is not so sure we can emerge from this stagnation.

But what if the world we’ve been living in for the past five years is the new normal? What if depression-like conditions are on track to persist, not for another year or two, but for decades?…In fact, the case for “secular stagnation” — a persistent state in which a depressed economy is the norm, with episodes of full employment few and far between — was made forcefully recently at the most ultrarespectable of venues, the I.M.F.’s big annual research conference. And the person making that case was none other than Larry Summers. Yes, that Larry Summers.

Cowen forecasts a dystopian world where 10% of the population do very well and “the rest of the country will have stagnant or maybe even falling wages in dollar terms, but they will also have a lot more opportunities for cheap fun and cheap education.” That’s real comforting. He predicts the 90% will put up with this inequality for two reasons. First, the country is aging: “remember that riots and protests are typically the endeavors of young hotheads, not sage (or tired) senior citizens.” And second, because of the proliferation of social networks, “envy is local…Right now, the biggest medium for envy in the United States is probably Facebook, not the big yachts or other trophies of the rich and famous.”

Although Cowen cites statistics about the fall in street crime to back up the notion that the majority of citizens are passively accepting gross inequality, I think he completely misunderstands the nature of anti-social pathologies in the Internet Age of Stagnation. Take the example of the website Silk Road.

Silk Road already stands as a tabloid monument to old-fashioned vice and new-fashioned technology. Until the website was shut down last month, it was the place to score, say, a brick of cocaine with a few anonymous strokes on a computer keyboard. According to the authorities, it greased $1.2 billion in drug deals and other crimes, including murder for hire.

From LulzSec to Pirate Bay to Silk Road, the coming anarchy of a Blade Runner-like society is far more vicious than a few street thugs in our major cities. The rise of virtual currencies like Bitcoin that are difficult to trace only makes the possibility of a huge crime wave on the Dark Net more imminent—one which IBM estimates already costs the economy $400 billion annually.

So while both Cowen and Krugman agree that stagnation is causing the labor force participation rate to fall, they disagree as to whether anything can be done to remedy the problem.

In the early 1970s the participation rate began to climb as more and more women entered the workforce. It peaked when George W. Bush entered office and has been on the decline ever since. As the Times’s David Leonhardt has pointed out, this has very little to do with Baby Boomer retirement. The economist Daniel Alpert has argued in his new book, The Age of Oversupply, that “the central challenge facing the global economy is an oversupply of labor, productive capacity and capital relative to the demand for all three.”

Viewed through this lens, neither the policy prescriptions of Republicans nor those of Democrats are capable of changing the dynamic brought about by the entrance of three billion new workers into the global economy in the last 20 years. Republican fears that U.S. deficits will lead to Weimar-like hyperinflation ring hollow in a country where only 63% of the able-bodied are working. Democratic calls for the Fed and the banks to lend more to business to stimulate the economy are equally nonsensical when American corporations are sitting on $2.4 trillion in cash.

But there is a way out of this deflationary trap we are in. First, the Republicans have got to acknowledge the obvious: America’s corporations are not going to invest in vast amounts of new capacity when there is a glut in almost every sector worldwide. Second, that overcapacity is not going to get absorbed until more people go back to work and start buying the goods from the factories. This was the same problem our country faced in the Great Depression, and the way we got out of it was by putting people to work rebuilding the infrastructure of this country. Did it ever occur to the politicians in Washington that the reason so many bridges, water and electrical systems are failing is that most of them were built 80 years ago, during the Great Depression? For Republicans to insist that more austerity will bring back the “confidence fairy” is exactly the wrong policy prescription for an age of oversupply. But equally destructive, as Paul Krugman points out, are Democratic voices like Erskine Bowles, shouting from any venue that will pay him that the debt apocalypse is upon us.

But the Democrats are also going to have to give up some long-held beliefs that all good solutions come from Washington. If the Healthcare.gov website debacle has taught us anything, it is that devolving power from Washington to the states is the answer to the complexity of modern governance. While California’s healthcare website performed admirably, the notion of trying to create a centralized system to service 50 different state systems was a fool’s errand. So what is needed is a federalist solution for investment in the infrastructure of the next economy. This is the way out of The Interregnum. Investors buying tax-free municipal bonds to rebuild ancient water systems and bridges, as well as solar and wind plants, will finance much of it. But just as President Eisenhower understood that a national interstate highway system built in the 1950s would lead to huge productivity gains in the 1960s and 1970s, Federal tax dollars will have to play a large part in rebuilding America. As we wind down our trillion-dollar commitments to wars in the Middle East, we must engage in an Economic Conversion Strategy from permanent war to peaceful innovation that both liberals and libertarians could embrace.

The way to overcome the partisan gridlock on infrastructure spending would be for Obama to commit to a totally federalist solution to our problems. The Federal Government would use every dollar saved from getting out of Iraq, Afghanistan and all the other defense commitments for block innovation grants to the states. Let’s say the first grant is for $100 billion. It will be given directly to the states on a per capita basis to be used to foster local economic growth. No strings or federal bureaucracy attached to the grants, except that the states have to publish a yearly accounting of the money in an easily readable form. And then let the press follow the money and see which states come up with the most imaginative solutions. Some states might use the grants to lower the cost of state university tuition. Others might spend the money on high-speed rail lines or municipal fiber broadband and wifi. As we have found in the corporate sector, pushing power to the edges of an organization helps foster innovation. As former IBM CEO Sam Palmisano told his colleagues, “we have to lower the center of gravity of this organization”.

If it worked, then slowly more money could be transferred to the states in these bureaucracy free block grants. Gradually the bureaucracies of the Federal government would shrink as more and more responsibility was shifted to local supervision of education, health, welfare and infrastructure.

In the midst of our current Washington quagmire, this vision of a growing American middle class may seem like a distant mirage. But it is clear that the establishment consensus on foreign policy, defense spending, domestic spying and corporate welfare has died in the last 12 months. The old top-down establishment order is clearly dying, but just how we build a new order based on a bottom-up, networked society that works for the 90%, not just the establishment, is the question of our age.

Controlling the Surveillance State

 http://www.liebenfels.com/wp-content/uploads/2012/09/The-surveillance-state.jpg

A new report from the ACLU shows that local law enforcement agencies have been spending big bucks on surveillance technology — and offers recommendations on how to rein in the spending.

California cities and counties have spent more than $65 million on surveillance technologies in the past decade while conducting little public debate about the expenditures, according to a new report published this week by three American Civil Liberties Union chapters in the state. Public records reviewed by the ACLU also indicate that though cities and counties in California bought surveillance technologies in 180 instances, they held public discussions about the proposals just 26 times.

The technologies examined in the report included automated license plate readers, closed-circuit video cameras, facial recognition software, drones, data mining tools, and cellphone interception devices known as IMSI catchers or stingrays. The report analyzed purchases by 59 cities and by 58 county governments in California. In many instances, city and county officials used federal grant money to make the purchases, and then asked local legislative bodies to rubber-stamp their decisions. “We long suspected California law enforcement was taking advantage of federal grant money to skirt official oversight and keep communities in the dark about surveillance systems,” said Nicole Ozer, the technology and civil liberties director for the ACLU of California.

The report also found that only one-third of the cities and counties surveyed had privacy policies to prevent law enforcement abuse.

The ACLU report also includes a model ordinance that would require a public process and official legislative approval by local governments before law enforcement could purchase or use surveillance technologies that could impact the privacy of community members. Santa Clara County Supervisor Joe Simitian and San Francisco Supervisor John Avalos planned to announce on Wednesday their intention to introduce versions of the ACLU’s ordinance to their respective legislative bodies. In an interview, Avalos said he believes the proposed ordinance is a long overdue response to an alarming trend. “The level of surveillance in our society has increased dramatically over the past fifteen years and has gotten notably worse under the Obama administration,” Avalos said. “There’s technology out there that is available for cops to pick up … and it’s not clear to me how the technology will be used or useful.”

Avalos also stated that the purchase and use of such equipment is an alarming example of mission creep. “SFPD [San Francisco Police Department] and other police departments are developing an intelligence-gathering capacity beyond what their mission should be,” he said, adding that he is concerned that city policing policy is being driven by technology and equipment purchases that are not currently under the control or oversight of elected officials.

This is not the first effort to regulate the use of surveillance technology by Northern California law enforcement. An ad-hoc advisory committee formed in Oakland to oversee the drafting of the city’s privacy policy for the Domain Awareness Center has recommended similar legislation to city councilmembers (see “Oakland’s Surveillance Fight Continues,” 7/22/14). Oakland’s proposed ordinance would carry a $5,000 penalty or result in a misdemeanor for anyone found to have violated the city’s guidelines. In October, the City of San Carlos rejected a proposal to buy license plate readers on the grounds that the threat to civil liberties and privacy posed by the tracking technology outweighed any potential public safety benefits.

The ACLU’s model ordinance would establish a process for public debate and a consideration of the types of technologies being considered for purchase. The ordinance also would cover the use of surveillance technologies shared between law enforcement agencies, including those employed by fusion centers, such as the Northern California Regional Intelligence Center, which coordinates the sharing of Stingrays owned by police in Oakland and San Francisco and maintains a centralized database of license-plate-reader data from dozens of Northern California agencies. Equipment obtained through private charities such as police foundations would also be covered under the model ordinance. Last month, ProPublica revealed the role of police foundations in New York City and Los Angeles in purchasing surveillance technology that was outside the oversight of local elected officials.

The ACLU report noted that many surveillance tools are being purchased and deployed without consideration of long-term costs associated with maintaining and using such equipment. “The fiscal impact of surveillance can far exceed initial purchase prices for equipment,” the report stated. “Modifying current infrastructure, operating and maintaining systems, and training staff can consume limited time and money even if federal or state grants fund initial costs. Surveillance technologies may also fail or be misused, resulting in costly lawsuits. Looking beyond the sticker price is essential.”

Many communities have purchased costly systems that are intrusive and don’t address the issues that residents believe are important. “The federal funding streaming down from Washington has sidestepped thoughtful considerations of what makes sense for communities,” Ozer said, noting that Oakland received millions of dollars for its Domain Awareness Center, yet received much smaller grants for its successful Operation Ceasefire program.

Avalos said he is in discussions with SFPD Chief Greg Suhr over the proposed ordinance, and is looking forward to hearing the input of his colleagues. “We want a public process around this issue before we enter into the legislative work,” Avalos said.

He is particularly disturbed by the SFPD’s use of devices that capture information from cellphones, like stingrays, and is looking forward to a full accounting of the police department’s technology and policies. Surveillance, Avalos said, “is a broad way of controlling behavior, [and] that is not an American or San Francisco value.”

William Gibson: I never imagined Facebook

The brilliant science-fiction novelist who imagined the Web tells Salon how writers missed social media’s rise

William Gibson (Credit: Putnam/Michael O’Shea)

Even if you’ve never heard of William Gibson, you’re probably familiar with his work. Arguably the most important sci-fi writer of his generation, Gibson’s cyber-noir imagination has shaped everything from the Matrix aesthetic to geek culture to the way we conceptualize virtual reality. In a 1982 short story, Gibson coined the term “cyberspace.” Two years later, his first and most famous novel, “Neuromancer,” helped launch the cyberpunk genre. By the 1990s, Gibson was writing about big data, imagining Silk Road-esque Internet enclaves, and putting his characters on reality TV shows — a full four years before the first episode of “Big Brother.”

Prescience is flashy, but Gibson is less an oracle than a kind of speculative sociologist. A very contemporary flavor of dislocation seems to be his specialty. Gibson’s heroes shuttle between wildly discordant worlds: virtual paradises and physical squalor; digital landscapes and crumbling cities; extravagant wealth and poverty.

In his latest novel, “The Peripheral,” which came out on Tuesday, Gibson takes this dislocation to new extremes. Set in mid-21st century Appalachia and far-in-the-future London, “The Peripheral” is partly a murder mystery, and partly a time-travel mind-bender. Gibson’s characters aren’t just dislocated in space, now. They’ve become unhinged from history.

Born in South Carolina, Gibson has lived in Vancouver since the 1960s. Over the phone, we spoke about surveillance, celebrity and the concept of the eternal now.

You’re famous for writing about hackers, outlaws and marginal communities. But one of the heroes of “The Peripheral” is a near-omniscient intelligence agent. She has surveillance powers that the NSA could only dream of. Should I be surprised to see you portray that kind of character so positively?

Well, I don’t know. She’s complicated, because she is this kind of terrifying secret police person in the service of a ruthless global kleptocracy. At the same time, she seems to be slightly insane and rather nice. It’s not that I don’t have my serious purposes with her, but at the same time she’s something of a comic turn.

Her official role is supposed to be completely terrifying, but at the same time her role is not a surprise. It’s not like, “Wow, I never even knew that that existed.”



Most of the characters in “The Peripheral” assume that they’re being monitored at all times. That assumption is usually correct. As a reader, I was disconcerted by how natural this state of constant surveillance felt to me.

I don’t know if it would have been possible 30 years ago to convey that sense to the reader effectively, without the reader already having some sort of cultural module in place that can respond to that. If we had somehow been able to read this text 30 years ago, I don’t know how we would even register that. It would be a big thing for a reader to get their head around without a lot of explaining. It’s a scary thing, the extent to which I don’t have to explain why [the characters] take that surveillance for granted. Everybody just gets it.

You’re considered a founder of the cyberpunk genre, which tends to feature digital cowboys — independent operators working on the frontiers of technology. Is the counterculture ethos of cyberpunk still relevant in an era when the best hackers seem to be working for the Chinese and U.S. governments, and our most famous digital outlaw, Edward Snowden, is under the protection of Vladimir Putin?

It’s seemed to me for quite a while now that the most viable use for the term “cyberpunk” is in describing artifacts of popular culture. You can say, “Did you see this movie? No? Well, it’s really cyberpunk.” Or, “Did you see the cyberpunk pants she was wearing last night?”

People know what you’re talking about, but it doesn’t work so well describing human roles in the world today. We’re more complicated. I think one of the things I did in my early fiction, more or less for effect, was to depict worlds where there didn’t really seem to be much government. In “Neuromancer,” for example, there’s no government really on the case of these rogue AI experiments that are being done by billionaires in orbit. If I had been depicting a world in which there were governments and law enforcement, I would have depicted hackers on both sides of the fence.

In “Neuromancer,” I don’t think there’s any evidence of anybody who has any parents. It’s kind of a very adolescent book that way.

In “The Peripheral,” governments are involved on both sides of the book’s central conflict. Is that a sign that you’ve matured as a writer? Or are you reflecting changes in how governments operate?

I hope it’s both. This book probably has, for whatever reason, more of my own, I guess I could now call it adult, understanding of how things work. Which, I suspect, is as it should be. People in this book live under governments, for better or worse, and have parents, for better or worse.

In 1993, you wrote an influential article about Singapore for Wired magazine, in which you wondered whether the arrival of new information technology would make the country more free, or whether Singapore would prove that “it is possible to flourish through the active repression of free expression.” With two decades of perspective, do you feel like this question has been answered?

Well, I don’t know, actually. The question was, when I asked it, naive. I may have posed innocently a false dichotomy, because some days when you’re looking out at the Internet both things are possible simultaneously, in the same place.

So what do you think is a better way to phrase that question today? Or what would have been a better way to phrase it in 1993?

I think you would end with something like “or is this just the new normal?”

Is there anything about “the new normal” in particular that surprises you? What about the Internet today would you have been least likely to foresee?

It’s incredible, the ubiquity. I definitely didn’t foresee the extent to which we would all be connected almost all of the time without needing to be plugged in.

That makes me think of “Neuromancer,” in which the characters are always having to track down a physical jack, which they then use to plug themselves into this hyper-futuristic Internet.

Yes. It’s funny, when the book was first published, when it was just out — and it was not a big deal the first little while it was out, it was just another paperback original — I went to a science fiction convention. There were guys there who were, by the standards of 1984, far more computer-literate than I was. And they very cheerfully told me that I got it completely wrong, and I knew nothing. They kept saying over and over, “There’s never going to be enough bandwidth, you don’t understand. This could never happen.”

So, you know, here I am, this many years later with this little tiny flat thing in my hand that’s got more bandwidth than those guys thought was possible for a personal device to ever have, and the book is still resonant for at least some new readers, even though it’s increasingly hung with the inevitable obsolescence of having been first published in 1984. Not really in the particulars, but in the broader outline.

You wrote “Neuromancer” on a 1927 Hermes typewriter. In an essay of yours from the mid-1990s, you specifically mention choosing not to use email. Does being a bit removed from digital culture help you critique it better? Or do you feel that you’re immersed in that culture, now?

I no longer have the luxury of being as removed from it as I was then. I was waiting for it to come to me. When I wrote [about staying off email], there was a learning curve involved in using email, a few years prior to the Web.

As soon as the Web arrived, I was there, because there was no learning curve. The interface had been civilized, and I’ve basically been there ever since. But I think I actually have a funny kind of advantage, in that I’m not generationally of [the Web]. Just being able to remember the world before it, some of the perspectives are quite interesting.

Drones and 3-D printing play major roles in “The Peripheral,” but social networks, for the most part, are obsolete in the book’s fictional future. How do you choose which technological trends to amplify in your writing, and which to ignore?

It’s mostly a matter of which ones I find most interesting at the time of writing. And the absence of social media in both those futures probably has more to do with my own lack of interest in that. It would mean a relatively enormous amount of work to incorporate social media into both those worlds, because it would all have to be invented and extrapolated.

Your three most recent novels, before “The Peripheral,” take place in some version of the present. You’re now returning to the future, which is where you started out as a writer in the 1980s. Futuristic sci-fi often feels more like cultural criticism of the present than an exercise in prediction. What is it about the future that helps us reflect on the contemporary world?

When I began to write science fiction, I already assumed that science fiction about the future is only ostensibly written about the future, that it’s really made of the present. Science fiction has wound up with a really good cultural toolkit — an unexpectedly good cultural toolkit — for taking apart the present and theorizing on how it works, in the guise of presenting an imagined future.

The three previous books were basically written to find out whether or not I could use the toolkit that I’d acquired writing fictions about imaginary futures on the present, but use it for more overtly naturalistic purposes. I have no idea at this point whether my next book will be set in an imaginary future or the contemporary present or the past.

Do you feel as if sci-fi has actually helped dictate the future? I was speaking with a friend earlier about this, and he phrased the question well: Did a book like “Neuromancer” predict the future, or did it establish a dress code for it? In other words, did it describe a future that people then tried to live out?

I think that the two halves of that are in some kind of symbiotic relationship with one another. Science fiction ostensibly tries to predict the future. And the people who wind up making the future sometimes did what they did because they read a piece of science fiction. “Dress code” is an interesting way to put it. It’s more like … it’s more like attitude, really. What will our attitude be toward the future when the future is the present? And that’s actually much more difficult to correctly predict than what sort of personal devices people will be carrying.

How do you think that attitude has changed since you started writing? Could you describe the attitude of our current moment?

The day the Apple Watch was launched, late in the day someone on Twitter announced that it was already over. They cited some subject, they linked to something, indicating that our moment of giddy future shock was now over. There’s just some sort of endless now, now.

Could you go into that a little bit more, what you mean by an “endless now”?

Fifty years ago, I think now was longer. I think that the cultural and individual concept of the present moment was a year, or two, or six months. It wasn’t measured in clicks. Concepts of the world and of the self couldn’t change as instantly or in some cases as constantly. And I think that has resulted in there being a now that’s so short that in a sense it’s as though it’s eternal. We’re just always in the moment.

And it takes something really horrible, like some terrible, gripping disaster, to lift us out of that, or some kind of extra-strong sense of outrage, which we know that we share with millions of other people. Unfortunately, those are the things that really perk us up. This is where we get perked up, perked up for longer than over a new iPhone, say.

The worlds that you imagine are enchanting, but they also tend to be pretty grim. Is it possible to write good sci-fi that doesn’t have some sort of dystopian edge?

I don’t know. It wouldn’t occur to me to try. The world today, considered in its totality, has a considerable dystopian edge. Perhaps that’s always been true.

I often work in a form of literature that is inherently fantastic. But at the same time that I’m doing that, I’ve always shared concerns with more naturalistic forms of writing. I generally try to make my characters emotionally realistic. I do now, at least; I can’t say I always have done that. And I want the imaginary world they live in and the imaginary problems that they have to reflect the real world, and to some extent real problems that real people are having.

It’s difficult for me to imagine a character in a work of contemporary fiction who wouldn’t have any concerns with the more dystopian elements of contemporary reality. I can imagine one, but she’d be a weird … she’d be a strange character. Maybe some kind of monster. Totally narcissistic.

What makes this character monstrous? The narcissism?

Well, yeah, someone sufficiently self-involved. It doesn’t require anything like the more clinical forms of narcissism. But someone who’s sufficiently self-involved as to just not be bothered with the big bad things that are happening in the world, or the bad things — regular-size bad things — that are happening to one’s neighbors. There certainly are people like that out there. The Internet is full of them. I see them every day.

You were raised in the South, and you live in Vancouver, but, like Philip K. Dick, you’ve set some of your most famous work in San Francisco. What is the appeal of the city for technological dreamers? And how does the Silicon Valley of today fit into that Bay Area ethos?

I’m very curious to go back to San Francisco while on tour for this book, because it’s been a few years since I’ve been there, and it was quite a few years before that when I wrote about San Francisco in my second series of books.

I think one of the reasons I chose it was that it was a place that I would get to fairly frequently, so it would stay fresh in memory, but it also seemed kind of out of the loop. It was kind of an easy canvas for me, an easier canvas to set a future in than Los Angeles. It seemed to have fewer moving parts. And that’s obviously no longer the case, but I really know contemporary San Francisco now more by word of mouth than I do from first-person experience. I really think it sounds like a genuinely new iteration of San Francisco.

Do you think that Google and Facebook and this Silicon Valley culture are the heirs to the Internet that you so presciently imagined in the 1980s? Or do they feel like they’ve taken the Web in different directions than what you expected?

Generally it went in directions that didn't occur to me. It seems to me now that if I had been a very different kind of novelist, I would have been more likely to foresee something like Facebook. But you know, if you try to imagine that somebody in 1982 wrote this novel that totally and accurately predicted what it would be like to be on Facebook, and then tried to get it published? I don't know if you would be able to get it published. Because how exciting is that, or what kind of crime story could you set there?

Without even knowing it, I was limited by the kind of fiction of the imaginary future that I was trying to write. I could use detective and gangster stories, and there is a real world of the Internet that's like that, you know? Very much like that. Although the crimes are so different. The ace Russian hacker mobs are not necessarily crashing into the global corporations. They're stealing your Home Depot information. If I'd put that as an exploit in "Neuromancer," nobody would have gotten it. Although it would have made me seem very, very prescient.

You’ve written often and eloquently about cults of celebrity and the surrealness of fame. By this point you’re pretty famous yourself. Has writing about fame changed the way you experience it? Does experiencing fame change the way you write about it?

Writers in our society, even today, have a fairly homeopathic level of celebrity compared to actors and really popular musicians, or Kardashians. I think in [my 1993 novel] “Virtual Light,” I sort of predicted Kardashian. Or there’s an implied celebrity industry in that book that’s very much like that. You become famous just for being famous. And you can keep it rolling.

But writers, not so much. Writers get just a little bit of it on a day-to-day basis. Writers are in an interesting place in our society to observe how that works, because we can be sort of famous, but not really famous. Partly I’d written about fame because I’d seen little bits of it, but the bigger reason is the extent to which it seems that celebrity is the essential postmodern product, and the essential post-industrial product. The so-called developed world pioneered it. So it’s sort of inherently in my ballpark. It would be weird if it wasn’t there.

You have this reputation of being something of a Cassandra. I don’t want to put you on the spot and ask for predictions. But I’m curious: For people who are trying to understand technological trends, and social trends, where do you recommend they look? What should they be observing?

I think the best advice I’ve ever heard on that was from Samuel R. Delany, the great American writer. He said, “If you want to know how something works, look at one that’s broken.” I encountered that remark of his before I began writing, and it’s one of my fridge magnets for writing.

Anything I make, and anything I'm describing in terms of its workings — even if I were a non-literary futurist of some kind — I think that statement would be very resonant for me. Looking at the broken ones will tell you more about what the thing actually does than looking at one that's perfectly functioning, because then you're only seeing the surface, and you're only seeing what its makers want you to see. If you want to understand social media, look at troubled social media. Or maybe failed social media, things like that.

Do you think that’s partly why so much science fiction is crime fiction, too?

Yeah, it might be. Crime fiction gives the author the excuse to have a protagonist who gets her nose into everything and goes where she's not supposed to go and asks questions that will generate answers that the author wants the reader to see. It's a handy combination. Detective fiction is in large part related to literary naturalism, and literary naturalism was quite a radical concept that posited that you could use the novel to explore existing elements of society which had previously been forbidden, like the distribution of capital and class, and what sex really was. Those were all naturalistic concerns. They also yielded detective fiction. Detective fiction and science fiction are an ideal cocktail, in my opinion.


http://www.salon.com/2014/11/09/william_gibson_i_never_imagined_facebook/?source=newsletter
