Who talks like FDR but acts like Ayn Rand? Easy: Silicon Valley’s wealthiest and most powerful people

Tech’s toxic political culture: The stealth libertarianism of Silicon Valley bigwigs

Ayn Rand, Marc Andreessen, Franklin D. Roosevelt (Credit: AP/Reuters/Fred Prouser/Salon)

Marc Andreessen is a major architect of our current technologically mediated reality. As the leader of the team that created the Mosaic Web browser in the early ’90s and as co-founder of Netscape, Andreessen, possibly more than any single other person, helped make the Internet accessible to the masses.

In his second act as a Silicon Valley venture capitalist, Andreessen has hardly slackened the pace. The portfolio of companies with investments from his VC firm, Andreessen Horowitz, is a roll-call for tech “disruption.” (Included on the list: Airbnb, Lyft, Box, Oculus VR, Imgur, Pinterest, RapGenius, Skype and, of course, Twitter and Facebook.) Social media, the “sharing” economy, Bitcoin — Andreessen’s dollars are fueling all of it.

So when the man tweets, people listen.

And, good grief, right now the man is tweeting. Since Jan. 1, when Andreessen decided to aggressively reengage with Twitter after staying mostly silent for years, @pmarca has been pumping out so many tweets that one wonders how he finds time to attend to his normal business.

On June 1, Andreessen took his game to a new level. In what seems to be a major bid to establish himself as Silicon Valley’s premier public intellectual, Andreessen has deployed Twitter to deliver a unified theory of tech utopia.

In seven different multi-part tweet streams, adding up to a total of almost 100 tweets, Andreessen argues that we shouldn’t bother our heads about the prospect that robots will steal all our jobs. Technological innovation will end poverty, solve bottlenecks in education and healthcare, and usher in an era of ubiquitous affluence in which all our basic needs are taken care of. We will occupy our time engaged in the creative pursuits of our heart’s desire.



So how do we get there? Easy! All we have to do is just get out of Silicon Valley’s way. (Andreessen is never specific about exactly what he means by this, but it’s easy to guess: Don’t burden tech’s disruptive firms with the safety, health and insurance regulations that the old economy must abide by.)

Oh, and one other little thing: Make sure that we have a social welfare safety net robust enough to take care of the people who fall through the cracks (or are eaten by robots).

The full collection of tweets marks an impressive achievement — a manifesto, you might even call it, although Andreessen has been quick to distinguish his techno-capitalist-created utopia from any kind of Marxist paradise. But there’s a hole in his argument big enough to steer a $500 million round of Series A financing right through. Getting out of the way of Silicon Valley and ensuring a strong safety net add up to a political paradox. Because Silicon Valley doesn’t want to pay for the safety net.

* * *

http://www.salon.com/2014/06/06/techs_toxic_political_culture_the_stealth_libertarianism_of_silicon_valley_bigwigs/

A surprising new warning on robots and jobs

When even the Economist starts hemming and hawing about automation and labor markets, it’s time to get worried

(Credit: josefkubes via Shutterstock)

When the Economist magazine starts warning about the threat of robots, it’s high time to grab your survival gear and light out for the back country. Journalism’s preeminent defender of the market wisdom of Adam Smith’s invisible hand rarely questions the forward march of innovation. But in a special report on our fast-arriving robot future published in the print edition this week, the Economist does just that. Kind of.

As headlines go, the warning is hardly definitive: “Job destruction by robots could outweigh creation.” The story itself is hedged so thickly one can barely see the central thesis: Robots might be threatening our jobs, but we’re not really sure. Globalization is also a problem — as is the arrival of women in the workforce. Pick your poison, men: women or robots!

But just the fact that the Economist is even asking the question of whether robots could conceivably have a negative impact on labor markets is worth taking notice of. It’s a reflection of a shift in opinion on the part of people the Economist takes seriously — credentialed economists.

Nick Bloom, an economics professor at Stanford, has seen a big change of heart about such technological unemployment in his discipline recently. The received wisdom used to be that although new technologies put some workers out of jobs, the extra wealth they generated increased consumption and thus created jobs elsewhere. Now many economists are taking the short- to medium-term risk to jobs far more seriously, and some think the potential scale of change may be huge.

The Economist also mentions the work of MIT’s Erik Brynjolfsson and Andrew McAfee, who argue that “technological dislocation may create great problems for moderately skilled workers in the coming decades.” (Salon’s interview with Brynjolfsson and McAfee can be found here.)

But the magazine doesn’t go much further than pointing out that there are some new concerns. Far more ink is lavished in this special report on the wonders of the new technology coming down the pike than the potential dangers. There’s even a wave of the hand at everyone’s favorite utopian technological dream: Once robots are doing all the work, our main problem will be figuring out what to do with our abundance of leisure time.



It is even conceivable that the fruits of greater productivity will be distributed so as to allow people to work less and spend more time doing other things. After all, the humor in the double meaning of the message that “Our robots put people to work” depends on understanding that people do not necessarily want to work, if they have better things to do.

Wouldn’t that be nice! The real question is: distributed by whom? The benefits of a potential robot utopia are unlikely to be widely distributed without strong political leadership. Unfortunately, so far, there’s very little evidence to support the notion that governments, anywhere, have a clue on how to steer us through a robot future.

Tale of the Teletank: The Brief Rise and Long Fall of Russia’s Military Robots


Seventy-four years ago, Russia accomplished what no country had before, or has since—it sent armed ground robots into battle. These remote-controlled Teletanks took the field during one of WWII’s earliest and most obscure clashes, as Soviet forces pushed into Eastern Finland for roughly three and a half months, from 1939 to 1940. The Finns, by all accounts, were vastly outnumbered and outgunned, with exponentially fewer aircraft and tanks. But the Winter War, as it was later called (it began in late November, and ended in mid-March), wasn’t a swift, one-sided victory. As the more experienced Finnish troops dug in their heels, Russian advancement was proving slow and costly. So the Red Army sent in the robots.

Specifically, the Soviets deployed two battalions of Teletanks, most of them existing T-26 light tanks stuffed with hydraulics and wired for radio control. Operators could pilot the unmanned vehicle from more than a kilometer away, punching at rows of dedicated buttons (no thumbsticks or D-pads to be found) to steer the tank or fire on targets with a machine gun or flame thrower. And the Teletank had the barest minimum of autonomous functionality: if it wandered out of radio range, the tank would come to a stop after a half-minute, and sit, engine idling, until contact was reestablished.
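The Teletank’s lone piece of autonomy — halt about half a minute after losing radio contact, then idle until contact returns — amounts to a simple watchdog. A minimal sketch, in Python; the class name and the clock-as-parameter design are illustrative, and only the roughly 30-second grace period comes from the historical account:

```python
class TeletankWatchdog:
    """Toy model of the Teletank's only autonomous behavior: if radio
    contact is lost for more than ~half a minute, stop and idle until
    contact is reestablished. Times are in seconds."""

    TIMEOUT_S = 30.0  # the "half-minute" grace period

    def __init__(self, now=0.0):
        self.last_contact = now
        self.moving = True

    def on_radio_signal(self, now):
        """Operator's radio command received: record contact, resume."""
        self.last_contact = now
        self.moving = True

    def tick(self, now):
        """Periodic check: halt (engine idling) once contact is stale."""
        if now - self.last_contact > self.TIMEOUT_S:
            self.moving = False
        return self.moving
```

Driving out of range for more than 30 seconds trips the watchdog; a fresh radio signal puts the tank back under the operator’s control.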

Notably missing, though, was any sort of remote sensing capability—the Teletank couldn’t relay video or audio back to its human driver, most often located in a fully-crewed T-26 trailing behind the mechanized one. This was robotic teleoperation at its most crude, and made for halting, imprecise maneuvering across uneven terrain.

What good was the Teletank, then? Though records are sparse, the unmanned tanks appear to have been used in combat, including during the Battle of Summa, an extended, two-part engagement that eventually forced a Finnish retreat. The Teletank’s primary role was to throw fire without fear, offsetting its lack of accuracy with gouts of flame.

On March 13, 1940, Finland and the USSR signed a treaty in Moscow, ending the Winter War. It was the end of the Teletank, as well—in the wider, even more brutal conflict to come, the T-26 was obsolete in practically every way, lacking the armor and armament to stand up to German tanks, or even to antitank weapons fielded by the Finnish. With no additional units built after 1940, the T-26 was a dead design rolling, and the remote-controlled version was just as doomed.

For a few months, nearly three quarters of a century ago, Russia led the world in military robotics. It’s a position the country would never hold again, as both Soviet and post-breakup forces have all but abandoned the development of armed ground and aerial bots. Even as recently as 2008, during its conflict with Georgia—triggered, in part, by the downing of Georgian reconnaissance drones—Russian drones were all but absent, and its air strikes were entirely manned. While Russia hasn’t shied away from open warfare, it hasn’t made robots a battlefield priority.

Until recently, that is. A number of Russia-based aircraft makers have won contracts in the past few years to build combat drones, including a 5-ton model originally slated for testing this year, and a 20-ton model planned for 2018. Military officials now hope to have strike drone capability by 2020.

And while there’s no evidence that it will ever be deployed, Russia is, in fact, home to a gun-wielding ground drone. The MRK-27 BT, built by Bauman Moscow State Technical University and first unveiled in 2009, is a tracked weapon platform, armed with a machine gun and paired grenade launchers and flame throwers. Most likely, it will go the way of MAARS, SWORDS, MULE, and other imposing ground combat bots—which is to say, nowhere. So far, the Teletank is an anomaly among robotic weapons, a precursor with no real descendants. Or none, luckily, with any confirmed kills.

http://www.popsci.com/blog-network/zero-moment/tale-teletank-brief-rise-and-long-fall-russia%E2%80%99s-military-robots?dom=PSC&loc=recent&lnk=1&con=tale-of-the-teletank-the-brief-rise-and-long-fall-of-russias-military-robots

 

Controversy Brews Over Role Of ‘Killer Robots’ In Theater of War

Posted: 03/9/14 9:00 AM


Technology promises to improve people’s quality of life, and what could be a better example of that than sending robots instead of humans into dangerous situations? Robots can help conduct research in deep oceans and harsh climates, or deliver food and medical supplies to disaster areas.

As the science advances, it’s becoming increasingly possible to dispatch robots into war zones alongside or instead of human soldiers. Several military powers, including the United States, the United Kingdom, Israel and China, are already using partially autonomous weapons in combat and are almost certainly pursuing other advances in private, according to experts.

The idea of a killer robot, as a coalition of international human rights groups has dubbed the autonomous machines, conjures a humanoid Terminator-style robot. The humanoid robots Google recently bought are neat, but most machines being used or tested by national militaries are, for now, more like robotic weapons than robotic soldiers. Still, the line between useful weapons with some automated features and robot soldiers ready to kill can be disturbingly blurry.

Whatever else they do, robots that kill raise moral questions far more complicated than those posed by probes or delivery vehicles. Their use in war would likely save lives in the short run, but many worry that they would also result in more armed conflicts and erode the rules of war — and that’s not even considering what would happen if the robots malfunctioned or were hacked.

Seeing a slippery slope ahead, human rights groups began lobbying last year for lethal robots to be added to the list of prohibited weapons that includes chemical weapons. And the U.N., driven in part by a 2013 report by Special Rapporteur Christof Heyns, has set a meeting in May for nations to explore that and other limits on the technology.

“Robots should not have the power of life and death over human beings,” Heyns wrote in the report.

There’s no doubt that major military powers are moving aggressively into automation. Late last year, Gen. Robert Cone, head of the U.S. Army’s Training and Doctrine Command, suggested that up to a quarter of the service’s boots on the ground could be replaced by smarter and leaner weaponry. In January, the Army successfully tested a robotic self-driving convoy that would reduce the number of personnel exposed to roadside explosives in war zones like Iraq and Afghanistan.

According to Heyns’s 2013 report, South Korea operates “surveillance and security guard robots” in the demilitarized zone that buffers it from North Korea. Although there is an automatic mode available on the Samsung machines, soldiers control them remotely.

The U.S. and Germany possess robots that automatically target and destroy incoming mortar fire. They can also likely locate the source of the mortar fire, according to Noel Sharkey, a University of Sheffield roboticist who is active in the “Stop Killer Robots” campaign.

And of course there are drones. While many get their orders directly from a human operator, unmanned aircraft operated by Israel, the U.K. and the U.S. are capable of tracking and firing on aircraft and missiles. On some of its Navy cruisers, the U.S. also operates Phalanx, a stationary system that can track and engage anti-ship missiles and aircraft.

The Army is testing a gun-mounted ground vehicle, MAARS, that can fire on targets autonomously. One tiny drone, the Raven, is primarily a surveillance vehicle, but among its capabilities is “target acquisition.”

No one knows for sure what other technologies may be in development.

“Transparency when it comes to any kind of weapons system is generally very low, so it’s hard to know what governments really possess,” Michael Spies, a political affairs officer in the U.N.’s Office for Disarmament Affairs, told Singularity Hub.

At least publicly, the world’s military powers seem now to agree that robots should not be permitted to kill autonomously. That is among the criteria laid out in a November 2012 U.S. military directive that guides the development of autonomous weapons. The European Parliament recently established a non-binding ban for member states on using or developing robots that can kill without human participation.

Yet, even robots not specifically designed to make kill decisions could do so if they malfunctioned, or if their user experience made it easier to accept than reject automated targeting.

What if, for example, a robot tasked with destroying an unmanned military installation instead destroyed a school? Robotic sensing technology can only barely identify big, obvious targets in clutter-free environments. For that reason, the open ocean is the first place robots are firing on targets. In more cluttered environments like the cities where most recent wars have been fought, the sensing becomes less accurate.

The U.S. Department of Defense directive, which insists that humans make kill decisions, nonetheless addresses the risk of “unintended engagements,” as a spokesman put it in an email interview with Singularity Hub.

Sensing and artificial intelligence technologies are sure to improve, but there are some risks that military robot operators may never be able to eliminate.

Some issues are the same ones that plague the adoption of any radically new technology: the chance of hacking, for instance, or the legal question of who’s responsible if a war robot malfunctions and kills civilians.

“The technology’s not fit for purpose as it stands, but as a computer scientist there are other things that bother me. I mean, how reliable is a computer system?” Sharkey, of Stop Killer Robots, said.

Sharkey noted that warrior robots would do battle with other warrior robots equipped with algorithms designed by an enemy army.

“If you have two competing algorithms and you don’t know the contents of the other person’s algorithm, you don’t know the outcome. Anything could happen,” he said.

For instance, when two sellers recently unknowingly competed for business on Amazon, the interactions of their two algorithms resulted in prices in the millions of dollars. Competing robot armies could destroy cities as their algorithms exponentially escalated, Sharkey said.
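The dynamic Sharkey describes is easy to simulate. In the widely reported Amazon incident, each seller’s bot set its price as a fixed multiple of the other’s — one undercutting slightly, the other marking up by roughly a quarter — so the combined multiplier per round exceeded 1 and prices grew without bound. The function below is a toy sketch; the starting price and multipliers are illustrative, chosen only to reproduce the exponential blow-up:

```python
def escalate(price_a, price_b, mult_a, mult_b, rounds):
    """Two repricing bots, each setting its price as a fixed multiple
    of the competitor's latest price. If mult_a * mult_b > 1, each
    round multiplies prices by more than 1, so they grow exponentially."""
    for _ in range(rounds):
        price_a = mult_a * price_b  # bot A marks up B's price
        price_b = mult_b * price_a  # bot B slightly undercuts A's
    return price_a, price_b

# Illustrative run: a $20 item, ~27% markup vs. ~0.2% undercut.
# Combined factor per round is 1.27 * 0.998 ≈ 1.267, so after 50
# rounds prices are in the millions of dollars.
a, b = escalate(20.0, 20.0, mult_a=1.27, mult_b=0.998, rounds=50)
```

Neither bot knows the other’s rule, and neither rule is unreasonable in isolation — which is exactly Sharkey’s point about interacting algorithms with unknown contents.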

An even likelier outcome would be that human enemies would target the weaknesses of the robots’ algorithms to produce undesirable outcomes. For instance, say a machine designed to destroy incoming mortar fire, such as the U.S.’s C-RAM or Germany’s MANTIS, is also tasked with destroying the launcher. A terrorist group could place a launcher in a crowded urban area, where its neutralization would cause civilian casualties.

NASA is building humanoid robots.

Or consider a real scenario. The U.S. sometimes programs its semi-autonomous drones to locate a terrorist based on his cell phone SIM card. The terrorists, knowing that, often offload used SIM cards to unwitting civilians. Would an autonomous killing machine be able to plan for such deception? Even if robots plan for particular deceptions, the history of the web suggests that terrorists could find others.

Of course, most technologies stumble at first and many turn out okay. The militaries developing war-fighting robots are assuming this model and starting with limited functions and use cases. But they are almost certainly working toward exploring disruptive options, if only to keep up with their enemies.

Sharkey argues that, given the lack of any clear delineation between limited automation and killer robots, a hard ban on robots capable of making kill decisions is the only way to ensure that machines never have the power of life and death over human beings.

“Once you’ve put in billions of dollars of investment, you’ve got to use these things,” he said.

Few expect the U.N. meeting this spring to result in an outright ban, but it will begin to lay the groundwork for the role robots will play in war.

Photos: Lockheed Martin, QinetiQ North America, NASA

http://singularityhub.com/2014/03/09/controversy-brews-over-role-of-killer-robots-in-theater-of-war/

 

The Internet Is the Greatest Legal Facilitator of Inequality in Human History

It doesn’t have to be.
JAN 28 2014, 4:34 PM ET
Reuters

In the 1990s, the venture capitalist John Doerr famously predicted that the Internet would lead to “the largest legal creation of wealth in the history of the planet.” Indeed, the Internet has created a tremendous amount of personal wealth. Just look at the rash of Internet billionaires and millionaires, the investors both small and large that have made fortunes investing in Internet stocks, and the list of multibillion-dollar Internet companies—Google, Facebook, LinkedIn, and Amazon. Add to the list the recent Twitter stock offering, which created a reported 1,600 millionaires.

Then there’s the superstar effect. The Internet multiplies the earning power of the very best high-frequency traders, currency speculators, and entertainers, who reap billions while the merely good are left to slog it out.

But will the Internet also create the greatest economic inequality the global economy has ever known? And will poorly designed government policies aimed at ameliorating the problem of inequality end up empowering the Internet-driven redistribution process?

As the Internet goes about its work making the economy more efficient, it is reducing the need for travel agents, post office employees, and dozens of other jobs in corporate America. The increased interconnectivity created by the Internet forces many middle and lower class workers to compete for jobs with low-paid workers in developing countries. Even skilled technical workers are finding that their jobs can be outsourced to trained engineers and technicians in India and Eastern Europe.

That’s the old news.

The new news is that Internet-based companies may well be the businesses of the future, but they create opportunities for only a select few. Google has a little over 54,000 employees and generated around $50 billion in sales, or about $1 million per employee. The numbers are similar for Facebook. Amazon is running at a $70 billion revenue rate with around 110,000 employees, or a little over $600,000 in sales per employee. In the U.S., each non-farm worker adds a little over $120,000 to the domestic output.

That means that in order to justify hiring an employee, a highly productive Internet company must create five to ten times the dollars in sales as the average domestic company.
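The arithmetic behind that five-to-ten-times claim checks out against the article’s own rounded figures:

```python
# Back-of-the-envelope check of the revenue-per-employee figures cited
# above (all inputs are the article's rounded numbers).
google_per_emp = 50e9 / 54_000    # ≈ $926,000 per employee
amazon_per_emp = 70e9 / 110_000   # ≈ $636,000 per employee
us_nonfarm = 120_000              # output per non-farm worker

google_ratio = google_per_emp / us_nonfarm  # ≈ 7.7x the U.S. average
amazon_ratio = amazon_per_emp / us_nonfarm  # ≈ 5.3x the U.S. average
```

Both ratios land squarely inside the five-to-ten-times band the article describes.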

In the past, the most efficient businesses created lots of middle class jobs. In 1914, Henry Ford shocked the industrial world by doubling the pay of assembly line workers to $5 a day. Ford wasn’t merely being generous. He helped to create the middle class, reasoning that a higher-paid workforce would be able to buy more cars and thus would grow his business.

Ford’s success trickled down, as other companies followed his lead. Automotive companies not only employed numerous well-paid workers but also created a large demand for other products and services that employed millions more—steel, glass, machine tools, auto dealerships, gas stations, mechanics, bridges, roads, and construction equipment. The workers in those industries purchased homes, appliances, and clothes, creating still more jobs.

One reason we are failing to create a vibrant middle class is that the Internet affects the economy differently than the new businesses of the past did, forcing businesses and their workers to face increased global competition. It reduces the barriers to moving jobs overseas. It has a smaller economic trickle-down effect.

Doing some of the obvious things like raising the minimum wage to fight the effects of the Internet will probably worsen the problem. For example, it will make it more difficult for bricks-and-mortar retailers to compete with online retailers.

Surprisingly, the much-vilified Walmart probably does more to help middle-class families raise their median income than the more productive Amazon. Walmart hires about one employee for every $200,000 in sales, which translates to roughly three times more jobs per dollar of sales than Amazon. Raising the minimum wage will also make it more difficult to bring manufacturing jobs back to the U.S.

The Internet is not the sole force driving income inequality in the U.S. Our languishing education system is a major contributor to the problem. But two things are certain: the Internet is creating many of those in the ultra-wealthy 1%, and it forces businesses to compete with capable international competitors while providing the tools for businessmen to squeeze inefficiency out of the system in order to remain competitive.

If the government is going to be in the business of redistributing wealth, a better approach would be to raise the earned income tax credit and increase taxes to pay for it. Not only would this raise the income of low-paid workers, but it would also subsidize businesses so they would be more competitive in world markets and encourage them to create jobs. Since the minimum wage would not go up, moving jobs overseas would be a less attractive alternative.

If policy makers want to attack income inequality, they must pay more attention to the ways in which the Internet is affecting their businesses. If we ignore the power of the Internet when making policy decisions, we are in danger of allowing it to become the greatest legal facilitator of income inequality in the history of the planet.

Pray That This Scary, Galloping Four-Legged Robot Never Comes for You

Last year when DARPA released video of its new robot designed to replicate the movements (and, eventually, speed) of a cheetah, some of us were creeped out by the machine’s ability to “chase and evade” at the rate of nearly 30 mph.

The only thing reassuring was that the Cheetah was tethered by an electronic leash of cables. Now MIT spinoff Boston Dynamics has released WildCat, the sequel to Cheetah, which can only move at 16 mph at the moment, but can do so while untethered.

Funded by DARPA, WildCat uses a combination of galloping and bounding to move and gain speed, as the video below shows, and can already recover from stumbles and falls with ease.

It’s still not totally clear what military applications Cheetah or WildCat will have once fully developed (although one guess is to carry military equipment in war zones if it gets a quieter power source), but DARPA has previously said that Cheetah could be used for “emergency response, firefighting, advanced agriculture and vehicular travel.”

Allen McDuffee reports on defense and national security for Wired and is currently working on a book about the influence of think tanks in Washington.

Automation and job destruction

 

“Tom Friedman begins his latest op-ed in the NYT with an anecdote about Dutch chess grandmaster Jan Hein Donner who, when asked how he’d prepare for a chess match against a computer, replied: ‘I would bring a hammer.’ Donner isn’t alone in fantasizing that he’d like to smash some recent advances in software and automation, like self-driving cars, robotic factories, and artificially intelligent reservationists, says Friedman, because they are ‘not only replacing blue-collar jobs at a faster rate, but now also white-collar skills, even grandmasters!’ In the First Machine Age (the Industrial Revolution) each successive invention delivered more and more power, but they all required humans to make decisions about them. … Labor and machines were complementary. Friedman says that we are now entering the ‘Second Machine Age,’ where we are beginning to automate cognitive tasks, because in many cases today artificially intelligent machines can make better decisions than humans. ‘We’re having the automation and the job destruction,’ says MIT’s Erik Brynjolfsson. ‘We’re not having the creation at the same pace. There’s no guarantee that we’ll be able to find these new jobs. It may be that machines are better than that.’ Put all the recent advances together, says Friedman, and you can see that our generation will have more power to improve (or destroy) the world than any before, relying on fewer people and more technology. ‘But it also means that we need to rethink deeply our social contracts, because labor is so important to a person’s identity and dignity and to societal stability.’ ‘We’ve got a lot of rethinking to do,’ concludes Friedman, ‘because we’re not only in a recession-induced employment slump. We’re in a technological hurricane reshaping the workplace.'”

~Slashdot~

Your Pee Could Power Future Robots

Researchers have found a way to turn urine into electric power that could drive a robot.
Credit: Bristol Robotics Laboratory

There’s a new use for artificial hearts, and it involves a more taboo bodily fluid than blood.

A device that mimics the squeezing action of the human heart has been used to pump urine into a microbial fuel cell, which could power robots that convert the waste into electricity.

“In the future, we hope the robots might be used in city environments for remote sensing,” where they could help to monitor pollution, said study researcher Peter Walters, an industrial designer at the University of the West of England. “It could refuel from public lavatories, or urinals,” Walters said.

 

Walters and colleagues at the University of Bristol have created four generations of these so-called EcoBots over the past decade. Previous versions of the robots ran off energy from rotten produce, dead flies, wastewater and sludge. [Super-Intelligent Machines: 7 Robotic Futures]

Each is powered by a microbial fuel cell, containing live microorganisms like those found in the human gut or sewage treatment plants. The microbes digest the waste (or urine) and produce electrons, which can be harvested to produce electrical current, Walters said.

The researchers have already proved the microbial fuel cells can use urine power to charge a mobile phone.

Now, the team has developed a device, made of artificial muscles, that delivers real human urine to the robot’s microbial power stations. The pump is constructed from smart materials, called shape memory alloys, which remember their shape after being deformed.

Heating the artificial muscles with an electric current causes them to compress the soft center of the pump, forcing urine through an outlet that pumps it up to the height of the robot’s fuel cells. Removing the heat allows the muscles to revert to their original shape, allowing more fluid to enter the pump — much as a heart relaxes to suck in more blood.

Twenty-four of these fuel cells stacked together were able to produce enough electricity to charge a capacitor, which was used to trigger contractions of the artificial heart pump, the researchers report today (Nov. 8) in the journal Bioinspiration and Biomimetics.
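The pump’s duty cycle described above reduces to a two-phase loop: heat ejects a stroke of fluid, cooling refills the chamber. A minimal sketch; the function name and stroke volume are illustrative, not figures from the published paper:

```python
def pump_delivery(n_cycles, stroke_ml=0.5):
    """Toy model of the EcoBot pump's duty cycle. Each cycle: an
    electric current heats the shape-memory-alloy 'muscles', which
    contract and squeeze the pump's soft center, forcing one stroke of
    fluid up toward the fuel cells; removing the heat lets the alloy
    revert to its remembered shape, re-expanding the chamber so fresh
    fluid is drawn in (no net delivery during refill)."""
    delivered_ml = 0.0
    for _ in range(n_cycles):
        delivered_ml += stroke_ml  # heat phase: contract and eject
        pass                       # cool phase: relax and refill
    return delivered_ml
```

Total delivery scales linearly with the number of heat/cool cycles, which is why the cycle rate (limited by how fast the alloy cools) bounds the pump’s throughput.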

Whereas conventional motor-powered pumps tend to get clogged, the artificial muscle pump has larger internal orifices, Walters said.

While the new pump does produce more electricity than it consumes (since some of the electricity comes from urine that’s converted to electrons), it’s still not extremely efficient. The researchers hope to improve the pump’s efficiency for use in future generations of the EcoBot.