The War Nerd: Google’s Big New Dog

By Gary Brecher, written on February 13, 2014

From The War Desk

Google has bought eight robotics companies in the past few months, and no one seems to know why.

Most of the speculation has focused on one of the eight companies, Boston Dynamics (BD). BD is the only one of the eight that specializes in military robotics, always a fertile field for those addicted to “Rise of the Machines” fantasies. But BD’s products are also viscerally exciting, or disturbing, to those seeing them for the first time. That’s because its best-known robots use legs, not wheels or treads. The company has also given its most popular models catchy animal names, like the quadruped models “Big Dog” and “Cheetah.” You can watch Big Dog in this video, plodding up a steep slope and keeping its balance on a slick, frozen pond.

It’s a good promotional video, but the modest abilities Big Dog displays here can’t account for the excitement people seem to feel on seeing it trudge up that hill. A wheeled or tracked robot climbing a hill, crossing a frozen pond, wouldn’t get a second glance.

For that matter, why is Big Dog a robot? It’s a small vehicle, with legs instead of wheels, but there’s no evidence it can choose its own route or mission. With a little help from Google, your Nissan can drive home without your touching the steering wheel, but that doesn’t seem to qualify it as a “robot” or entitle it to a fraction of the press Big Dog is getting.

Clearly it’s those gimmicky legs, that imitation of mammal gait. Not that this gait is very fast or efficient; your Nissan is faster, smoother, quieter and can carry far more cargo on its boring old wheels—but we don’t call it a robot.

The rule seems to be that one sense of “robot” in contemporary English is something like “a machine that does a bad imitation of a living organism.” The Nissan isn’t trying to look or move like an animal, so we’re underwhelmed. Big Dog, clomping along like a bear designed by a Human Resources Department, is a robot and a delight.

This would be fine if it were a privately funded novelty act. In reality, Big Dog has been funded by the military for years, and is being promoted as a military supply vehicle, perfect for carrying supplies with small units moving on foot over rough terrain—a synthetic donkey, a clockwork St. Bernard, in other words.

This is odd, not to say implausible, on several grounds. For starters, what does that mission have to do with robots, or machine intelligence? A dirt bike can do that job, and nobody associates dirt bikes with high IQs (though that may have to do with their riders). For that mission, what matters is carrying capacity, range, noise when moving, fuel consumption, reliability, speed—and by those criteria, a real donkey beats this artificial one easily.

And when you compare Big Dog to other machines, like offroad bikes or ATVs, the useless gimmickry of the four-legged “robot” is even more obvious. A dirt bike has very little brain but it will claw up a muddy gulch wall better than Big Dog. Throw some ATVs into the competition, and how about one of those amazing Russian Kamaz trucks?

There just aren’t too many slopes, short of a rock wall, that can’t be climbed by bike, ATV, or even Kamaz. At any rate, I’m willing to offer Boston Dynamics a fair bet on a two-vehicle race: Team War Nerd, fielding a 2013 Honda CRF450X ridden by a volunteer (i.e. anybody thinner and less of an uncoordinated dweeb than me) vs. Big Dog and his team of handlers.

The course: 20 miles of the roughest ground you can find. The stakes: totally fair and balanced, to wit: The losing side gives its income for the year to the winner. If I win, all that DARPA money funding Big Dog goes to me, to be used researching military history in someplace that has a good coral reef right offshore. In the unlikely event of Big Dog romping over the finish line first, BD gets my yearly income, which will serve them right.

Big Dog needs computing power only because it’s trying to mimic vertebrate locomotion. Drop that gimmick and it’s a dirt bike or ATV, with more torque than brain.

There’s something very dubious about BD’s products, with their quixotic attempt to imitate mammal motion at a time when familiar machines with wheels have surpassed mammals in every category. Either the whole concept is a classic military boondoggle, or the stated purpose of these machines is not what we’re being told.

BD’s history is a good place to start. The company was started by Marc Raibert, who taught electrical engineering and computer science at MIT. Raibert’s specialty was balance—creating a robot that could keep its balance as well as vertebrates do. Raibert managed to build a robot that could hop without falling over—a great moment for those dreaming of an all-robot production of Riverdance, no doubt. But that breakthrough created a lot of enthusiasm in a more lucrative audience: the research agency for the Department of Defense (DoD), the Defense Advanced Research Projects Agency (DARPA). DARPA and other military agencies have been BD’s main clients for its entire history.

Which raises the same question I keep asking: Why? Why are legs so wonderful, when wheels and treads can do pretty much everything legs do, only faster—much faster?

Two possibilities come to mind: (a) It’s the Department of Defense, which means that insane profligacy with tax money is all we’re seeing; or (b) It’s the human-like or mammal-like motion that DARPA values—not for the stated reason that legs work better on bad terrain, but because DARPA wants a generation of military robots that looks human/mammalian and moves like a mammal. The gimmick, the anthropomorphism, is the goal in itself. To what end we can only guess.

I’m leaning toward option (b), but if you know anything about DoD, you can’t just dismiss “insane profligacy” out of hand. In fact, supply vehicles that walk on machine legs are an old dream of DARPA’s. Way back in the Vietnam War, DARPA put a lot of tax money into a project that stood out for sheer idiocy in a war defined by DoD idiocy: a “mechanical elephant” that could carry supplies through steep, roadless jungle—a bigger, earlier version of BD’s Big Dog.

Big Elephant was scrapped as a “damn fool” idea before it took its first steps—a real loss to comedy, if not military logistics—but DARPA hasn’t stopped dreaming about military robots that walk rather than roll.

What about this recurring claim that legs work better than wheels or treads when the goin’ gets tough? It makes no sense at all. Not even the specious sort of sense one finds in many DoD theories. The argument behind it is that long before wheels and tracks dominated movement, the world crawled with creatures that used legs—two or four or six or eight. This, the argument goes, proves that legs work better on roadless, rough terrain like the landscape in which the legged creatures evolved.

The only argument against this is the fact that machines moving on wheels and treads showed long ago that they can move faster—much faster—than anything on legs. And continue far longer without tiring. And carry loads thousands of times heavier. Over all kinds of terrain.

If you want to see wheeled vehicles dealing with terrain much rougher than anything Big Dog takes on, and moving through it with ease at high speed, check out a Russian monster truck race. Russians take big trucks and mud real seriously, and I have yet to see a Russian engineer lose faith in the wheel and instead design trucks with legs.

That claim doesn’t hold up. But if you watch the promotional video for Big Dog, marching up the hill like a mechanical mastiff, it’s easy to see why people are so amazed they don’t bother to think about the claims made for this marvel.

In fact, all Big Dog does in his screen-test video is trudge, slowly and noisily, up a hill, then keep its balance after being kicked while crossing a frozen pond. Balance: BD’s pitch keeps coming back to its one and only breakthrough, Raibert’s work on perfecting balance in robots.

But why is that worth DARPA’s time? The only reason Big Dog needs good balance is that it’s imitating the mammal shape, with its high center of gravity—top-heavy body and head on long, skinny legs. The problem of falling over when kicked doesn’t even apply to an ordinary ATV with a low center of gravity. The three best kickers in MMA—Jon Jones, Cro-Cop in his prime, and Anderson Silva—would have a hard time kicking over a heavy-duty ATV (let alone a Kamaz).

And that animal mimicry is such a huge design cost that the product is slow and noisy, extremely noisy. If you watched the Big Dog video with the sound off, try again with the volume up. You’ll hear Big Dog whining like a chorus of chainsaws as it tries to get up that hill.

That alone rules this contraption out for a small-unit mission on foot over rough terrain. And if Big Dog isn’t useful for that kind of mission, what is it good for?

Unless DARPA is insane (a real possibility), the “supply vehicle” story is ridiculous. BD’s chassis, with its animal shape and gait, is such a huge design cost that it must be an end in itself. So the mission must involve looking and moving like a human or a quadruped mammal.

When you reframe the question that way, a plausible role for these walking machines pops up instantly. Most likely, BD’s anthropomorphic walkers are slotted as the chassis for a new generation of military robots whose software is being developed somewhere else, away from all the publicity. Remember how every Chevy used to have a stamp, “Body by Fisher”? This generation of robots will be stamped “Body by Boston Dynamics.” Big Dog and his metal buddies are going to have their heads sawed open and fitted with brains, subcontracted to somebody with more pure AI experience, and then shipped off to do the missions human soldiers can’t, or won’t, do effectively.

And it’s pretty clear what that job is: Counterinsurgency (CI), the most important military mission we have, and the one our military hates and refuses to take seriously.

The reason the US military hates CI is that it’s defined by “the Three D’s” of counterinsurgency: “dull, dirty, and dangerous.” Not to mention that it reeks of Vietnam and Iraq, our worst military failures (a fact which is, let’s say, not unrelated to our distaste for CI, making the military’s aversion to and avoidance of the job one of those “self-fulfilling prophecies” your high-school counselor warned you about).

Robots are naturals for jobs characterized by the “Three D’s.” Like vacuuming. You like vacuuming? Me neither. Which is why Roomba was invented.

Machines don’t get bored. The Roomba never daydreams of being a cruise missile, as far as I know. It will vacuum until it breaks down, no need for R&R. The motion sensor on your garage never gets tired of being a motion sensor, never daydreams about going to Vegas. Consider land mines, which could reasonably be called the simplest military robots, because they operate without human help once programmed or set. In this way, a mine is much more like a robot than the drones everyone’s worried about. Drones are just fancy model airplanes; they have no capacity to attack their targets without a human operator and his/her supervisors making the call.

Mines don’t need any help, once in place. They never get bored or distracted. They remain in place until their sensors are triggered, and then they detonate. It doesn’t matter if the war ended years ago; they’re still not distracted or bored. Dull is not a problem for a land mine.

What about the second D, “dirty”? It’s a huge problem for human soldiers occupying another country, dealing with an insurgency. These wars are dirty in every way you can use the word. Literally dirty, because these wars, by their nature, happen in poor countries where there are no public services, where the toilet is a pit and water is a precious commodity you buy or carry a long way home. And since the occupying soldiers are from a richer country—again, by the nature of such wars—it’s hard for them, even before their first ambush, just dealing with the smell and the dirt. They hate the locals before the ambushes even start.

And that leads to the other kind of “dirty,” the sleaze that irregular war always encourages. A squad searches a house and one of them steals a gold necklace; nobody wants to lose out, so they all steal. A man objects and gets shot; the squad plants a gun and keeps quiet.

This is an aspect of what the Army likes to call “unit cohesion,” banding together in combat—but it’s the worst thing a CI force can do. You’re all dirty now, and the whole city knows it and hates you. Any sympathy you had is gone. The guerrillas get more and more good info; you get lies or nothing at all.

Now imagine a patrol of non-human units, say BD Atlas chassis with a good program, dealing with the “dirty” stuff. For them, there is no dirt, literal or figurative. The neighborhood dirt and smells don’t register at all, and the emotions that lead to stealing, murder, rape, and humiliation of the locals don’t exist. If the units are doing something counterproductive, you alter their programming; there’s no grudge, no memory, no resistance.

Now comes the last and most important D: “Dangerous.” This is where robots could really revolutionize CI warfare. Over the last century, guerrillas have developed a kind of military miracle, a strategy for defeating bigger, wealthier, better-equipped occupying armies. It’s worked again and again, all over the world—and it works not by tinkering with weapons or massing giant armies, but by playing with the occupiers’ emotions, warping them patiently, back and forth, until the occupying soldiers are so scared, resentful and vengeful they’re no use at all, and are actually recruiting for the guerrillas among the civilians, the third group that both sides are trying to win over.

No other form of war depends so entirely on playing with the enemy’s emotional responses. Both sides are competing for the civilians: the occupier wants to win them over by “winning hearts and minds”; the guerrilla wants to make the occupying force lash out at the civilian population. It doesn’t take much, actually. Most combat soldiers are young, male, provincial, and hungry for the group’s approval. If your squad goes out on patrol and loses a soldier or two each time—to “cowardly” guerrilla tactics like snipers or IEDs—then the group will preach hate for all the locals, with no distinction between guerrillas and civilians. Eventually, the group will act on that belief by firing every weapon it has at anything that moves, or any house in sight.

That’s when the guerrillas win—when the occupier starts raging around like a blind giant, killing old women and kids, disabled, housebound elders, all those who are supposed to be off limits. When that happens, the guerrillas start getting donations, information, volunteers, and the soldiers hole up behind sandbags. They’ve lost the neighborhood, in spite of their advantage in money and weaponry.

If the occupiers sortie out and blast the neighborhood, the guerrillas will usually not even fight back. These sorties just make the occupiers more monstrous, more hated, more isolated. To keep the civilians’ hunger for revenge satisfied, the guerrillas use all their new, eager informers to find out when the next patrol leaves the base. There’s an IED waiting for it, and in the gory mess after it goes off, the soldiers overreact wildly, firing the tank’s main cannon into apartment houses. At that point, the war is over and the guerrillas win, even if it takes years for the foreigners to leave.

Sooner or later, the occupiers leave. They’re the only party that can leave. The civilians have nowhere else to go, and the guerrillas have plans for the day when the foreigners pull out. In a year, or ten years, the budget at home is tightened, the polls say the war is hurting the ruling family, or the oligarchy finds something else to obsess about—and the occupying forces leave, hating and hated by everyone.

Now, imagine a CI unit in which the patrols venturing into dangerous areas are robotic, not human. All that guerrilla theory is suddenly obsolete. The guerrillas might be very good at their job, and “kill” several robotic units patrolling the neighborhood. But the reprisals they’re hoping for just won’t happen, and that ruins their whole strategy. The units feel no anger or fear, no grief for the damaged units. They follow their programming, unmoved.

The guerrillas repeat the process, still hoping for reprisals. There are none. The robot patrols may not even need to retaliate. In theory, an occupier rich and patient enough to focus on the “hearts and minds” job could decide to keep counter-guerrilla violence to a minimum. The occupying power could simply keep sending more robotic units to replace those destroyed, while the robot patrols focus on projects like improving sanitation, roads, and electric power. Most occupying armies talk about that kind of work, but it’s hard for humans to feel very gung-ho about Peace Corps chores on behalf of the people who are trying to kill them. Robotic units have no grudges, making them lethal CI soldiers.

If the occupiers had the patience and money to continue this experiment in CI strategy long enough, the guerrillas would become more violent toward the civilian population as the robot units became less hated. Guerrillas are only human—worse yet, they’re usually young males, easily outraged and prone to violence. The guerrillas expect the civilians to share their outrage, but with no reprisals and occupying units fixing up the streets and sewers for the first time in anyone’s memory, most civilians would rather wait and see. Eventually, some “collaborators” will be killed by the more emotional guerrillas. At that point, the occupiers win no matter what happens next. Maybe the guerrillas split over these reprisals, and civil war destroys the resistance. Maybe the robot units are so trusted by now that a delegation of those with an interest in stability—rich people, parents with military-age sons they’d like to keep alive, vulnerable minority sects—goes to the outpost gate to present a list of guerrilla leaders and their present addresses.

No real war is likely to go that smoothly for an occupier, even with automated troops. But then, few guerrilla wars go as smoothly as the guerrilla victory I outlined above either.

What is intriguing about robot units in CI warfare is that emotion, the key to guerrilla strategy, is off the table—unless the people programming and running the automated units project their fickle, unstable reactions onto their robots. Which is all too possible. And in that case—well, it would be just as easy to program automated soldiers to kill everything that moved in a certain neighborhood as to focus on helping repair the infrastructure. But an occupying power could use nukes for that, more quickly and cheaply than robot soldiers.

What robot soldiers could do is just as scary, though: Make outright colonialism a practical option again. If guerrillas can’t provoke reprisals by playing on the soldiers’ fear and hate, then there’s only one other player in the game whose emotions can be exploited—the civilian population. That puts the guerrilla in the occupying army’s traditional role. It’s the human guerrillas—as vengeful and unpredictable as most humans are—who become resented, even if the neighborhood agrees, in theory, with their struggle against occupation. The guerrillas are the only wild card, so they are the element to fear and eventually, to hate.

Meanwhile, the people running the occupation feed in replacement units and plan how to siphon off whatever it is they wanted in the occupied area, a world away from the shooting. And their machine-soldiers—never homesick, never scared, never angry—can keep this up forever, or until a newer model comes along. No doubt some company will become the Toyota of machine-soldiers, and their commercials will feature a rusty old unit suddenly famous because the guerrilla this veteran unit just killed turns out to be the great-grandson of the first one it neutralized when shipped to the occupation zone as a squeaky-clean product, fresh out of the carton.

When you imagine military robots in this scenario, the huge design costs of BD’s humanoid and mammalian chassis begin to make sense. Machines make better CI soldiers than humans, but only if they still resemble humans in outline. A checkpoint manned by occupation robots with no human characteristics would be too alienating.

What’s more likely is a mixed squad, with some actual humans—back out of suicide-bomber range—and BD-derived models, biped and quadruped, fronting the public. There’ll be mockery, but that’s another weapon only useful on gregarious mammals. Over time, the inhuman discipline of the walking machines will make their approximation of familiar organisms acceptable. Better a humanoid who doesn’t commit reprisals than a fully human foreigner with a temper and a 25mm automatic cannon.

The advantages—for the occupier—are almost endless. No relatives holding up pictures of the dead outside the White House. No lawsuits. No PTSD. And huge, almost unimaginable profits for whoever holds the patents.