The robot courses across the battlefield, swiveling on sleek steel legs, turning its bullet-shaped head toward its prey. Its eyes flash red, then blue: signal received. Before the ensuing sonic boom and the inevitable end of hundreds of lives, it pauses a moment to reflect. Death is never easy, even with a heart of circuits.
Okay, so maybe not quite like that, but you get the drift. To a demographic familiar with Cylons and Replicants, the concept of robot warfare might not seem so far-fetched. But according to the New York Times, great strides are being made to program ethics, and even guilt, into robots designed for combat.
Believe it or not, there are already over 18,000 unmanned systems deployed in Iraq, so the debate over whether robotic warfare should happen is moot at this point. It’s here, whether or not we are comfortable with it. But Ronald Arkin of Georgia Tech is out to change the way robotic warfare is conducted, and to improve it: after a three-year programming project with the U.S. Army, he believes that “in limited situations, like countersniper operations or storming buildings, the software will actually allow robots to outperform humans from an ethical perspective.”
Essentially he’s working to create a robot super brain, one that far outperforms the average soldier in difficult situations. Arkin explains, “I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time and thus be able to make better informed decisions.”
And beyond logic and reason, beyond even ethics, Arkin wants to instill guilt in the programming to mimic “remorse, compassion, and shame.” According to the article, guilt allows an agent to change outcomes and generate constructive change.
While fighting, his robots assess battlefield damage and then use algorithms to calculate the appropriate level of guilt. If the damage includes noncombatant casualties or harm to civilian property, for instance, their guilt level increases. As the level grows, the robots may choose weapons with less risk of collateral damage or may refuse to fight altogether.
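To make that mechanism concrete, here’s a minimal sketch of how a guilt-weighted decision loop might look. To be clear, this is a back-of-the-envelope illustration, not Arkin’s actual software: the names (EthicalGovernor, DamageReport), the guilt weights, and the refusal threshold are all invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Weapon:
    name: str
    collateral_risk: float   # 0.0 (surgical) .. 1.0 (indiscriminate)
    effectiveness: float     # how well it does the job, 0.0 .. 1.0

@dataclass
class DamageReport:
    noncombatant_casualties: int
    civilian_property_hits: int

class EthicalGovernor:
    """Tracks a cumulative 'guilt' score and uses it to gate weapon choice."""

    REFUSAL_THRESHOLD = 1.0  # assumed cutoff: beyond this, decline to engage

    def __init__(self) -> None:
        self.guilt = 0.0

    def assess(self, report: DamageReport) -> None:
        # Assumed weights: noncombatant casualties raise guilt far more
        # than harm to civilian property does.
        self.guilt += 0.25 * report.noncombatant_casualties
        self.guilt += 0.05 * report.civilian_property_hits

    def choose_weapon(self, arsenal: list[Weapon]) -> Optional[Weapon]:
        if self.guilt >= self.REFUSAL_THRESHOLD:
            return None  # guilt too high: refuse to fight altogether
        # The more guilt accumulated, the less collateral risk is tolerated.
        max_risk = 1.0 - self.guilt
        options = [w for w in arsenal if w.collateral_risk <= max_risk]
        if not options:
            return None
        # Among the permissible weapons, take the most effective one.
        return max(options, key=lambda w: w.effectiveness)
```

Run it forward and you can watch the behavior the article describes emerge:

```python
governor = EthicalGovernor()
arsenal = [Weapon("missile", 0.8, 0.9), Weapon("sniper round", 0.1, 0.6)]

governor.choose_weapon(arsenal)  # guilt 0.0 -> missile (most effective)
governor.assess(DamageReport(noncombatant_casualties=2, civilian_property_hits=3))
governor.choose_weapon(arsenal)  # guilt 0.65 -> sniper round (missile now off-limits)
```

The interesting design choice, at least as the article frames it, is that guilt only accumulates: each new damage report can narrow the set of permissible weapons, until eventually the system refuses to fire at all.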
Of course, this raises a slew of questions. Can remorse really be programmed? Or, as I thought when I first read the piece, isn’t the whole point of robot warfare that the machines are inhuman? Or is it just a matter of being able to get into difficult places and minimize “true” loss of life? These are questions sci-fi has been exploring for years. It’s something else altogether to see them made real.
All this has happened before, and all of it will happen again. Two tickets to Kobol, anyone?