I want to highlight three points in the New York Times report this week about the travails of remote-control warfare.
First, any time that you pressure people for results against potential victims from whom they are emotionally detached, you will get some level of atrocity no matter how moral or necessary your cause may be. Essentially, the Pentagon ran its anti-ISIS campaign in Syria like a massive Milgram experiment. Second, by divorcing the results of that experiment from performance review (because secrecy, natch), the iron law of institutions prevented any timely, intelligent response to whistleblowers.
As bad strikes mounted, the four military officials said, Talon Anvil’s partners sounded the alarm. Pilots over Syria at times refused to drop bombs because Talon Anvil wanted to hit questionable targets in densely populated areas. Senior C.I.A. officers complained to Special Operations leaders about the disturbing pattern of strikes. Air Force teams doing intelligence work argued with Talon Anvil over a secure phone known as the red line. And even within Talon Anvil, some members at times refused to participate in strikes targeting people who did not seem to be in the fight.
Finally, to legalistic minds, the tactic of jerking drone cameras away from a target just before impact to prevent documentation of civilian deaths implies consciousness of guilt by the operators. Setting aside this argument, however, we must bear in mind that remote operators pay a higher psychological toll than soldiers on the ground precisely because they more often witness the results of their fire, and because their fire is far more lethal. Looking away is evidence of a conscience.
Herein lies the 21st-century conundrum. Very few human beings are actually capable of keeping their eye on dubious or obviously wrong human targets under their fire with perfect detachment. To eliminate all human error and misbehavior, we would need a truly unblinking eye. Put simply, we would need to let artificial intelligence take charge, and that would be the beginning of the end for us all.
Our millennium could be a short one. The freeway to global dystopia may well be paved with exactly these sorts of best intentions. Accountability is one thing, added complexity quite another, because highly complex systems carry added vulnerability. We would all be better off accepting that “war is hell,” and focusing on conflict reduction, instead of following the techno-utopian impulse to micromanage the execution of policy by piling on ever more layers of technology.
Leave a human finger on the trigger. The other way lies madness.