Photo: This image was generated with Adobe Firefly Image 2, an AI tool
Science

UvA PhD student: “The AI arms race has long since begun”

Jip Koene,
13 February 2024 - 10:14

Artificial intelligence has become indispensable on the battlefield. Besides being used for intelligence gathering, it powers weapons under development that can carry out attacks on their own. But does the humanitarian law of war provide sufficient guidance in wartime? And who do we hold responsible when things go wrong?

Over the past decade, artificial intelligence (AI) has led to important new applications, including ChatGPT. It is often said in military circles that AI will become indispensable on the battlefield of the future, not only for intelligence gathering but also for deploying autonomous weapons. The international community states that all military AI systems must comply with the humanitarian law of war, but applying this law to new technologies is not always straightforward. We posed five questions to UvA doctoral student Jonathan Kwik.
 
What exactly are we talking about when we speak of autonomous weapons?
“Autonomous weapons are those in which a crucial part of the decision-making process, such as the use of force, is left to artificial intelligence. At present, only weapons that approximate this are in use; the commanding officer still plays a key role in their deployment.”

“In Ukraine, for example, they use the Javelin, a kind of anti-tank missile that uses an algorithm to find the fastest and best route to take out a tank, against Russian tanks. Or consider the deployment of Harpy missiles by Israel. These are a kind of drone missile that autonomously locates enemy radar systems and destroys the source of the signal.”
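
The route-finding Kwik mentions can be pictured with a small sketch. The code below is a generic shortest-path search (Dijkstra's algorithm) over an invented cost grid; it is purely illustrative and says nothing about how the Javelin or Harpy software actually works.

    import heapq

    def shortest_route(grid, start, goal):
        """Find the cheapest route over a 2D grid of traversal costs (None = impassable)."""
        rows, cols = len(grid), len(grid[0])
        frontier = [(0, start, [start])]  # (cost so far, current cell, path taken)
        best = {start: 0}
        while frontier:
            cost, (r, c), path = heapq.heappop(frontier)
            if (r, c) == goal:
                return cost, path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                    new_cost = cost + grid[nr][nc]
                    if new_cost < best.get((nr, nc), float("inf")):
                        best[(nr, nc)] = new_cost
                        heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
        return None  # no passable route

    # Invented terrain: movement cost per cell, None marks an impassable cell.
    terrain = [
        [1, 1,    1],
        [1, None, 1],
        [1, 1,    1],
    ]
    print(shortest_route(terrain, (0, 0), (2, 2)))  # cost 4, route around the blocked cell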
 
“In addition to weapons, artificial intelligence is also being used to provide information on the battlefield. One example is swarm robots. These robots are released by the hundreds to gather information about the locations and numbers of military troops, or to find the best supply route for transporting weapons to the front. The commanding officer can make strategic and tactical decisions based on that information.”
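
That information-gathering role can be pictured in a similarly simplified way: many individual swarm reports are reduced to per-sector estimates a commander could act on. The sector names and troop counts below are invented for illustration.

    from collections import defaultdict

    # Hypothetical reports from individual swarm units: (sector, estimated troop count).
    reports = [
        ("north", 12), ("north", 9),
        ("east", 30), ("east", 34), ("east", 28),
    ]

    def summarize(reports):
        """Average the estimates per sector so they can be compared at a glance."""
        totals, counts = defaultdict(float), defaultdict(int)
        for sector, estimate in reports:
            totals[sector] += estimate
            counts[sector] += 1
        return {sector: totals[sector] / counts[sector] for sector in totals}

    print(summarize(reports))  # {'north': 10.5, 'east': 30.67}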
 
But can't an awful lot go wrong with the deployment of autonomous weapons? For example, how can we be sure that the weapons only hit military targets and not civilians?
“In principle, an autonomous weapon should be able to distinguish between military targets and civilians. A tank, for example, has a different shape, weight, and heat output than a bus or a car. The more specific the target selection, the more certainty there is and the fewer mistakes are made. However, uncertainties always remain, especially if adversaries take countermeasures against the AI.”
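
What more specific target selection can mean in practice is easiest to see in a deliberately crude rule-based sketch. The features, weights, and threshold below are invented; real target-recognition systems rely on much richer sensor data and learned models rather than a handful of hand-written rules.

    def classify(signature, threshold=0.9):
        """Label a contact 'military' only when simple cues point that way with enough margin."""
        score = 0.0
        if signature["length_m"] > 6.5:   # longer than a typical car
            score += 0.4
        if signature["heat_c"] > 80:      # hot engine and exhaust
            score += 0.4
        if signature["tracked"]:          # tracked rather than wheeled vehicle
            score += 0.2
        label = "military" if score >= threshold else "unknown - do not engage"
        return label, score

    tank_like = {"length_m": 7.9, "heat_c": 95, "tracked": True}
    bus_like  = {"length_m": 12.0, "heat_c": 70, "tracked": False}
    print(classify(tank_like))  # ('military', 1.0)
    print(classify(bus_like))   # ('unknown - do not engage', 0.4): long, but cool and wheeled

Raising the threshold corresponds to more specific target selection: fewer wrongful engagements, at the price of more targets that are left alone.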
 
“For instance, adversaries can mislead autonomous weapons by causing a malfunction or by feeding the algorithms data that make them ‘hallucinate.’ These kinds of manipulations can lead to serious problems, such as an autonomous weapon no longer being able to distinguish between different targets, which creates legal and operational complications.”
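
Continuing the toy classifier from the previous sketch (it assumes classify() is already defined), a small change in the measured features is enough to push a contact across the decision threshold in either direction, which is the kind of manipulation Kwik describes. The scenarios are, again, invented.

    masked_tank = {"length_m": 7.9, "heat_c": 55, "tracked": True}    # heat signature suppressed
    decoyed_bus = {"length_m": 12.0, "heat_c": 95, "tracked": False}  # decoy heat source added

    print(classify(masked_tank))                  # ('unknown - do not engage', 0.6): a real target slips through
    print(classify(decoyed_bus))                  # ('unknown - do not engage', 0.8): held back by the strict threshold
    print(classify(decoyed_bus, threshold=0.75))  # ('military', 0.8): a looser setting misidentifies a civilian bus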

It may well be that the programmer is a racist who has caused certain ethnic groups to be targeted

If things go wrong, who do we hold responsible? AI or the commanding officer?
“That's always the question, isn't it? As far as I'm concerned, the responsibility lies primarily with the commanding officer. After all, he is the end user of the systems and has control over their deployment. The commanding officer also decides on the behavior of the AI system by changing technical settings such as the precision of target selection.”
 
“But this is not to say that the producers and other parties are never responsible the moment something goes wrong. It may well be, for instance, that the programmer of the systems is a racist who has caused certain ethnic groups to be targeted. At that point, of course, you cannot say that the commanding officer is responsible. Then it seems pretty clear to me who the culprit is.”
 
If there are so many snags with autonomous weapons, why should we want to use them?
“AI is by now indispensable on the battlefield. It can no longer be banned, either. Besides, it offers significant advantages, such as speeding up defensive responses and making it far more efficient to process the large amounts of information generated on the battlefield. It is therefore crucial for strategic and tactical decisions. At the same time it is a dilemma, because not using AI systems puts you at a strategic disadvantage in conflicts where your opponents do deploy them. That arms race has long since begun.”

Don't these technologies create problems in the application of the humanitarian law of war?
“The humanitarian law of war is written in a very technology-neutral way. You are not allowed to attack civilian targets, you must ensure that enough humanitarian aid can reach the scene, and you have a duty to warn civilians so that they can evacuate before a bombing, for example. These rules apply to guns, bombs, missiles, and cyberattacks just as much as to autonomous weapons. Yet the rules are open to interpretation. How do you define civilian targets, what counts as enough humanitarian aid, and how far in advance should you warn civilians? Along those lines, the various articles of the law are re-examined whenever new developments such as autonomous weapons arise, and we can then make additional agreements about them.”
 
“Treaties play an important role in this. Biological and chemical weapons, for example, are prohibited because we have agreed on that ban together in international treaties. The humanitarian law of war also prohibits them indirectly, by stating that you may not use weapons that cannot distinguish between military and civilian targets, such as viruses. The same principle applies to autonomous weapons. I would not rule out the possibility of international agreements on the use of certain autonomous weapons.”