by Barry Goldman
Let’s start with a simple case. You want to know if the tires on your bike are properly inflated. You give them a squeeze. They feel fine. But you want to be sure. So you get out your handy tire pressure gauge. It says they’re at 68 psi. The tires say they want to be at 72. So you give them another shot of air. You trust the tool more than you trust your own senses. Why?
I think there are three reasons. You trust the gauge because you believe it has more reliability and more validity than your squeeze. Reliability means the gauge will give the same measurement whenever the tire pressure is the same, and a higher or lower measurement when the pressure is higher or lower. Validity means the gauge is measuring what you want to measure – tire pressure – and not some other variable. It may be that if your fingers are hot or cold, or if you are hungry, angry, lonely or tired, the accuracy of your tire-squeezing suffers. A pressure gauge doesn’t make those mistakes.
The third reason you trust the gauge, logically prior to the other two, is that you believe tire pressure is the kind of thing that it makes sense to measure with a tool. You believe you’re not making a category mistake. We’ll come back to that.
Now suppose you’re buying a new pair of shoes. The shoe store has a machine that tells you what size you need. You place your feet on the black box and the robot brings you a pair of shoes. You put them on and they squash your toes. You complain to the robot. “These shoes squash my toes,” you say. The robot says you’re mistaken. It says it was trained by the finest experts using millions of data points and it has access to vast troves of information you can’t possibly be aware of and it knows more about shoes and toes than you will ever know and you are just wrong.
Is there anything the robot could say that would convince you to trust the machine instead of your own senses? You were willing to do it with the tire gauge. What is the difference between tire pressure and toe pressure?
I think we can agree that while you are quite likely to be wrong about tire pressure, it is impossible for you to be wrong about whether your shoes are too tight.
Good. Now we have a case at either end. We have a case where we agree we would be perfectly willing to trust the machine rather than our own senses, and we have another case where there is nothing the machine could say that would get us to trust it. Now we can talk about other cases and try to determine where they fit relative to those two.
Suppose I say I have a machine that measures the quality of bolognese sauce. You put a tablespoon of your beloved family recipe bolognese in the little hole in the top of the black box. It does whatever it does. (The fact that you don’t know how it works shouldn’t be a problem. You don’t really know how a tire gauge works, do you?) After a few seconds the machine shows a gauge with a needle pointing to POOR.
Do you believe it? I’m not asking whether you believe the machine says your sauce is crummy. It says what it says. I’m asking whether there is anything the machine could say that would get you to believe your sauce is crummy.
One possible response is to say there is no point in having this discussion. De gustibus non est disputandum – there is no disputing taste. Everyone is entitled to their opinion. You happen to like your family recipe. Other people may prefer grocery store bolognese from a can.
But suppose I say my machine was trained on data from thousands of expert evaluations performed by the finest chefs testing bolognese from the best restaurants in the world? We could quibble about who the experts are and which restaurants are the best and how many tastings are enough, but when the quibbling is over, you would agree, wouldn’t you? After all, what does it mean to say this is good bolognese except to say that people who know about bolognese think this is good bolognese? What would it mean to say all the experts agree this is crummy bolognese, but they’re mistaken? Does that even make sense?
Notice, however, there was a hidden move there. No expert chefs tasted your bolognese. A machine tested it and reported that its data analysis shows that if experts had tasted it they would have said it is crummy. This hidden move may be hiding a category mistake. It could be that a minimum requirement for a reliable taste tester is that it has to have a mouth. But let’s proceed.
I think we can agree that there are some conditions under which you could be persuaded to trust the bolognese machine even when you disagree with it. But there are no conditions under which you could be persuaded by the shoe size machine against the evidence of your own senses. Why? Because you are willing to allow that there are people more expert than you on the subject of bolognese. But there is no one more expert than you on the subject of how your toes feel. In fact, the idea that there could be a more reliable expert on the subject is just silly.
So now we’re ready to talk about autonomous weapons.
Suppose we’re in a war. Assume it’s a “just war.” We were attacked by an unprovoked aggressor. Suppose I have a machine that can fly around over the battlefield unguided, pick its own targets, and launch its own missiles. Suppose it’s very efficient. It can work all day and all night without a break. It doesn’t get bored or distracted. It doesn’t lose its nerve, and it doesn’t get consumed by the desire for vengeance. Suppose it was trained with millions of data points by the world’s foremost experts on target selection and missile selection and whatever else you’re supposed to use to train an autonomous weapon. Remember, in this hypothetical we’re in a war. Killing the enemy is the goal. Killing the enemy quickly, cheaply and efficiently is the whole idea. So, do you want to buy my machine?
If not, why not?
Here I turn to Peter Asaro and his article “Autonomous Weapons and the Ethics of Artificial Intelligence.” Asaro believes it is immoral to use an autonomous weapon:
[T]he decision to use violent force and take human life requires a human capable of assessing the situation, determining the necessity to engage a weapon on a target, who has access to the moral and legal justification for the use of violent force, and who can take moral and legal responsibility for the consequences of that decision.
Since machines and automated functions are not moral and legal agents, it is inappropriate to delegate moral and legal authorities to such systems. In the case of autonomous weapons, it is immoral, and should be illegal, to delegate the authority to kill, or to select and engage targets with violent force, to such systems.
For Asaro, trusting an autonomous weapon to use lethal force is a category mistake. A computational system is not the kind of thing that can make those kinds of decisions, no matter how “well-trained,” no matter how “efficient” such a machine might be. If I were a Ukrainian soldier I imagine I would feel differently, but I think he is right.
***
