Developing a murder

Recently the topic came up again of the new age of artificial intelligence and how much power we should give it. Which decisions do we want to entrust to an AI?
The question is almost as old as artificial intelligence itself: do we need to be afraid of what it can do?
We use machines for a broad range of activities and work, but letting them make independent decisions is still rare.
Asimov’s “Three Laws of Robotics” exist for that very reason: they were created to ensure that creations do not act against their creators.
The problem persists because we must also ask ourselves, from a moral and ethical perspective, whether we should allow AIs to make such decisions.

What if a robot could save 10 people by sacrificing 1?
2 for 100? 10 for 1000? Or maybe 5 for 1 special person?
Where shall we draw the line?
At bottom, we are talking about cold numbers. If no other factors come into play, which place would you protect from certain death: the small village of 100 or the city of 10,000? These numbers are rational and free of emotion, and logic demands saving the city over the village.

The following example is more realistic: a driving situation.
Two cars. One is driven by you; the other carries a family of four. An emergency arises, and there is only one safe spot on the street, with room for a single car. The other car will crash. Within a fraction of a second, a computer could assess the situation and decide to make room for the car with the family, thus killing you.
Do you want to allow your car to kill you to save a stranger’s family?
And what if a flaw in the system kills you for no reason, causing a crash on the street and potentially injuring or killing more people in the aftermath?

No matter how you twist and turn such scenarios, I always end up with the same conclusion: we humans are not perfect, ergo we can only create imperfect things. Nothing we create could ever be good enough to make such decisions for us. Only living beings should be allowed to decide over life and death.
While you can argue that machines are always logical and rational, they were built by humans and inherit some of our flaws.
Life is the highest good we possess, and we should not give away control over it. While a human is emotional, irrational, and illogical, so is life, and so are the circumstances of the situations that demand such decisions.
We are not as fast as machines, but machines cannot assess life or weigh it against the greater interest.

Machines may work for us, assist us, and make things easier, faster, and safer. But a lifeless robot cannot and must never make decisions over the lives of living beings.

Sources:

Popular Science: “Mathematics of Murder”

Isaac Asimov’s “Three Laws of Robotics”
