The Trolley Problem is Not the Problem

Eli Lyons
6 min read · May 2, 2021
Image: Trolley Problem, Wikimedia Commons <https://commons.wikimedia.org/wiki/File:Trolley_Problem.svg>

The trolley problem is a thought experiment, often used as a starting point for discussions about ethics. Five people are in the path of an oncoming trolley; do you pull the lever to divert it onto a side track where it will kill only one person?

You may recognize this as similar to the ethical issues that arise with self-driving cars. A grandma is in the middle of the road; should the Tesla swerve away, knowing that it will then hit a man on the sidewalk?

The answer is that it doesn't matter what your answer is, unless you are also engaged with the problem at the level of society.

Let me pose a third question.

Why do airplanes have pilots?

The other day I asked a college student this question.

“Because we don’t have the technology.”

We've been flying automated missiles and automated spacecraft for a long time. When I was in college, my engineering professor asked our class this same question, and even then he was confident that we already had the technology. So, technology is not the problem.

“The computer could malfunction.”

So you just build a backup system, or even two backups. You can have more redundant computers than you could ever have human pilots.
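To make the redundancy point concrete, here is a minimal sketch of 2-out-of-3 majority voting among redundant computers. This is my own illustration, not real avionics code; the function name and the example commands are made up.

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value agreed on by a majority of redundant computers.
    If no majority exists, a real system would fail over or alert the
    crew; here we simply raise an error."""
    value, votes = Counter(outputs).most_common(1)[0]
    if votes > len(outputs) // 2:
        return value
    raise RuntimeError("no majority: redundant computers disagree")

# Three redundant computers compute the same command; one malfunctions
# and is out-voted by the other two.
commands = ["pitch_up_2deg", "pitch_up_2deg", "pitch_down_30deg"]
print(majority_vote(commands))  # -> pitch_up_2deg
```

With three computers, any single failure is out-voted. Real systems compare sensor values within tolerances rather than by exact equality, but the principle is the same.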

The reason we don't have pilotless planes comes down to liability. If a plane crashes and there is no human pilot, who do you blame for the crash? The company that made the automation software (if it was outsourced)? The software engineer? The company that made the plane?

If there is a human pilot, there is a point of liability. This point of liability is one that the pilot, the airline, the passengers, and society at large seem to find agreeable. Tesla is facing the reality that society may not yet have agreed about who is liable when a car is on autopilot. The company is facing a lawsuit over a pedestrian's death, and Tesla argues that the human sitting in the driver's seat bears responsibility <https://www.forbes.com/sites/lanceeliot/2020/05/16/lawsuit-against-tesla-for-autopilot-engaged-pedestrian-death-could-disrupt-full-self-driving-progress/?sh=1fea1e8d71f4>. This certainly creates a risky situation for a potential purchaser of a self-driving car: it is very difficult to assess the value of a car with autopilot against the risks of using the autopilot.

As Nassim Taleb points out, ethics is about symmetry. We should be careful not to create a system in which agents capture the profits when they do their job well but avoid the losses when they kill someone.

When I buy a ticket to fly on a plane, I do not take on liability as the pilot. Tesla is in a difficult position in that it advertises its product as an autopilot. Well, pilots have responsibilities.

I think there's one more reason society likes having human airplane pilots. A human pilot may appropriately handle a situation that has never occurred or was never expected to occur. An emergency landing in a river. Or Iron Man flying next to the plane. While these are extremely rare events, we currently trust humans more than machines to find the correct solution. So, if it's Halloween and someone is dressed as a green traffic light, a human driver likely won't run them over.

As for cars, planes, and so on: I would use an autopilot under the following conditions:

  1. The company that makes the autopilot takes on liability.
  2. The autopilot shows performance equal to or better than the best human performers, with a minimum of 95% statistical confidence (see the sketch after this list).
  3. The autopilot covers known edge cases. For example, if a deer runs into the road, the autopilot drives straight (human drivers often take the more dangerous action, which is swerving off the road).
  4. The autopilot is not more vulnerable to being compromised than a human. This is difficult to determine because humans and machines are compromised in different ways. Human doctors may be compromised by pharmaceutical companies through what the elites would call 'rebates' or some other such incentive system, and what working-class people typically call bribes. Software systems can be compromised by hardware modification, by hacking that alters the software, and so on.
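Condition 2 is, at least in principle, testable. Here is a minimal sketch of what such a test could look like: a one-sided two-proportion z-test asking whether the autopilot's incident rate per mile is lower than a top-human baseline at 95% confidence. The function, the numbers, and the treat-each-mile-as-a-trial model are all illustrative assumptions, not a real evaluation protocol.

```python
import math

def autopilot_beats_humans(ap_incidents, ap_miles,
                           human_incidents, human_miles,
                           confidence=0.95):
    """One-sided two-proportion z-test: is the autopilot's incident rate
    per mile lower than the human baseline at the given confidence?
    Crude illustration only: it treats each mile as an independent trial
    and ignores matched conditions, severity, and exposure differences."""
    p1 = ap_incidents / ap_miles          # autopilot incident rate
    p2 = human_incidents / human_miles    # human incident rate
    pooled = (ap_incidents + human_incidents) / (ap_miles + human_miles)
    se = math.sqrt(pooled * (1 - pooled) * (1 / ap_miles + 1 / human_miles))
    z = (p1 - p2) / se
    # p-value for H1: autopilot rate < human rate (standard normal CDF)
    p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return p_value < (1 - confidence)

# Hypothetical data: 3 incidents in 10M autopilot miles vs. 10 incidents
# in 10M miles driven by top-tier humans.
print(autopilot_beats_humans(3, 10_000_000, 10, 10_000_000))  # -> True
```

Even this toy version shows why the condition has teeth: small absolute differences in incident rates require enormous mileage before they clear a 95% confidence bar.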

As automation increases, our societies will have to make similar decisions in other fields. Medicine will be a big one, as doctors will soon take on the role that pilots play in airplanes if we are not careful (e.g., the da Vinci surgical robot and diagnosis/imaging-analysis software). The four conditions I stated above can apply to various areas of automation that have a direct bearing on human safety.

If I were asked the trolley problem, I would ask several questions in return. Do I have permission to use the lever? What authority grants permission to use the lever? What are the current laws concerning malpractice by a trolley-track lever-puller? What are the legal precedents involving trolley-track lever-pullers? Has a similar situation arisen before, where a driverless trolley killed someone? Maybe the big question is: if I use the lever, does the trolley company take on the liability, or do I?

So clearly, the problem is that it's difficult to agree on who has the liability and who makes the ethical decisions. Does Tesla get to decide its solution set for the various trolley problems? The California government? The federal government? Should we even allow cars to be sold with autopilot when the company making the autopilot bears no responsibility? These are the real problems.

A Brief Note on Human Ingenuity, Liability, and Fire

The Mann Gulch Fire was a disaster that occurred in 1949. The book Young Men and Fire details the disaster, a fire 'blowup', and follows the author's journey researching the tragedy. It's part forensics, part fire science, part history, and part personal journal. It's the author's way of expressing his own connection to fire, and the empathy he builds with the men who died in the fire through comprehensively researching who these people were, what led to their horrific deaths, and what might be done to prevent such situations in the future. The book was published after the author's death and reads more like a journal than a book, partly because the publisher intentionally performed only basic editing.

The book also deals with liability, as the men who died were all very young (including two 19-year-olds), the oldest being 28. Should the government have been recruiting such inexperienced smokejumping firefighters for such missions? Were the recruiting practices for firefighting negligent? Furthermore, the team had no experience working together, which made the mission exceptionally risky.

As the fire chases the men up a hill, they run in panic toward the ridge. Suddenly the foreman, Dodge, stops. He kneels down, lights a match in front of him, and walks into the fire. He had yelled at the others to stop, but they didn't understand what he was trying to communicate and do. He seemed to have lost his mind. He wets a handkerchief with water, puts it over his mouth, and lies down in the fire he has just set as it burns forward in front of him. Dodge, on the spot, before the winds had gotten too strong and just before the blowup, had created an 'escape fire'. Dodge's fire was in the backdraft of the main fire, and therefore burned straight up the hill ahead of Dodge and the main fire, causing the main fire to burn around, not through, the ground where Dodge lay. Dodge's escape fire allowed him to avoid the full brunt of the blowup and become one of the few who survived. The author notes that while Native Americans may have used a similar gambit, the idea had never been reported within the American Fire Service. Dodge had never had this idea before he was running up that hill, a huge fire chasing him, his life on the line.
