We have looked at the trolley problem before. A subject must choose between letting three people die through inaction, or acting to kill one person and thereby save the three. You may recall that most people choose to save the three.
Now let’s put a self-driving car to the same test. The car, unable to stop in time, must choose between killing three to save one or killing one to save three. The question is: should the self-driving car be programmed to do the least amount of harm?
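If "least harm" were the rule, the decision logic itself is almost trivially simple. Here is a minimal Python sketch, where the option names and death counts are entirely hypothetical and stand in for whatever the car's perception system estimates:

```python
# A minimal sketch of a "least harm" decision rule.
# The options and their death counts are hypothetical, for illustration only.

def least_harm(options):
    """Return the option whose predicted outcome kills the fewest people."""
    return min(options, key=lambda option: option["deaths"])

options = [
    {"action": "stay_course", "deaths": 3},  # inaction: three die, one saved
    {"action": "swerve", "deaths": 1},       # act: one dies, three saved
]

print(least_harm(options)["action"])  # prints "swerve"
```

The code is easy; the hard part is everything around it: whether counting deaths is the right objective at all, and who accepts liability when the car acts on it.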