Programming autonomous vehicles to kill
- musketeer925
- Retired Contributor
- Posts: 2484
- Joined: Mar 28, 2015
- ESO: musketeer925
Programming autonomous vehicles to kill
A really interesting article about an interesting problem:
http://www.technologyreview.com/view/54 ... d-to-kill/
It's strange when thought experiments like the classic Trolley Problem begin to have practical applications.
Programming autonomous vehicles to kill
It's an interesting problem, but it remains pure theory: in real life there's always a way to save everyone (for instance, pedestrians reacting in time and avoiding the car that had begun to slow down).
So the car should always aim for zero victims, and if it fails... well, that's too bad, but at least, in this example, the driver survives and there is no ethical problem, 'only' a dramatic accident.
That's just my opinion though
- Good ol Ivan
- Howdah
- Posts: 1345
- Joined: Mar 31, 2015
- ESO: ivanelterrible
Programming autonomous vehicles to kill
Why not just have a proper traction control system instead?
When the car detects pedestrians it should be able to stop momentarily.
Programming autonomous vehicles to kill
ivan wrote:Why not just have a proper traction control system instead?
When the car detects pedestrians it should be able to stop momentarily.
Yes, but this problem concerns the case where the system fails to detect the pedestrians soon enough to be able to stop.
- Good ol Ivan
- Howdah
- Posts: 1345
- Joined: Mar 31, 2015
- ESO: ivanelterrible
Programming autonomous vehicles to kill
ivan wrote:Why not just have a proper traction control system instead?
When the car detects pedestrians it should be able to stop momentarily.
krichk wrote:Yes, but this problem concerns the case where the system fails to detect the pedestrians soon enough to be able to stop.
Yes, that's when the traction control system acts. If the system completely fails to detect them, I don't see the dilemma, as in that case they would get killed either way. There's no scenario where the car has time to crash into a wall but doesn't have the time to brake or stop instead.
I would see the validity in this hypothesis had cars stayed the same over the last 15 years. Modern cars stop pretty much momentarily even when going at great speeds.
But alright, assuming the traction control system didn't work: obviously the program should be utilitarian and minimize the number of people killed, regardless of the situation.
Programming autonomous vehicles to kill
ivan wrote:Modern cars stop pretty much momentarily even when going at great speeds
Is your car a Formula 1? :p
Because I just can't agree with this
(assuming that momentarily = instantaneously; my English is not so good).
-
- Musketeer
- Posts: 56
- Joined: Oct 29, 2015
Programming autonomous vehicles to kill
krichk wrote:Is your car a Formula 1? :p
Because I just can't agree with this
(assuming that momentarily = instantaneously; my English is not so good).
Formula 1 cars definitely don't stop instantaneously.
Programming autonomous vehicles to kill
Reminds me of iRobot.
Programming autonomous vehicles to kill
Hmm, I kind of feel like we have to be aware that traffic would be different in an automated world. Traffic would be so much safer, which immediately implies that those pedestrians are in the wrong here. That changes the question... Should someone who did nothing wrong be sacrificed for someone who was apparently rather careless (which really is the only way to end up in such a scenario)? It changes the question at hand; maybe the car should just try to minimize damage as much as possible?
- gustavusadolphus
- Lancer
- Posts: 520
- Joined: Oct 19, 2015
Programming autonomous vehicles to kill
In my opinion, conditions on roads are still not very good. Things change so rapidly that this is control I don't think can be taken away from the driver. Then again, I'm one who was sad when automatic transmissions began to take over.
10000 gallon fiberglass, FNS plus DE filter 48 square foot, ray pack Mini Max 512 BTU gas heater, one Sip n Oodle, NO ZEOSAND
- musketeer925
- Retired Contributor
- Posts: 2484
- Joined: Mar 28, 2015
- ESO: musketeer925
Programming autonomous vehicles to kill
jerom wrote:Hmm, I kind of feel like we have to be aware that traffic would be different in an automated world. Traffic would be so much safer, which immediately implies that those pedestrians are in the wrong here. That changes the question... Should someone who did nothing wrong be sacrificed for someone who was apparently rather careless (which really is the only way to end up in such a scenario)? It changes the question at hand; maybe the car should just try to minimize damage as much as possible?
Yeah, I think really the car must be programmed to keep its driver safe. In a world where the car is forced to make these kinds of decisions, the driver would likely have little control over his situation, while the pedestrians put themselves in the road and must be aware of their surroundings.
It's also important to remember that I don't think it matters too much how the car is programmed. It could generate a random number to decide which course of action to take, and the results would be about the same (over a large number of incidents) as if a human were driving, except that there would be fewer incidents in the first place.
Programming autonomous vehicles to kill
Remember that these cars are programmed, so ultimately it's not the car that makes the decision, it's the human programmer.
I've actually run into a situation where I had to make this same type of decision, though the living road obstacles were deer and not humans. One foggy morning I came over a hill and immediately onto a bridge, which had a herd of about twenty deer on it. All I could do in the two seconds I had to react was aim for the smallest one and hope that I wasn't going to go out of control and end up forty feet down into the river. Since I had a tag on me, I just tagged the deer I'd hit and butchered it -- no sense letting the meat go to waste.
Programming autonomous vehicles to kill
stevelance wrote:
krichk wrote:Is your car a Formula 1? :p
Because I just can't agree with this
(assuming that momentarily = instantaneously; my English is not so good).
Formula 1 cars definitely don't stop instantaneously.
And regular cars slow down even less efficiently... I don't see your point here.
But still, I don't think there is a realistic situation where the probability of saving everyone is equal to 0, so saving everyone should always be the only goal of the car.
- Good ol Ivan
- Howdah
- Posts: 1345
- Joined: Mar 31, 2015
- ESO: ivanelterrible
Programming autonomous vehicles to kill
krichk wrote:And regular cars slow down even less efficiently... I don't see your point here.
What makes you think so? Have you ever driven a car anyway?
I drive a Spirit from 1991 - it takes the car several meters to stop, especially when going at high speeds.
I once got to drive a Honda from 2009. The traction control system felt really weird/unnatural; the car would stop abruptly whenever I hit the brake, regardless of how fast it was going. I didn't get to drive past 80 km/h, though.
Programming autonomous vehicles to kill
At 70 mph, before you can even react, your car will travel the same distance as it takes to stop from 30 mph.
[img]http://www.passmyexams.co.uk/GCSE/physics/images/friction_large.jpg[/img]
On a side note, this is an interesting mix of mph and meters. Do you still use miles in the UK, where this chart is from? I see they misspelled Braking too.
Same data, better chart:
[img]http://www.cyberphysics.co.uk/graphics/graphs/highwaycodeSD.png[/img]
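The charts above can be reproduced from first principles. A minimal sketch, using my own assumed numbers (a ~0.7 s reaction time and a constant ~6.6 m/s² braking deceleration, roughly what the Highway Code figures imply for a dry road), not values taken from either chart:

```python
# Rough stopping-distance model behind the charts above.
# Assumptions (mine): 0.7 s driver reaction time, constant 6.6 m/s^2
# deceleration on a dry road.
REACTION_TIME = 0.7    # seconds before the brakes are applied
DECELERATION = 6.6     # m/s^2 while braking
MPH_TO_MS = 0.44704    # miles per hour -> metres per second

def stopping_distance(speed_mph):
    """Return (thinking distance, braking distance) in metres."""
    v = speed_mph * MPH_TO_MS
    thinking = v * REACTION_TIME            # distance covered while reacting
    braking = v ** 2 / (2 * DECELERATION)   # from v^2 = 2 * a * d
    return thinking, braking

for mph in (30, 50, 70):
    think, brake = stopping_distance(mph)
    print(f"{mph} mph: thinking {think:.0f} m + braking {brake:.0f} m "
          f"= {think + brake:.0f} m total")
```

With these assumptions the totals come out near the Highway Code's 23 m at 30 mph and 96 m at 70 mph, and the thinking distance at 70 mph (~22 m) is indeed about the whole stopping distance at 30 mph, which is the claim above.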
- gustavusadolphus
- Lancer
- Posts: 520
- Joined: Oct 19, 2015
Programming autonomous vehicles to kill
metis wrote:Remember that these cars are programmed, so ultimately it's not the car that makes the decision, it's the human programmer.
Yes, the human who programmed the car will decide what the car should do in different situations, but if variables are introduced that the program cannot handle, the program will be making decisions in a situation where it has insufficient options.
10000 gallon fiberglass, FNS plus DE filter 48 square foot, ray pack Mini Max 512 BTU gas heater, one Sip n Oodle, NO ZEOSAND
Programming autonomous vehicles to kill
gustavusadolphus wrote:the program will be making decisions in a situation where it has insufficient options.
It will be interesting to see if, 50 years from now, an automobile AI will be put on trial for making the wrong decision.
Programming autonomous vehicles to kill
ivan wrote:What makes you think so? Have you ever driven a car anyway?
Actually, I read about it on the French Wikipedia page about braking distance just before posting; it's a study published by Renault (and obviously a Formula 1 brakes more efficiently than a regular car - do you think the average Mini Cooper has better brakes than a Formula 1?).
And yes, I drive; no, my car doesn't stop instantaneously; and it's common knowledge that braking distances are something to take into account when you drive.
But I guess you know that, and you're just exaggerating things a bit, and I shouldn't be so aggressive when explaining my thoughts, but hey.
Programming autonomous vehicles to kill
I can see a lot of advantages to an AI that takes over and applies the brakes before the driver even has time to react. Since the AI can react in milliseconds versus seconds, any car traveling at highway speeds would stop twenty meters sooner.
- fightinfrenchman
- Ninja
- Posts: 23506
- Joined: Oct 17, 2015
- Location: Pennsylvania
Programming autonomous vehicles to kill
metis wrote:I can see a lot of advantage with an AI that takes over and applies the brakes before the driver even has time to react. Since the AI can react in milliseconds versus seconds any car traveling at highway speeds would stop twenty meters sooner.
Who would be responsible if the AI car makes a mistake? The manufacturer? The driver? Google?
Dromedary Scone Mix is not Alone Mix
Programming autonomous vehicles to kill
fightinfrenchman wrote:Who would be responsible if the AI car makes a mistake? The manufacturer? The driver? Google?
This is a good question. I would think that the best thing to do here would be to set down some a priori legal work absolving the company and programmers should the system act as it was designed but couldn't foresee all circumstances. This would be no different from absolving a motorist of guilt because he swerved into a roadside bush in order to miss a pedestrian in the road, unaware that a homeless man was sleeping under said bush.
Programming autonomous vehicles to kill
fightinfrenchman wrote:Who would be responsible if the AI car makes a mistake? The manufacturer? The driver? Google?
Think larger. All cars in direct contact with each other, with exact information on their location and destination; a system continuously optimizing the traffic flow of an entire city, maybe an entire country. We'll reach a point where the AIs won't even make mistakes; it will be amazing. In particular, these pedestrian scenarios will mean those particular pedestrians almost surely did something wrong, like consciously taking a risk.
The people inside a car should never be sacrificed in that scenario.
- fightinfrenchman
- Ninja
- Posts: 23506
- Joined: Oct 17, 2015
- Location: Pennsylvania
Programming autonomous vehicles to kill
jerom wrote:We'll reach a point where the AIs won't even make mistakes; it will be amazing. In particular, these pedestrian scenarios will mean those particular pedestrians almost surely did something wrong, like consciously taking a risk.
The people inside a car should never be sacrificed in that scenario.
That is the ideal future, but we must remember there will be a long transition period. Especially in America, self-driving cars and regular cars will be driving together for some time.
Dromedary Scone Mix is not Alone Mix
Programming autonomous vehicles to kill
fightinfrenchman wrote:That is the ideal future, but we must remember there will be a long transition period. Especially in America, self-driving cars and regular cars will be driving together for some time.
I think during that transition period they want the driver to still pay attention and act if necessary. That's what the law states in the Netherlands at the moment.