Updated 10/8/21
At Total Loss Gap, we wonder what the future holds for autonomous vehicles. Will they make ethical decisions, or will that fall on us, their drivers?
One day in the near future, you may be able to relax in the back seat of your vehicle while it takes care of every aspect of the journey on its own.
One day we might not have to worry about traffic conditions at all, and could instead leave the driving, and the critical ethical choices that come with it, to our autonomous vehicles. That would surely benefit everyone: concerned parents trying to get a child home after a long road trip, teenagers enjoying newfound freedom without parental supervision, and people too tired or too unwell through illness or injury to drive themselves. Who better to take the wheel than an automated driving system?
Our only concern is how they will make ethical decisions when faced with a dilemma such as whether to prioritise pedestrians or the vehicle's occupants.
Unlike humans, computers and machines don't make mistakes (or they shouldn't). They can complete millions of calculations in milliseconds with pinpoint accuracy, and keep doing so for hours without rest, free of the errors humans make through fatigue or emotion. The trouble is, they're still not perfect: the moment something goes wrong, these silicon brains need to be able to handle tough decisions on their own; otherwise, manufacturers might as well put a computer chip into every human brain, so we never face an ethical dilemma about what's best (or safest) when push comes to shove.
When was the last time you heard of a machine making an ethical decision? Machines are incapable of doing so. That's not to say they're inept in other areas: computers can complete millions of minute calculations with precise accuracy, where humans faced with the same tasks make mistakes both big and small. It stands to reason, then, that if this artificial intelligence is ever to face tough decisions outside that purely calculable realm, manufacturers will need to put as much thought into how ethics factor into the programming as they did into raw calculating ability.
Let's consider the following situation. It may sound a bit far-fetched, but it is a potential scenario, and one we sincerely hope never happens.
This then leaves your autonomous vehicle with a choice to make. Remembering that its primary function is to preserve your life, will it:
None of the answers is particularly appealing, but one choice must be made.
The future of autonomous vehicles is a fantastic prospect, and one we anticipate with interest. These vehicles must be able to make decisions on the fly, without human input, about whom they save when there isn't enough time or room to save everyone. That is an uncomfortable idea, because sometimes those choices will leave someone more badly hurt than they would have been otherwise, and that hurts us as well.
This raises some difficult questions: how do you program these automated systems? How much control should manufacturers give them over life-or-death decisions, like whether to speed up or brake hard in the moments before impact? And what if your car makes a choice that may kill you to save others around you; does anyone deserve death by a robot?
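To make that question concrete, here is a minimal, purely hypothetical sketch in Python. The outcome names, the risk numbers, the weights and the choose_action function are all our own illustrative assumptions, not any manufacturer's actual system. It shows how an engineer might reduce an ethical policy to a "least harm" scoring rule, and why that reduction is so uncomfortable: someone has to pick the numbers.

```python
# Purely hypothetical illustration - not any real manufacturer's logic.
# Each possible manoeuvre is scored by the harm we predict it causes;
# the vehicle then picks the action with the lowest expected harm.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str             # manoeuvre the vehicle could take
    occupant_risk: float    # predicted harm to people inside (0-1)
    pedestrian_risk: float  # predicted harm to people outside (0-1)

# These weights ARE the ethical policy: favouring occupants over
# pedestrians (or the reverse) is a value judgement, not an
# engineering fact.
OCCUPANT_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0

def expected_harm(outcome: Outcome) -> float:
    return (OCCUPANT_WEIGHT * outcome.occupant_risk
            + PEDESTRIAN_WEIGHT * outcome.pedestrian_risk)

def choose_action(outcomes: list[Outcome]) -> Outcome:
    # None of the answers may be appealing, but one choice must be
    # made - min() always returns something.
    return min(outcomes, key=expected_harm)

if __name__ == "__main__":
    options = [
        Outcome("brake hard in lane", occupant_risk=0.2, pedestrian_risk=0.7),
        Outcome("swerve towards the verge", occupant_risk=0.6, pedestrian_risk=0.1),
    ]
    print(choose_action(options).action)
```

Even in this toy version, every hard question above resurfaces as a number someone at the manufacturer had to type in.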
It seems like there will be many hurdles for autonomous vehicle manufacturers to overcome. How will they do it? Only time will tell.