
Driverless cars and morality

Started by Hydra009, December 30, 2016, 04:05:29 PM


Hydra009



I found an interesting story with a needlessly provocative title: "Self-Driving Cars Are Already Deciding Who to Kill."

Quote
Autonomous vehicles are already making profound choices about whose lives matter, according to experts, so we might want to pay attention.

"Every time the car makes a complex maneuver, it is implicitly making trade-offs in terms of risks to different parties," Iyad Rahwan, an MIT cognitive scientist, wrote in an email.

The most well-known issues in AV ethics are trolley problems: moral questions dating back to the era of trolleys that ask whose lives should be sacrificed in an unavoidable crash. For instance, if a person falls onto the road in front of a fast-moving AV, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian, what should it do?
Sam Harris has been talking about just such a dilemma on his podcast.  When people were asked whether the driverless car should prefer to save its driver's life or the pedestrian's life if there was no other option, people generally said that there should be no preference - that the car should value both lives equally and shouldn't prefer one over the other.  But when people were asked which car they would rather buy - a car that's more likely to save the driver's life or a car that's more likely to save the pedestrian's life - they overwhelmingly went with the car that saves the driver.

It seems like the first question had a definitive answer that was masked by a response bias.  Revealing one's true position on the first question (preferring the car to save the driver and kill the pedestrian instead) would've come across as selfish or uncaring of others, likely biasing respondents toward claiming a neutral position they don't actually hold.

The emerging technology of AI is going to be a fascinating issue for how much it reveals about humanity.  Put a dozen ethicists in a room and they might debate ethical questions like the trolley problem endlessly.  But put a strong AI in charge of a city's traffic and these seemingly unsolvable moral quandaries get resolved surprisingly quickly.

Another issue would be the risks associated with an experimental new drug starting human trials.  People could die if the drug doesn't perform well or has unforeseen adverse reactions.  Yet people will definitely die from an untreated life-threatening condition.  Who has the moral solution to that conundrum?  Someday, an AI might be the one making that call.

And if we succeed at developing a strong AI that's also ethical, it might be better at handling these hard choices than us humans.  After all, a computer won't be able to appeal to God or tradition or intuition.  A robot judge can't dole out harsher sentences because it's cranky or upset or bigoted.  That's already a substantial improvement, imo.

Johan

#1
I've seen this question coming up over the past year or so. IMO it's largely a pointless discussion, and most of the people putting effort into debating it need to find something more productive to focus their attention on. Because everyone debating this topic seems to completely forget or ignore one incredibly important fact: human beings absolutely, positively suck at driving. Period.

We're so goddamn worried about these rock-and-hard-place trolley scenarios that will almost never manifest themselves. Meanwhile, in the US alone, 100 people lost their lives in car accidents today and another 6,000+ were injured or disabled. And another 100 will die tomorrow. We're talking about a technology that could realistically take those kinds of daily numbers and turn them into annual numbers, and you want to get all up in my grill about some bizarre situation where there's an old person standing in the other lane and a busload of orphans in front of you and a dynamite factory on the other side and the car has to decide which one gets hit? How about fuck you. Fuck you in the neck. Who fucking cares which one it's programmed to hit? If we're talking 100 people a year vs 100 people a day, program the car to kill them all in that scenario and we'd still be better off.
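For what it's worth, the arithmetic behind that is easy to check; a quick sketch (the ~100/day figure roughly matches US traffic fatality statistics, and the AV number is a pure hypothetical for the sake of argument):

```python
# Status quo: roughly 100 US traffic deaths per day.
human_deaths_per_year = 100 * 365   # ~36,500 per year

# Hypothetical AV fleet that only kills in rare trolley-style edge cases.
av_deaths_per_year = 100            # assumed, purely for illustration

lives_saved = human_deaths_per_year - av_deaths_per_year
print(f"Net lives saved per year: {lives_saved:,}")  # 36,400
```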

BTW Hydra, I wasn't saying fuck you in the neck to you. That was aimed at those who seem to get so up tight about how we absolutely must work these questions out.
Religion is regarded by the common people as true, by the wise as false and by the rulers as useful

Hydra009

Quote from: Johan on December 30, 2016, 05:26:58 PM
I've seen this question coming up over the past year or so. IMO it's largely a pointless discussion, and most of the people putting effort into debating it need to find something more productive to focus their attention on. Because everyone debating this topic seems to completely forget or ignore one incredibly important fact: human beings absolutely, positively suck at driving. Period.

We're so goddamn worried about these rock-and-hard-place trolley scenarios that will almost never manifest themselves. Meanwhile, in the US alone, 100 people lost their lives in car accidents today and another 6,000+ were injured or disabled. And another 100 will die tomorrow. We're talking about a technology that could realistically take those kinds of daily numbers and turn them into annual numbers, and you want to get all up in my grill about some bizarre situation where there's an old person standing in the other lane and a busload of orphans in front of you and a dynamite factory on the other side and the car has to decide which one gets hit? How about fuck you. Fuck you in the neck. Who fucking cares which one it's programmed to hit? If we're talking 100 people a year vs 100 people a day, program the car to kill them all in that scenario and we'd still be better off.
Yeah, that was one of the comments on reddit when this article was posted there.  Even if the AI killed everyone in every trolley problem it encountered, it'd still be a better outcome than the current situation of humans routinely plowing into each other, so all this worry about how driverless cars would operate in trolley problem situations is foolish.  I'm inclined to agree.

Still, it'd be fascinating to see how people's moral theories (and the moral intuitions behind them) stack up when it's time to actually implement them.  Would we prefer a car that saves its occupants at all costs?  How about a car that avoids bigger objects when possible, resulting in hitting smaller objects, possibly splatting a small animal to avoid hitting a truck?  These are questions for wise men with skinny arms.

Johan

You forgot pasty skin. Skinny arms and pasty skin.

It is a somewhat interesting topic to explore in terms of social norms and whatnot. But it's such a complex subject, and the answers involve technology that is so amazingly complex, that it's really far beyond what most laymen can realistically comprehend, I think.

I think one of the biggest factors that almost everyone glosses over is that, flawed though it is, the human brain has an incredible ability to store an image of a 'perfect object' and is then able to almost instantly identify an infinite number of perfect and non-perfect versions of that 'perfect object'. In other words, we can look at a picture of a person standing next to a garbage can and nearly instantly, effortlessly deduce which is which, even though we've never before seen that particular person or that particular garbage can.

A computer looks at that same image and sees two objects, one (presumably) taller than the other. It must then compare each to millions of other stored images to try to figure out what exactly each object is.
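A toy sketch of that compare-against-stored-examples idea, as a 1-nearest-neighbor lookup over invented feature vectors (real perception stacks use trained neural networks, but the match-against-labeled-examples principle is similar; the labels and numbers here are made up):

```python
import math

# Toy "stored images": (label, feature vector) pairs.
# Features are invented stand-ins: (height_m, width_m, warmth).
stored = [
    ("person",      (1.7, 0.5, 0.9)),
    ("garbage_can", (1.0, 0.6, 0.1)),
    ("mannequin",   (1.7, 0.5, 0.1)),
]

def classify(features):
    """Label an unknown object by its nearest stored example."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(stored, key=lambda item: dist(item[1], features))[0]

print(classify((1.65, 0.55, 0.85)))  # tall and warm -> "person"
print(classify((1.05, 0.60, 0.10)))  # short and cold -> "garbage_can"
```

Note the mannequin/person confusion lives right here: without the "warmth" feature, the mannequin and the person are identical vectors.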

So for example, let's say someone cleaned out the garage and placed an old department-store mannequin out by the curb for garbage pickup. Let's say this mannequin has one leg and one arm missing. Now let's say you're driving your good old-fashioned '72 Dart down that street and a kid runs out in front of you. You, being human, would almost automatically know that swerving and possibly hitting that mannequin is a far better option than continuing on and possibly hitting the kid.

But a computer could potentially see that mannequin as a disabled person, less able to get out of the way than the obviously able-bodied person running directly in front of you, because it's incredibly difficult for a computer to instantly tell the difference between a person and a mannequin.

But then again, if we're being realistic, we'd need to remember that the autonomous car will have radar and other sensors that let it 'see' that kid running toward the street long before the human inside the car was in a position to actually see the kid. The autonomous car would theoretically have begun applying the brakes before the kid was ever in view, thereby opting to hit neither rather than having to choose which is the better option to aim for.
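The earlier-reaction point is easy to quantify with basic stopping-distance physics; a hedged sketch (the speed, reaction times, and braking deceleration are assumed round numbers, not measured values):

```python
def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Distance covered during the reaction delay plus braking to a stop."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 13.4  # ~30 mph, in m/s
human = stopping_distance(speed, reaction_s=1.5)   # typical human reaction delay
sensor = stopping_distance(speed, reaction_s=0.2)  # assumed sensor/compute latency
print(f"Human driver:  {human:.1f} m to stop")
print(f"AV w/ sensors: {sensor:.1f} m to stop")
```

Under these assumptions the AV stops in roughly half the distance, and that's before counting the sensors spotting the kid before a human eye could.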

And yeah, there will most definitely still be situations where the car isn't able to take early action to avoid danger and will then have to instantly decide which 'thing' it should steer into. And yeah, people will definitely get hit, injured, or killed as a result. But again, when we get right down to it, I think we're talking about killing a small handful per year while saving literally thousands of others, not only from death but from any kind of injury. In the end the answer is still: I don't care which way the engineers make the car choose, just get it done ASAP.
Religion is regarded by the common people as true, by the wise as false and by the rulers as useful

Baruch

If human beings suck ... then just kill them.  Don't let them drive, don't let them vote either.

Y'all aren't talking realistically about driverless cars and morality.  Driverless cars aren't human.  Also, driverless cars, for insurance purposes, will not be owned by ordinary individuals.  They will mostly be government and corporate fleet vehicles (see the rush for driverless trucks).  Mexican drivers in Mexican trucks aren't cheap enough.  I can't wait for driverless Domino's pizza drones too.  Pizza Blitz over London!  Morality never applies to human organizations ... if anyone bothered to examine them.  Morality only applies to individual human beings, and is usually used to oppress and kill them (by the assholes who are defining what is or is not moral).
Ha’át’íísh baa naniná?
Azee’ Ła’ish nanídį́į́h?
Táadoo ánít’iní.
What are you doing?
Are you taking any medications?
Don't do that.


Hydra009

Quote from: Johan on December 30, 2016, 08:21:36 PMI think one of the biggest factors that most everyone glosses over is the fact that flawed though it is, the human brain has an incredible ability to store an image of a 'perfect object' and is then able to almost instantly identify an infinite number of different perfect and non-perfect versions of that 'perfect object'. In other words, we can look at a picture with a person standing next to garbage can and nearly instantly and effortlessly deduce which is which even though we've never before seen that particular person or that particular garbage can.

Computers look at that same image and see two objects, one (presumably) taller than the other. It must then compare each to millions of other stored images to try to figure out what exactly each object is.
Yeah, human pattern recognition is amazing.  A few years ago, I got involved with Galaxy Zoo, a crowdsourced project where John Q. Public identifies galaxy types from photographs taken by a robotic telescope - the draw is that you get to boldly see galaxies that have never been seen by humans before.  Apparently, humans were/are better at categorizing galaxies than computers.

But I've been hearing for years that computers are making strides in image recognition.  Given any steady rate of improvement, they'll eventually catch up.  It'll definitely be an interesting development when they do.

Johan

True enough. And I suppose it's also important to remember that these systems could be made to see or sense things that people cannot. For instance, a mannequin would look very different from an actual person on a thermal imaging camera. People and animals also have an electrical signature of sorts that a machine could theoretically be made to sense. And with enough sensors and enough processing power, it's reasonable to believe these machines could, in a fraction of a second, create an accurate map of every living thing within a 500' radius and also predict, within a second or two, whether any of those living things have the potential to create a conflict.
Religion is regarded by the common people as true, by the wise as false and by the rulers as useful

Baruch

#8
Tomorrowland fantasy ... ruled by neo-lib SJWs?

https://www.youtube.com/watch?v=lNzukD8pS_s

"The girl says ... I felt anything is possible" ... aka I thought with my emotions, not with my brain.  I watched it when it came out, in the theater.  Enjoyed it.  But Disneyland is for children.  The future is always a dystopia; which kind depends on which stupid ideology is in charge.  Folks simply don't get the irony of this movie ... the button she held was an advertising gimmick.  AI is an advertising gimmick.  It is just computers, being the stupid machines they always are.  They are not smart; their programmers and engineers are smart ... sometimes.  I could use Exchange email as a counterargument ;-)

I was a member of the AAAI in the 80s.  Laymen don't know how their car works.  Pattern recognition is a lot harder than y'all think.  The Imitation Game was entertainment, not how it was actually done ... the pre-computer didn't decode the German messages; it helped narrow the possibilities so that humans could do the actual decoding.  This is a basic pattern recognition test ... can you decode an Enigma message into readable German?

What will make autonomous vehicles work is their lack of autonomy.  You have to input the whole traffic pattern - streets, lights, all other vehicles, etc. - and operate the whole transportation grid as a machine.  But as such, you can't have pedestrians or human-driven vehicles on it, any more than you can have private vehicles on railroad tracks.  Railroad tracks are for trains, owned by corporations, not individuals.  Then you can basically run transportation the same way the Soviet Union was run.  There is a reason why Star Trek etc. looks like the Soviet Union won the Cold War.  Atheism + technology + socialism.
Ha’át’íísh baa naniná?
Azee’ Ła’ish nanídį́į́h?
Táadoo ánít’iní.
What are you doing?
Are you taking any medications?
Don't do that.

Baruch

#9
Here is a sequence of seemingly random one- and two-digit numbers.  It is an encoding of a common English sentence, so you don't have to adjust for differences of language and culture.  It is encoded with an Enigma-derived system I developed as a lark two summers ago ... it is better than Enigma, and in fact is comparable to AES, the current standard for US classified work.  Please perform pattern recognition good enough to decode it, and do that without any human intervention; an algorithm (which comes from where?) is the only thing you can use.

14 32 17 16 22 1 11 1 19 25 7 9 16 27 7 16 36 18 22 35 8 7 32 3 13 29 1 10 2 24 31 6 28 23 9 26 14 18 31 33 21 28
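The obvious first automated attack on any cipher like this is a frequency count; a minimal sketch (this says nothing about the actual scheme, which is unknown, and is exactly the kind of single-symbol statistic a polyalphabetic Enigma-like system is designed to defeat):

```python
from collections import Counter

# The 42 ciphertext symbols from the challenge above.
ciphertext = [14, 32, 17, 16, 22, 1, 11, 1, 19, 25, 7, 9, 16, 27, 7, 16,
              36, 18, 22, 35, 8, 7, 32, 3, 13, 29, 1, 10, 2, 24, 31, 6,
              28, 23, 9, 26, 14, 18, 31, 33, 21, 28]

counts = Counter(ciphertext)
print(counts.most_common(5))

# With 42 symbols drawn from 36 values, no symbol appears more than
# three times: the distribution is too flat for frequency analysis
# to bite, which is the point of a rotor-style cipher.
```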

There is a problem in number theory having to do with what "pseudo" means in "pseudo-random" numbers.  This is crucial for cryptography.  It is a fundamental problem unsolved by the greatest mathematicians the world's governments can harness.  Basically, unless you can solve that problem (and not all math problems have a solution ... aka the solution is the null set), you can't do true non-human-assisted pattern recognition.
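The "pseudo" part is easy to demonstrate: a seeded generator is fully deterministic, which is exactly the structure a cryptanalyst hopes to exploit. A minimal illustration using Python's built-in generator (a Mersenne Twister, which is explicitly not cryptographically secure):

```python
import random

def stream(seed, n=5):
    """Draw n 'random' values in 1..36 from a seeded generator."""
    rng = random.Random(seed)
    return [rng.randrange(1, 37) for _ in range(n)]

# Same seed -> identical "random" sequence, every time.
print(stream(42))
print(stream(42))
assert stream(42) == stream(42)   # deterministic, i.e. pseudo-random
```

A real cryptographic key, as noted above, should instead come from a natural noise source, not from any such algorithm.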

One problem I have with AI advocates is that they are nerds in their mom's basement who think that human beings are computers ... or they hope we are, because unless they can build Stepford chicks, they aren't getting any dates ;-)  Human beings aren't electronic, and human thought isn't software.  Thinking so is both a category error and a profoundly anti-humanist POV.

See the post I added today to the Math/Computer section, regarding the fraud occurring in the quantum computing field.  This ties in, because quantum computing ostensibly can solve another fundamental math/cryptography problem ... factoring products of large primes.  Yes, and Cold Fusion will save us all and give us the technological equivalent of the free lunch.
Ha’át’íísh baa naniná?
Azee’ Ła’ish nanídį́į́h?
Táadoo ánít’iní.
What are you doing?
Are you taking any medications?
Don't do that.

Baruch

#10
Professional pattern recognition ...

Medieval Chinese: 起信論義疏
Google Translate: "From the letter on the sparse"
Actual meaning: "Commentary on the Awakening of Faith"

Pattern recognition is a set of special techniques, each appropriate for a special problem (domain), none of which works on the other domains.  There is no known general algorithm, not even a Turing Machine program ... which is the definition of what an algorithm is.  Therefore, it requires something non-algorithmic to solve problems that are beyond an algorithmically defined pseudo-random number (aka a code of a solution).  There are subsets of algorithms, all computable by a Turing Machine program, which define particular classes of pseudo-random numbers.  There is a way on paper, a Turing Machine + Oracle, that allows one in principle, but not in practice, to specify the outlines, but not the specifics, of a non-algorithmically generated number.

Professional cryptography uses this ... the key is generated by a random natural process, not by an algorithm.  Aka ... CIA, NSA, etc. assume that natural processes are not the result of any digital quantum computer.  An analog quantum computer is a different matter.  Insofar as the universe is ruled by quantum mechanics, it is an analog quantum computer ... but that isn't equivalent to a digital one.  A digital computer can only produce a subset (an equivalence set technically; an infinite number of reals are mapped to each integer) of the output of an analog computer.  But this is good enough for things like accounting.

Again, this ties to number theory ... the integers (which can be algorithmically defined by finite processes, in the sense of a Turing Machine) are a proper subset of the real numbers.  Integers can only be used to approximate real numbers, and that is not equivalence!  Approximation is an old engineering joke about a mathematician and an engineer trapped in a room with a beautiful girl, whom they can only approach under certain rules.  The mathematician defeats himself because he is limited by Zeno's Paradox, but the engineer knows he can get what he wants ... with a sufficiently good approximation.

Google Translate uses the most advanced language recognition available ... but it is brittle, as all AI programs are ... it works real well on toy sentences, but not on more obscure ones, because this particular Medieval Chinese was not in its training data set.  It is much more successful on more recent language quotes (even in Chinese) that are more likely to be similar to its training data.  This is a real limitation, not one imposed on the system being trained.  So the technical question is: is the behavior of a totally no-human-in-the-loop transportation system representable by an algorithmically defined pseudo-random number or not?  This can't be cogitated one way or another ... it can only be empirically demonstrated.

If, for example, you have no driver in the car, but it is receiving data from the human-driven cars all around ... then in fact the human traffic is providing indirect steering of the driverless car, as opposed to direct steering by a person in the car.  But that isn't driverless driving, just a more hidden version of driver driving.  I await the results of the empirical demonstration of bumper cars ... but not while I am in traffic, thanks.
Ha’át’íísh baa naniná?
Azee’ Ła’ish nanídį́į́h?
Táadoo ánít’iní.
What are you doing?
Are you taking any medications?
Don't do that.

Cavebear

Quote from: Baruch on December 31, 2016, 09:57:34 AM

Google Translate uses the most advanced language recognition available ... but it is brittle, as all AI programs are ... it works real well on toy sentences, but not on more obscure ones, because this particular Medieval Chinese was not in its training data set.  It is much more successful on more recent language quotes (even in Chinese) that are more likely to be similar to its training data.  This is a real limitation, not one imposed on the system being trained.
I use Google Translate to communicate with a friend in Brazilian Portuguese.  Google only has Portugal Portuguese.  It makes for some odd errors sometimes.  And as someone with a few years of Latin, I understand some of the basics of all Romance languages.

My friend can understand most of what I say in Portugal Portuguese, but sometimes there are some real surprises!
Atheist born, atheist bred.  And when I die, atheist dead!

Baruch

Quote from: Cavebear on January 01, 2017, 06:16:52 AM
I use Google Translate to communicate with a friend in Brazilian Portuguese.  Google only has Portugal Portuguese.  It makes for some odd errors sometimes.  And as someone with a few years of Latin, I understand some of the basics of all Romance languages.

My friend can understand most of what I say in Portugal Portuguese, but sometimes there are some real surprises!

That is silly.  Most people who speak Portuguese are in Brazil.  So the training set should be texts from Brazil, not old Portugal.
Ha’át’íísh baa naniná?
Azee’ Ła’ish nanídį́į́h?
Táadoo ánít’iní.
What are you doing?
Are you taking any medications?
Don't do that.

AllPurposeAtheist

#13
Already lots of people are complaining about the idea of autonomous vehicles, and not for the reasons you might expect. It has little to nothing to do with morality and everything to do with having to give up old technology. Folks love the idea of being able to punch the accelerator to zoom past the old guy driving ever so slowly. Personally I prefer to drive the posted speed limit, but that never stops most of the traffic from zooming past or driving way too close to the rear bumper of the car in front of them. They're all in a race to get home to guzzle beer and watch meaningless crap on teeeeveeee..
All hail my new signature!

Admit it. You're secretly green with envy.

Baruch

Quote from: AllPurposeAtheist on January 01, 2017, 12:51:06 PM
Already lots of people are complaining about the idea of autonomous vehicles, and not for the reasons you might expect. It has little to nothing to do with morality and everything to do with having to give up old technology. Folks love the idea of being able to punch the accelerator to zoom past the old guy driving ever so slowly. Personally I prefer to drive the posted speed limit, but that never stops most of the traffic from zooming past or driving way too close to the rear bumper of the car in front of them. They're all in a race to get home to guzzle beer and watch meaningless crap on teeeeveeee..

Have you hated human drivers your whole life? ;-)  If doing technology X is profitable, it will be done, moral or not.
Ha’át’íísh baa naniná?
Azee’ Ła’ish nanídį́į́h?
Táadoo ánít’iní.
What are you doing?
Are you taking any medications?
Don't do that.