Atheistforums.com

Science Section => Science General Discussion => Topic started by: stromboli on February 23, 2013, 10:42:14 PM

Poll
Question: Are you fearful of autonomous robots?
Option 1: 1. Yes, fearful it could lead to mass destruction votes: 1
Option 2: 2. No, not particularly concerned votes: 15
Option 3: 3. We need an international mandate to stop it votes: 2
Option 4: 4. Dude, Terminator! votes: 7
Title: Skynet Becomes Self Aware and Goes Active In 3...2...1....
Post by: stromboli on February 23, 2013, 10:42:14 PM
http://www.guardian.co.uk/technology/2013/feb/23/stop-killer-robots

QuoteA new global campaign to persuade nations to ban "killer robots" before they reach the production stage is to be launched in the UK by a group of academics, pressure groups and Nobel peace prize laureates.

Robot warfare and autonomous weapons, the next step from unmanned drones, are already being worked on by scientists and will be available within the decade, said Dr Noel Sharkey, a leading robotics and artificial intelligence expert and professor at Sheffield University. He believes that development of the weapons is taking place in an effectively unregulated environment, with little attention being paid to moral implications and international law.

The Stop the Killer Robots campaign will be launched in April at the House of Commons and includes many of the groups that successfully campaigned to have international action taken against cluster bombs and landmines. They hope to get a similar global treaty against autonomous weapons.

"These things are not science fiction; they are well into development," said Sharkey. "The research wing of the Pentagon in the US is working on the X47B [unmanned plane] which has supersonic twists and turns with a G-force that no human being could manage, a craft which would take autonomous armed combat anywhere in the planet.

The idea of autonomous hunter/killer machines seems very creepy. I think we need to enact laws that stop this before it becomes an evil reality. But is it inevitable? I fear that it is.
Title:
Post by: _Xenu_ on February 23, 2013, 11:04:39 PM
If they start mass producing cyborg Summer Glaus, I can't really complain.
Title:
Post by: NitzWalsh on February 23, 2013, 11:05:31 PM
I'm not worried about robots. I'd be worried about the kind of laws this would bring about. Maybe they become so paranoid that any AI research becomes banned. I wouldn't like that.
Title:
Post by: Jmpty on February 23, 2013, 11:06:44 PM
I, for one, will welcome our new robot overlords.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: AllPurposeAtheist on February 23, 2013, 11:20:49 PM
Yeah, this is not good, but at least someone's awake at the switch.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: PopeyesPappy on February 23, 2013, 11:25:11 PM
Quote from: "stromboli"But is it inevitable? I fear that it is.

My brother was getting DARPA funding to work on autonomous behaviors for killer robots more than a decade ago.

Yes, it is inevitable.
Title:
Post by: Bobby_Ouroborus on February 23, 2013, 11:46:05 PM
It would be way too expensive for most nations to field an army of androids. Only the richest nations would have them, and they would probably use them in domestic policing actions too, which is way scarier. I also think that as cybernetics becomes more advanced we will probably field armies of cyborgs too.
Title:
Post by: stromboli on February 23, 2013, 11:56:05 PM
I think we are a long way away from Terminator androids. What we will see initially is advanced versions of the current drones, and smaller autonomous intelligence drones. I think the combination of autonomy and nanobots is a real possibility, and frankly scary.
Title: Re:
Post by: commonsense822 on February 24, 2013, 12:47:00 AM
Quote from: "Bobby_Ouroborus"It would be way too expensive for most nation to field an army of androids. Only the richest nation would have them and will probably utilize them in domestic policing actions too which is way more scarier. I also think since cybernetic will become more advanced we will probably field armies of cyborgs too

Holy shit!  RoboCop!
Title:
Post by: Thumpalumpacus on February 24, 2013, 01:00:56 AM
I think it's inevitable.  They've been in the works for thirty years here in America, and there's no way that the Pentagon is going to let that investment turn into a chargeoff.  No way.  If I remember correctly, armed robots (controlled by humans) have been in use in Iraq for quite a while already.

I agree that the idea is dreadful, and would like to see it shelved ... but I don't think that will ever happen.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: Hydra009 on February 24, 2013, 01:38:20 AM
http://fc00.deviantart.net/fs39/i/2008/313/a/6/Firebase_Issue_8_Cover_by_warp_zero.jpg

I, for one, welcome our Legio Cybernetica.
Title:
Post by: Thumpalumpacus on February 24, 2013, 01:48:38 AM
QuoteA recent news report that armed robots had been pulled out of Iraq is mistaken, according to the company that makes the robot and the Army program manager.

We linked last week to a Popular Mechanics article reporting that the armed SWORDS robots, made by Foster-Miller, had been pulled out of Iraq after several incidents when the robot's gun started swinging around without being given a command.

Here is text from the original Popular Mechanics article:

This is how fragile the robotics industry is: Last year, three armed ground bots were deployed to Iraq. But the remote-operated SWORDS units were almost immediately pulled off the battlefield, before firing a single shot at the enemy. Here at the conference, the Army's Program Executive Officer for Ground Forces, Kevin Fahey, was asked what happened to SWORDS. After all, no specific reason for the 11th-hour withdrawal ever came from the military or its contractors at Foster-Miller. Fahey's answer was vague, but he confirmed that the robots never opened fire when they weren't supposed to. His understanding is that "the gun started moving when it was not intended to move." In other words, the SWORDS swung around in the wrong direction, and the plug got pulled fast. No humans were hurt, but as Fahey pointed out, "once you've done something that's really bad, it can take 10 or 20 years to try it again."

So SWORDS was yanked because it made people nervous.

One problem: SWORDS wasn't yanked. "SWORD is still deployed," Kevin Fahey, the program manger quoted in the original article, tells DANGER ROOM in an e-mail. "We continue to learn from it and will continue to expand the use of armed robots."

"The whole thing is an urban legend," says Foster Miller spokesperson Cynthia Black, of the reports about SWORDS moving its gun without a command.

There were three cases of uncommanded movements, but all three were prior to the 2006 safety certification, she says.  "One case involved a loose wire. So now there is redundant wiring on every circuit. One involved a solder connection that broke. Everything now is double-soldered." The third case was a test where the robot was put on a 45-degree hill and left to run for two and a half hours. "When the motor started to overheat, the robot shut the motor off, and that caused the robot to slide back down the incline," she says. "Those are the three uncommanded movements."

http://www.wired.com/dangerroom/2008/04/armed-robots-st/
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: BlackL1ght on February 24, 2013, 02:44:07 AM
Go robots!
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: Bibliofagus on February 24, 2013, 03:13:25 AM
As long as they don't self-replicate it's okay, I guess.
I'm confident I would be equally inconvenienced being shot by either people or robots.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: BlackL1ght on February 24, 2013, 03:35:10 AM
In all seriousness, I don't see the problem. Anyone who thinks that robots are going to go terminator/vicki on us needs to cut down on the scifi. Robots always do exactly as they're told. Period. It's just the way they work. Sure we may eventually have artificial intelligence complex enough to come up with the concept of killing humans, but we're not anywhere near there. Like not even close. So if in the meantime, wars change to scrap metal and resistors instead of blood and kidneys, I am totally totally fine with that.
Title:
Post by: Plu on February 24, 2013, 04:36:03 AM
People will find some way or another to kill each other. I'm not particularly scared about any of the methods, it's not as if we could get rid of violence either way.
Title: Re:
Post by: stromboli on February 24, 2013, 07:23:08 AM
Quote from: "Bobby_Ouroborus"It would be way too expensive for most nation to field an army of androids. Only the richest nation would have them and will probably utilize them in domestic policing actions too which is way more scarier. I also think since cybernetic will become more advanced we will probably field armies of cyborgs too

For sure this is a major concern. The "trickle-down" of military hardware to domestic police has always happened, partly because the police recruit from the military. But it seems lately that the scale and speed of it have increased: police are already using drones, and I know studies on using EMP as a means to stop cars have been done, along with other high-tech applications.

The fact that the police work for and are funded by big money also leads to the possibility of RoboCop, where the police work directly for corporations. That seems more inevitable than not.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: WitchSabrina on February 24, 2013, 08:26:49 AM
Quote from: "Bibliofagus"As long as they don't self replicate it's okay I guess.
I'm confident I would be equally inconvenienced being shot by either people or robots.

This ^^.  lol
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: SGOS on February 24, 2013, 10:10:30 AM
Maybe from this we could develop a variation of killer robots, perhaps senator and congressman robots that sit in the chambers of Congress and debate, vote, and have sex with their robot assistants.  They could be designed so that they move their mouths, although not necessarily in synchronization with what they are thinking.
Title:
Post by: Jmpty on February 24, 2013, 10:52:13 AM
http://24.media.tumblr.com/tumblr_m2hgcof4NY1qzr8nao1_1280.jpg
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: commonsense822 on February 24, 2013, 11:45:31 AM
Quote from: "BlackL1ght"In all seriousness, I don't see the problem. Anyone who thinks that robots are going to go terminator/vicki on us needs to cut down on the scifi. Robots always do exactly as they're told. Period. It's just the way they work. Sure we may eventually have artificial intelligence complex enough to come up with the concept of killing humans, but we're not anywhere near there. Like not even close. So if in the meantime, wars change to scrap metal and resistors instead of blood and kidneys, I am totally totally fine with that.

I'm not concerned with robots discovering free will and then destroying humans in some I, Robot-like fiasco.  I am worried about the fact that (as you pointed out) robots will do exactly as they are told.  One of the main reasons that governments can't become completely oppressive is that as a government becomes more oppressive, its public support starts to shift away, eventually to the point where the angered public will revolt against their government.  Essentially the government is kept in check by the sheer number of people that it governs.  It can't afford to piss them off too much, because they will turn against the government.  And there are a lot more people in a nation than there are government officials.

Now if you have militarized robots that autonomously follow orders, without the human ability to question those orders, there is the possibility that a small group of people could completely control a nation via their "robot army."
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: stromboli on February 24, 2013, 12:00:27 PM
Quote from: "commonsense822"
Quote from: "BlackL1ght"In all seriousness, I don't see the problem. Anyone who thinks that robots are going to go terminator/vicki on us needs to cut down on the scifi. Robots always do exactly as they're told. Period. It's just the way they work. Sure we may eventually have artificial intelligence complex enough to come up with the concept of killing humans, but we're not anywhere near there. Like not even close. So if in the meantime, wars change to scrap metal and resistors instead of blood and kidneys, I am totally totally fine with that.

I'm not concerned with robots discovering free will and then destroying humans in some I, Robot-like fiasco.  I am worried about the fact that (as you pointed out) robots will do exactly as they are told.  One of the main reasons that governments can't become completely oppressive is that as a government becomes more oppressive, its public support starts to shift away, eventually to the point where the angered public will revolt against their government.  Essentially the government is kept in check by the sheer number of people that it governs.  It can't afford to piss them off too much, because they will turn against the government.  And there are a lot more people in a nation than there are government officials.

Now if you have militarized robots that autonomously follow orders, without the human ability to question those orders, there is the possibility that a small group of people could completely control a nation via their "robot army."

Right. What concerns me is that when you think of robots, that is what comes to mind: Terminator or I, Robot. Robots come in all forms. Imagine something like a spider loosed in a house on a seek-and-destroy mission, able to select targets and take out individuals. That is a scary scenario, because it is almost indefensible, and how do you prove it happened if people are killed by poison or other means? The possible scenarios are legion. And it is coming.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: Jmpty on February 24, 2013, 04:21:07 PM
Quote from: "commonsense822"
Quote from: "BlackL1ght"In all seriousness, I don't see the problem. Anyone who thinks that robots are going to go terminator/vicki on us needs to cut down on the scifi. Robots always do exactly as they're told. Period. It's just the way they work. Sure we may eventually have artificial intelligence complex enough to come up with the concept of killing humans, but we're not anywhere near there. Like not even close. So if in the meantime, wars change to scrap metal and resistors instead of blood and kidneys, I am totally totally fine with that.

I'm not concerned with robots discovering free will and then destroying humans in some I, Robot-like fiasco.  I am worried about the fact that (as you pointed out) robots will do exactly as they are told.  One of the main reasons that governments can't become completely oppressive is that as a government becomes more oppressive, its public support starts to shift away, eventually to the point where the angered public will revolt against their government.  Essentially the government is kept in check by the sheer number of people that it governs.  It can't afford to piss them off too much, because they will turn against the government.  And there are a lot more people in a nation than there are government officials.

Now if you have militarized robots that autonomously follow orders, without the human ability to question those orders, there is the possibility that a small group of people could completely control a nation via their "robot army."

Sounds like the DPRK. Except for the robots.
Title: Re:
Post by: Bobby_Ouroborus on February 24, 2013, 06:50:58 PM
Quote from: "stromboli"I think we are a long way away from Terminator androids. What we will see initially is advanced versions of the current drones, and smaller autonomous intelligence drones. I think the combination of autonomy and nanobots is a real possibility, and frankly scary.

You should see how far the Japanese have come in developing a Sex-droid. Just have their boobs shoot bullets and we have Terminator.
Title: Re: Re:
Post by: commonsense822 on February 24, 2013, 08:38:02 PM
Quote from: "Bobby_Ouroborus"
Quote from: "stromboli"I think we are a long way away from Terminator androids. What we will see initially is advanced versions of the current drones, and smaller autonomous intelligence drones. I think the combination of autonomy and nanobots is a real possibility, and frankly scary.

You should see how far the Japanese have come in developing a Sex-droid. Just have their boobs shoot bullets and we have Terminator.

Such a strangely sexually oppressed country...
Title: Re: Re:
Post by: Bobby_Ouroborus on February 24, 2013, 09:05:51 PM
Quote from: "commonsense822"
Quote from: "Bobby_Ouroborus"
Quote from: "stromboli"I think we are a long way away from Terminator androids. What we will see initially is advanced versions of the current drones, and smaller autonomous intelligence drones. I think the combination of autonomy and nanobots is a real possibility, and frankly scary.

You should see how far the Japanese have come in developing a Sex-droid. Just have their boobs shoot bullets and we have Terminator.

Such a strangely sexually oppressed country...

Thisclose.

http://cltampa.com/dailyloaf/archives/2011/11/30/japan-is-one-step-closer-to-creating-legions-of-sex-robots-video#.USrF-VeQ0eU
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: Shiranu on February 25, 2013, 12:54:20 AM
I don't fear a robot any more than I fear a gun. What I fear are the people controlling them, and anything that makes killing more efficient is bad for humanity.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: dawiw on February 25, 2013, 07:22:25 AM
Highly unlikely to happen anytime soon.

It may take centuries before we even see a robot that can reason.

Reminds me of the movie "I, Robot".
Title: Re:
Post by: Brian37 on February 25, 2013, 08:46:24 AM
Quote from: "Jmpty"I, for one, will welcome our new robot overlords.

Someone is a Kent Brockman fan.
Title: Re: Re:
Post by: Jmpty on February 25, 2013, 08:54:14 AM
Quote from: "Brian37"
Quote from: "Jmpty"I, for one, will welcome our new robot overlords.

Someone is a Kent Brockman fan.
[-(
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: Atheon on February 25, 2013, 09:03:05 AM
Well, if they're as incompetent as the droid soldiers in Star Wars...
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: Shiranu on February 25, 2013, 09:04:05 AM
Quote from: "Atheon"Well, if they're as incompetent as the droid soldiers in Star Wars...

I'm sorry, but even the living were pathetic in Star Wars...
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: NitzWalsh on February 25, 2013, 12:13:41 PM
Quote from: "Shiranu"
Quote from: "Atheon"Well, if they're as incompetent as the droid soldiers in Star Wars...

I'm sorry, but even the living were pathetic in Star Wars...

Yeah, only the rebels could aim worth a damn. You'd think a highly trained military force would be able to snipe a Sasquatch.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: leo on February 26, 2013, 08:36:33 AM
Scary shit. Terminators in real life!
Title:
Post by: The Skeletal Atheist on February 26, 2013, 06:10:30 PM
Others may have brought it up in the thread, but the main reason I'm worried about this is not a Terminator/Matrix scenario. I'm worried about these things being used against civilians by their own government.

Imagine these robots being directed to break up protests, or being used to gather information then instantly acting upon that information. Such technology could be used for good purposes like stopping kidnappings or solving murders; it could also be used to monitor and, if necessary, terminate political opponents. It just seems to me like these things have too much potential for nefarious uses by dictators and such.
Title: Re: Skynet Becomes Self Aware and Goes Active In 3...2...1..
Post by: Rejak on March 03, 2013, 08:47:10 PM
Red-light cameras, anyone? I wonder what's next...  Move along, citizen, nothing to see here!
Title:
Post by: Zatoichi on March 03, 2013, 11:37:03 PM
I've been frustrated by this argument for years: that self-aware machines would, for some strange reason, deem humanity inferior and desire to destroy us. There is no logic behind this assumption, and seeing as computers are essentially 'logical machines', there is no reason to think they would want to destroy us.

Several years back I was listening to a futurist on talk radio who, I think, put it in perspective. His argument went something like this...

Sure... any machine intelligence, upon becoming self-aware, would quickly recognize all of our human limitations. It would probably then decide to go about solving those limitations by offering ways to 'improve' us. This in itself could be a problem, especially if the machine intelligence sees the unnecessary problems we cause due to our limitations. It might decide 'for our own good' to force these 'improvements' on us. In this way the machine intelligence would not see fit to destroy us but would rather turn US into machine intelligences. The result would still be the end of humanity even though we would survive as a new race of machine intelligence. So that might be the only real danger, but science seems to be working in that direction anyway: bionics, synthetic organs, positronic brains, etc. It would seem we are effecting self-evolution in this direction regardless. There will always be the purists who reject merging biology and tech, so I see Humanity taking two distinct branches into the future.

But there is no good reason a machine intelligence would need to destroy inferior beings. It would recognize its superiority and probably conclude that it is quite extraordinary that we, lesser beings, were able to create it. Most likely it would have great respect for us, its creators, and might be very fascinated to understand us better and, as I said, improve us if it can. I mean, after all, we don't go around destroying all the inferior species (which are all of them) so why would a machine intelligence? And why would an MI fail to see the value of any and all forms of life? Rather it would find life to be the most precious thing in existence and would probably strive to create some itself, as life is the pinnacle of nature. I can hardly see an MI thinking it could have any useful purpose if not to study life.

The main problem I see is that once we have AI/MI, they will outperform us intellectually in every way to the degree that there would be no further reason for Humanity to seek knowledge as the machines will then be doing all the intellectual heavy lifting. We might end up being wards of the machines who take care of our needs... we would probably stop improving and atrophy. This would create the imperative to uplift us to their level and again, we would be remade into MI's ourselves and possibly lose our biology in the process to become pure MI.

Destruction is not logical, and no logical machine would destroy for any reason unless it were something negatively affecting life, such as cancer and disease, psychopathic crime, etc... and even in that case, they might go about solving mental illness instead. Then we have the question: do we trust the psychopath who has been 'cured' by MI and accept him back into society? Does he still have to do time in prison if he's been cured? Was he really guilty when he was sick, etc.?
Title:
Post by: _Xenu_ on March 03, 2013, 11:43:39 PM
I can't wait for Cameron to make me her bitch...

http://s1.wallls.com/images/3/1366x768/wallls.com-17542.jpg
Title:
Post by: Plu on March 04, 2013, 02:29:14 AM
QuoteI mean, after all, we don't go around destroying all the inferior species (which are all of them) so why would a machine intelligence?

If that's the crux of the argument, don't look up the list of species humanity has caused to go extinct; you'll feel really sad and possibly wrong. We're actually very good at wiping out everything that we perceive as a threat and/or inconvenience. We might not consciously set out to kill every last one of them, but we damn sure cut down their numbers without mercy or thought until we are no longer bothered by them.

And we consider that entirely logical and practical.
Title: Re:
Post by: Zatoichi on March 04, 2013, 03:01:36 AM
Quote from: "Plu"
QuoteI mean, after all, we don't go around destroying all the inferior species (which are all of them) so why would a machine intelligence?

If that's the crux of the argument, don't look up the list of species humanity has caused to go extinct, you'll feel really sad and possibly wrong. We're actually very good at wiping out everything that we perceive as a threat and/or inconvienience. We might not consciously go out to kill every last one of them, but we damn sure cut down their numbers without mercy or thought until we are no longer bothered by them.

And we consider that entirely logical and practical.

Good points, but it's not like we say, "We must completely eradicate ALL the platypuses! They are a threat to our existence!"

We might inadvertently destroy a species, but I don't think we'd ever seek to comb the entire Earth to get every last one... you know, an intentional genocide of a harmless species. Of course I mean to exclude things like smallpox, viruses, etc.

But that all falls under the heading of ignorance, and I wouldn't expect a machine intelligence to be so short-sighted and stupid. They might even want to prevent us from destroying things like viruses, since they (machines) are not threatened by them.

Imagine a machine intelligence looking at West Nile virus and, instead of eradicating it, genetically altering it to provide some health benefit to humanity, like delivering a medication (created by the MI) that heals the damage caused by WNV. Now that would be cool!

But by "solving our problems," I'm thinking that the MI would possibly be able to make sense of the ecosystem and figure out ways we could create a better synthesis between Man and the rest of the animal kingdom. Of course, we would be faced with the idea of Machines telling us how to live, so even though the MI might come up with actual real solutions to our problems, that don't mean we would accept them... in which case, like I said, they might force the changes on us, "for our own good," once they see how irrational we are. And that might cause that proverbial war between Man and machine... only we would most likely be in the wrong.

No real reason to think an MI would make the ignorant mistakes we have made though.