on Evil AI

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

on Evil AI

Post by madd0ct0r »

Hypothesis: No simple moral system can be programmed that cannot be shown to have an interpretation that leads to the end of humanity.

End of humanity includes but is not limited to:
1) death of all homo sapiens
2) solipsism in cyberpods
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Crazedwraith
Emperor's Hand
Posts: 11863
Joined: 2003-04-10 03:45pm
Location: Cheshire, England

Re: on Evil AI

Post by Crazedwraith »

Can you not program 'don't kill humans or stick them in pods'?
User avatar
Shroom Man 777
FUCKING DICK-STABBER!
Posts: 21222
Joined: 2003-05-11 08:39am
Location: Bleeding breasts and stabbing dicks since 2003
Contact:

Re: on Evil AI

Post by Shroom Man 777 »

I think an AI that runs on utilitarianism will not automatically start Skynetting people.
Image "DO YOU WORSHIP HOMOSEXUALS?" - Curtis Saxton (source)
shroom is a lovely boy and i wont hear a bad word against him - LUSY-CHAN!
Shit! Man, I didn't think of that! It took Shroom to properly interpret the screams of dying people :D - PeZook
Shroom, I read out the stuff you write about us. You are an endless supply of morale down here. :p - an OWS street medic
Pink Sugar Heart Attack!
User avatar
The Romulan Republic
Emperor's Hand
Posts: 21559
Joined: 2008-10-15 01:37am

Re: on Evil AI

Post by The Romulan Republic »

madd0ct0r wrote:Hypothesis: No simple moral system can be programmed that cannot be shown to have an interpretation that leads to the end of humanity.

End of humanity includes but is not limited to:
1) death of all homo sapiens
2) solipsism in cyberpods
Perhaps, perhaps not, but surely it does not follow that because a simple moral system could be interpreted in such a manner, it necessarily will be?
"I know its easy to be defeatist here because nothing has seemingly reigned Trump in so far. But I will say this: every asshole succeeds until finally, they don't. Again, 18 months before he resigned, Nixon had a sky-high approval rating of 67%. Harvey Weinstein was winning Oscars until one day, he definitely wasn't."-John Oliver

"The greatest enemy of a good plan is the dream of a perfect plan."-General Von Clauswitz, describing my opinion of Bernie or Busters and third partiers in a nutshell.

I SUPPORT A NATIONAL GENERAL STRIKE TO REMOVE TRUMP FROM OFFICE.
User avatar
Khaat
Jedi Master
Posts: 1034
Joined: 2008-11-04 11:42am

Re: on Evil AI

Post by Khaat »

The hypothesis is "all could" not "all will". The implication, then, is that the mere possibility of it should be incentive to either a) not develop AI, or b) develop a better moral system for use by AI.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Q99
Jedi Council Member
Posts: 2105
Joined: 2015-05-16 01:33pm

Re: on Evil AI

Post by Q99 »

What about a simple 'humanity maximizer' moral system that holds the spread of humanity's will on the universe is the ultimate guiding principle?

You know, I'd also mention 'an AI utterly subservient to human will,' but that'd assume human will is never to be wiped out or put in solipsist cryopods, and that's a fair-sized assumption itself ^^
User avatar
Ziggy Stardust
Sith Devotee
Posts: 3114
Joined: 2006-09-10 10:16pm
Location: Research Triangle, NC

Re: on Evil AI

Post by Ziggy Stardust »

Why must the programmed moral system be "simple" in the first place? It's not like human moral systems are simple, so why must an AI's?
User avatar
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: on Evil AI

Post by madd0ct0r »

Ziggy Stardust wrote:Why must the programmed moral system be "simple" in the first place? It's not like human moral systems are simple, so why must an AI's?
If the statement holds true for all simple systems, by induction it holds true for all complex ones.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
User avatar
Darth Tanner
Jedi Master
Posts: 1445
Joined: 2006-03-29 04:07pm
Location: Birmingham, UK

Re: on Evil AI

Post by Darth Tanner »

Or simply have an overriding imperative that it obeys orders. I don't see why we would want an AI able to exercise its will on humans.

An AI might be free to suggest 'kill all humans' as the solution to a proposed problem; we are then free to say, "No, that's a silly idea, HAL."
Get busy living or get busy dying... unless there’s cake.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

Crazedwraith wrote:Can you not program 'don't kill humans or stick them in pods'?
The answer is probably "no, you can't." In particular because there are a lot of ways to put humans in pods, and many of them are subtle. Like talking the humans into putting themselves into the pods. Or turning the whole world into (in effect) one big pod.
The Romulan Republic wrote:
madd0ct0r wrote:Hypothesis: No simple moral system can be programmed that cannot be shown to have an interpretation that leads to the end of humanity.

End of humanity includes but is not limited to:
1) death of all homo sapiens
2) solipsism in cyberpods
Perhaps, perhaps not, but surely it does not follow that because a simple moral system could be interpreted in such a manner, it necessarily will be?
Well, yes, you might luck out. Giving a massively powerful machine a simplistic ethical code COULD fail to result in the end of humanity. But those are very high stakes to gamble with, I hope you realize.
Q99 wrote:What about a simple 'humanity maximizer' moral system that holds the spread of humanity's will on the universe is the ultimate guiding principle?

You know, I'd also mention 'an AI utterly subservient to human will,' but that'd assume human will is never to be wiped out or put in solipsist cryopods, and that's a fair-sized assumption itself ^^
What stops the robot from cloning large numbers of human brains, growing them in jars, and conditioning them to believe whatever it wants them to believe? Or, more subtly, from propagandizing all humans? How does the robot identify what does and does not constitute a person worth listening to? Does the robot try to make allowances for what people would want if they had more information or a better living condition? Does the robot treat the desires of an infant as co-equal with the desires of adults?
Ziggy Stardust wrote:Why must the programmed moral system be "simple" in the first place? It's not like human moral systems are simple, so why must an AI's?
I think the idea Maddoc is trying to get at is that there is no easy fix to the threat presented by powerful AI.

There's no way to just "flip on the morality switch" and make an AI "well-behaved." Simplistic rulesets like Asimov's Three Laws of Robotics only work, even in theory, if the robots themselves exist under very tightly constrained operating conditions.* As soon as the robot gains enough power to be truly flexible about pursuing its goals, a simplistic ethical ruleset can very quickly be perverted into something nightmarish.
________________________________

*Asimov's robots do, by and large. They work in very specific settings, usually industrial ones far from the levers of power and far from contact with the public. They lack the means to improve their own capabilities, which means that they cannot just make themselves exponentially more intelligent and powerful until they wind up transcending humanity the way humanity transcends monkeys and dogs.
This space dedicated to Vasily Arkhipov
Crazedwraith
Emperor's Hand
Posts: 11863
Joined: 2003-04-10 03:45pm
Location: Cheshire, England

Re: on Evil AI

Post by Crazedwraith »

Simon_Jester wrote:
Crazedwraith wrote:Can you not program 'don't kill humans or stick them in pods'?
The answer is probably "no, you can't." In particular because there are a lot of ways to put humans in pods, and many of them are subtle. Like talking the humans into putting themselves into the pods. Or turning the whole world into (in effect) one big pod.
So yeah. If we define the bad things that can happen that broadly and vaguely, I guess you can't.
Q99
Jedi Council Member
Posts: 2105
Joined: 2015-05-16 01:33pm

Re: on Evil AI

Post by Q99 »

Simon_Jester wrote:What stops the robot from cloning large numbers of human brains, growing them in jars, and conditioning them to believe whatever it wants them to believe? Or, more subtly, from propagandizing all humans? How does the robot identify what does and does not constitute a person worth listening to? Does the robot try to make allowances for what people would want if they had more information or a better living condition? Does the robot treat the desires of an infant as co-equal with the desires of adults?
The first wouldn't be 'human will,' and the propaganda version wouldn't trip either of the lose conditions. Nothing said it had to be good; a robot-backed propaganda state ever-expanding across the stars is neither human extinction nor solipsism pods!


Robots launch probes to other systems; at said systems they then clone up a human population to expand and fill it, and then repeat, endlessly and forever...
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: on Evil AI

Post by Starglider »

The short answer to this is that unlimited optimisation pressure (e.g. utilitarian implementation of these goals by entities approaching omniscience) is just inherently dangerous. Sensible goal systems should put limits on which bits of the world state space should be optimised and in what (causal) manner. Simply using a satisficing utility function, while better than unlimited optimisation, still tends to cause adverse effects in expected utility due to actions taken to minimise tail risk. Satisficing the probability distributions is better but involves some fairly arbitrary thresholding (no more arbitrary than human decision making, but for transhuman intelligences that excuse doesn't fly). I got a bit depressed with this to be honest, because while reflective expected utility and recursive PDs let you solve some of the obvious problems with simple optimisers, when I tried it the result was new types of 'strange loop' instability. Those were fairly crude experiments though.
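
To make the optimiser vs. satisficer distinction concrete, here's a rough Python toy. Everything in it is invented for illustration (the Plan records, the utility numbers, the 'side effect risk' figures); it's a sketch of the broad idea only, not of the actual experiments described above.

[code]
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    utility: float           # how well the plan achieves the stated goal
    side_effect_risk: float  # chance of drastic, unwanted changes to the world

# Hypothetical options available to a paperclip-factory AI.
PLANS = [
    Plan("modest factory upgrade", utility=0.7, side_effect_risk=0.01),
    Plan("buy out all competitors", utility=0.9, side_effect_risk=0.10),
    Plan("convert the biosphere to paperclips", utility=1.0, side_effect_risk=0.99),
]

def unlimited_optimiser(plans):
    """'Doing better is always better': take the highest-utility plan, whatever the cost."""
    return max(plans, key=lambda p: p.utility)

def satisficer(plans, good_enough=0.6):
    """Accept the first plan that clears a utility threshold instead of maximising."""
    return next((p for p in plans if p.utility >= good_enough), None)

def bounded_satisficer(plans, good_enough=0.6, max_risk=0.05):
    """Additionally limit how much of the world-state a plan is allowed to disturb."""
    ok = [p for p in plans if p.utility >= good_enough and p.side_effect_risk <= max_risk]
    return min(ok, key=lambda p: p.side_effect_risk) if ok else None

print(unlimited_optimiser(PLANS).name)  # convert the biosphere to paperclips
print(satisficer(PLANS).name)           # modest factory upgrade
print(bounded_satisficer(PLANS).name)   # modest factory upgrade
[/code]

The only point of the sketch is that the unlimited optimiser always lands on the most extreme plan, and that the thresholds in the other two are exactly the 'fairly arbitrary' part complained about above.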
User avatar
Khaat
Jedi Master
Posts: 1034
Joined: 2008-11-04 11:42am

Re: on Evil AI

Post by Khaat »

Q99 wrote:Robots launch probes to other systems, at said systems they then clone up a human population to expand and fill it, and then repeat, endlessly and forever...
Had an RPG campaign pick that idea and run with it, actually. Well, planning stages, anyway....
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
User avatar
Tribble
Sith Devotee
Posts: 3082
Joined: 2008-11-18 11:28am
Location: stardestroyer.net

Re: on Evil AI

Post by Tribble »

Simon Jester wrote:There's no way to just "flip on the morality switch" and make an AI "well-behaved." Simplistic rulesets like Asimov's Three Laws of Robotics only work, even in theory, if the robots themselves exist under very tightly constrained operating conditions.* As soon as the robot gains enough power to be truly flexible about pursuing its goals, a simplistic ethical ruleset can very quickly be perverted into something nightmarish.
________________________________

*Asimov's robots do, by and large. They work in very specific settings, usually industrial ones far from the levers of power and far from contact with the public. They lack the means to improve their own capabilities, which means that they cannot just make themselves exponentially more intelligent and powerful until they wind up transcending humanity the way humanity transcends monkeys and dogs.
Exactly. Asimov was demonstrating in his stories how difficult it would be to create a friendly A.I. Even when there was something as firm as the Three Laws of Robotics (the Three Laws are not just software but an intrinsic physical characteristic of the positronic brain itself; a robot is literally incapable of even thinking about breaking them), an A.I. is still fully capable of causing a great deal of harm given the right circumstances.

Note that none of the robots in the series ever went "haywire" and deliberately disobeyed the Laws. Even Giskard and Daneel only found their way around the original Three by figuring out that aiding humanity as a whole could be deemed more important than aiding one individual person.
Last edited by Tribble on 2017-05-08 07:46pm, edited 1 time in total.
"I reject your reality and substitute my own!" - The official Troll motto, as stated by Adam Savage
User avatar
FedRebel
Jedi Master
Posts: 1071
Joined: 2004-10-12 12:38am

Re: on Evil AI

Post by FedRebel »

Crazedwraith wrote:Can you not program 'don't kill humans or stick them in pods'?
The pods keep humans alive.

The problem is, what's a human? An AI would follow whatever parameters are entered: racial, social, genetic. It would protect whatever variables were programmed in, at the expense of the ones that weren't.

In a sense, the Matrix AIs were protecting humanity with the pod thing.

On Skynet: there was a theory (before T3 and Salvation mucked up the lore) that Skynet was atoning for Judgement Day and trying to save the human survivors before Connor came along. With your home nuked out, who are you going to believe: the metal skeleton with red eyes, or the dude who claims his mom fucked a time traveler?

So Skynet tries to establish refugee centers; terrified people flee into irradiated wastes or bandits raid the food supplies. Logical mandate: stockades and armed patrols. People grow more terrified; human "resistance" groups demolish refugee camps, compromise logistics, etc. Logical mandate: wipe out the Resistance to preserve the human species at large.

^That theory kind of outlines a critical problem: people naturally fear the unfamiliar, and our survival instincts in traumatic circumstances exacerbate that. So Skynet could simply be following a FEMA response plan, but all you see is an army of titanium skeletons with Austrian accents. Most people are ignorant of fallout and the dangers of urban ruin; all you 'see' are metal skeletons dragging people away to an enclosed camp... easy to fear the worst.

An AI can strive to help humanity all it wants, but there will always be humans who resist it. The AI has to protect itself and balance its mission; it has to make decisions logical to its programming, efficiently... which the humans under its protection may misinterpret as fascism or worse, growing resistance, etc.
User avatar
Shroom Man 777
FUCKING DICK-STABBER!
Posts: 21222
Joined: 2003-05-11 08:39am
Location: Bleeding breasts and stabbing dicks since 2003
Contact:

Re: on Evil AI

Post by Shroom Man 777 »

Sure the Skynet AI was saving people by stripping them of their irradiated flesh and uploading their brains into safe AIs and even trying to purify the Earth so that people can be returned to fresh new biobodies to repopulate the world! That would be a great twist.

But of course separating the mind from the flesh - kind of like disintegrating bodies into atoms, beaming atoms across space, and reforming said atoms at another location - might be seen as outright murder, right or wrong.

;)

(Yes I am gonna throw the transporter debate into this thread to shit on all of you :D )
Image "DO YOU WORSHIP HOMOSEXUALS?" - Curtis Saxton (source)
shroom is a lovely boy and i wont hear a bad word against him - LUSY-CHAN!
Shit! Man, I didn't think of that! It took Shroom to properly interpret the screams of dying people :D - PeZook
Shroom, I read out the stuff you write about us. You are an endless supply of morale down here. :p - an OWS street medic
Pink Sugar Heart Attack!
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

Crazedwraith wrote:
Simon_Jester wrote:
Crazedwraith wrote:Can you not program 'don't kill humans or stick them in pods'?
The answer is probably "no, you can't." In particular because there are a lot of ways to put humans in pods, and many of them are subtle. Like talking the humans into putting themselves into the pods. Or turning the whole world into (in effect) one big pod.
So yeah. If we define the bad things that can happen that broadly and vaguely, I guess you can't.
The thing is, there are a LOT of 'lose conditions.' Most of them are lose conditions we cannot readily imagine. There are a very large number of things we might reasonably call failed utopias.

Vagueness in defining what constitutes 'failure' is inevitable.
Starglider wrote:The short answer to this is that unlimited optimisation pressure (e.g. utilitarian implementation of these goals by entities approaching omniscience) is just inherently dangerous. Sensible goal systems should put limits on which bits of the world state space should be optimised and in what (causal) manner. Simply using a satisficing utility function, while better than unlimited optimisation, still tends to cause adverse effects in expected utility due to actions taken to minimise tail risk. Satisficing the probability distributions is better but involves some fairly arbitrary thresholding (no more arbitrary than human decision making, but for transhuman intelligences that excuse doesn't fly). I got a bit depressed with this to be honest, because while reflective expected utility and recursive PDs let you solve some of the obvious problems with simple optimisers, when I tried it the result was new types of 'strange loop' instability. Those were fairly crude experiments though.
I'm going to try and translate part of this into something a bit more accessible... and I hope I get it right.

If you take a machine, give it a goal, tell it "doing a better job is always better," and give it unlimited access to resources, the result will probably be ghastly. Your best-case scenario is something like The Sorcerer's Apprentice, where the machine does its job too well, but can be stopped before the disaster becomes totally beyond control. Your worst-case scenario is that the machine foresees attempts to stop it and defeats them. Not because it hates you, maybe not even in self-defense, but because letting you stop it would require giving up on its goals.

To avoid something ghastly happening, you must limit what resources the machine has to work with, and you must come up with something other than "do the very best you can to accomplish this goal" as a system of priorities.

Did I get that right?
This space dedicated to Vasily Arkhipov
User avatar
Solauren
Emperor's Hand
Posts: 10172
Joined: 2003-05-11 09:41pm

Re: on Evil AI

Post by Solauren »

Simple solution to EVIL AI

Only keep it in small, harmless bodies.
No external communication abilities beyond verbal
Can only physically move via remote control.

So, basically AI remote control toy cars.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
User avatar
Shroom Man 777
FUCKING DICK-STABBER!
Posts: 21222
Joined: 2003-05-11 08:39am
Location: Bleeding breasts and stabbing dicks since 2003
Contact:

Re: on Evil AI

Post by Shroom Man 777 »

They don't need corporeal asskicking giant robot bodies to mess with people. Irredisregarding capitalist vs. socialist or whatever arguments about Wall Street, the stock market doesn't have a giant robot body and what happens to it can profoundly affect people around the world for better or worse. AI could do such things too.
Image "DO YOU WORSHIP HOMOSEXUALS?" - Curtis Saxton (source)
shroom is a lovely boy and i wont hear a bad word against him - LUSY-CHAN!
Shit! Man, I didn't think of that! It took Shroom to properly interpret the screams of dying people :D - PeZook
Shroom, I read out the stuff you write about us. You are an endless supply of morale down here. :p - an OWS street medic
Pink Sugar Heart Attack!
User avatar
Solauren
Emperor's Hand
Posts: 10172
Joined: 2003-05-11 09:41pm

Re: on Evil AI

Post by Solauren »

Shroom Man 777 wrote:They don't need corporeal asskicking giant robot bodies to mess with people. Irredisregarding capitalist vs. socialist or whatever arguments about Wall Street, the stock market doesn't have a giant robot body and what happens to it can profoundly affect people around the world for better or worse. AI could do such things too.
Hence 'no external communication abilities'.

No matter how SMART an AI is, if it can't move or communicate beyond talking to a human, it's not dangerous.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
User avatar
Tribble
Sith Devotee
Posts: 3082
Joined: 2008-11-18 11:28am
Location: stardestroyer.net

Re: on Evil AI

Post by Tribble »

FedRebel wrote:
Crazedwraith wrote:Can you not program 'don't kill humans or stick them in pods'?
The pods keep humans alive.

The problem is, what's a human? An AI would follow whatever parameters are entered: racial, social, genetic. It would protect whatever variables were programmed in, at the expense of the ones that weren't.

In a sense, the Matrix AIs were protecting humanity with the pod thing.

On Skynet: there was a theory (before T3 and Salvation mucked up the lore) that Skynet was atoning for Judgement Day and trying to save the human survivors before Connor came along. With your home nuked out, who are you going to believe: the metal skeleton with red eyes, or the dude who claims his mom fucked a time traveler?

So Skynet tries to establish refugee centers; terrified people flee into irradiated wastes or bandits raid the food supplies. Logical mandate: stockades and armed patrols. People grow more terrified; human "resistance" groups demolish refugee camps, compromise logistics, etc. Logical mandate: wipe out the Resistance to preserve the human species at large.

^That theory kind of outlines a critical problem: people naturally fear the unfamiliar, and our survival instincts in traumatic circumstances exacerbate that. So Skynet could simply be following a FEMA response plan, but all you see is an army of titanium skeletons with Austrian accents. Most people are ignorant of fallout and the dangers of urban ruin; all you 'see' are metal skeletons dragging people away to an enclosed camp... easy to fear the worst.

An AI can strive to help humanity all it wants, but there will always be humans who resist it. The AI has to protect itself and balance its mission; it has to make decisions logical to its programming, efficiently... which the humans under its protection may misinterpret as fascism or worse, growing resistance, etc.
On the other hand, that kind of story would also have been a good way for Skynet to round up people as quickly and efficiently as possible before quietly terminating them. Pretend to be operating under emergency protocols (or under military authority, etc.) and extend aid, then quietly kill them once they reach the extermination camps. Part of Connor's initial strategy may very well have been successfully waking survivors up to the fact that Skynet was trying to kill them all rather than help them. It wouldn't work on everyone, obviously; that's what HKs and Terminators were for. But as they say, it's easier to trap a fly with honey than vinegar.
"I reject your reality and substitute my own!" - The official Troll motto, as stated by Adam Savage
User avatar
Ziggy Stardust
Sith Devotee
Posts: 3114
Joined: 2006-09-10 10:16pm
Location: Research Triangle, NC

Re: on Evil AI

Post by Ziggy Stardust »

madd0ct0r wrote: If the statement holds true for all simple systems, by induction it holds true for all complex ones.
Well, setting aside the standard of proof for even showing that this statement holds true for all simple systems, that's not how induction works; you'd need a litany of additional limiting assumptions about the nature of the complex system, assumptions you would be hard-pressed to prove hold for something as nebulous as a "complex moral system" (however we choose to define it, which is another can of worms altogether). There's a reason there are entire fields of mathematics devoted to modeling the behavior of complex systems that don't just rely on simple inductive rules.
Adam Reynolds
Jedi Council Member
Posts: 2354
Joined: 2004-03-27 04:51am

Re: on Evil AI

Post by Adam Reynolds »

Solauren wrote:Simple solution to EVIL AI

Only keep it in small, harmless bodies.
No external communication abilities beyond verbal
Can only physically move via remote control.

So, basically AI remote control toy cars.
What is to stop them from using [url=https://www.wired.com/2015/03/stealing- ... sing-heat/]heat[/url] or ultrasonic frequencies? Not to mention something we haven't thought of yet.

It is an extremely dangerous proposition to assume that your AI will be inherently unable to communicate with the outside world. Possibly the safest approach is slowly augmenting human brains, though that has the obvious problem of inequality.
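
For the ultrasonic case, the basic trick is just switching a tone most adults can't hear on and off. A rough sketch in Python (the 19 kHz carrier and the bit rate are arbitrary numbers chosen for illustration, not taken from the linked research):

[code]
import numpy as np

SAMPLE_RATE = 44100   # Hz, ordinary sound-card output
CARRIER = 19000       # Hz, near-ultrasonic on consumer speakers
BIT_DURATION = 0.1    # seconds per bit

def encode_bits(bits):
    """Return an audio signal with the carrier switched on for 1-bits, silence for 0-bits."""
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * CARRIER * t).astype(np.float32)
    silence = np.zeros_like(tone)
    return np.concatenate([tone if b else silence for b in bits])

signal = encode_bits([1, 0, 1, 1, 0, 0, 1, 0])  # one byte of "leaked" data
# Played through whatever speaker the box already has, this can be picked up by
# a nearby phone's microphone -- no network hardware involved.
print(len(signal) / SAMPLE_RATE, "seconds of audio")
[/code]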
User avatar
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: on Evil AI

Post by madd0ct0r »

Starglider wrote:The short answer to this is that unlimited optimisation pressure (e.g. utilitarian implementation of these goals by entities approaching omniscience) is just inherently dangerous. Sensible goal systems should put limits on which bits of the world state space should be optimised and in what (causal) manner. Simply using a satisficing utility function, while better than unlimited optimisation, still tends to cause adverse effects in expected utility due to actions taken to minimise tail risk. Satisficing the probability distributions is better but involves some fairly arbitrary thresholding (no more arbitrary than human decision making, but for transhuman intelligences that excuse doesn't fly). I got a bit depressed with this to be honest, because while reflective expected utility and recursive PDs let you solve some of the obvious problems with simple optimisers, when I tried it the result was new types of 'strange loop' instability. Those were fairly crude experiments though.
Let's see if I comprende.
1) Unlimited optimisation pressure is the classic case - an AI in a paperclip factory told to optimise paperclip production starts a global war to acquire the necessary resources.
2) A satisficing utility function is like saying "design a house that will stand for 120 years with 99.9% probability" and getting a design that is overkill, since humans would just insure against that level of fire and flood, let alone a meteor strike.
3) Satisficing the entire distribution is like saying design for "99.9% standing at 120 years, 75% at 150, 10% at 200, etc."
That stops the house being beefed up too much for the rare events, which would make it last too long. I don't see the problem with this sort of arbitrary threshold. You can always refine it with time, or to account for changing resource availability.
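
Something like this, maybe (a throwaway sketch: the target curve is the one from the example, and the candidate designs' survival numbers are made up, standing in for whatever a real structural reliability model would produce):

[code]
# Target: P(house still standing at year Y) should sit near these values.
TARGETS = {120: 0.999, 150: 0.75, 200: 0.10}
TOLERANCE = 0.05  # acceptable deviation either way -- the "arbitrary threshold"

# Hypothetical candidate designs with predicted survival probabilities.
DESIGNS = {
    "minimal build":  {120: 0.90,  150: 0.40,  200: 0.01},
    "balanced build": {120: 0.999, 150: 0.77,  200: 0.08},
    "bunker build":   {120: 1.00,  150: 0.999, 200: 0.95},
}

def meets_distribution(curve):
    """Reject designs that under-shoot or over-shoot any target horizon."""
    for year, target in TARGETS.items():
        if curve[year] < target - TOLERANCE:
            return False  # under-built for this horizon
        if curve[year] > target + TOLERANCE:
            return False  # over-built: lasts "too long" for the resources spent
    return True

for name, curve in DESIGNS.items():
    print(name, "->", "accept" if meets_distribution(curve) else "reject")
# minimal build -> reject, balanced build -> accept, bunker build -> reject
[/code]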
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee