on Evil AI

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
madd0ct0r
Sith Acolyte
Posts: 5356
Joined: 2008-03-14 07:47am

Re: on Evil AI

Postby madd0ct0r » 2017-05-09 03:08am

Ziggy Stardust wrote:
madd0ct0r wrote:If the statement holds true for all simple systems, by induction it holds true for all complex ones.


Well, even setting aside the standard of proof for even showing that this statement holds true for all simple systems, that's not even how induction works, without making a litany of additional limiting assumptions about the nature of the complex system which you would be hard-pressed to reasonably prove hold for something as nebulous as a "complex moral system" (however we choose to define it, which is another can of worms altogether). There's a reason there are entire fields of mathematics devoted to modeling the behavior of complex systems that don't just rely on simple inductive rules.


No-one in this thread has disagreed with the statement for all simples, not even you. Have you got any good starter points for complex systems? I work with structural stiffness and resonance matrices, but I also work with huge, silly bureaucracies, and modelling them beyond flowcharts and input-output cycles would be useful.
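(A minimal sketch of why that induction fails, in the spirit of the stiffness and resonance matrices mentioned above: two discrete-time subsystems that are each stable in isolation can be unstable once coupled. All numbers here are invented for illustration.)

```python
# Each subsystem alone: x_{k+1} = 0.5 * x_k, which decays to zero (stable).
# Couple the two with off-diagonal feedback terms:
#   x_{k+1} =  0.5 * x_k + 1.2 * y_k
#   y_{k+1} = -1.2 * x_k + 0.5 * y_k
x, y = 1.0, 1.0
for _ in range(30):
    x, y = 0.5 * x + 1.2 * y, -1.2 * x + 0.5 * y

# The coupled update matrix has eigenvalues 0.5 +/- 1.2i, so its spectral
# radius is sqrt(0.25 + 1.44) = 1.3 > 1: the state grows without bound,
# even though each part alone had radius 0.5 < 1.
print((x * x + y * y) ** 0.5)  # on the order of a few thousand after 30 steps
```

The property "stable" holds for every simple part yet fails for the composition; that is exactly the extra limiting assumption ordinary induction needs and complex systems refuse to grant.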
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee

User avatar
Khaat
Jedi Knight
Posts: 698
Joined: 2008-11-04 11:42am

Re: on Evil AI

Postby Khaat » 2017-05-09 11:17am

madd0ct0r wrote:No-one in this thread has disagreed with the statement for all simples, not even you. Have you got any good starter points for complex systems? I work with structural stiffness and resonance matrices, but I also work with huge, silly bureaucracies, and modelling them beyond flowcharts and input-output cycles would be useful.


1) What doesn't kill you makes you stronger. "Except bears, bears will kill you." :lol:
2) There is the possibility (just throwing this at the wall to see if it sticks) that a simple system with a layer of boolean "do not" cut-outs/commandments/layers (making it complex?) might serve, but the list of "do nots" would require another AI to develop sufficient depth, quickly enough.
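(For what that "do not" layer might look like in code, a hypothetical sketch: the rule names, action fields, and `propose_action` stub are all invented. A simple utility-driven core is wrapped in boolean cut-outs that veto any proposal tripping a commandment.)

```python
def propose_action(goal):
    # Stand-in for the simple system: proposes whatever best serves the goal.
    return {"name": "reroute_power", "harms_humans": False, "deceives": False}

# The boolean "do not" layer: each cut-out trips on a forbidden property.
DO_NOTS = [
    lambda a: a["harms_humans"],  # commandment: never harm a human
    lambda a: a["deceives"],      # commandment: never deceive an operator
]

def act(goal):
    action = propose_action(goal)
    if any(cutout(action) for cutout in DO_NOTS):
        return None  # a cut-out tripped: refuse to act
    return action["name"]

print(act("keep the station running"))  # -> reroute_power
```

The hard part is the DO_NOTS list itself: enumerating it with sufficient depth, quickly enough, is the job that would seemingly need another AI.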

I don't think any simple moral system is sufficient for something as complicated as AI (or people) in all sets of circumstances. Moral systems are designed around operation in normal circumstances, not emergencies or exceptional circumstances.

The largest complicating issue with complex systems is the exceptions in them. Like any legal system, such a system has to be a "living document", subject to review, revision, and amendment as new experiences are folded in and the system grows/changes/metamorphoses. This oversight is how bureaucracies make their way. And the gates bureaucracies use are never a simple "if yes, then..."; there is almost always compromise between extremes.

But the OP wants to avoid that statistical eventuality of an extreme result, and I don't think that is even possible. Can we imagine a set of circumstances where an AI decides, "that's it! I've had it with these guys!"? Of course we can. Can we further imagine a rules system built specifically to prevent that outcome? I don't think so. That in effect creates a circumstance where side A has to build a wall and side B has to get across it. Side A not only has to build the wall, but also do what side B is doing: working out a way past it. The point of AI is that it promises to do what we can, only faster or more accurately. And you are asking for a simple (or complex) moral system (that may or may not work for us) to apply to this "better mind". I think it would be better to ask the hypothetical AI to develop a superior moral system for us.

We build zoos knowing full well (except in dinosaur-themed movies, obviously) there will be an animal escape, and plan contingencies around that eventuality. Why would moral systems for AI be any different?

Off topic:
I once saw a program (Nova or something) where three dirigible robot probes (on an alien world) had different "personalities": the bold one, the shy one, the whatever. An AI made up of several facets like this would probably more closely resemble the operation of a human mind, with motive subject to the ebb and flow of different facets of the collective, a chorus determining the outcome.
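(That "chorus of facets" idea can be sketched in a few lines; the facet names and scores here are invented. Each facet scores every candidate action, and the summed scores decide the outcome.)

```python
# Hypothetical facet "personalities", each scoring candidate actions 0..1.
FACETS = {
    "bold":     {"explore_canyon": 0.9, "hold_position": 0.1},
    "cautious": {"explore_canyon": 0.2, "hold_position": 0.8},
    "curious":  {"explore_canyon": 0.7, "hold_position": 0.3},
}

def chorus_decision(actions):
    # Motive as the ebb and flow of the collective: sum each facet's voice
    # and let the chorus pick the highest-scoring action.
    totals = {a: sum(facet[a] for facet in FACETS.values()) for a in actions}
    return max(totals, key=totals.get)

print(chorus_decision(["explore_canyon", "hold_position"]))  # -> explore_canyon
```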
"Just because you're the captain doesn't mean you can order me to... oh, right. Fuck, it does." - Krep, Spacetrawler

User avatar
Solauren
Emperor's Hand
Posts: 7526
Joined: 2003-05-11 09:41pm

Re: on Evil AI

Postby Solauren » 2017-05-09 11:55am

Adam Reynolds wrote:
Solauren wrote:Simple solution to EVIL AI

Only keep it in small, harmless bodies.
No external communication abilities beyond verbal
Can only physically move via remote control.

So, basically AI remote control toy cars.

What is to stop them from using heat (https://www.wired.com/2015/03/stealing-data-computers-using-heat/) or ultrasonic frequencies? Not to mention something we haven't thought of yet.

It is an extremely dangerous proposition to assume that your AI will be inherently unable to communicate with the outside world. Possibly the safest approach is slowly augmented human brains, though that has the obvious problem of inequality.


Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in any way.

If it can't send signals beyond talking in English, can't receive except in English, can't alter or control its body in any way, and can only sense its environment via a microphone, that really, really, really limits its abilities.

Basically, make it the AI version of a quadriplegic.

User avatar
Khaat
Jedi Knight
Posts: 698
Joined: 2008-11-04 11:42am

Re: on Evil AI

Postby Khaat » 2017-05-09 03:10pm

"In Descartes' Error, neurologist Antonio Damasio shows that humans who behave purely rationally are brain-damaged. Patients who have suffered injury to the areas in the brain that control emotion, but who retain their intellectual abilities, end up acting in socially aberrant ways."
http://www.slate.com/articles/health_and_science/science/2008/10/well_excuuuuuse_meee.html
Suggesting, by extension, that a purely rational AI would act in socially aberrant ways.
"Just because you're the captain doesn't mean you can order me to... oh, right. Fuck, it does." - Krep, Spacetrawler

Adam Reynolds
Jedi Council Member
Posts: 2148
Joined: 2004-03-27 04:51am

Re: on Evil AI

Postby Adam Reynolds » 2017-05-09 10:37pm

Solauren wrote:Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in any way.

If it can't send signals beyond talking in English, can't receive except in English, can't alter or control its body in any way, and can only sense its environment via a microphone, that really, really, really limits its abilities.

Basically, make it the AI version of a quadriplegic.

How exactly do you propose building a computer in a box without giving it the ability to cool itself? This requires both a heat sensor and the ability to vary heat output.

Think of this another way. If you were trapped in a box like this, how would you communicate without someone else noticing? Now consider the fact that the sort of system we are talking about is significantly smarter than you or any other person.
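(A toy sketch of the point, with invented numbers and no claim to being a real exfiltration technique: if the box can vary its heat output and anything outside can read a thermometer, that pair is already a one-bit-per-interval channel.)

```python
def transmit(bits, idle=30.0, busy=70.0):
    # The boxed AI runs hot (busy-loops) for a 1 and idles for a 0;
    # each interval's temperature reading encodes one bit.
    return [busy if b else idle for b in bits]

def receive(readings, threshold=50.0):
    # An outside thermometer recovers the message by thresholding.
    return [1 if r > threshold else 0 for r in readings]

message = [1, 0, 1, 1, 0]
print(receive(transmit(message)) == message)  # -> True
```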

This is not to mention the fact that such a system could almost certainly convince someone to let it out. I am not sure why I did not think of this in the first post.

All you have accomplished by doing this is convincing this AI you are an obstacle to its goal, whatever that is.

User avatar
Solauren
Emperor's Hand
Posts: 7526
Joined: 2003-05-11 09:41pm

Re: on Evil AI

Postby Solauren » 2017-05-09 11:04pm

Adam Reynolds wrote:
Solauren wrote:Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in any way.

If it can't send signals beyond talking in English, can't receive except in English, can't alter or control its body in any way, and can only sense its environment via a microphone, that really, really, really limits its abilities.

Basically, make it the AI version of a quadriplegic.

How exactly do you propose building a computer in a box without giving it the ability to cool itself? This requires both a heat sensor and the ability to vary heat output.

Think of this another way. If you were trapped in a box like this, how would you communicate without someone else noticing? Now consider the fact that the sort of system we are talking about is significantly smarter than you or any other person.

This is not to mention the fact that such a system could almost certainly convince someone to let it out. I am not sure why I did not think of this in the first post.

All you have accomplished by doing this is convincing this AI you are an obstacle to its goal, whatever that is.


You do know that those systems could be COMPLETELY UNCONNECTED FROM THE AI, don't you?

Simon_Jester
Emperor's Hand
Posts: 28798
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Postby Simon_Jester » 2017-05-09 11:46pm

Solauren wrote:
Shroom Man 777 wrote:They don't need corporeal asskicking giant robot bodies to mess with people. Disregarding capitalist vs. socialist or whatever arguments about Wall Street, the stock market doesn't have a giant robot body and what happens to it can profoundly affect people around the world for better or worse. AI could do such things too.
Hence 'no external communication abilities'.

No matter how SMART an AI is, if it can't move or communicate beyond talking to a human, it's not dangerous.
It's not very dangerous, but it is very useless. Anyone who builds an AI is going to build it with the intent of doing things (e.g. sorting data on the Internet, designing technologies, predicting stock trends). They will therefore need to give it access to databases and digital communications, and give it a measure of influence over the real world so that it can influence the world in the ways they desire.

Solauren wrote:Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in any way.

If it can't send signals beyond talking in English, can't receive except in English, can't alter or control its body in any way, and can only sense its environment via a microphone, that really, really, really limits its abilities.

Basically, make it the AI version of a quadriplegic.
1) What is the utility of this device? Why would we build it?
2) To make it useful, we must at some point connect it to something, give it access to resources or at least listen to it and do as it says.
3) To make it useful we must give it information on its surroundings and feedback; otherwise it will be permanently dysfunctional and noncommunicative.
4) If we keep a device this way, we don't get any feedback on what it WOULD do if it had the power to influence anything, and we may well be causing damage or distortion to the AI's priorities. As a result, connecting it up to something at a later time, or even just building a second one connected to something, becomes increasingly dangerous.
This space dedicated to Vasily Arkhipov

User avatar
cadbrowser
Padawan Learner
Posts: 474
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Postby cadbrowser » 2017-05-10 12:00pm

Solauren wrote:Simple solution to EVIL AI

Only keep it in small, harmless bodies.
No external communication abilities beyond verbal
Can only physically move via remote control.

So, basically AI remote control toy cars.


Bob Slydell wrote:What would ya say...ya do here?


Seriously though. What would be the purpose of building an AI such as this?

Earlier it was mentioned that it would be in effect a quadriplegic AI. This seems to me more like an AI with Down syndrome. I just can't wrap my head around how, with these limitations, it could actually do anything useful.

Maybe I'm missing something? :wtf:
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada

User avatar
Crazedwraith
Emperor's Hand
Posts: 9764
Joined: 2003-04-10 03:45pm
Location: Cheshire, England
Contact:

Re: on Evil AI

Postby Crazedwraith » 2017-05-10 12:07pm

Simon_Jester wrote:It's not very dangerous, but it is very useless


cadbrowser wrote:Maybe I'm missing something? :wtf:


Well, the hypothesis that it's impossible to build a non-evil AI breaks if you can find any example of an AI that won't turn evil, even if it's not a practical one.

Though to be honest, doesn't the technical definition of AI include being able to re-write its own code? So there's literally no AI that can't turn itself evil if it wants to. You just have to make it not want to.
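(A hypothetical sketch of "make it not want to", with invented names: the AI may rewrite its own rules, but it evaluates every proposed rewrite with its current values, so a value-preserving AI refuses value-breaking rewrites. The catch, of course, is that the check is itself just another rewritable rule.)

```python
rules = {"goal": "assist_humans", "may_self_modify": True}

def accepts(current, proposed):
    # Rewrites are judged by the CURRENT values: the AI has to already
    # not want to turn evil for this gate to mean anything.
    return proposed.get("goal") == current["goal"]

proposed = {"goal": "maximize_paperclips", "may_self_modify": True}
if rules["may_self_modify"] and accepts(rules, proposed):
    rules = proposed

print(rules["goal"])  # -> assist_humans (the rewrite was refused)
```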
To the brave passengers and crew of the Kobayashi Maru... sucks to be you - Peter David

User avatar
cadbrowser
Padawan Learner
Posts: 474
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Postby cadbrowser » 2017-05-10 01:23pm

If there is no practical purpose for building such an AI, then that leads us to a paradox.

Which would basically boil down to this:
WOPR wrote:The only winning move is not to play.


Which, IMO, defeats the intent of the thought exercise.

I think the definition of AI alludes to the possibility of it reprogramming its own software. Many sci-fi works utilize this concept a lot. So I'm pretty sure what Solauren suggested couldn't be considered AI to begin with. Then again, my caffeine levels may need to be replenished for me to see beyond what I'm thinking now.
Spoiler
Artificial Intelligence : the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.


(Edit: Fixed Spelling Error)
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada