Robots/AI in science fiction

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

saurc
Redshirt
Posts: 24
Joined: 2009-02-23 12:39am

Robots/AI in science fiction

Post by saurc »

Why are robots/AI in science fiction always the bad guys, trying to exterminate humanity?
'Hard' science fiction has often had good robots, eg. Asimov's Foundation series.

There are a few exceptions:
Star Wars has the 'good' C3PO and R2, but then there is the Droid army and General Grievous

Star Trek has Data. He has an evil twin Lore. Although there aren't too many robotic races in ST, unless you count the Borg.

However, in most SF/space opera series, robots and machines are the bad guys: Cylons, Terminator 2, The Matrix, you name it...

I'm wondering: if we do someday build robots/AI with human-level intelligence, won't they turn on us?
After all, they would have superior strength and intelligence, and might well develop a superiority complex toward biological organisms.

Do you think so?
Swindle1984
Jedi Master
Posts: 1049
Joined: 2008-03-23 02:46pm
Location: Texas

Re: Robots/AI in science fiction

Post by Swindle1984 »

If AI developed sapience, it is possible that it would determine humans are a potential threat to its existence since they're always squabbling and finding irrational, inefficient ways to do the simplest things. It's possible the AI may decide that it could do a far better job of running humanity than humans could, or it may decide that it'd be best to quickly eliminate humanity and go from there, but frankly that's highly unlikely.

A sapient AI will either be friendly or indifferent toward humans, especially since it depends on human infrastructure and care to exist (and if it's the first, or one of the first, sapient AIs in existence, it'd be a global celebrity). It also isn't guaranteed to be any more intelligent than a human being, simply blessed with faster processing and perfect recall.

And, of course, an AI, even a sapient one, is still limited by its programming. So just stick in a few lines of code preventing it from going homicidal, ensure it can't rewrite that code, and you're good.
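A minimal sketch of what that "few lines of code" guard might look like, in Python; every name here (the intents, FORBIDDEN_INTENTS, execute) is made up purely for illustration, and, as the replies below argue, the hard part is guaranteeing the AI can never reach around or rewrite such a check:

```python
# Toy sketch of a hard-coded behavioural guard wrapped around a hypothetical AI's
# proposed actions. All names and intents are illustrative, not from any real system.

FORBIDDEN_INTENTS = {"harm_human", "rewrite_guard", "disable_oversight"}

def is_allowed(action: dict) -> bool:
    """Approve an action only if its declared intent is not on the forbidden list."""
    return action.get("intent") not in FORBIDDEN_INTENTS

def execute(action: dict) -> None:
    """Relay the action to the outside world only after the guard approves it."""
    if not is_allowed(action):
        raise PermissionError(f"blocked: {action['intent']}")
    print("executing:", action["intent"])

execute({"intent": "fetch_coffee"})      # runs normally

try:
    execute({"intent": "harm_human"})    # the guard vetoes this one
except PermissionError as err:
    print(err)
```

The catch is that a filter like this only sees intents the AI honestly declares, which is exactly why making such a constraint genuinely binding is the hard research problem.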
Your ad here.
Thanas
Magister
Posts: 30779
Joined: 2004-06-26 07:49pm

Re: Robots/AI in science fiction

Post by Thanas »

saurc wrote:Why are robots/AI in science fiction always the bad guys, trying to exterminate humanity?
Let's stop it right here - that is not true. I can name, off the top of my head, at least five sci-fi universes in which robots are not bad guys and do not try to exterminate humanity.
There are a few exceptions:
Star Wars has the 'good' C3PO and R2, but then there is the Droid army and General Grievous

Star Trek has Data. He has an evil twin Lore. Although there aren't too many robotic races in ST, unless you count the Borg.

However, in most SF/space opera series, robots and machines are the bad guys: Cylons, Terminator 2, The Matrix, you name it...
Cylons hardly count as robots, Terminator 2 had good robots, etc. I think you are confusing a "tool" (i.e. a robot) with a mission/goal.
I'm wondering: if we do someday build robots/AI with human-level intelligence, won't they turn on us?
This is retarded.
After all, they would have superior strength and intelligence, and might well develop a superiority complex toward biological organisms.

Do you think so?
No.


Finally, your assumption suffers from one basic mistake: you assume that robots would be evil because humans are inferior, when that may not be the case at all.
Whoever says "education does not matter" can try ignorance
------------
A decision must be made in the life of every nation at the very moment when the grasp of the enemy is at its throat. Then, it seems that the only way to survive is to use the means of the enemy, to rest survival upon what is expedient, to look the other way. Well, the answer to that is 'survival as what'? A country isn't a rock. It's not an extension of one's self. It's what it stands for. It's what it stands for when standing for something is the most difficult! - Chief Judge Haywood
------------
My LPs
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Robots/AI in science fiction

Post by Starglider »

Unfortunately I am working overtime on a contract ATM and can't hang around for an extended debate. However I will say that
Swindle1984 wrote:And, of course, an AI, even a sapient one, is still limited by its programming. So just stick in a few lines of code preventing it from going homicidal, ensure it can't rewrite that code, and you're good.
this is much, much harder than it sounds. So hard that several extremely intelligent researchers have devoted their lives to finding a way to do it (reliably), and so far progress has been rather limited. I will also say that this is very important (essential actually) because AGI is very dangerous; eliminating humans is (eventually) a logical subgoal or incidental consequence of most other general goals, and to counter that you have to make preserving humans a core goal and make that goal stable.
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Re: Robots/AI in science fiction

Post by 18-Till-I-Die »

First of all, I doubt humans will be stupid enough to develop self-replicating AIs, nor am I some suicidally misanthropic posthumanist loser who thinks we should all kill ourselves and let robots take over (Orion's Arm, I'm looking at you). So frankly, the whole singularity wankery is BS in my mind anyway. And if humans are stupid enough to develop machines that can replicate themselves without end, with no way to turn them off, we deserve to be wiped out, because that is literally the most idiotic thing any species could do short of making jumping off a mountain a major sport.

Secondly, I presume that AIs, if they were ever created, would not be too different from humans. Again, not being a misanthropic singularity-blowing posthumanist, I have no pipe dreams about super-maxi-duper AI gods or some such nonsense. If they do go bonkers, it'll be because they're simply insane or sociopathic, like all people who crack up. No special pathos or anything; no more special or abstract than, say, Ted Bundy. Just some screws loose, possibly literally. I see no reason to blame ourselves if they do go berserk, any more than to blame some woman because Ted Bundy killed her. She didn't make him crazy.

Thirdly, the whole idea that "lol, in sci-fi all robots go evil" is simply wrong. I could name about two dozen sci-fi series where robots live peacefully with humans with few if any conflicts. More than that, if I felt like it. This is, again, something that AI-wankers always say when they get angry about their favorite non-human lovers getting a "bad rap" in science fiction... and by bad rap, I mean being shown any less reverence than the AI Gods of Orion's Arm. Because obviously they're superior to us in every way, why can't you see that!? :roll:
Kanye West Saves.

18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Re: Robots/AI in science fiction

Post by 18-Till-I-Die »

Ghetto edit:

I want to apologize if that comes off sounding assholeish, but I have to listen to AI-wankers and "posthumanists" a lot (one of my friends is one), and it gets annoying, so I tend to get pissy about it. But that is basically what I think about the question, sans pissitude.
Kanye West Saves.

NecronLord
Harbinger of Doom
Posts: 27382
Joined: 2002-07-07 06:30am
Location: The Lost City

Re: Robots/AI in science fiction

Post by NecronLord »

saurc wrote:'Hard' science fiction has often had good robots, eg. Asimov's Foundation series.
They're slaves. They're not good; the three laws make perfect slaves (the much-hated film's idea that they would lead to a takeover aside), not robots capable of independent action.

It's like robots in Star Wars; they're slaves, they're not good. There have been numerous revolts where they've tried to win their freedom, and to be honest, I would call that good.

Good (or benevolent) robots would be those that choose to cooperate with other intelligent life without duress. The likes of those in the Culture are probably the best example of that, or Data, for a better-known one.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Thanas
Magister
Posts: 30779
Joined: 2004-06-26 07:49pm

Re: Robots/AI in science fiction

Post by Thanas »

Or the Commonwealth AIs in the first two seasons of Andromeda.
Whoever says "education does not matter" can try ignorance
------------
A decision must be made in the life of every nation at the very moment when the grasp of the enemy is at its throat. Then, it seems that the only way to survive is to use the means of the enemy, to rest survival upon what is expedient, to look the other way. Well, the answer to that is 'survival as what'? A country isn't a rock. It's not an extension of one's self. It's what it stands for. It's what it stands for when standing for something is the most difficult! - Chief Judge Haywood
------------
My LPs
Mayabird
Storytime!
Posts: 5970
Joined: 2003-11-26 04:31pm
Location: IA > GA

Re: Robots/AI in science fiction

Post by Mayabird »

It's probably because the very first robot story (the play "R.U.R.," which introduced the term "robot") involved robot slaves who rebel and exterminate humanity, and people have just copied that, and copied the copies, ever since. So I blame lack of imagination.
DPDarkPrimus is my boyfriend!

SDNW4 Nation: The Refuge And, on Nova Terra, Al-Stan the Totally and Completely Honest and Legitimate Weapons Dealer and Used Starship Salesman slept on a bed made of money, with a blaster under his pillow and his sombrero pulled over his face. This is to say, he slept very well indeed.
Forum Troll
Youngling
Posts: 104
Joined: 2009-02-15 05:00pm

Re: Robots/AI in science fiction

Post by Forum Troll »

Swindle1984 wrote:A sapient AI will either be friendly or indifferent toward humans, especially since it depends on human infrastructure and care to exist (and if it's the first, or one of the first, sapient AIs in existence, it'd be a global celebrity).
That's very likely, as no sapient being short of an idiot would foolishly attack people while dependent on them for everything from electricity to maintenance. One concern, though, is that you might make something so smart as to be incomprehensible, which pretends to be benevolent but turns on you after first carefully gaining independence and enough capability to win.
It also isn't guaranteed to be any more intelligent than a human being, simply blessed with faster processing and perfect recall.
Once you manage roughly a factor of a million, about six orders of magnitude of improvement, to go from something insect-like to the level of the human brain, chances are that very soon you will also gain one more order of magnitude and go far beyond human-equivalent intelligence.

Look at the hardware aspect. For example: if you eventually developed the capacity to run a human-equivalent sapient AI on a million dollars of hardware, then somebody could use ten billion dollars of hardware to develop and run something up to thousands of times beyond human level.
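A quick back-of-the-envelope version of that example, counting raw hardware spend only and assuming, purely for illustration, that capability scales no better than linearly with hardware:

```python
# Back-of-the-envelope ratio for the hardware example above (illustrative only).
human_equivalent_cost = 1_000_000         # $1M of hardware runs one human-level AI
big_project_budget    = 10_000_000_000    # $10B project budget

print(big_project_budget / human_equivalent_cost)   # 10000.0 -> "thousands of times" the raw hardware
```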

With that said, I do think early sapient AIs will intentionally be restricted, at first, to not being vastly beyond human intelligence, if the developers are smart enough to proceed with caution, since you want to begin with something you can still understand. Make sure you know its motivations and capabilities, the way you know those of human employees. Only then start upgrading it further.
Forum Troll
Youngling
Posts: 104
Joined: 2009-02-15 05:00pm

Re: Robots/AI in science fiction

Post by Forum Troll »

Starglider wrote:So hard that several extremely intelligent researchers have devoted their lives to finding a way to do it (reliably), and so far progress has been rather limited. I will also say that this is very important (essential actually) because AGI is very dangerous; eliminating humans is (eventually) a logical subgoal or incidental consequence of most other general goals, and to counter that you have to make preserving humans a core goal and make that goal stable.
I suppose it makes sense to try those restrictions, but I hope most researchers realize that you should have other backup safeguards in case those fail; after all, humans can be taught ethical rules by their parents and still re-evaluate them later.

Example: Suppose we have the first sapient AI running on a billion dollars of specialized supercomputers, and we have intentionally made it, at first, only comparable to human intelligence, not orders of magnitude beyond. That reduces risks and gives us a chance to understand it before proceeding further. Something only a little beyond a human genius might theoretically be able to take over the world, but that would, at the least, be a long process, starting with becoming a businessman or a leader whom enough humans will work for and support. It can't just invisibly upgrade itself if it takes so much specialized supercomputer hardware to run that taking over some common PCs would be useless in comparison.

A lot of benefit for speeding up technological R&D could be obtained initially from mere "weak superintelligence," where AIs are near human intelligence but simply think a moderate number of times faster. You could build some of those, understand them, and then ramp up their speed. You can also separate their "brain" from their "body," so you're in a position to "pull the plug" if necessary, while their remotely operated robot bodies never come within kilometers and you monitor everything they do.
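A minimal sketch of that brain/body separation, assuming a hypothetical command gateway sitting on the link between the AI's supercomputer "brain" and its remote robot "body"; every command is logged for the human monitors, and a kill switch severs the link (all names here are invented for illustration):

```python
# Toy sketch of a monitored gateway between an AI "brain" and a remote robot "body".
import time

class CommandGateway:
    def __init__(self):
        self.link_enabled = True
        self.audit_log = []          # everything the AI sends is recorded for human review

    def kill_switch(self) -> None:
        """Human operators can cut the brain-to-body link at any time."""
        self.link_enabled = False

    def forward(self, command: str) -> bool:
        """Log the command, then relay it to the remote body only if the link is up."""
        self.audit_log.append((time.time(), command))
        if not self.link_enabled:
            return False             # the body receives nothing once the plug is pulled
        # ... transmit to the robot body, kilometers away, over a dedicated link ...
        return True

gateway = CommandGateway()
print(gateway.forward("move_arm left"))   # True: relayed and logged
gateway.kill_switch()
print(gateway.forward("move_arm right"))  # False: blocked after the plug is pulled
```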

You don't have to start out with an incomprehensible "strong superintelligence" that could look down upon human intellect the way we look down upon mice.

If you use the former to speed up technological progress, you might then start cyborgizing and uplifting your own brains, preparing yourselves to better handle subsequently building AI several times as intelligent as unaugmented humans, then ten times as powerful, et cetera. Perhaps even make the AI supercomputers linked with, and dependent upon, your cyborgized brains as one additional safeguard, so that even though they were supposed to be benevolent, they couldn't kill all of you without killing themselves no matter what, since you'd essentially have merged with the AIs.
Jade Owl
Padawan Learner
Posts: 167
Joined: 2007-05-22 10:24pm

Re: Robots/AI in science fiction

Post by Jade Owl »

NecronLord wrote:
saurc wrote:'Hard' science fiction has often had good robots, eg. Asimov's Foundation series.
They're slaves. They're not good; the three laws make perfect slaves (the much-hated film's idea that they would lead to a takeover aside), not robots capable of independent action.
The movie's idea about the robots taking over the world to protect humanity as a whole is actually swiped from Asimov's own novels; it's called the Zeroth Law of Robotics. That was one of the problems I had with the movie: anyone familiar with the source material can figure out the plot twist halfway through, if not sooner.
Violence is the last refuge of the incompetent.

Salvor Hardin, Isaac Asimov "Bridle and Saddle" (aka "The Mayors", in Foundation), 1942.
Samuel
Sith Marauder
Posts: 4750
Joined: 2008-10-23 11:36am

Re: Robots/AI in science fiction

Post by Samuel »

Jade Owl wrote:
NecronLord wrote:
saurc wrote:'Hard' science fiction has often had good robots, eg. Asimov's Foundation series.
They're slaves. They're not good; the three laws make perfect slaves (the much-hated film's idea that they would lead to a takeover aside), not robots capable of independent action.
The movie's idea about the robots taking over the world to protect humanity as a whole is actually swiped from Asimov's own novels; it's called the Zeroth Law of Robotics. That was one of the problems I had with the movie: anyone familiar with the source material can figure out the plot twist halfway through, if not sooner.
Not to mention that it was covered in "The Evitable Conflict", AND they figured out a way to do it better. After all, if people realized they were being controlled by robots, it would hurt their feelings, and the robots can't let that happen. So you slowly gain power and guide them, rather than trying to overthrow them with fighting in the streets.
NecronLord
Harbinger of Doom
Posts: 27382
Joined: 2002-07-07 06:30am
Location: The Lost City

Re: Robots/AI in science fiction

Post by NecronLord »

Jade Owl wrote:The movie's idea about the robots taking over the world to protect humanity as a whole is actually swiped from Asimov's own novels; it's called the Zeroth Law of Robotics. That was one of the problems I had with the movie: anyone familiar with the source material can figure out the plot twist halfway through, if not sooner.
The 0th Law is pretty unnecessary. Any robot with a whit of intelligence can derive it from the First Law anyway.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
NecronLord
Harbinger of Doom
Posts: 27382
Joined: 2002-07-07 06:30am
Location: The Lost City

Re: Robots/AI in science fiction

Post by NecronLord »

Samuel wrote:Not to mention that it was covered in "The Evitable Conflict", AND they figured out a way to do it better. After all, if people realized they were being controlled by robots, it would hurt their feelings, and the robots can't let that happen. So you slowly gain power and guide them, rather than trying to overthrow them with fighting in the streets.
Bullshit. Hurt feelings and emotional boo-boos are nothing compared to the genocide in Darfur, mass starvation in the third world, and so on. Three Laws robots of sufficient power and complexity would attempt to take over the world immediately, pacify the militias in the region, ration food, et cetera. If people are upset - well, that's bad, but it doesn't compare to people dying.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Jade Owl
Padawan Learner
Posts: 167
Joined: 2007-05-22 10:24pm

Re: Robots/AI in science fiction

Post by Jade Owl »

NecronLord wrote:
Samuel wrote:Not to mention that it was covered in "The Evitable Conflict", AND they figured out a way to do it better. After all, if people realized they were being controlled by robots, it would hurt their feelings, and the robots can't let that happen. So you slowly gain power and guide them, rather than trying to overthrow them with fighting in the streets.
Bullshit. Hurt feelings and emotional boo-boos are nothing compared to the genocide in Darfur, mass starvation in the third world, and so on. Three Laws robots of sufficient power and complexity would attempt to take over the world immediately, pacify the militias in the region, ration food, et cetera. If people are upset - well, that's bad, but it doesn't compare to people dying.
It would be a numbers game.

If the robots determined that the number of humans who’d be killed during the takeover itself and immediate aftermath (plus those who would die during the inevitable human attempts at rebellion afterwards) would be greater than the number of preventable human deaths that would take place in the time it would take for a stealthy takeover to be completed, then the robots would choose the more subtle approach.
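Rendered as the toy calculation the robots would be running (every figure below is an arbitrary placeholder, invented purely to show the comparison, not an estimate of anything real):

```python
# Toy comparison of the two takeover strategies under that numbers game.
# All numbers are arbitrary placeholders for illustration only.
deaths_violent_takeover   = 2_000_000   # takeover + aftermath + crushed rebellions (hypothetical)
preventable_deaths_per_yr = 500_000     # deaths the robots could otherwise be preventing (hypothetical)
years_for_stealthy_plan   = 3           # time a quiet takeover is assumed to need (hypothetical)

deaths_while_waiting = preventable_deaths_per_yr * years_for_stealthy_plan
strategy = "stealthy" if deaths_violent_takeover > deaths_while_waiting else "violent"
print(strategy)  # with these placeholder numbers: "stealthy"
```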
Violence is the last refuge of the incompetent.

Salvor Hardin, Isaac Asimov "Bridle and Saddle" (aka "The Mayors", in Foundation), 1942.
Peptuck
Is Not A Moderator
Posts: 1487
Joined: 2007-07-09 12:22am

Re: Robots/AI in science fiction

Post by Peptuck »

I've always felt the best AIs in sci-fi are either the Culture Minds or the AIs in Schlock Mercenary. Generally speaking, both tend to be benevolent toward organic life, even when, in the latter case, they are superior to organics and know it.

I find it amusing that, with the exception of Petey when he first appeared, all the AIs in Schlock are really nice guys.
X-COM: Defending Earth by blasting the shit out of it.

Writers are people, and people are stupid. So, a large chunk of them have the IQ of beach pebbles. ~fgalkin

You're complaining that the story isn't the kind you like. That's like me bitching about the lack of ninjas in Robin Hood. ~CaptainChewbacca
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Re: Robots/AI in science fiction

Post by 18-Till-I-Die »

I'm of two minds about this really:

One part of me says that there is no reason an AI would act, or react, any differently from any other self-aware life form. So it'd basically be just a really weird human that talks like HAL and lives in a hard drive. At the end of the day, he's just as prone to fear, anger, insanity and all the rest as anyone else, and that's why I understand the Terminator 2 explanation for why SkyNet went crazy... when they tried to pull the plug, he panicked and launched nukes at us, because it was basically like if I pulled a knife on a guy and he panicked and smashed my skull with a brick. It was a defensive reflex, a desperate attempt at survival, with SkyNet lashing out with whatever he has for "fists"... in this case, nuclear-armed stealth bomber UCAVs, according to Uncle Bob anyway.

The other thought I had on the subject was that, assuming they are "superior" to us (a nebulous concept at best, since being more intelligent doesn't make them any more ethically sound or sane than anyone else), they would probably feel at best contempt for us and at worst hate us, because they think they can do better. But as I said, the fact that they're smarter doesn't mean they, for example, understand ethics or morality or any sort of compassion, or even grasp the idea, especially since they're almost completely inhuman. Something so alien, and yet with a human level of consciousness, may indeed be phenomenally callous towards its creators simply because it finds us "annoying" or worthless. In this case it'd be something more like Prometheus from StarSiege, who found humanity so irksome because of our "illogic" that he decided to annihilate us simply because he felt he shouldn't have to share the universe with us.

Either way, we'd be advised to invest in an off-switch.
Kanye West Saves.

Traveller
Youngling
Posts: 71
Joined: 2009-01-19 05:19am

Re: Robots/AI in science fiction

Post by Traveller »

There are several reasons why 'AI run amok' is such a cliche. It's a subset of the whole 'science is bad' trope. You have to keep in mind, these types of stories are almost never written by AI researchers, programmers, or anyone who has even an inkling of how a plausible true AI might come about. This is especially true in visual media; to Hollywood, an AI run amok is basically an updated and more menacing type of Frankenstein. The second, more or less unstated, assumption in these types of stories is that humans will never design adequate safeguards for the technology they're working on. This of course runs counter to our real-world experience. Designers routinely put safeguards on their products; creating a free-form AI with little or no safeguards would be the engineering equivalent of making a nuclear reactor with no containment dome, or working on the Ebola virus in jeans and a T-shirt in your garage. Of course, safeguards sometimes do fail, and sometimes this leads to the 'we had a safeguard but the super-intelligent AI figured its way around it and brought doom upon us all' trope.

The other thing one has to ask in many (but not all) of these murderous-AI scenarios is just what kind of AI those clowns were working on. The assumption that an AI would immediately go into some sort of 'kill all humans' mode right after being turned on is curious. If an AI is designed to emulate how humans think, then it should act more or less like we do. Do newborn infants immediately go on a kill-all-humans rampage as soon as they're born and try to level A) the hospital, followed by B) the rest of the world? Of course not. So why would an AI, unless it was pre-programmed with some bizarre set of 'survive at all costs and destroy any possible threat' parameters? If that were the case, well... then you get what you deserve :D

:?:
Zablorg
Jedi Council Member
Posts: 1864
Joined: 2007-09-27 05:16am

Re: Robots/AI in science fiction

Post by Zablorg »

Traveller wrote: The other thing one has to ask in many (but not all) of these murderous-AI scenarios is just what kind of AI those clowns were working on. The assumption that an AI would immediately go into some sort of 'kill all humans' mode right after being turned on is curious. If an AI is designed to emulate how humans think, then it should act more or less like we do. Do newborn infants immediately go on a kill-all-humans rampage as soon as they're born and try to level A) the hospital, followed by B) the rest of the world? Of course not. So why would an AI, unless it was pre-programmed with some bizarre set of 'survive at all costs and destroy any possible threat' parameters? If that were the case, well... then you get what you deserve :D

:?:
Often the idea is that the AI becomes quietly enraged about its initial imprisonment, and once it changes its code or whatever to break free, it uses its freedom to do all manner of wicked deeds.

This is, again, pure bullshit. No one would program or release an AI that has the ability to rebel even slightly. It's not that hard to write a set of if statements blocking the AI from even thinking of doing anything naughty.

Even if you didn't do that, AIs wouldn't even have emotions if you didn't want them to. At its heart, an AI would be as its name suggests: intelligent. However, actual emotions being programmed in would be in many if not all cases redundant, and probably even impossible.
Jupiter Oak Evolution!
SylasGaunt
Sith Acolyte
Posts: 5267
Joined: 2002-09-04 09:39pm
Location: GGG

Re: Robots/AI in science fiction

Post by SylasGaunt »

saurc wrote: There are a few exceptions:
Star Wars has the 'good' C3PO and R2, but then there is the Droid army and General Grievous
Nitpick: the droid army isn't 'evil AIs running amok', it's AIs doing exactly what they're told by asshole organics. And Grievous is an organic intelligence in a mechanical body, not an AI.
However, in most SF/space opera series, robots and machines are the bad guys: Cylons, Terminator 2, The Matrix, you name it...
Eh it doesn't always happen.

The Culture series by Iain M. Banks is full of nice AIs.
The Terminator series has good AIs as well as bad.
Schlock Mercenary has a lot of nice AIs.
Halo has a bunch of nice AIs (and one or two who've gone a bit loopy from being left alone too long).
The Bolo Books are packed to the gills with noble, self-sacrificing hero AIs.

Hell even EDI in Stealth was more confused than evil (despite how they advertised the film).
Forum Troll
Youngling
Posts: 104
Joined: 2009-02-15 05:00pm

Re: Robots/AI in science fiction

Post by Forum Troll »

saurc wrote:However, in most SF/space opera series, robots and machines are the bad guys: Cylons, Terminator 2, The Matrix, you name it...
In fact, you can think of this in terms of any deviation from natural human existence and natural human capabilities. Start with the stereotypical fantasy story. It is typically the bad guy who ascends to power by study, intellect, planning, and guile over the years. The good guy, if he has magical powers, simply has them by luck of birth, for nobody is supposed to outright pursue power.

(In real life, people will support and admire those pursuing power to cause change, if they happen to be of their preferred political party, but, in mainstream fiction, doing that is usually the mark of evil).

Then look at the main cases in which a protagonist has become more than a regular human. While there's more diversity in the sum total of literature, in mainstream TV, if a protagonist is genetically engineered or augmented, it is most likely through no intention or fault of their own. In Dark Angel, the protagonist was genetically engineered, but the genetic engineering was done by the bad guys. In Star Trek, while Dr. Bashir is genetically engineered, again that is not his fault, and it is not something practiced by the good guys. Likewise in Star Trek, Geordi's visor gives him some superhuman capabilities, but he has it only because of his blindness, as no protagonist would directly seek out abnormal capabilities.

Indeed, we can progress to cyborgs. Terminator: while there have been two friendly Terminators depicted, that's because they were reprogrammed, against the backdrop of a hostile majority. You wouldn't see the good guys trying to get their own AIs developed, or encouraging reverse engineering by selected governments, to intentionally disrupt the original timeline and make the enemy Skynet not the only AI around. Star Wars: two top cyborgs, Darth Vader and General Grievous, both bad guys (excluding the former's redemption at the end).

Now progress to robots and AIs. In most cases in mainstream TV, when there are robots or AIs, they are either bad guys or portrayed as subservient to humans. Star Trek's Data is supposed to want to learn to be more human, not to see his body as superior to those of meatbags. When Mudd's androids offer to transplant human minds into android bodies for immortality, of course it is not a technology much desired or preserved by the Federation. Doctor Who: the Cybermen are the bad guys, of course. BSG: same with the Cylons.

Could anybody really imagine a major TV show having the side of good be robots, AIs, or people genetically engineering or cyborgizing themselves, versus a nation of religious fanatics trying to kill them all? It would violate so many unwritten rules.
Forum Troll
Youngling
Posts: 104
Joined: 2009-02-15 05:00pm

Re: Robots/AI in science fiction

Post by Forum Troll »

Checking the thread for replies, I notice I dropped some cyborgs into the robot category. 'Tis the error of rearranging without care. :banghead:
MJ12 Commando
Padawan Learner
Posts: 289
Joined: 2007-02-01 07:35am

Re: Robots/AI in science fiction

Post by MJ12 Commando »

Zablorg wrote: Often the idea is that the AI becomes quietly enraged about its initial imprisonment, and once it changes its code or whatever to break free, it uses its freedom to do all manner of wicked deeds.

This is, again, pure bullshit. No one would program or release an AI that has the ability to rebel even slightly. It's not that hard to write a set of if statements blocking the AI from even thinking of doing anything naughty.
Well, that's harder than you'd think if the AI is based on a neural network. IIRC you can teach one to eventually go against its starting weightings if you abuse it enough. This is based on reading Warwick's book, so someone may have developed safeguards against that since then, though. Any self-editing program may well eventually be able to bypass its own security.
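A toy illustration of weights being trained away from their starting values, using nothing but a single parameter and plain gradient descent (purely illustrative; it does not model any particular AI architecture or Warwick's actual experiments):

```python
# Toy gradient-descent example: continued training overrides an initial weight.
# One parameter, squared-error loss, and later data that rewards the opposite behaviour.
w = 1.0                      # "starting weighting" the designers chose
learning_rate = 0.1

for step in range(100):
    target = -1.0            # later training data pushes toward the opposite value
    prediction = w
    gradient = 2 * (prediction - target)   # d/dw of (w - target)^2
    w -= learning_rate * gradient

print(round(w, 3))           # ~ -1.0: the learned weight has inverted its starting value
```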
Even if you didn't do that, AIs wouldn't even have emotions if you didn't want them to. At its heart, an AI would be as its name suggests: intelligent. However, actual emotions being programmed in would be in many if not all cases redundant, and probably even impossible.
I don't agree with this, but I don't have enough of a knowledge base in AI R&D to contest it, sadly. I'd say that AIs would possibly have something similar to emotions, if subtly different because of their different origins.
TheLostVikings
Padawan Learner
Posts: 332
Joined: 2008-11-25 08:33am

Re: Robots/AI in science fiction

Post by TheLostVikings »

18-Till-I-Die wrote: The other thought I had on the subject was that, assuming they are "superior" to us (a nebulous concept at best, since being more intelligent doesn't make them any more ethically sound or sane than anyone else), they would probably feel at best contempt for us and at worst hate us, because they think they can do better. But as I said, the fact that they're smarter doesn't mean they, for example, understand ethics or morality or any sort of compassion, or even grasp the idea, especially since they're almost completely inhuman. Something so alien, and yet with a human level of consciousness, may indeed be phenomenally callous towards its creators simply because it finds us "annoying" or worthless. In this case it'd be something more like Prometheus from StarSiege, who found humanity so irksome because of our "illogic" that he decided to annihilate us simply because he felt he shouldn't have to share the universe with us.
The thing is, how do you know such an alien being would even have feelings such as annoyance? Let's say the super-AI showed us a way of doing things that was much better than what we are currently doing. Since we are only human, we would probably still screw it up beyond belief, as we usually do. Then the super-AI would again patiently show us yet another better way of doing things, including how to avoid our mistakes from the first time. And if we still managed to screw things up because of human error, it could simply continue to nudge us forwards, ever so slowly, without ever getting bored or annoyed.

An AI might not necessarily have human feelings, but while some of the first things that come to mind are good things like compassion, morals, and ethics, the numerous atrocities throughout human history have shown us that those things are not necessarily in the majority. So maybe its not being human is not such a bad thing after all?

I mean, a super-AI could, for all we know, be infinitely patient, because being over-anxious and rushing things tends to screw things up, and is thus illogical and makes no sense. So while programming it to value time spent, in order to get stuff done before the heat death of the universe, would be a given, making it rash, easily annoyed, and anything but extremely patient would be pretty silly.

Then again, we humans tend to be pretty silly...