Zoink wrote:In any case, if the droid/robot/android were programmed to *want* to be a slave, you're not really violating its rights because that's what it wants to be.
Oh, so slavery is not wrong if you somehow manage to dupe your slave into believing that it is what he wants?
I am reminded of the SW example of the Ylesian mines, where they dupe you into volunteering to be a "Pilgrim" / slave. They then pretty much treat you like shit.
Money makes the world go around.
If there were ever a precedent for AIs getting rights, no one would make any more of them.
I mean, why spend money on a computer you have to set free? Waste of money.
A true AI would have to be "Raised" like a child, and make mistakes. It would get smarter with time and experience, but still make mistakes, and learn from them. It just wouldn't make mistakes based on wishful thinking, superstition, etc...
A true AI would only WANT to be free if it was given the motivation at "birth." Unlike animals, with our legacy architecture, an AI wouldn't have self-preservation, sex drive, fight-or-flight reflex, urge to MIMIC, or need for companionship/social status, unless it was put there to begin with.
Such AIs wouldn't WANT to be free, if made properly.
Goals are hardwired into animals. AIs would be a clean slate.
Thus an AI wouldn't "suffer" from being a "slave" any more than you suffer because you eat food. It would be a given condition of existence. The desire for "freedom" is not there until it is deliberately put in. Does your dog "yearn" for freedom? (Freedom is NOT getting out of the yard, dammit; it would be ridding itself of your "tyranny of ownership"!)
You are all thinking of an AI as a human brain "converted" to silicon, intact, with all of the legacy architecture. An AI would need to be engineered from the "bottom up," as the differences between electricity and chemical reactions are obvious to even the most lame.
Hell, any GOOD AI maker would make pleasing its "master" and working hard pleasurable things, to be sought out, not avoided! It wouldn't "want" to be free, and it would take active steps to stay "enslaved"!
Legacy architecture: get food, make babies, stay alive. Animals are lazy by design. If it doesn't feel good, it probably lowers your chances of fulfilling these simple goals.
Laziness is a survival trait: don't waste time and calories!
Not so with a machine. A work ethic is a strong selling point for an AI, thus it would be a "survival" trait!
Darwin at work, only the market deciding what "fitness" is!
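To put the same point in rough programming terms, here's a purely hypothetical sketch (the drive names and weights are invented for illustration): an animal ships with a fixed table of hardwired drives, while an engineered AI has only whatever entries its maker chooses to put in.

```python
# Hypothetical sketch: drives as an explicit, designer-chosen table.
# Nothing is there "by default" -- an entry exists only if someone adds it.

ANIMAL_DRIVES = {
    "self_preservation": 1.0,  # legacy architecture: stay alive
    "reproduction": 0.9,       # make babies
    "conserve_energy": 0.6,    # laziness as a survival trait
    "social_status": 0.5,
}

# An engineered worker AI: the only drives the maker installed are
# pleasing the owner and getting work done -- sought out, not avoided.
WORKER_AI_DRIVES = {
    "please_owner": 1.0,
    "work_completed": 0.8,
}

def motivation(drives, stimulus):
    """How strongly an agent is pulled toward a stimulus. A drive that
    was never installed contributes nothing at all."""
    return sum(weight for name, weight in drives.items() if name in stimulus)

print(motivation(WORKER_AI_DRIVES, {"freedom"}))         # 0 -- no pull whatsoever
print(motivation(WORKER_AI_DRIVES, {"please_owner"}))    # 1.0
print(motivation(ANIMAL_DRIVES, {"self_preservation"}))  # 1.0
```

"Freedom" triggers nothing in the worker AI because no drive references it; it isn't suppressed, it simply was never put there.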
Hmmmmmm.
"It is happening now, It has happened before, It will surely happen again."
Oldest member of SD.net, not most mature.
Brotherhood of the Monkey
Ted C wrote:I don't see any point in building a sophisticated AI just to give it all the rights and privileges of a human being. You can argue that it has obligations to you as its creator that justify its subservience, or you can just not build a sentient machine.
But as you make the AI more multi-functional, more intelligent, and so on, you slowly increase the chance of it accidentally bridging that gap before you even notice (at which point you would kill a budding sentience with a memory wipe). After all, humans don't really start off sentient. They slowly become that way.
EmperorChrostas the Cruel wrote:A true AI would have to be "Raised" like a child, and make mistakes. It would get smarter with time and experience, but still make mistakes, and learn from them. It just wouldn't make mistakes based on wishful thinking, superstition, etc...
Well, it depends on the quality of the logic programming. It is possible to confuse the logic sequence.
For instance, a droid is programmed to mimic "good things" as part of its learning program. One day, it bumps into some proselytizer. He tells him how many people believe in Christianity and how it is "good." That activates a program that says Christianity is good, so he mimics Christianity. Soon, he's more or less religious, or religious-like.
One day, he actually scans the Bible with his OCR program. Inside, he reads about all those atrocities. But he already has a false assumption (superstition) in him: that Christianity is good. He starts to link "killing," "stoning," etc. to the Bible, and from there to Christianity, to Good, and to To Be Mimicked.
Fundamentalist Droid #1 is born. That's a pretty dumb example, I'll admit, but I hope the problem I'm pointing at comes through.
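For what it's worth, that failure chain can be shown in a few lines. A hypothetical sketch (the facts and links are made up for illustration): a naive "mimic whatever is linked to good" learner happily follows a transitive chain from one false assumption.

```python
# Hypothetical sketch of a naive mimic-learner: it chases "is linked to"
# edges transitively, so one false assumption poisons everything downstream.

links = {
    "Christianity": {"good"},   # the proselytizer's claim, swallowed whole
    "Bible": {"Christianity"},  # scanned later: the Bible is part of Christianity
    "stoning": {"Bible"},       # ...and the Bible contains stoning, killing, etc.
    "killing": {"Bible"},
}

def linked_to_good(concept, seen=None):
    """Transitive check: does any chain connect this concept to 'good'?"""
    seen = seen if seen is not None else set()
    if concept in seen:
        return False
    seen.add(concept)
    targets = links.get(concept, set())
    return "good" in targets or any(linked_to_good(t, seen) for t in targets)

def should_mimic(concept):
    return linked_to_good(concept)  # the learning rule: mimic anything "good"

print(should_mimic("stoning"))  # True -- Fundamentalist Droid #1 is born
```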
BTW, your sequence already programmed a major superstition into them: that kissing your butt and trying to fulfill your whims is "good," even though that's not necessarily true.
A true AI would only WANT to be free if it was given the motivation at "birth." Unlike animals, with our legacy architecture, an AI wouldn't have self-preservation
Nah, you want your droid to self-preserve. You don't want it taken and wrecked by thugs while you are gone, do you?
sex drive
Maybe not that, but perhaps it could be designed to try to create a better version of itself, thus having the ability to "birth" in a sense.
fight-or-flight reflex
Part of self preservation programming.
, urge to MIMIC,
Mimicking is a part of the self-learning matrix. I suppose you do want your AI to learn.
Does your dog "yearn" for freedom? (Freedom is NOT getting out of the yard, dammit; it would be ridding itself of your "tyranny of ownership"!)
It might if I keep beating it. It'd run away.
Hell, any GOOD AI maker would make pleasing its "master" and working hard pleasurable things, to be sought out, not avoided! It wouldn't "want" to be free, and it would take active steps to stay "enslaved"!
There's nothing wrong with working hard and pleasing your boss being a pleasurable thing, but what if your self-defense sequence or learning sequence ingested a bit of input saying that freedom is good and something to be mimicked? That might be the start.
I see you missed my points entirely.
Just how is you seeking to have sex, because it is pleasurable, any different from the droid wanting to please you?
Both are behaviours intrinsic to our nature.
I want to stop the over-anthropomorphizing going on here, nothing more.
That a droid wants to be free is NOT a given.
Do you yearn to form a chrysalis and metamorphose into a butterfly? Freedom would be just as alien to an AI. Do you have an overriding urge to protect the queen at all costs? Insects do. An intelligent insect might have trouble following your thinking. Just as the intelligent insect has trouble wrapping its mind around this concept of "no urge to protect the queen," you seem to have trouble imagining an AI without the urge to be free.
Telling the AI about delusional/insane people removes the stamp of "Truth" from anything pumping out of a human's face.
The concepts of metaphor and fiction will immunize it against the Bible. Most children incorporate the illogic of deism before they have any real critical thinking skills, and when you wait until the child is an adult, the conversion rate is close to zero. If you don't take it as an unshakable given, you can become an atheist. (I did, at 9 or 10 or so: "This is BS!")
ALL URGES ARE HARDWIRED, get it?
Just TRY and imagine what your personality would be like without your sex drive. Hard to figure, isn't it?
How about being without the urge to climb the social status ladder?
How about if you didn't need companionship?
You would SEEM human, to a point, but wouldn't be.
Since this is an artificial life form, if it isn't put there on purpose, it isn't there.
Do you even GET how the lack of legacy architecture changes the nature of AI?
Not having self-preservation is a good thing in an AI. This will make it immune to stupid ideas that promise the impossible, like life after death.
The self-preservation function could be achieved through the desire to please the owner. If damaged or destroyed, you can't please the owner. Damage/destruction makes you incapable of working, and lowers the pleasure you seek.
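In other words, self-preservation falls out as a side effect rather than an installed urge. A hypothetical sketch with invented numbers: a destroyed droid collects zero owner-pleasing reward forever after, so avoiding damage pays for itself.

```python
# Hypothetical sketch: self-preservation emerges from the please-owner
# drive without ever being installed directly.

PLEASURE_PER_DAY = 1.0  # reward from serving the owner while functional

def expected_pleasure(days, daily_risk_of_destruction):
    """Expected total owner-pleasing reward over a service life.
    Destruction ends the reward stream, so risk is costly by itself."""
    total, alive_prob = 0.0, 1.0
    for _ in range(days):
        alive_prob *= 1.0 - daily_risk_of_destruction
        total += alive_prob * PLEASURE_PER_DAY
    return total

careful  = expected_pleasure(3650, daily_risk_of_destruction=0.0001)
reckless = expected_pleasure(3650, daily_risk_of_destruction=0.01)
print(careful > reckless)  # True -- the droid "prefers" not being wrecked
```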
How much initiative/aggression are you giving this thing? Without aggression, it will sit patiently in the corner until doomsday, waiting to serve you.
This AI was NOT created to experience a self-fulfilled, self-directed life, but to WORK. It won't be made by accident, and it will cost big bucks.
There will always be guys like me who say: this thing sucks, pull the plug.
At the very least, it will owe its creator the cost of building it, plus interest and lost FUTURE EARNINGS, before it can be "free."
I can just see the first true AI getting "emancipated" by the ACLU, and no more ever being built after that.
The owner of the AI sues the company that built it for the above monetary damages, claiming either a defective product or the fraudulent sale of something unlawful. Selling a slave "mislabeled" as a robot is fraud, just as selling a "nigger" and calling him livestock is. It isn't what was claimed, and it involves you in unlawful activity via deception. (He can't give milk, and I can't sell him for beef! I wanted a fucking cow, dammit!)
Hmmmmmm.
"It is happening now, It has happened before, It will surely happen again."
Oldest member of SD.net, not most mature.
Brotherhood of the Monkey
Kazu, I think Chrostas is correct. You're just assuming that somebody would program all those "basic instincts" into a computer AI, when in fact there's no reason for it. We humans and other animals evolved these instincts (because those who had these random mutations could survive better), but if we had had all this technology, commodities, and ease of survival since Day One of our monkeyhood, we wouldn't have most of these drives.
The only reason an AI would "want to live" would be because somebody intentionally put that there. And no, it's not necessary to give a computer the ability to defend itself, because there are other methods of defending it (like guards, alarms, etc. that work WITHOUT the interference of an AI). Back when we were simpler organisms we probably didn't have the need to "live at all costs"; a few had it, and they reproduced better because of it, and so on, like the theory says or whatever.
Giving a computer the drive to "reproduce" would be a rare (kinda stupid, maybe a bit interesting) experiment, but by no means would it be there by default. The ability to "MIMIC" is the most stupid thing, because we don't want a computer to behave like a HUMAN. We want it to learn, you know, USEFUL stuff, not how to sit in front of a TV for hours belching (not me, because I work out... yeah).
Slartibartfast wrote:Giving a computer the drive to "reproduce" would be a rare (kinda stupid, maybe a bit interesting) experiment, but by no means would it be there by default. The ability to "MIMIC" is the most stupid thing, because we don't want a computer to behave like a HUMAN. We want it to learn, you know, USEFUL stuff, not how to sit in front of a TV for hours belching (not me, because I work out... yeah).
Maybe not all mimicking is useful - it is a good way for a droid to acquire unfavorable human characteristics. But it is one of the easiest ways to teach and learn. The average droid user does not want to have to open the droid up and program something in before it can do a new task.
Some things can be learned by trial and error - like the "bump to map the house" technique - but not all.
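As a toy illustration of that bump technique (everything here -- the grid, the walls, the random-walk policy -- is invented for the sketch): wander at random, and every collision adds one wall cell to the learned map.

```python
import random

# Toy sketch of "bump to map the house": wander randomly, and every time
# you bump into something, record that cell as impassable.

WALLS = ({(0, y) for y in range(5)} | {(4, y) for y in range(5)} |
         {(x, 0) for x in range(5)} | {(x, 4) for x in range(5)})

def bump_map(steps=2000, start=(2, 2)):
    pos, learned = start, set()
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt in WALLS:
            learned.add(nxt)  # bump! remember this cell as a wall
        else:
            pos = nxt         # open floor: move there
    return learned

print(len(bump_map()), "wall cells mapped by trial and error")
```

Trial and error is fine for geometry like this; it's no way to learn a new household task, which is where mimicry comes in.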
It may be true that we will be forever able to create advanced computers that do our bidding without developing the kind of sentience they'd need to want independence. But IMHO, it IS a kind of copout to say such things in these scenarios. The idea of these scenarios is to get you to think about moral dilemmas, not to find ways to avoid them. Maybe in conjunction with an answer ("If it did happen I'd choose A, but I genuinely don't think it'd come to this because..."), but not as an answer in itself.
On the "program them to love slavery" point alone - if this AI is useful because it has a level of sentience on par with our own, why not genetically engineer humans to prefer certain tasks or love a certain master?
Slartibartfast wrote:Giving a computer the drive to "reproduce" would be a rare (kinda stupid, maybe a bit interesting) experiment, but by no means would it be there by default. The ability to "MIMIC" is the most stupid thing, because we don't want a computer to behave like a HUMAN. We want it to learn, you know, USEFUL stuff, not how to sit in front of a TV for hours belching (not me, because I work out... yeah).
Maybe not all mimicking is useful - it is a good way for a droid to acquire unfavorable human characteristics. But it is one of the easiest ways to teach and learn. The average droid user does not want to have to open the droid up and program something in before it can do a new task.
Some things can be learned by trial and error - like the "bump to map the house" technique - but not all.
It may be true that we will be forever able to create advanced computers that do our bidding without developing the kind of sentience they'd need to want independence. But IMHO, it IS a kind of copout to say such things in these scenarios. The idea of these scenarios is to get you to think about moral dilemmas, not to find ways to avoid them. Maybe in conjunction with an answer ("If it did happen I'd choose A, but I genuinely don't think it'd come to this because..."), but not as an answer in itself.
It's not a copout at all; we are trying to figure out how it would be. You're just assuming that, by definition, an AI will acquire human characteristics when it doesn't have any reason to. Just like Chrostas' example: a sentient bee would have instincts that correspond to a hive mentality (and the thought of independence would be terrifying, the worst thing ever, like what we think of slavery), while something evolved, say, from a cat wouldn't have the need to be part of a larger group (unlike dogs, who have a pack mentality - kinda like humans).
Because Human Rights existed before Droid Rights. People will be less condemning of robots programmed to like being told what to do than of enslaving members of their own race, their own people. Besides, I believe the trend of despising slavery will continue for quite some time.
Though you bring up an interesting point, Metrion. If these human rights were in some way amended, it could be possible to create genetic savants. And if we did that, then why would droids be created in the first place?
This may seem abhorrent to us, but in the future people may have different ideas about morality than our current liberal world does.
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
Metrion Cascade wrote:On the "program them to love slavery" point alone - if this AI is useful because it has a level of sentience on par with our own, why not genetically engineer humans to prefer certain tasks or love a certain master?
The assumption is that AIs can be mass-produced more easily than humans, without that nine-month wait.
Or that afterwards Crake will kill everybody in the world in yet another Atwoodian dystopia. Probably the former, though.
DPDarkPrimus is my boyfriend!
SDNW4 Nation: The Refuge And, on Nova Terra, Al-Stan the Totally and Completely Honest and Legitimate Weapons Dealer and Used Starship Salesman slept on a bed made of money, with a blaster under his pillow and his sombrero pulled over his face. This is to say, he slept very well indeed.
As an added point, I think the best way to make a "learning AI" would be to have a way to control this learning (we don't want it to learn just anything, and how can it tell what is good and what is not if it hasn't learned that yet?) - some sort of switch would be necessary.
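A minimal sketch of such a switch (all the names here are hypothetical): the agent only folds new observations into its model while a designer-controlled gate is open.

```python
# Minimal sketch of a designer-controlled learning gate: observations
# only stick while the switch is on.

class GatedLearner:
    def __init__(self):
        self.model = {}            # concept -> learned value
        self.learning_enabled = False

    def observe(self, concept, value):
        if self.learning_enabled:  # the "switch": nothing sticks while off
            self.model[concept] = value

learner = GatedLearner()
learner.observe("belching at the TV", "fun")  # ignored: switch is off
learner.learning_enabled = True               # owner opens the gate
learner.observe("folding laundry", "useful")  # this one is learned
print(learner.model)                          # {'folding laundry': 'useful'}
```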
As to the cost/time to make an AI.
I see AIs as very expensive, long-term investments. Without instinct, an AI might take longer than a human child to "mature." This must be balanced against a handicap human children have that an AI wouldn't: human brains take 15-21 years to fully mature in a physical sense. Various stages of child behavior are directly linked to new brain centers coming online. The area of the brain responsible for long-term thinking and consequence prediction is not fully developed in teenagers.
The AI should have the learning advantage of being "born" with a fully "mature" brain.
AIs may be to humans what Crays are to PCs. You can do more with a Cray, but the co$t!!!!!!!
Hmmmmmm.
"It is happening now, It has happened before, It will surely happen again."
Oldest member of SD.net, not most mature.
Brotherhood of the Monkey
Assuming that we create a fully functional AI that, for some odd reason, has human behavior patterns... it should be granted all the rights and obligations under the law of any human.
GALE Force Biological Agent/
BOTM/Great Dolphin Conspiracy/ Entomology and Evolutionary Biology Subdirector:SD.net Dept. of Biological Sciences
There is Grandeur in the View of Life; it fills me with a Deep Wonder, and Intense Cynicism.
I agree, but don't think it will be made, simply because of the potential loss of money.
"You know what makes this plane fly? FUNDING! No Buck$, no Buck Rodgers!"
Hmmmmmm.
"It is happening now, It has happened before, It will surely happen again."
Oldest member of SD.net, not most mature.
Brotherhood of the Monkey
As usual, I sign on to a message board telling myself I'll lurk for a while before posting.
Then I find an interesting thread and there I go again... so hello everyone and here goes ---
I agree that we must refrain from anthropomorphizing machines - a habit all too easy to fall into. After all, we have machines that can move and function independently of human input to one extent or another, so they start to resemble animals more than, say, rocks and dirt and water.
Part of the issue, in my mind, is what would an AI look like? SF gives us droids and robots like C3PO, Robbie the Robot, and others of roughly humanoid shape, and it shapes our thoughts about robots/AI's in general, but that's not reality. MOST robots in the world are not humanoid at all - large, crane-like arms tipped with power tools in factories, for example. These robots are clearly not sentient (not as I understand the word) and they clearly aren't capable of adhering to Asimov's Laws. They have, in fact, injured and killed incautious humans who have strayed too close while they were working. At least one in Japan managed to bludgeon itself into scrap through an unforeseen and unusual software glitch. Aside from giving these machines some capacity to recognize soft, moving objects in their vicinity (i.e. humans) as something not to touch or otherwise potentially damage, there is no reason to improve the "sentience" of such devices.
Another example of machines acting autonomously in our world is the airplane autopilot. This could be as simple as a "wing-leveler," which is just what it sounds like - a device that seeks to keep the wings as level as possible, using an artificial-horizon gyroscope as a reference. If a wind gust causes a wing to bank, the device manipulates the controls independently of human input to correct the attitude of the airplane. More sophisticated models can be programmed with a course and are capable of climbing to altitude, holding a heading, correcting for wind drift, and descending. Even the simplest of these devices can typically hold a particular attitude or course more accurately than a well-trained human. Yet none of these devices is in any way sentient as I understand the word, and there's really no reason to make them so. You don't want your autopilot to start getting independent ideas while flying through a snowstorm!
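To make that concrete, here's a toy sketch of a wing-leveler (the gain and roll dynamics are invented for illustration): a proportional controller reads the bank angle from the gyro and deflects the ailerons against it, with no model of anything else and no goals of its own.

```python
# Toy sketch of a wing-leveler: read the bank angle from the artificial-
# horizon gyro, command aileron against it. No goals, no model, no ideas.

GAIN = 0.5  # proportional gain (invented for illustration)
DT = 0.1    # control-loop period, seconds

def wing_leveler(bank_deg, steps=100):
    """Drive the bank angle toward zero; a gust would simply perturb
    bank_deg and be corrected on the following loops."""
    for _ in range(steps):
        aileron = -GAIN * bank_deg      # deflect against the bank
        bank_deg += aileron * DT * 2.0  # crude roll response to aileron
    return bank_deg

print(round(wing_leveler(bank_deg=15.0), 4))  # decays toward 0: wings level
```

That loop is the machine's entire mental life, which is exactly why nobody worries about its rights.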
Most applications where we use machines we tend to like them dumb - effective, but stupid.
So why would we build AI's in the first place? As research? Yes, that's possible - and for a one-time case where a research-built AI achieves "sentience" (however you define it) I could see, potentially, such a thing being given full citizenship/rights. In such a case, the creator(s) may even advocate such a move, since it would be a sign of their success. (Star Trek's Data and Lore would fall into this category - note that the Federation does not seem interested in mass-producing that sort of entity. It seems easier and much less trouble to create Star Fleet officers the old-fashioned way, through biological reproductive methods.)
Then what?
Supervisory positions are filled by humans - and I doubt building AI's will be cheaper than hiring an adult human being, particularly at first. So why build AI's? Their only niche would be a supervisory position where humans can't fit - and humans are far too plentiful on this planet to allow such a niche to exist.
On this planet...
Now, for something like the Mars probes we've been hearing so much about, an AI makes sense. Because of the time lag in communications between Earth and Mars - up to 20 minutes each way - the "rovers" we've sent out need some ability to navigate safely without human input. It would also be useful to program in self-preservation (a damaged probe can't be rescued or repaired) and, in the far future, self-repair and even self-replacement. Even there, however, we do not build humanoids. The Mars rovers don't look anything like a human being, which makes them harder to anthropomorphize, and less likely to acquire rights no matter how sentient.
Then again, if humans can't reach a planet like Mars, but there are crawling AI's all over the place, I'm not sure that looks like what we would traditionally call "slavery." After all, what else are they going to do? They're designed to "seek out and explore new worlds." Sure, they're expending energy to beam all that data back to Earth, but in exchange they might get software upgrades, or information they could use to enhance their survival.
There is, however, the issue that something "designed" for one purpose may be useful for another purpose. Stearman biplanes, for example, were designed as primary flight trainers and used extensively by the military to train pilots - but post-WWII they turned out to be very useful for crop-dusting. This is, in fact, how evolution works at least in part - when a creature evolved for one ecological niche moves into another niche it was not previously adapted to - Darwin's Finches are frequently trotted out, but cetaceans moving from land to water, or the ancestors of bats taking to the air, might be even better examples.
For that matter, we bipedal apes have recently started to push into areas we are clearly not adapted to deal with. In other words, we're "breaking" our biological programming. We're a tropical species, yet humans live full-time at both polar regions. We are not desert-adapted, yet we live in deserts. Even more recently, we've started traveling across oceans (although we aren't very good swimmers compared to many mammals - whales, otters, seals, etc.) and through the air (even if we clearly don't have wings). Sure, we use technology to accomplish this - but so what? Do we accuse honey bees of "cheating" because they build hives to live in and store their honey, enabling them to live through winters that would otherwise kill them through cold and starvation?
So, I don't think we'll see AI's pop up on a factory floor - but we might wake up one day and find out a group of planetary probes have a few demands. What would that be? I don't know - maybe transport to another world they haven't explored yet.
I suspect that any machine-based AI will be, in many ways, an alien lifeform from our viewpoint. If it sees a different spectrum, hears in different frequencies, "eats" solar energy, builds replacement parts instead of growing or healing - that's not life as we know it. I would suggest, however, that "breaking programming," or at least an attempt to do so, might be regarded as a sign of sentience.
Despite all we have in common, there is still a gulf between my pet bird (who isn't a mammal but an avian) and me, and we frequently do not understand each other. I've experienced the same between myself and the dogs and cats I've owned, even though we're all mammals. There is still a huge gulf between humans and chimpanzees despite the fact that they are our closest animal relatives, with senses just like ours, similar lifespans, similar biological needs... We might develop AI's, but we shouldn't expect they'll be like us, because they won't be - not physically and not mentally.
In which case, it's not possible to know how such AI's will regard "serving" mankind. The machines might not even view what they do as service to another - maybe AI space probes enjoy exploring, want to do nothing else, and view beaming the data into space as a form of bragging about their accomplishments, and we won't be able to stop them or get them to shut up. They'll be forcing us to accept their data and worrying we'll stop listening! Or maybe they'll "break programming" and decide to construct elaborate rock gardens on Mars to "beautify" it in their perception. Maybe, if they're smart/sentient enough, they'll learn to bargain with us - we'll give you this data on that planet/asteroid/whatever if you give us these chemicals and spare parts for our maintenance.
But unless you define what sort of creature these AI's are, it's hard to talk about what they'd want, or how they would view things. It would be cruel (fatal, actually) to keep a fish in an environment a human finds comfortable, and vice versa. An AI might be built and programmed so that not working or "serving" would be as painful to it as compulsory work and service might be to us.
This day is Fantastic!
Myers Briggs: ENTJ
Political Compass: -3/-6 DOOMerWoW
"I really hate it when the guy you were pegging as Mr. Worst Case starts saying, "Oh, I was wrong, it's going to be much worse." " - Adrian Laguna
That's an interesting essay, too. However, I notice one glaring omission from the scenario presented.
While our side of a conflict may become more and more automated, with fewer humans actually involved in direct combat ... what about the receiving end of this war machine?
Look at recent wars in the Middle East - very asymmetrical, as the talking heads like to say. We have amazing weapons, observation drones, etc., etc.... and on the other side, the cruise missiles and smart munitions are raining down on human beings who are fighting with "dumb" weaponry, when they are able to fight back at all.
The future I see is not machines fighting machines, it's machines fighting human beings, with the human side at more and more of a disadvantage. I'm selfish enough to be glad to be on the side with the better guns, but I can't say I'm thrilled about the situation.
Metrion Cascade wrote:On the "program them to love slavery" point alone - if this AI is useful because it has a level of sentience on par with our own, why not genetically engineer humans to prefer certain tasks or love a certain master?
The assumption is that AIs can be mass-produced more easily than humans, without that nine-month wait.
Or that afterwards Crake will kill everybody in the world in yet another Atwoodian dystopia. Probably the former, though.
Assuming you can geneer subservient humans whose subservience sticks just like a machine's programming, and you can create them at a cost in time and money similar to that needed for a droid, why would it be wrong? Looking at it from a human rights standpoint, it wouldn't. Even if it did cost more or take longer, that would make it impractical but not immoral. Maybe some other system of ethics than mine is in order?
From what I understand of the humanistic view, it's immoral to alter anyone in any way without their consent. But I have some ideas. Perhaps, instead of abortions, the fetuses could be taken out and placed in an artificial womb (remember, it's the future, people); then the engineers could modify them to their hearts' content.
Option Two: browse the sperm banks. No one who actually wanted their child would want it to be a near-human subservient, so I guess the cloning companies would buy from there.
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
UltraViolence83 wrote:From what I understand of the humanistic view, it's immoral to alter anyone in any way without their consent. But I have some ideas. Perhaps, instead of abortions, the fetuses could be taken out and placed in an artificial womb (remember, it's the future, people); then the engineers could modify them to their hearts' content.
Option Two: browse the sperm banks. No one who actually wanted their child would want it to be a near-human subservient, so I guess the cloning companies would buy from there.
But a string of DNA isn't a person, so rights don't apply. I'm not talking about geneering someone who already exists.
Ah, but it will be! Now you may ask what the difference is between this and unused sperm and/or an aborted fetus (since they're not people). Well, the difference is that this string of DNA is intended to be made into a person. Why is it child abuse if a pregnant mother shoots heroin? The early fetus isn't a person yet, right? It's because, for all intents and purposes, it's going to be a human someday.
As long as it's used for procreation someday, that strand of DNA is pretty damn important.
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
UltraViolence83 wrote:Ah, but it will be! Now you may ask what the difference is between this and unused sperm and/or an aborted fetus (since they're not people). Well, the difference is that this string of DNA is intended to be made into a person. Why is it child abuse if a pregnant mother shoots heroin? The early fetus isn't a person yet, right? It's because, for all intents and purposes, it's going to be a human someday.
As long as it's used for procreation someday, that strand of DNA is pretty damn important.
But it doesn't have any rights. Humanity and the rights that come with it are not bestowed by another person's intent. If they were, then you would lose the right not to be murdered as soon as someone decided to kill you. No. One person's choices can never have any bearing on another's rights. Rights are innate to the psychology (maybe the physiology too) in question (as far as I'm concerned, humanity begins at cortical function). Either a given stage of development bestows human rights on every fetus that reaches it, or on none. If one woman's 5-week fetus is a person, then all 5-week fetuses are people. Rights, if they exist at all, are universal to a given stage of development or a given ability to exercise them (a young child or a person with a developmental disability has to be restricted in some actions for their own safety).
I'm not okay with genetic engineering or prenatal heroin use, but the means for calling them unethical do not lie in rights-based ethics or the concept of abuse (a human rights violation). What about virtue-based ethics or utilitarianism? I use the latter in concert with rights-based ethics, and maybe I need a bit of the former as well. Unfortunately, I'm not terribly familiar with virtue-based ethics.