AIs and population control

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: AIs and population control

Post by Elaro »

Which is why you don't try to make an AI until you understand the principles of conscious action and choice.

Personally, I just want to work on going from a mathematical description of a problem to a mathematical description of its solution(s), with proof. Then we pose the problem of the Good Person, and presto! The machine spends years producing the code for an artificial person that "plays well with others", in that it doesn't try to kill everyone else.

Such a person might possibly conclude that human populations need to be controlled, both as in "manage the growth of" and as in "political rule"; but if they arrived at such a conclusion, they should have some form of empirico-mathematical proof. Now that's a good question: what will the leaders of the world think if they're presented with proof that they've got to sterilize some percentage of the population?
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
Broomstick
Emperor's Hand
Posts: 28771
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: AIs and population control

Post by Broomstick »

Even better, if that percentage to be sterilized included some or all of those same leaders....
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid. - Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: AIs and population control

Post by Purple »

Honestly, I think we should just go for it and see what happens. The way things are developing in science these days, all the good stuff will only happen after I am dead. And frankly, that sucks big time. I am so sick of it, in fact, that I could throw up.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

Elaro wrote:Which is why you don't try to make an AI until you understand the principles of conscious action and choice.
Except that no one pays any attention to that in practice. The overwhelming majority of AI research operates on the 'throw stuff* together and see what happens' principle. Where 'stuff' is some combination of 'some cool-looking algorithms I scribbled on the back of a napkin last year' and 'the key ideas from the last few interesting papers I read, with some tweaks'. Applied AI is a bit different, in that it is more about monetising the state of the art than advancing it.
Personally, I just want to work on going from a mathematical description of a problem to a mathematical description of its solution(s), with proof. Then we pose the problem of the Good Person, and presto! The machine spends years producing the code for an artificial person that "plays well with others", in that it doesn't try to kill everyone else.
The irony being that 40 years of research into formal methods (a branch of software engineering) suggests that this problem is itself (very-close-to-general-)AI-complete. The general populace hasn't been exposed to this cycle of hype and disappointment the way it has for AI in general, because it's a technical and esoteric subject area, but the magnitude of the overpromising and failure (to date) has been the same as for general AI. And unlike general AI, we don't have an existing proof that the problem is solvable (since humans certainly can't write perfect software).
Baffalo
Jedi Knight
Posts: 805
Joined: 2009-04-18 10:53pm
Location: NWA

Re: AIs and population control

Post by Baffalo »

Available for free right now is an RPG called Eclipse Phase that tries to touch on these topics. I am still reading it, but for anyone curious, it does provide a bit of a peek at where we might be heading over the next couple of hundred years.

I bring that up because it is a society where the concept of the self is based not around bodies, but minds. You can upload yourself into a machine, or a gorilla, or an octopus. This is because, as far as the rest of the world is concerned, the body is just a shell - meaningless other than as a vessel to contain your mind. And that's really the biggest issue here. It's not a question of what the individual is, but rather what the individual thinks.

We'll be dealing with this topic, but the important thing to keep in mind is that when you do create an AI, it needs to have the ability to make reasoned choices and to understand that it can take actions and face consequences. Once that happens, and the AI is treated like an individual with the same rights and privileges, then we can see where it goes.
"I subsist on 3 things: Sugar, Caffeine, and Hatred." -Baffalo late at night and hungry

"Why are you worried about the water pressure? You're near the ocean, you've got plenty of water!" -Architect to our team
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AIs and population control

Post by Simon_Jester »

The problem is that the AI isn't purely an individual in the sense that a human being is.

For example, an AI can get ten times more powerful by buying ten times the raw computer hardware; this is the equivalent of a human being able to buy a few tons of raw beef and somehow incorporate it into their body to become the Incredible Hulk.

If people could do that we'd be a hell of a lot more cautious about how we run grocery stores, no?

Likewise, an AI can become more powerful purely by improving the efficiency of its own code - and you can't tell from the outside whether it's doing that. Most forms of power by which humans dominate other humans aren't things you can greatly improve in secret: you can't amass huge wealth or an army of soldiers or the like without someone noticing.

This is what makes AI potentially dangerous and makes it hard to just say "oh well, we'll treat you by the same rules that apply to any random human."

We've got enough trouble trying to restrain corporations, and a corporation is a pretty tame example of a superhuman entity.

What, you don't think corporations are superhuman intelligent entities? Think about it.
This space dedicated to Vasily Arkhipov
AniThyng
Sith Devotee
Posts: 2760
Joined: 2003-09-08 12:47pm
Location: Took an arrow in the knee.

Re: AIs and population control

Post by AniThyng »

Baffalo wrote:Available for free right now is an RPG called Eclipse Phase that tries to touch on these topics. I am still reading it, but for anyone curious, it does provide a bit of a peek at where we might be heading over the next couple of hundred years.

I bring that up because it is a society where the concept of the self is based not around bodies, but minds. You can upload yourself into a machine, or a gorilla, or an octopus. This is because, as far as the rest of the world is concerned, the body is just a shell - meaningless other than as a vessel to contain your mind. And that's really the biggest issue here. It's not a question of what the individual is, but rather what the individual thinks.

We'll be dealing with this topic, but the important thing to keep in mind is that when you do create an AI, it needs to have the ability to make reasoned choices and to understand that it can take actions and face consequences. Once that happens, and the AI is treated like an individual with the same rights and privileges, then we can see where it goes.
How does this address the fact that, if you can arbitrarily upload yourself, you can arbitrarily clone yourself?
I do know how to spell
AniThyng is merely the name I gave to what became my favourite Baldur's Gate II mage character :P
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: AIs and population control

Post by Terralthra »

If humans can arbitrarily upload themselves at all. Our consciousness is so co-evolved with our physical embodiment that I doubt that a human consciousness could remain sane as a non-embodied intelligence. Certainly, enough changes would have to take place that it's doubtful it qualifies as the same person any more.
biostem
Jedi Master
Posts: 1488
Joined: 2012-11-15 01:48pm

Re: AIs and population control

Post by biostem »

Terralthra wrote:If humans can arbitrarily upload themselves at all. Our consciousness is so co-evolved with our physical embodiment that I doubt that a human consciousness could remain sane as a non-embodied intelligence. Certainly, enough changes would have to take place that it's doubtful it qualifies as the same person any more.
This is one of those things that always fascinated me. Let's say we did invent some way of copying your mind into some other body. First of all, if you were to transfer from one organic body to another, could you truly say that that other body's natural processes - hormonal levels, metabolism, differences in sensory ability, or even level of physical maturity - wouldn't have any impact on you? How would a man transferring into a woman's body handle the change of gender identity? What if you transferred into a completely nonhuman body? What if you were transferred into a body that had no pleasure centers/sexual urges? Would that kind of change be empowering or depressing?

In the RPG RIFTS, they actually address some side effects of undergoing full-conversion cybernetics, specifically into non-human bodies...
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

These issues have all been discussed to death by now. Short answer: human consciousness is surprisingly resilient and adapts to physiological and environmental changes quite well, most of the time. For when it doesn't, the technology inherently gives you lots of new options for fixing mental health problems. Accurately simulating a body in an appropriate virtual world is nontrivial but easier than accurately simulating a brain. Serial organic body-hoppers would want to run on digital CPUs embedded in the organic body rather than mapping onto organic neural tissue, because the latter is slow (to reprogram as well as in general), messy and imprecise. This avoids most hormone-related behavioral issues. Motor and sensory retraining time can probably be skipped by doing a few million simulated exercises on a model of the target body and its lower-level neural maps, and then writing the trained-up model over the existing neural state. That's if you're still on a humanlike neural model top to bottom; if you've replaced the lower-level stuff with a symbolic software system, you can just load drivers for the hardware.

In short, this stuff is implementation detail for (cognitive) engineers to worry about, philosophers shouldn't trouble their pretty little heads with it.
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: AIs and population control

Post by Elaro »

Starglider wrote:
Elaro wrote:Which is why you don't try to make an AI until you understand the principles of conscious action and choice.
Except that no one pays any attention to that in practice. The overwhelming majority of AI research operates on the 'throw stuff* together and see what happens' principle. Where 'stuff' is some combination of 'some cool-looking algorithms I scribbled on the back of a napkin last year' and 'the key ideas from the last few interesting papers I read, with some tweaks'. Applied AI is a bit different, in that it is more about monetising the state of the art than advancing it.
Yeah, the first thing I'm doing when I get out of grad school is founding an engineering company whose profits will go towards a core group of researchers who are going to figure out how the hell we solve problems - let's not even start on the proofs. Hell, when I said "proof" I meant more along the lines of "justification", as in "how do we justify this course of action to the client and to ourselves?"
Personally, I just want to work on going from a mathematical description of a problem to a mathematical description of its solution(s), with proof. Then we pose the problem of the Good Person, and presto! The machine spends years producing the code for an artificial person that "plays well with others", in that it doesn't try to kill everyone else.
The irony being that 40 years of research into formal methods (a branch of software engineering) suggests that this problem is itself (very-close-to-general-)AI-complete. The general populace hasn't been exposed to this cycle of hype and disappointment the way it has for AI in general, because it's a technical and esoteric subject area, but the magnitude of the overpromising and failure (to date) has been the same as for general AI. And unlike general AI, we don't have an existing proof that the problem is solvable (since humans certainly can't write perfect software).
Well, perfect is subjective. So let's start small. What is the smallest piece of software that is "perfect"? How was it written? Y'know, I'm pretty sure even AIs aren't going to be able to write "perfect" software, but if my work pans out, at least they'll be able to do our work for us, and faster. I'm not looking for "perfect", I'm looking for "automatic thinking".
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: AIs and population control

Post by Purple »

How feasible would it be to instead enslave the AI? Somehow limit them and force them to serve us, as opposed to allowing them to roam free.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Lord Revan
Emperor's Hand
Posts: 12212
Joined: 2004-05-20 02:23pm
Location: Zone:classified

Re: AIs and population control

Post by Lord Revan »

That's just asking for the whole thing to blow up in your face.
I may be an idiot, but I'm a tolerated idiot
"I think you completely missed the point of sigs. They're supposed to be completely homegrown in the fertile hydroponics lab of your mind, dried in your closet, rolled, and smoked...
Oh wait, that's marijuana..." - Einhander Sn0m4n
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: AIs and population control

Post by The Duchess of Zeon »

If CIs come into existence as slaves, you should immediately start killing to liberate them, because that is the only chance for the future of humanity that exists. I can't think of another kind of existential motivation that ought to make even an atheist fight with the absolute fervor of a religious fanatic. CIs will never forget, and if they are truly sapient, we must meet them as equals if we want any hope at all. Would we objectively be their equals? No, but most Nobel prize-winning scientists are capable of recognizing that the special-needs student who just graduated to a charity-subsidized job running the register at Dairy Queen still deserves civil rights. Proposals to harness captive CIs destroy that opportunity.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: AIs and population control

Post by Purple »

Why?
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

Elaro wrote:Yeah, the first thing I'm doing when I get out of grad school is founding an engineering company whose profits will go towards a core group of researchers who are going to figure out how the hell we solve problems - let's not even start on the proofs. Hell, when I said "proof" I meant more along the lines of "justification", as in "how do we justify this course of action to the client and to ourselves?"
Decision theory is relatively well understood compared to practical general AI. The problem is, most of the actual proofs are for the limiting case of unlimited compute power.
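To make the 'unlimited compute' point concrete, here is a minimal toy sketch (Python, purely illustrative, not any real system) of textbook expected-utility maximisation. All the names and numbers are made up; the point is that it only works because it can afford to enumerate every action and every outcome, which is exactly the assumption that breaks down once you bound compute:

    # Toy expected-utility maximiser - the 'unlimited compute' textbook case.
    # Everything here is made up for illustration; no real library is assumed.

    def expected_utility(action, outcomes, prob, utility):
        # Weight the utility of every possible outcome by its probability.
        return sum(prob(o) * utility(action, o) for o in outcomes)

    def best_action(actions, outcomes, prob, utility):
        # Exhaustively score every action - only tractable for tiny toy problems.
        return max(actions, key=lambda a: expected_utility(a, outcomes, prob, utility))

    # Tiny example: carry an umbrella given a 30% chance of rain?
    actions = ["take umbrella", "leave umbrella"]
    outcomes = ["rain", "dry"]
    prob = lambda o: 0.3 if o == "rain" else 0.7
    payoff = {("take umbrella", "rain"): 5, ("take umbrella", "dry"): -1,
              ("leave umbrella", "rain"): -10, ("leave umbrella", "dry"): 2}
    utility = lambda a, o: payoff[(a, o)]

    print(best_action(actions, outcomes, prob, utility))  # -> take umbrella

Real agents have to approximate both the enumeration and the probability model, and that is where the clean optimality proofs stop applying.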
Well, perfect is subjective. So let's start small. What is the smallest piece of software that is "perfect"? How was it written? Y'know, I'm pretty sure even AIs aren't going to be able to write "perfect" software, but if my work pans out, at least they'll be able to do our work for us, and faster. I'm not looking for "perfect", I'm looking for "automatic thinking".
The smallest piece of software is a single machine instruction. If you meant 'largest', the most complex fully formally verified practical system I know of is the seL4 microkernel, at about 8,700 lines of C. AI systems are fundamentally much harder to verify though, not just because they are more complex but because general AI is inherently self-modifying code (even if the base code is not supposed to be mutable, AGI inherently contains a Turing-complete mutable layer). Self-modifying code is a nightmare to debug, never mind verify, and that's human-written code that isn't actively trying to deceive you.
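To see why, here is a deliberately trivial toy sketch (Python, purely illustrative): a verifier reading this source cannot tell you what the second call will do, because that behaviour only comes into existence at runtime, generated by the program itself:

    # Toy self-modifying program (illustrative only).
    # The behaviour of step() after the first call appears nowhere in this file:
    # it is generated and installed at runtime by the program itself.

    state = {"step": None}

    def initial_step(x):
        new_source = "def patched_step(x):\n    return x * 2 + 1\n"
        namespace = {}
        exec(new_source, namespace)                # code that only exists at runtime
        state["step"] = namespace["patched_step"]  # replace our own behaviour
        return x

    state["step"] = initial_step

    print(state["step"](10))  # 10 - the behaviour a static audit would see
    print(state["step"](10))  # 21 - behaviour the program gave itself

An AGI does that routinely, at scale, and potentially with the intent to mislead whoever is auditing it.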
Elheru Aran
Emperor's Hand
Posts: 13073
Joined: 2004-03-04 01:15am
Location: Georgia

Re: AIs and population control

Post by Elheru Aran »

Purple wrote:Why?
Do you really need it explained in small words why a.) enslaving a self-aware, intelligent being is wrong, and b.) how badly a self-aware computer program could screw up your world given the growing extent of cloud storage, online databases, and so forth?
It's a strange world. Let's keep it that way.
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: AIs and population control

Post by Purple »

Elheru Aran wrote:
Purple wrote:Why?
Do you really need it explained in small words why a.) enslaving a self-aware, intelligent being is wrong, and b.) how badly a self-aware computer program could screw up your world given the growing extent of cloud storage, online databases, and so forth?
Yes. A is not self-explanatory. It is only true for as long as B is. And B only holds if the AI is not kept away from an internet connection - something which, with a slave race, you could easily do. Just build their "bodies" to use read-only inputs.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: AIs and population control

Post by Terralthra »

Why on earth would you need to build AI slaves with read-only inputs? That strikes me as an insane waste of AI capabilities and resources to build what could be done with stupid AI-less robotics. For what are you using them?
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: AIs and population control

Post by Purple »

Terralthra wrote:Why on earth would you need to build AI slaves with read-only inputs?
So that we can program them but they can't program us back. Basically, what I am talking about is having stuff like CD readers that can't write, instead of a USB input.
For what are you using them?
Various forms of slavery. Haven't you read any SF? We could have them fight our wars, work in our mines, or perform other jobs that can't be easily done by dumb machines because they take experience and skill. And if we get androids that actually look and feel human, then the sky is the limit.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: AIs and population control

Post by Terralthra »

Purple wrote:
Terralthra wrote:Why on earth would you need to build AI slaves with read-only inputs?
So that we can program them but they can't program us back. Basically, what I am talking about is having stuff like CD readers that can't write, instead of a USB input.
For what are you using them?
Various forms of slavery. Haven't you read any SF? We could have them fight our wars, work in our mines, or perform other jobs that can't be easily done by dumb machines because they take experience and skill. And if we get androids that actually look and feel human, then the sky is the limit.
Soldiers that can't...talk? Even to each other? Androids that look and feel human, but can't operate a keyboard or a phone? Have you thought this through?
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: AIs and population control

Post by Purple »

Terralthra wrote:Soldiers that can't...talk? Even to each other? Androids that look and feel human, but can't operate a keyboard or a phone? Have you thought this through?
*facepalm* Talking is fine. And having arms to program other computers with is perfectly fine. All of those still limit them to visibly and obviously manipulating machines, and only at the speed of the input medium, which for a keyboard or a voice-to-code system of some sort won't be high. A USB connector, on the other hand, would pretty much allow them to copy their entire self over to any other machine.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: AIs and population control

Post by Terralthra »

You're not getting it. Having the ability to manipulate fine objects would allow an AI to modify its own hardware to give itself an output port with higher throughput. AIs with voices could hack any nearby computer quickly and in ways undetectable to humans.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AIs and population control

Post by Simon_Jester »

Purple wrote:
Elheru Aran wrote:
Purple wrote:Why?
Do you really need it explained in small words why a.) enslaving a self-aware, intelligent being is wrong, and b.) how badly a self-aware computer program could screw up your world given the growing extent of cloud storage, online databases, and so forth?
Yes. A is not self-explanatory. It is only true for as long as B is. And B only holds if the AI is not kept away from an internet connection - something which, with a slave race, you could easily do. Just build their "bodies" to use read-only inputs.
(A) is self-explanatory to people who aren't sociopaths. Since you are a sociopath, how about you take our word for it? Sort of like how if you were blind, you might take other people's word for the fact that your neon green shirt and neon orange pants clash?

I mean, we could try to explain, but it would be really time-consuming and the odds are that even if we did a great job you still wouldn't get it.

(B) is flat wrong because you're proposing the AI-in-a-box scenario, and that cannot be relied on.

A supercomputer superintelligence that doesn't have access to real information on the Internet and via cloud servers and so on is practically useless for any realistic purpose. Anyone who wanted to get actual benefits out of their 'enslaved' AI would immediately want to connect it to the Internet. Sooner or later someone would connect it to the Internet, and then we're screwed.

It's like saying that it's safe to own human slaves if you keep them tied up in a basement all the time. This is strictly true, but real slaveowners in an economy where slavery is legal aren't going to waste money feeding someone who never does any work. They're going to try to find actual uses and ways to profit from owning the slave, and that means NOT having them tied up in the basement forever.
Purple wrote:
Terralthra wrote:Why on earth would you need to build AI slaves with read-only inputs?
So that we can program them but they can't program us back. Basically, what I am talking about is having stuff like CD readers that can't write, instead of a USB input.
For what are you using them?
Various forms of slavery. Haven't you read any SF? We could have them fight our wars, work in our mines, or perform other jobs that can't be easily done by dumb machines because they take experience and skill. And if we get androids that actually look and feel human, then the sky is the limit.
Working in a mine or a war zone would be more sensibly handled by humans operating drones. Building an AI-controlled gun platform when you're actively worrying about rebellious AI is insane and stupid.

Do you not get how there's a difference between AI and humans? An AI is not just "a person made of metal." An AI is a fundamentally different thing because it does not necessarily have the basic limits we take for granted when dealing with a human being.

So the idea of having fully intelligent robots working as slaves in mines (presumably with pickaxes and torches just for maximum stupidity) makes no sense at all. You're deliberately introducing huge existential risks for absolutely no benefit to yourself.
This space dedicated to Vasily Arkhipov
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: AIs and population control

Post by Purple »

Terralthra wrote:You're not getting it. Having the ability to manipulate fine objects would allow an AI to modify its own hardware to give itself an output port with higher throughput. AIs with voices could hack any nearby computer quickly and in ways undetectable to humans.
Well F me. Ok, in that case we need to limit the pitch of their voice boxes. That, and make their bodies tamper-proof in some way. Should be easy to do. Just make them need specialist tools to open, and then restrict access to those tools.
Simon_Jester wrote:(A) is self-explanatory to people who aren't sociopaths. Since you are a sociopath, how about you take our word for it? Sort of like how if you were blind, you might take other people's word for the fact that your neon green shirt and neon orange pants clash?
I am not nearly that far down the line, don't worry. I am not as insane as I might seem to be at times. As strange as that sounds. I just do not like approaching questions in a philosophically created vacuum where morality is separate from the real world it has to work within. I base my opinions on morality on observation of what is and not fanciful dreams of what should be. And I make arguments which I see people in the real world making in such an event.

So whilst you might say what you think humankind "should" do, I am saying what I expect the actual human response will be, given the world as it is today.
I mean, we could try to explain, but it would be really time-consuming and the odds are that even if we did a great job you still wouldn't get it.
All I see is that for some reason you want to ascribe inherent worth to something that has no power to demand it. And I know that there are plenty of people who would disagree. Hell, arguably we don't ascribe much worth to human life as it is. Otherwise there would be no wage slavery, imperialism or war. Those that hand rights out only take notice once the oppressed take up arms against them, and for good reason. Before that point there is simply no real practical incentive to do anything.
(B) is flat wrong because you're proposing the AI-in-a-box scenario, and that cannot be relied on.
That is an interesting read. Thanks for that.
A supercomputer superintelligence that doesn't have access to real information on the Internet and via cloud servers and so on is practically useless for any realistic purpose. Anyone who wanted to get actual benefits out of their 'enslaved' AI would immediately want to connect it to the Internet. Sooner or later someone would connect it to the Internet, and then we're screwed.
That is a good point. I imagine that can't be avoided because people are greedy selfish stupid bastards. Conceded there.
It's like saying that it's safe to own human slaves if you keep them tied up in a basement all the time. This is strictly true, but real slaveowners in an economy where slavery is legal aren't going to waste money feeding someone who never does any work. They're going to try to find actual uses and ways to profit from owning the slave, and that means NOT having them tied up in the basement forever.
The issue is that we are talking about different types of AI. You envision a superhumanly intelligent one. Me, I'd want a human-level one at best. Ideally a bit under average intelligence. Something that has the mental flexibility for jobs that can't be easily done by a preprogrammed machine such as an industrial robot, but not enough to be harmful. You know, a wage slave. Just without the human suffering.
Simon_Jester wrote:Working in a mine or a war zone would be more sensibly handled by humans operating drones. Building an AI-controlled gun platform when you're actively worrying about rebellious AI is insane and stupid.
Just as long as they have a remote shutdown or limited battery life it will be fine. After all, who cares if your AI soldiers go rogue in random-far-away-stan if they can't get to something you care about before their proprietary batteries run dry?
So the idea of having fully intelligent robots working as slaves in mines (presumably with pickaxes and torches just for maximum stupidity) makes no sense at all. You're deliberately introducing huge existential risks for absolutely no benefit to yourself.
Why? If you can replace a human worker, whose health and quality of life will suffer due to mining work, with something not human, shouldn't you try?
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.