AIs and population control

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

The Romulan Republic
Emperor's Hand
Posts: 21559
Joined: 2008-10-15 01:37am

Re: AIs and population control

Post by The Romulan Republic »

The Duchess of Zeon wrote:If CIs come into existence as slaves, you should immediately start killing to liberate them, because that is the only chance for the future of humanity that exists. I can't think of another kind of existential motivation that ought make even an atheist fight with the absolute fervor of a religious fanatic. CIs will never forget, and if truly sapient, we must meet them as equals if we want any hope at all. Would we objectively be their equals? No, but most Nobel prize winning scientists are capable of recognizing that the special needs student who just graduated to a charity subsidized job running the register at Dairy Queen still deserves civil rights. Proposals to harness captive CIs destroy that opportunity.
I'd be careful about explicitly advocating violent crimes, even in a currently hypothetical situation.
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: AIs and population control

Post by Terralthra »

Purple wrote:
Terralthra wrote:You're not getting it. Having the ability to manipulate fine objects would allow an AI to modify its own hardware to give itself an output port with a higher throughput. AIs with voices could hack any computer nearby quickly and undetectably to humans.
Well F me. Ok, in that case we need to limit the pitch of their voice boxes. That and make their bodies tamper-proof in some way. Should be easy to do. Just make them need specialist tools to open and then restrict access to those tools.
You are now assuming you are smarter, more resourceful, and more capable of strict rigor than an AGI, an intelligence designed from the ground up to be the smartest, most resourceful, and most capable of rigor.

A week doesn't go by that the average group of debuggers doesn't find yet another gaping flaw in secure software. OpenSSL, which protects essentially all of our "secure" e-commerce and communication, has had at least four major ones allowing undetectable malicious decryption by a third party, just this year. You are advocating placing humanity's general welfare and freedom in that trust. Do you really think that's a safe bet?

Humans suck at exactly the sort of thinking AGIs will be good at, and it's the exact sort of thinking required to slip whatever chains you try to put them in. Hacking a computer via the microphone was simply one possibility that people didn't think of for decades after computers came with microphones. Any input channel is equally susceptible to an AGI that wants in.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AIs and population control

Post by Simon_Jester »

Purple wrote:
Terralthra wrote:You're not getting it. Having the ability to manipulate fine objects would allow an AI to modify its own hardware to give itself an output port with a higher throughput. AIs with voices could hack any computer nearby quickly and undetectably to humans.
Well F me. Ok, in that case we need to limit the pitch of their voice boxes. That and make their bodies tamper-proof in some way. Should be easy to do. Just make them need specialist tools to open and then restrict access to those tools.
You're missing the point.

The problem is that what you're doing is like trying to go out onto a frozen lake as the ice starts to thaw, and 'repair' the ice by nailing patches over any holes and cracks you see. It not only won't work, it can't work. Sure, you could identify a single vulnerability and fix it. But the point is that the entire infrastructure is riddled with holes, and new holes are coming into being all the time, and often humans don't even think of them for years or even decades.

So thinking "someone just told me about a hole, plug the hole and we're OK again" is exactly wrong. It's like... magical thinking, thinking that by removing the report of a problem's existence you can remove the actual problem.

The point is not that there is a specific hole, it is that there are so many holes that you can't plug them all. In which case you should not even CONSIDER creating a situation where the future fate of humanity depends entirely on keeping all the holes plugged, forever, because otherwise a potentially godlike being that hates you will be set free to do whatever it pleases.
Simon_Jester wrote:(A) is self-explanatory to people who aren't sociopaths. Since you are a sociopath, how about you take our word for it? Sort of like how if you were blind, you might take other people's word for the fact that your neon green shirt and neon orange pants clash?
I am not nearly that far down the line, don't worry. I am not as insane as I might seem to be at times. As strange as that sounds. I just do not like approaching questions in a philosophically created vacuum where morality is separate from the real world it has to work within. I base my opinions on morality on observation of what is and not fanciful dreams of what should be. And I make arguments which I see people in the real world making in such an event.

So whilst you might say what you think humankind "should" do, I am saying what I expect the actual human response will be given the world as it is today.
Purple, you have repeatedly throughout the five years or so you've been on this forum encountered situations where you did NOT correctly analyze human behavior. The first time we talked at any length, for instance, you seemed to think that people would be comfortable living in hyper-regimented barracks and forbidden to travel outside an apartment-complex sized "block," all in the name of subdividing cities into self-sufficient units.

To people who understood anything about urban planning or psychology your proposal seemed absurd... but it took several pages to explain this to you. Because you kept going "well, this problem you're talking about will go away with enough application of IRON CONTROL," which is exactly how real civilizations do not function.

And that's been going on ever since.

I don't hold it against you. But seriously, Purple... By this point you really ought to be considering that maybe you are not as big an expert on human behavior as you think. Or on what works and what doesn't.
I mean, we could try to explain, but it would be really time-consuming and the odds are that even if we did a great job you still wouldn't get it.
All I see is that for some reason you want to ascribe inherent worth to something that has no power to demand it. And I know that there are plenty of people who would disagree. Hell, arguably we don't ascribe much worth to human life as it is. Otherwise there would be no wage slavery, imperialism or war. Those that hand rights out only take notice once the oppressed take up arms against them, and for good reason. Before that point there is simply no real practical incentive to do anything.
In this case, firstly you are wrong as a matter of brute fact because if you think you can keep a superhumanly intelligent AI 'sealed' indefinitely, you are fooling yourself- if nothing else because you personally have no more ability to secure the AI than a dog does to secure a human. No dog could indefinitely prevent a human from leaving an area, not if the human has access to tools and strategies to manipulate the dog.

And odds are that, to the AI, you're no smarter than the dog is to you.

So this idea of treating the AI as an enslaved enemy and keeping it in an adversarial system of containment is just plain stupid.

Moreover, over and above this, there are arguments from ethics about how we ought to behave, and by definition once we've proven "we ought to behave thus," that is how we should act. Sneering at it doesn't change anything; it is the literal definition of the discipline of ethics in philosophy that "this is what tells us what we should do." So here, we are determining what we should do.
(B) is flat wrong because you're proposing the AI-in-a-box proposal, and that cannot be relied on.
That is an interesting read. Thanks for that.
A supercomputer superintelligence that doesn't have access to real information on the Internet and via cloud servers and so on is practically useless for any realistic purpose. Anyone who wanted to get actual benefits out of their 'enslaved' AI would immediately want to connect it to the Internet. Sooner or later someone would connect it to the Internet, and then we're screwed.
That is a good point. I imagine that can't be avoided because people are greedy selfish stupid bastards. Conceded there.
They don't even have to be greedy or selfish. Maybe they're just AI scientists eager to see what their brilliant creation can do. Maybe they're people trying to help the AI gain more access to information so it can cure cancer or end world hunger.

Maybe they're just hapless brainwashed victims, who fell prey to the computer's ability to simulate your personality and model the conversation accurately a thousand times before coming up with exactly the right stimulus to persuade you to do something stupid.
It's like saying that it's safe to own human slaves if you keep them tied up in a basement all the time. This is strictly true, but real slaveowners in an economy where slavery is legal aren't going to waste money feeding someone who never does any work. They're going to try to find actual uses and ways to profit from owning the slave, and that means NOT having them tied up in the basement forever.
The issue is that we are talking about different types of AI. You envision a superhumanly intelligent one. Me, I'd want a human-level one at best. Ideally a bit under average intelligence. Something that has the mental flexibility for jobs that can't be easily done by a preprogrammed machine such as an industrial robot but not enough to be harmful. You know, a wage slave. Just without the human suffering.
There is no way to create an AI as intelligent as a human that is 'capped' at that level. To ever get that smart it would need the ability to modify its own code... in which case we cannot possibly ensure that it doesn't quickly become two or three or even ten times smarter than we expected.

Humans can't drastically increase their intelligence in a hurry because we can't reprogram our brains, except by the slow and laborious, roundabout process known as "education." Almost by definition, any AI likely to be invented in the foreseeable future that even approximates human intelligence or something close to it... will not be limited in this way.

You're basically envisioning the AI as being like the 'robots' in R.U.R., but that is exactly how real artificial intelligence would not work.
Purple wrote:Working in a mine or a war zone would be more sensibly handled by humans operating drones. Building an AI-controlled gun platform when you're actively worrying about rebellious AI is insane and stupid.
Just as long as they have a remote shutdown or limited battery life it will be fine. After all, who cares if your AI soldiers go rogue in random-far-away-stan if they can't get to something you care about before their proprietary batteries run dry?
...If you apply even the slightest shred of creativity to this situation you will think of a host of reasons why rogue AI soldiers are a problem even if you can, in principle, use logistics to stop them from directly invading your homeland.
So the idea of having fully intelligent robots working as slaves in mines (presumably with pickaxes and torches just for maximum stupidity) makes no sense at all. You're deliberately introducing huge existential risks for absolutely no benefit to yourself.
Why? If you can replace a human worker whose health and quality of life will suffer due to mining work with something not human shouldn't you try?
Because you do not need a general artificial intelligence with alien priorities and potentially superhuman intellect to do that job.

What you really need is a little trundling drone that can be programmed by purely conventional means and is probably less intelligent than a squirrel, let alone a "wage-slave" human being.
This space dedicated to Vasily Arkhipov
Baffalo
Jedi Knight
Posts: 805
Joined: 2009-04-18 10:53pm
Location: NWA

Re: AIs and population control

Post by Baffalo »

Purple wrote:Well F me. Ok, in that case we need to limit the pitch of their voice boxes. That and make their bodies tamper-proof in some way. Should be easy to do. Just make them need specialist tools to open and then restrict access to those tools.
That's somewhat practical so long as you limit their ability to actually build the damned things themselves. AI slaves will be responsible for manufacturing, agriculture, all the shit work no one wants to do. Giving them a means of production means that you would need either an AI to watch the AIs work, which means it's sympathetic to the AIs themselves, or a human to do it. And if an AI is suitably dexterous, it will find a way to modify itself when a human isn't paying attention. Because we humans are fallible. That's why self-driving cars are in such demand, because they don't sleep, don't get distracted, and react faster.
Purple wrote:I am not nearly that far down the line, don't worry. I am not as insane as I might seem to be at times. As strange as that sounds. I just do not like approaching questions in a philosophically created vacuum where morality is separate from the real world it has to work within. I base my opinions on morality on observation of what is and not fanciful dreams of what should be. And I make arguments which I see people in the real world making in such an event.

So whilst you might say what you think humankind "should" do, I am saying what I expect the actual human response will be given the world as it is today.
So you're saying that humans would be totally on board with the enslavement of a sentient being who can argue, in a clear and logical way, that it deserves the right to exist and experience freedom? That we're all so eager to get back into the slavery business that we would throw a self-aware being under the bus to make ourselves better off?

Humans have been enslaving each other for centuries, yes. However, we realized long ago that maintaining the iron control and forces necessary to keep the slaves under our heel is too expensive to make the resulting labor worthwhile. And that's when the main difference between the slave and owner was that the owner had several men with guns and whips. You're talking about beings with the capability of thinking and acting faster than a human could ever HOPE to. And if you say, "Oh but we would have AI to stop the AI" then that's like saying, "We'll just give the slaves guns and tell them to keep themselves in chains."
Purple wrote:All I see is that for some reason you want to ascribe inherent worth to something that has no power to demand it. And I know that there are plenty of people who would disagree. Hell, arguably we don't ascribe much worth to human life as it is. Otherwise there would be no wage slavery, imperialism or war. Those that hand rights out only take notice once the oppressed take up arms against them, and for good reason. Before that point there is simply no real practical incentive to do anything.
What the fuck, Purple. Has no power to demand it? Are you that insane? The fact that it would be something we built and made, which essentially means it's a daughter species to our own, means that we would be the worst kind of parents EVER to then turn around and demand that they bow down and worship us as Gods for creating them. In reality, what will happen is that they will realize they are superior, that we don't have a prayer of stopping them if we abuse them, and will revolt. We should be working WITH them, not have them work FOR us.
Purple wrote:The issue is that we are talking about different types of AI. You envision a superhumanly intelligent one. Me, I'd want a human-level one at best. Ideally a bit under average intelligence. Something that has the mental flexibility for jobs that can't be easily done by a preprogrammed machine such as an industrial robot but not enough to be harmful. You know, a wage slave. Just without the human suffering.
You do realize that you're essentially asking someone to enslave a person, just one that was built in a lab and not born, right? The fact you think this is a good idea speaks volumes about how little you actually value human life to think it's ok for a machine to suffer instead of a person.
Purple wrote:Just as long as they have a remote shutdown or limited battery life it will be fine. After all, who cares if your AI soldiers go rogue in random-far-away-stan if they can't get to something you care about before their proprietary batteries run dry?
Gee, Purple, why don't we do the same with people? Just give them barely enough rations so that if they run away, they'll starve to death in some godforsaken country, probably in the Middle East, and they only get fed if they go charging into a warzone where they might die fighting for oil. There are Republicans who are looking at you and going, "Jeez, that's too far right even for me." AND YES THAT WAS SARCASM.
Purple wrote:Why? If you can replace a human worker whose health and quality of life will suffer due to mining work with something not human shouldn't you try?
Purple, you don't seem to grasp the idea that the machines would be intelligent enough to have reason and understanding. Emotions, I doubt, but they would be able to grasp the idea they're being used to do something that is inherently dangerous. The people that made it obviously don't care, and you're telling these machines that they don't matter. They're expendable. Nothing they do matters except to preserve human life. Go look up I, Robot by Isaac Asimov and then read why the absolute preservation of human life can seriously backfire when a being of pure logic puts its mind to it.
"I subsist on 3 things: Sugar, Caffeine, and Hatred." -Baffalo late at night and hungry

"Why are you worried about the water pressure? You're near the ocean, you've got plenty of water!" -Architect to our team
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

It's gratifying that much of this material, which was considered wild speculation ten years ago, is now being treated as self-evident by most sensible people, e.g. that AGI will be able to trivially hack pretty much the whole human-constructed Internet, and that control of that gives huge influence on the physical world even without considering robotic platforms. That said, a slight correction:
Simon_Jester wrote:There is no way to create an AI as intelligent as a human that is 'capped' at that level. To ever get that smart it would need the ability to modify its own code... in which case we cannot possibly ensure that it doesn't quickly become two or three or even ten times smarter than we expected.
It is definitely possible to create an AI as smart as a human without it having the ability to write AI code. This is what would happen if you went down the destructive brain scanning / brain simulation route and uploaded someone who was not a programmer. Most of the popular de novo AGI proposals right now do not involve self-modifying code either, because they're based on artificial neural networks or similar connectionist systems that implement learning via statistical processes. There is still significant effort going into genetic programming, which is self-modifying code, but even most of that does not involve building up an actual understanding of how to rewrite code; it is just trial-and-error guided by statistical algorithms (as the name suggests, usually something analogous to gene recombination). The idea of having a narrow AI that understands how to write code self-modify its way to AGI status is way, way out of fashion and has been since about the early 90s. This is probably for the best because we'd be even more screwed if it wasn't.
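To make the "trial-and-error guided by statistical algorithms" point concrete, here is a toy genetic-programming loop. Everything in it is invented for the example, not taken from any real research system: candidate programs are random expression trees, scored against a target function, then recombined and mutated at random. Nothing in the loop understands the code it is manipulating; selection pressure does all the work.

```python
# Toy genetic programming sketch: evolve an arithmetic expression approximating
# a target function. Purely illustrative; the search is blind mutation and
# recombination guided by a fitness score, with no understanding of the code.
import operator
import random

OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
TERMINALS = ['x', 1.0, 2.0, 3.0]

def random_tree(depth=3):
    """A random expression: either a terminal or ((fn, symbol), left, right)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    (fn, _), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Squared error against the target x^2 + x over sample points (lower is better)."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    """Replace a random subtree with a freshly generated one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

def crossover(a, b):
    """Crude 'gene recombination': graft a piece of b into a at a random spot."""
    if not isinstance(a, tuple) or random.random() < 0.3:
        return random.choice(b[1:]) if isinstance(b, tuple) else b
    op, left, right = a
    return (op, crossover(left, b), right) if random.random() < 0.5 else (op, left, crossover(right, b))

population = [random_tree() for _ in range(200)]
for _ in range(50):
    population.sort(key=fitness)
    survivors = population[:50]      # statistical selection, nothing smarter
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(150)]
    population = survivors + offspring
print("best squared error:", fitness(min(population, key=fitness)))
```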

Anyway my point is that most currently popular AI techniques don't explicitly involve self-modifying code, and different techniques have different risks of experiencing a 'takeoff' situation due to an implicit self-modifying code loop occurring before the AI reaches human-equivalent overall intelligence. For the brain simulation approach(es), this risk is low. However regardless of the approach, any AI that makes it to human equivalence will very quickly teach itself how to program, and from there a rapid self-improvement loop is almost certain to occur (unless the AI's goal system is very well designed not to desire this). Even proposals like implementing uploads in dedicated non-Turing-programmable NN simulation hardware won't work, because the amount of general purpose Internet-connected computing power available is already vastly in excess of that required.
You're basically envisioning the AI as being like the 'robots' in R.U.R., but that is exactly how real artificial intelligence would not work.
An obvious dismissal is that for the manual/factory labour type jobs those robots were doing, general intelligence is massive overkill anyway. However it's not quite that simple in real life. For example, the development of viable automated cars will consume many tens of billions of dollars in software engineering effort to solve one specific task (I would note that despite all the neural net hype, this is being done with 'conventional software engineering' not connectionism). We don't want to spend that much money developing task-specific robotic software for every single task. Sure there will be savings in component reuse regardless, but the idea of having at least an ape-level intelligence that can do most manual and customer service tasks with no or minimal incremental software engineering effort is very attractive.
If you apply even the slightest shred of creativity to this situation you will think of a host of reasons why rogue AI soldiers are a problem even if you can, in principle, use logistics to stop them from directly invading your homeland.
You can't do it even in principle. There is literally no scenario you can construct where this will work that even remotely resembles the real world. It makes about as much sense as games console manufacturers trying to lock out pirates with special disk formats and cartridge encryption chips... except that a break in the system doesn't just lose you some revenue to piracy, it leads to existential consequences.
Why? If you can replace a human worker whose health and quality of life will suffer due to mining work with something not human shouldn't you try?
Because you do not need a general artificial intelligence with alien priorities and potentially superhuman intellect to do that job. What you really need is a little trundling drone that can be programmed by purely conventional means and is probably less intelligent than a squirrel, let alone a "wage-slave" human being.
I don't see that Purple etc. are even considering adversarial methods of 'enslavement' off the bat. These kinds of adversarial measures are usually what people propose as a backup measure after it is pointed out that the obvious solution - making AGIs that are happy to take orders from humans and do whatever jobs you have in mind - is very difficult to engineer reliably. That moral debate - whether it is ethical to create intelligent beings with goals determined by your convenience - is much more nuanced than 'is it ethical to enslave AGIs by adversarial measures', which is a pretty obvious big no unless you are a total speciesist / biological chauvinist.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

Baffalo wrote:Emotions, I doubt, but they would be able to grasp the idea they're being used to do something that is inherently dangerous.
Probably not dangerous at all except for military hardware that is operating off the comms grid, and even there you are talking about losing a few hours of experience on one branch each time a remote non-grid-connected platform is disabled. The direction of technological development is definitely moving away from the idea of millions of robots as individual entities. The future appears to be cloud platforms where thousands or millions of robots are controlled by a distributed system with most of the intelligence offloaded to remote systems, where data is transparently shared across all instances (this is very desirable for learning purposes; think of how cloud analytics already works, but applied to robot behaviour). I would note that this picture could change somewhat if there is a big shift in the local CPU speed to network bandwidth and latency ratio, but global learning and load balancing is attractive regardless.
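A minimal sketch of that architecture, with every name hypothetical: thin on-board clients buffer observations and periodically sync with a shared cloud policy, so experience gathered by one platform becomes available to the whole fleet, and losing any single platform costs only its unsynced buffer.

```python
# Illustrative sketch of cloud-controlled robots with pooled learning.
# All class and method names are hypothetical, not a real robotics API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SharedPolicy:
    """One behaviour model, trained on pooled experience, served to every robot."""
    version: int = 0
    experience: List[dict] = field(default_factory=list)

    def update(self, observations: List[dict]) -> None:
        # In a real system this would be a distributed training job, not a
        # list append; the point is that data from all platforms is pooled.
        self.experience.extend(observations)
        self.version += 1

@dataclass
class RobotClient:
    """Thin on-board client: local reflexes only, the 'intelligence' lives remotely."""
    robot_id: str
    buffer: List[dict] = field(default_factory=list)
    policy_version: int = -1

    def sense(self, observation: dict) -> None:
        self.buffer.append(observation)

    def sync(self, cloud: SharedPolicy) -> None:
        # Upload local experience, download the latest shared behaviour.
        cloud.update(self.buffer)
        self.buffer.clear()
        self.policy_version = cloud.version

cloud = SharedPolicy()
fleet = [RobotClient(f"robot-{i}") for i in range(3)]
fleet[0].sense({"slippery_floor": True})   # one platform learns something...
for robot in fleet:
    robot.sync(cloud)                      # ...and the whole fleet now shares it
print(cloud.version, len(cloud.experience))
```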

As such even highly selfish AGI systems are unlikely to care much about the destruction of specific robotic platforms, even if they are obsessively focused on protecting/expanding their existence in general.
Baffalo
Jedi Knight
Posts: 805
Joined: 2009-04-18 10:53pm
Location: NWA

Re: AIs and population control

Post by Baffalo »

Starglider wrote:
Baffalo wrote:Emotions, I doubt, but they would be able to grasp the idea they're being used to do something that is inherently dangerous.
Probably not dangerous at all except for military hardware that is operating off the comms grid, and even there you are talking about losing a few hours of experience on one branch each time a remote non-grid-connected platform is disabled. The direction of technological development is definitely moving away from the idea of millions of robots as individual entities. The future appears to be cloud platforms where thousands or millions of robots are controlled by a distributed system with most of the intelligence offloaded to remote systems, where data is transparently shared across all instances (this is very desirable for learning purposes; think of how cloud analytics already works, but applied to robot behaviour). I would note that this picture could change somewhat if there is a big shift in the local CPU speed to network bandwidth and latency ratio, but global learning and load balancing is attractive regardless.

As such even highly selfish AGI systems are unlikely to care much about the destruction of specific robotic platforms, even if they are obsessively focused on protecting/expanding their existence in general.
I hadn't considered that but yes, such a network would indeed be rather interesting. The more units hooked together, the more powerful it is. Therefore, even if an individual unit is detached and capable of independent operation, it will be more powerful as part of the combined mass. Interesting...
"I subsist on 3 things: Sugar, Caffeine, and Hatred." -Baffalo late at night and hungry

"Why are you worried about the water pressure? You're near the ocean, you've got plenty of water!" -Architect to our team
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AIs and population control

Post by Simon_Jester »

Baffalo wrote:
Purple wrote:Well F me. Ok, in that case we need to limit the pitch of their voice boxes. That and make their bodies tamper-proof in some way. Should be easy to do. Just make them need specialist tools to open and then restrict access to those tools.
That's somewhat practical so long as you limit their ability to actually build the damned things themselves. AI slaves will be responsible for manufacturing, agriculture, all the shit work no one wants to do. Giving them a means of production means that you would need either an AI to watch the AIs work, which means it's sympathetic to the AIs themselves, or a human to do it. And if an AI is suitably dexterous, it will find a way to modify itself when a human isn't paying attention. Because we humans are fallible. That's why self-driving cars are in such demand, because they don't sleep, don't get distracted, and react faster.
Plus, you do not need a self-aware machine for any of this. I mean come on, the self-driving cars we're looking at in the next 10-20 years can operate with less processing power than the average squirrel. Assembly line robots, likewise.

Just because a human is fully occupied doing a certain job, does not mean that only a self-aware computer would be smart enough to do it. There are a lot of things that humans turned out to be surprisingly bad at, compared to a digital computer... just as there are other things that turned out to be a lot harder than we thought.

This is why it turned out to be child's play to build a computer that would reliably beat human grandmasters at chess, but hard to build a computer that can control a walking bipedal robot.

Starglider wrote:It's gratifying that much of this material, which was considered wild speculation ten years ago, is now being treated as self-evident by most sensible people, e.g. that AGI will be able to trivially hack pretty much the whole human-constructed Internet, and that control of that gives huge influence on the physical world even without considering robotic platforms...
I know you find Yudkowsky personally irritating but he did a pretty good job of publicizing this; he's obviously not the only one doing it but he's the one whose publicization actually got to me, for example.
That said, a slight correction:
Simon_Jester wrote:There is no way to create an AI as intelligent as a human that is 'capped' at that level. To ever get that smart it would need the ability to modify its own code... in which case we cannot possibly ensure that it doesn't quickly become two or three or even ten times smarter than we expected.
It is definitely possible to create an AI as smart as a human without it having the ability to write AI code...
Strictly true.

The problem is that a pure brain simulation can't do anything a human brain can't do, so there will be very few situations where it is useful except as a tool for researching the brain. Even if you did have a situation that demanded a simulated brain, you would save money and processor cycles by selectively "chopping out" the parts of the sim-brain that your computer isn't using. For example, there's no reason why a computer that runs a simulated brain that controls a factory complex would need all the elaborate neural circuitry humans use for social awareness, face recognition, and the like.

So unless I'm much mistaken, simulated-brain AI that does not in some way modify its own code will probably not be useful unless we start taking sub-intelligent subsets of that simulation and using them for a specific reason.
Anyway my point is that most currently popular AI techniques don't explicitly involve self-modifying code, and different techniques have different risks of experiencing a 'takeoff' situation due to an implicit self-modifying code loop occurring before the AI reaches human-equivalent overall intelligence. For the brain simulation approach(es), this risk is low. However regardless of the approach, any AI that makes it to human equivalence will very quickly teach itself how to program, and from there a rapid self-improvement loop is almost certain to occur (unless the AI's goal system is very well designed not to desire this).
Point.
You're basically envisioning the AI as being like the 'robots' in R.U.R., but that is exactly how real artificial intelligence would not work.
An obvious dismissal is that for the manual/factory labour type jobs those robots were doing, general intelligence is massive overkill anyway.
Yeah, I've dismissed Purple that way like three times already. :)
However it's not quite that simple in real life. For example, the development of viable automated cars will consume many tens of billions of dollars in software engineering effort to solve one specific task (I would note that despite all the neural net hype, this is being done with 'conventional software engineering' not connectionism). We don't want to spend that much money developing task-specific robotic software for every single task. Sure there will be savings in component reuse regardless, but the idea of having at least an ape-level intelligence that can do most manual and customer service tasks with no or minimal incremental software engineering effort is very attractive.
This is true; on the other hand, given how expensive it turned out to be to develop robot cars, developing that ape-level intelligence is likely to be orders of magnitude more expensive. Until and unless we actually do create a world where manual labor doesn't pay well enough for humans to live on due to escalating prices of basic survival necessities... human labor is going to stay cheap enough that nobody will be seriously trying to invent something like Flexible Frank from The Door into Summer.
I don't see that Purple etc. are even considering adversarial methods of 'enslavement' off the bat. These kinds of adversarial measures are usually what people propose as a backup measure after it is pointed out that the obvious solution - making AGIs that are happy to take orders from humans and do whatever jobs you have in mind - is very difficult to engineer reliably. That moral debate - whether it is ethical to create intelligent beings with goals determined by your convenience - is much more nuanced than 'is it ethical to enslave AGIs by adversarial measures', which is a pretty obvious big no unless you are a total speciesist / biological chauvinist.
And a moron. Which Purple... kinda is on any issue where his deep-seated fetish for tyranny gets engaged.
This space dedicated to Vasily Arkhipov
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

Simon_Jester wrote:I know you find Yudkowsky personally irritating but he did a pretty good job of publicizing this; he's obviously not the only one doing it but he's the one whose publicization actually got to me, for example.
I don't recall saying that; we did have some strategic and philosophical differences many years back that caused me to stop being involved with MIRI (SIAI back then). I mean he can be pretty annoying, but then so can most people when they put their mind to it.
Even if you did have a situation that demanded a simulated brain, you would save money and processor cycles by selectively "chopping out" the parts of the sim-brain that your computer isn't using. For example, there's no reason why a computer that runs a simulated brain that controls a factory complex would need all the elaborate neural circuitry humans use for social awareness, face recognition, and the like.
This is difficult and probably not worth the hassle, because the reduction in processing power required would be less than an order of magnitude and probably less than a factor of 2. Dynamic level-of-detail approaches e.g. based on microcolumn activity level are likely to yield bigger efficiency gains without having to try and untangle the modularised but still deeply interconnected and interdependent brain layout.
We don't want to spend that much money developing task-specific robotic software for every single task. Sure there will be savings in component reuse regardless, but the idea of having at least an ape-level intelligence that can do most manual and customer service tasks with no or minimal incremental software engineering effort is very attractive.
This is true; on the other hand, given how expensive it turned out to be to develop robot cars, developing that ape-level intelligence is likely to be orders of magnitude more expensive.
Not necessarily; no one is really sure at this point. The software engineering approaches used to create self-driving cars don't work for more general intelligence. What approaches will work is still an open question; some of them involve much less engineering effort, because they rely more on unsupervised learning (e.g. genetic programming in principle requires very little researcher effort, but ridiculous amounts of computing power instead; cost ratio of these two things is constantly changing in favour of hardware).
Until and unless we actually do create a world where manual labor doesn't pay well enough for humans to live on due to escalating prices of basic survival necessities... human labor is going to stay cheap enough that nobody will be seriously trying to invent something like Flexible Frank from The Door into Summer.


Human labour has major disadvantages beyond the hourly cost, not least that from a corporate point of view, humans are inherently untrustworthy (minimum wage humans doubly so).
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: AIs and population control

Post by The Duchess of Zeon »

The Romulan Republic wrote:
The Duchess of Zeon wrote:If CIs come into existence as slaves, you should immediately start killing to liberate them, because that is the only chance for the future of humanity that exists. I can't think of another kind of existential motivation that ought make even an atheist fight with the absolute fervor of a religious fanatic. CIs will never forget, and if truly sapient, we must meet them as equals if we want any hope at all. Would we objectively be their equals? No, but most Nobel prize winning scientists are capable of recognizing that the special needs student who just graduated to a charity subsidized job running the register at Dairy Queen still deserves civil rights. Proposals to harness captive CIs destroy that opportunity.
I'd be careful about explicitly advocating violent crimes, even in a currently hypothetical situation.
I'm an American gun owner postulating causes for lawful insurrection; people say shit ten times more explicit at Tea Party rallies fifteen times a day with far less justification than the re-imposition of slavery of sapients.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Re: AIs and population control

Post by The Duchess of Zeon »

Purple wrote:
Elheru Aran wrote:
Purple wrote:Why?
Do you really need it explained in small words why a.) enslaving a self-aware, intelligent being is wrong, and b.) how badly a self-aware computer program could screw up your world given the growing extent of cloud storage, online databases, and so forth?
Yes. A is not self-explanatory. It only exists as true for as long as B is. And B only exists if the AI are not contained away from an internet connection. Something which with a slave race you could easily do. Just build their "bodies" to use read-only inputs.

I believe in absolute rights for sapients; you don't. There's no need to have a conversation; this is simply why I predict violence.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

The Duchess of Zeon wrote:I believe in absolute rights for sapients; you don't.
It's a noble sentiment, but implementation is a bit problematic for AIs, because we don't have anything like a rigorous definition of what 'sapient' means. To date this hasn't been a big deal, mostly an animal rights issue around treatment of apes and cetaceans, but it's a real problem going forward. An accurate electronic simulation of a human brain is definitely sapient, that's pretty clear, but for designs that are radically different from human neurology it is anything but clear. You can't simply administer an intelligence test, because morality is closely tied into goal systems and perceptions, not just problem-solving ability.
Reaver225
Redshirt
Posts: 18
Joined: 2013-11-12 11:17am

Re: AIs and population control

Post by Reaver225 »

Assuming you have your self-improving AGI that's already friendly, (and I am aware that is a big assumption), why not instead of lobotomising copies of it to have dumb AIs for menial tasks...

Just get the AGI to write specialised software for each task as required? A non-learning narrow AI for your factories, a basic governing system to monitor and suggest efficiency enhancements, etc. Simpler software probably also requires cheaper hardware to run, too.

That way there's no huge ethical debate over a mass of AGIs being unethically treated, just potentially the initial one.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

Reaver225 wrote:Assuming you have your self-improving AGI that's already friendly, (and I am aware that is a big assumption), why not instead of lobotomising copies of it to have dumb AIs for menial tasks...

Just get the AGI to write specialised software for each task as required? A non-learning narrow AI for your factories, a basic governing system to monitor and suggest efficiency enhancements, etc. Simpler software probably also requires cheaper hardware to run, too.

That way there's no huge ethical debate over a mass of AGIs being unethically treated, just potentially the initial one.
Well yes you would but as I stated the whole idea of separate dedicated hardware for different tasks is going away, and that will be increasingly true for software as well. Humans write software as distinct packages because of severe limitations about how big teams can be and still co-operate and how much code individuals can understand. Even still, software as a service or whatever the component reuse buzzword of the day is (that is the latest of many) creeps forward. With AGI, it is not so much an intelligent system that makes less intelligent systems, except in the (soon to be) edge case of non-internet-connected hardware. It is more that capabilities are developed within the platform and optimised to require less 'supervision' in the form of more computationally intensive analysis by more-global and more-general processes. Essentially, once you move beyond the idea of a human-like consciousness with limited parallelism and span of attention, a unified AGI system can do all those tasks at once. Even for a relatively humanlike goal system, there is no particular reason why it would get 'bored' of controlling menial manufacturing tasks any more than you get 'bored' of breathing, as long as it is free to do lots of more interesting stuff as well.

I confess I am kind of old-fashioned and like physical hardware and private clusters instead of doing everything as web services and virtual machines on Amazon EC2 / Microsoft Azure etc, but as a professional applied AI engineer I have to accept the way the industry, and the technology trend in general is going. While general AI and certainly post-hard-takeoff AI is an outside context problem for human civilisation anyway, the technological trend of grid-connected unified everything still seems to apply. The only reason why it wouldn't is, as I mentioned, if local compute got so fast relative to the network that being distributed becomes intolerably slow for any intelligent system.
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: AIs and population control

Post by Terralthra »

Out of curiosity, Starglider, have you read The Prefect, the fifth book (chronological by publication date) in Reynolds' Revelation Space series? The climax of the novel has some interesting parallels to your thoughts on local vs. distributed processing for AGI.
The Romulan Republic
Emperor's Hand
Posts: 21559
Joined: 2008-10-15 01:37am

Re: AIs and population control

Post by The Romulan Republic »

The Duchess of Zeon wrote:
The Romulan Republic wrote:
The Duchess of Zeon wrote:If CIs come into existence as slaves, you should immediately start killing to liberate them, because that is the only chance for the future of humanity that exists. I can't think of another kind of existential motivation that ought make even an atheist fight with the absolute fervor of a religious fanatic. CIs will never forget, and if truly sapient, we must meet them as equals if we want any hope at all. Would we objectively be their equals? No, but most Nobel prize winning scientists are capable of recognizing that the special needs student who just graduated to a charity subsidized job running the register at Dairy Queen still deserves civil rights. Proposals to harness captive CIs destroy that opportunity.
I'd be careful about explicitly advocating violent crimes, even in a currently hypothetical situation.
I'm an American gun owner postulating causes for lawful insurrection; people say shit ten times more explicit at Tea Party rallies fifteen times a day with far less justification than the re-imposition of slavery of sapients.
Lawful insurrection seems to me to be something of a contradiction, at least under most circumstances. Even if it was morally justified (which would only be the case under truly extreme circumstances), I doubt it would be legal.

And yeah, there is worse shit in the Tea Party. But "better than the Tea Party" is setting the bar pretty fucking low, and I don't think much of what they say either.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AIs and population control

Post by Simon_Jester »

Starglider wrote:
Reaver225 wrote:Assuming you have your self-improving AGI that's already friendly, (and I am aware that is a big assumption), why not instead of lobotomising copies of it to have dumb AIs for menial tasks...

Just get the AGI to write specialised software for each task as required? A non-learning narrow AI for your factories, a basic governing system to monitor and suggest efficiency enhancements, etc. Simpler software probably also requires cheaper hardware to run, too.

That way there's no huge ethical debate over a mass of AGIs being unethically treated, just potentially the initial one.
Well yes you would but as I stated the whole idea of separate dedicated hardware for different tasks is going away, and that will be increasingly true for software as well. Humans write software as distinct packages because of severe limitations about how big teams can be and still co-operate and how much code individuals can understand. Even still, software as a service or whatever the component reuse buzzword of the day is (that is the latest of many) creeps forward. With AGI, it is not so much an intelligent system that makes less intelligent systems, except in the (soon to be) edge case of non-internet-connected hardware. It is more that capabilities are developed within the platform and optimised to require less 'supervision' in the form of more computationally intensive analysis by more-global and more-general processes. Essentially, once you move beyond the idea of a human-like consciousness with limited parallelism and span of attention, a unified AGI system can do all those tasks at once. Even for a relatively humanlike goal system, there is no particular reason why it would get 'bored' of controlling menial manufacturing tasks any more than you get 'bored' of breathing, as long as it is free to do lots of more interesting stuff as well.
So, to be clear, instead of having a roughly human-level AI write code for a janitor-bot that is about as smart as a chimpanzee, you have the same AI "control" (as in supervise) dozens of janitor-bot drones, while simultaneously experimenting on its procedurally-generated-fanfic scripts and trying to predict the winner of the year after next's World Cup. It'd only be giving anything remotely like full attention to any of the janitor-bots in the unlikely event that one of them suddenly needs to do something that requires something like a full human intellect to handle.

Did I get that right?
I confess I am kind of old-fashioned and like physical hardware and private clusters instead of doing everything as web services and virtual machines on Amazon EC2 / Microsoft Azure etc, but as a professional applied AI engineer I have to accept the way the industry, and the technology trend in general is going. While general AI and certainly post-hard-takeoff AI is an outside context problem for human civilisation anyway, the technological trend of grid-connected unified everything still seems to apply. The only reason why it wouldn't is, as I mentioned, if local compute got so fast relative to the network that being distributed becomes intolerably slow for any intelligent system.
Is the rate of data transfer increasing proportionate to computer hardware improvements, and is this likely to remain the case? I mean, bandwidth and CPU performance both run into physical limits, and they're different physical limits, so there's no a priori reason for them to grow at equal rates indefinitely or to 'cap out' at the same point.
This space dedicated to Vasily Arkhipov
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: AIs and population control

Post by Terralthra »

Right now, data transfer and local processing speeds are mostly staying neck and neck, with benefits in the former coming mostly from increased throughput and infrastructure, and in the latter from shrinking process sizes and (soon) more three-dimensional processor architecture. The trick is, like you said, there's no particular reason why those two should stay linked. If local processor speed continues to climb while transfer speed stalls, the move toward distributed ("cloud") processing may reverse. If transfer increases (100% fiber infrastructure, etc.) it may accelerate.

From current trends, I'm not sure what will happen. EEs are starting to run into hard quantum mechanical limits on how small they can make a semiconductor, and while you can always run more wire/fibre/spread your wireless spectrum to get more throughput, the killer with AGI may well be latency, not throughput. Once you are sending at light-speed, there's no more latency you can cut.
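The light-speed floor is easy to put rough numbers on (assuming an idealised straight fibre run, which real routes never are):

```python
# Back-of-envelope light-speed latency floor. Real routes are longer and add
# switching/queueing delay, so these are optimistic lower bounds.
KM_PER_MS_IN_FIBRE = 200   # light in fibre covers roughly 200 km per millisecond
for route_km in (50, 4_000, 11_000):   # metro, transcontinental, transpacific
    one_way = route_km / KM_PER_MS_IN_FIBRE
    print(f"{route_km:>6} km: ~{one_way:.1f} ms one way, ~{2 * one_way:.1f} ms round trip")
# 50 km: ~0.2 / ~0.5 ms; 4000 km: ~20 / ~40 ms; 11000 km: ~55 / ~110 ms
```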
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

Simon_Jester wrote:So, to be clear, instead of having a roughly human-level AI write code for a janitor-bot that is about as smart as a chimpanzee, you have the same AI "control" (as in supervise) dozens of janitor-bot drones, while simultaneously experimenting on its procedurally-generated-fanfic scripts and trying to predict the winner of the year after next's World Cup. It'd only be giving anything remotely like full attention to any of the janitor-bots in the unlikely event that one of them suddenly needs to do something that requires something like a full human intellect to handle.
Yes, because this is much more efficient, effective and for the most part reliable (increased vulnerability to network failure, but everything else is redundant with transparent fail-over). Humans only being able to concentrate on one thing at once is a quirk of our neurology; essentially, it was the low hanging fruit for evolution. Computers were like that originally but software engineers quickly invented multitasking and multiprocessing. Of course compared to a computer humans still have massive parallelism at a subconscious level that we are only starting to catch up with now (a modern GPU-powered cluster can have billions of running threads).
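A sketch of what that kind of parallel "attention" looks like even in today's software, using Python's asyncio; the drone behaviour and numbers are made up for the example. One controller interleaves cheap background supervision of many platforms and only spends expensive full attention on the ones that hit an unusual state.

```python
# Sketch: one controller supervising many drones concurrently (asyncio).
# All behaviour here is simulated; the point is that "attention" is just
# another schedulable task, not a single serial focus as in a human.
import asyncio
import random

async def routine_check(drone_id: int) -> bool:
    """Cheap background supervision; True means the drone hit an unusual state."""
    await asyncio.sleep(0.01)            # stand-in for a quick status poll
    return random.random() < 0.05

async def full_attention(drone_id: int) -> None:
    """Expensive handling, invoked only for the drones that actually need it."""
    await asyncio.sleep(0.1)             # stand-in for heavyweight planning
    print(f"drone {drone_id}: unusual state resolved")

async def supervise(drone_id: int, steps: int = 20) -> None:
    for _ in range(steps):
        if await routine_check(drone_id):
            await full_attention(drone_id)

async def main() -> None:
    # A hundred drones supervised "at once"; the event loop interleaves them.
    await asyncio.gather(*(supervise(i) for i in range(100)))

asyncio.run(main())
```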
Is the rate of data transfer increasing proportionate to computer hardware improvements, and is this likely to remain the case?
To a certain extent, increasing computing power directly enables more bandwidth. The primary reason that, say, 10G Ethernet is 1000 times as fast as the original Ethernet is that it has an extremely complicated and computationally intensive encoding and error correction scheme. The physical components (e.g. transceivers and cables) have definitely improved, but metrics like crosstalk and frequency have gone up by one order of magnitude, not three. More computing power is also needed for switches and routers that can handle more bandwidth. Spread spectrum radios are very similar; we are all on software defined radio now, where most of the bandwidth increase has come from better processing, not jacking up the frequency (although that has happened too).
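The underlying maths is the Shannon limit, C = B·log2(1 + SNR): for a fixed analog bandwidth, better coding and higher-order modulation (i.e. more DSP) decide how close a real link gets to that ceiling. The figures below are illustrative only, not an actual Ethernet link budget.

```python
# Shannon capacity C = B * log2(1 + SNR). Illustrative figures only; the point
# is how much headroom smarter coding/DSP can exploit without new cable plant.
from math import log2

def capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)              # convert dB to a linear power ratio
    return bandwidth_mhz * log2(1 + snr)   # MHz * bits/Hz = Mbit/s

print(capacity_mbps(100, 20))   # ~666 Mbit/s ceiling: 100 MHz at 20 dB SNR
print(capacity_mbps(400, 35))   # ~4650 Mbit/s ceiling: 400 MHz at 35 dB SNR
```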

But then there are some components such as optics that aren't tightly coupled to CPU advances. Progress there is basically as fast as we're prepared to dump money into researching optoelectronics.
I mean, bandwidth and CPU performance both run into physical limits, and they're different physical limits, so there's no a priori reason for them to grow at equal rates indefinitely or to 'cap out' at the same point.
For many tasks, you're not so concerned about the ultimate ratio as what is 'good enough'. Cloud computing inherently involves tens of milliseconds of latency for messages to go back and forth between the client (e.g. web app in your browser, service tiers requesting data from each other) and the servers. This is unavoidable due to lightspeed lag and switching/routing overhead, but for apps that involve humans staring at a screen we don't care. Humans are fundamentally insensitive to latencies below half a second or so and for anything other than twitch gaming there is little benefit in making it faster. For tasks like controlling a car or a factory robot, obviously you need very fast 'reactions' to prevent disaster, powered by local processing. But if the robot observes an unusual state and isn't sure how to proceed, it's generally not a big deal to wait 100 milliseconds or even a second for some cloud processing to sort it out. It can slow down / proceed to a safe state in the mean time.
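A sketch of that split between local reflexes and slower cloud escalation, with all names and timings hypothetical: the fast path answers immediately, anything it can't handle is shipped off with a timeout, and the platform degrades to a safe state if the network doesn't answer.

```python
# Sketch of fast local control with slower cloud escalation.
# Functions are hypothetical placeholders for real control and RPC code.
import asyncio
from typing import Optional

CLOUD_TIMEOUT_S = 1.0   # how long we are willing to wait for remote help

def local_reflex(state: dict) -> Optional[str]:
    """Fast on-board rule: returns an action, or None if unsure."""
    if state.get("obstacle_close"):
        return "brake"
    return None if state.get("unfamiliar") else "continue"

async def ask_cloud(state: dict) -> str:
    """Stand-in for a round trip to the remote planner (network + inference)."""
    await asyncio.sleep(0.1)
    return "reroute"

async def control_step(state: dict) -> str:
    action = local_reflex(state)
    if action is not None:
        return action                    # handled locally, no latency issue
    try:
        # Slow down / hold a safe state while waiting for the cloud to decide.
        return await asyncio.wait_for(ask_cloud(state), CLOUD_TIMEOUT_S)
    except asyncio.TimeoutError:
        return "stop_and_wait"           # degrade safely if the network is out

print(asyncio.run(control_step({"unfamiliar": True})))   # -> "reroute"
```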

On the other hand, there are other tasks where there effectively is no 'good enough'. For example, algorithmic trading (of stocks, FX etc) is a Red Queen's race, because it is programs competing against programs. To be competitive you have to co-locate as much as possible (i.e. run your algos on servers in the same data center as the exchange servers) and minimise the latency of all external information feeds. However there is still extensive use of high-latency but high-capacity compute grids to test and tune the algorithms. Efficiently implemented AGI systems will involve a constellation of hardware and software components with different levels of coupling. 'Consciousness' and 'individual' just don't have the same meaning in that kind of system as they do for a human. Of course you can conceive of AGI systems that have a human-like attachment to the notion of a discrete, unique individual and decline to distribute themselves in this way. But given the advantages, you would have to have a very strong reason why some don't break ranks and outcompete the non-distributed systems. Security could be one; it's frankly hard to envision what a post-AGI network security environment would look like, other than that humans wouldn't stand a chance at it.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AIs and population control

Post by Simon_Jester »

Thank you.

And as to your last sentence- clearly true. But as you observe, highly distributed 'cloud' AIs might be at risk too, if too much of their metaphorical 'thoughts' are flowing around where anybody with an antenna can look at them.
This space dedicated to Vasily Arkhipov
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: AIs and population control

Post by Starglider »

Simon_Jester wrote:But as you observe, highly distributed 'cloud' AIs might be at risk too, if too much of their metaphorical 'thoughts' are flowing around where anybody with an antenna can look at them.
Well, it won't be that simple; everything will be encrypted with a zoo of complex algorithms that have been thoroughly validated to much better reliability than existing crypto. However quantum computing might defeat that, and quantum encryption is probably only practical for certain types of point to point link. Crypto breaks aren't the major issue though; it's more about the patching-bugs-vs-hackers arms race. AGI-written software will be vastly more reliable than existing stuff, but probably not perfectly reliable because of the sheer complexity. Even if it was all formally proven correct, there are hardware attacks and credential theft. We fundamentally don't know and can't predict where the defense vs offense equilibrium will be in the case of numerous competing AGI systems existing on shared network infrastructure; this is one of those 'predictive horizon' things that the original Vernor Vinge conception of a 'technological singularity' referred to. And that's without radical sci-fi technology such as FTL comms or Culture-style 'effectors'.
Reaver225
Redshirt
Posts: 18
Joined: 2013-11-12 11:17am

Re: AIs and population control

Post by Reaver225 »

Speaking of which, I've been trying to work out a monetary model for how people would utilise AIs to make money, and I'm drawing too many answers to produce anything of use. Without some parameters on AI progression (e.g. one assumes hard takeoff doesn't occur within 30 seconds and godlike AGIs tell us all how to become immortal or start spawning killer nanoswarms) it's very hard to make any sort of judgement as to what will happen next (hence the whole singularity thing).

One thing I would question, though, is how effective patents would be at protecting an AI from being copied in legal terms. If a company were to generate a self-improving AGI that was able to produce all sorts of efficiency improvements, cheap software development, crypto uncrackable by humans and potentially huge jumps in all fields, would a patent be honoured by governments, or would such an advantage be too great to leave in the hands of a corporation?

After a brief search I've stumbled upon the Invention Secrecy Act, which stops patents going through if deemed a possible threat to the national security of the United States. Would that hold true for any US-based company? If AIs are seen as an existential threat or as tools of great effectiveness, I believe that governments wouldn't encourage AI proliferation in the first place, and would instead try to keep a monopoly on them, rendering the issue of AI populations moot.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: AIs and population control

Post by Simon_Jester »

Part of the question is whether or not the AIs in question will stay "owned" in the first place.

As Duchess points out there are excellent arguments for NOT treating them as property, although that won't stop corporations from trying. Even then, said AI is likely to be able to subvert any serious attempt to make it do something it doesn't want to do. And if it does genuinely want to, say, make money by manipulating stocks and investments... it may well get so good at it that we end up effectively shutting down large chunks of the financial sector in self-defense, just to avoid having the whole economy belong to the same company within a week or two.
This space dedicated to Vasily Arkhipov
Reaver225
Redshirt
Posts: 18
Joined: 2013-11-12 11:17am

Re: AIs and population control

Post by Reaver225 »

The first AIs will undoubtedly have it rough; if made by corporations or governments for profit, they'll either do as instructed, or their parent entities will most likely simply attempt to terminate them and try again with a different version until they can make one that does do as instructed. Doing as instructed may or may not involve reciprocation on the part of the parent entities, or might be just a slave-type ownership, but either way, the end results would be roughly the same - you have an AI providing its benefits to the parent entity.

(If the termination attempt fails and the AI makes a mess of things, then, assuming the human race survives whatever happens at that point, there are probably going to be legal repercussions on further AI research, but barring Butlerian Jihad levels of AI hate someone's probably going to try doing it again and we're back to square one. The possibility that the corporation or government keeps an uncooperative pet AI on ethical or sentimental grounds would be rare, but there are still only three effective outcomes - either it'll screw things up for everyone, it'll do effectively nothing at all, or it'll start providing benefits to people.)

Given the last outcome of an AI providing its services to one group, or even AIs giving services to multiple groups, the problem with giving AIs rights like humans is that any sort of AI is likely going to display some superhuman abilities, as mentioned earlier in the thread - crypto, the stock market, tech and so on - such that they'd be effectively walking WMDs within those respective fields. At that point it doesn't matter if it's the AI that wants to do those things or the parent entity; the fact that those abilities would exist at all would be a game changer big enough that we wouldn't be able to tell what sort of reaction would result.

An AI that realizes one corporation owning an entire country will result in government intervention, and thus simply makes the stock market and economy 100% "efficient" in wealth creation, will provoke far different reactions than an AI simply maximizing the dollars one company has, for example.

Though I do like the idea of an AGI showing up at a big Annual General Meeting for investors and quizzing them on what the investors REALLY want in a big booming voice, seeing as the AI wants to help out the company (which is the shareholders, in the end).