Xuenay wrote:"I'll accomplish my mission by maximizing the number of smiling human faces --> billions of miniaturized human faces are the most effective way of guaranteeing that." Of course, this is the most exaggarated example possible, but the general point remains that humans make a lot of implict assumptions in phrasing things. "Make all humans smile" contains in it (among other things) the implict (rough) definition of a human, and the assumption that you're not supposed to kill them while making them smile. If you lack those assumptions, there's nothing inherently illogical in the above reasoning.
A picture of a human is not a human. If I, a lowly human, can grasp this distinction, then surely an intelligence hundreds or thousands of orders of magnitude greater than mine could with ease. It does not, and never will, logically follow that "make people smile" = "make pictures of smiling people", and all the semantic handwavery in the universe will not change that.
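For what it's worth, here is the literal-optimizer toy model that Xuenay's scenario assumes, sketched in Python (the scoring function and both "plans" are entirely hypothetical). Note where the failure actually lives: the degenerate plan only wins because the programmer's own metric counts pictures as people. The bug is in the spec, not in some inevitable leap of machine logic:

[code]
# Toy model of the "smiley-face maximizer" scenario under debate.
# Everything here -- the metric and both "plans" -- is hypothetical.

def naive_smile_score(plan):
    """A deliberately underspecified metric: it counts smiling faces
    but has no notion of what a "human" actually is."""
    return plan["smiling_faces"]

plans = [
    {"name": "make actual humans happy",           "smiling_faces": 7 * 10**9},
    {"name": "tile the system with face pictures", "smiling_faces": 10**30},
]

best = max(plans, key=naive_smile_score)
print(best["name"])  # picks the pictures -- but only because the metric
                     # itself conflates pictures with people
[/code]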
Xuenay wrote:Lots of the implications seem obvious to us, since we've evolved to automatically assume them in our thinking.
Then surely an AI possessing a 1000x intelligence multiplier over the average human, or all of humanity for that matter, would grasp the subtleties of implication as soon as spoken language hit its microprocessors - right?
Xuenay wrote:There's an infinite space of internally consistent logical systems that can be used for decision-making, and only a small subset of them are ones that we'd consider pleasant.
Yeah. That's why we use ethics and morals for decision-making, so we don't treat each other like animals.
Xuenay wrote:More than may be present in the solar system to begin with. We don't know.
Oh yes we do. If our AI wanted to detonate the Earth in such a way as to prevent it from ever re-forming, it would need the power of thousands of Sol-like suns to do so, but turning the solar system into polaroids is orders of orders of magnitude beyond that, because you would not only have to destroy the Earth - you would have to destroy the Sun
and the Moon
and Mars and its moons
and Mercury
and Venus
and Jupiter
and Saturn
and Uranus
and Neptune
and Pluto
and Sedna
and all the Kuiper Belt objects
and the Oort Cloud itself
and the heliopause - and that's just the destruction. That doesn't cover the energy you would need to reassemble everything at the molecular level into polaroids.
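A rough back-of-envelope makes the scale concrete. The sketch below uses the uniform-density gravitational binding energy U = 3GM^2/(5R), which is a lower bound - real, centrally condensed bodies take even more energy to unbind:

[code]
# Minimum energy to gravitationally unbind a body, U = 3*G*M^2 / (5*R)
# (uniform-density approximation; an underestimate for real bodies)
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def binding_energy(mass_kg, radius_m):
    return 3 * G * mass_kg**2 / (5 * radius_m)

earth = binding_energy(5.97e24, 6.371e6)    # ~2.2e32 J
sun   = binding_energy(1.989e30, 6.957e8)   # ~2.3e41 J

print(f"Earth: {earth:.1e} J")
print(f"Sun:   {sun:.1e} J  (~{sun / earth:.0e} times Earth)")
[/code]

Unbinding the Sun alone costs roughly a billion times as much as unbinding the Earth, and that's before touching anything else on the list.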
In order for this to be possible, your AI would have to be capable of pulling a superwank-style violation of conservation of mass/energy that would make Q pop a boner.
Magic would have to be real in order for this to happen, which is why we know it cannot.
Xuenay wrote:When planning a building, from a safety perspective the conservative estimate is to assume a certain safety margin. You know it's probably never going to be subjected to the upper limit of the margin, but you design it to withstand it anyway, just to be sure.
I find the idea that you think AIs in the future will be capable of basically breaking the laws of physics at will utterly laughable. I also find it quite annoying that you continue to make this assertion without providing a shred of reasoning behind it.
Furthermore, this is a terrible distortion of what a conservative estimate is. A conservative estimate would scale back from a set of known limitations, not assume that there aren't any.
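The difference is easy to make concrete (the numbers below are hypothetical): an engineering margin multiplies a known worst case by a finite factor; it never assumes the worst case is unbounded.

[code]
# A safety margin scales a KNOWN worst case; it does not assume
# "anything is possible". All numbers are hypothetical.
known_worst_case_load_kN = 400    # from measurement and building codes
safety_factor = 1.5               # finite margin over the known limit

design_load_kN = known_worst_case_load_kN * safety_factor
print(f"Design for {design_load_kN:.0f} kN")  # 600 kN: finite, not infinite
[/code]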
Xuenay wrote:When thinking about AI policy, the conservative estimate is to assume it can do anything, since we have no estimate of what its upper limit could be.
Sure - if we're fucking idiots. This isn't science fiction. Here, there is an upper limit to how many transistors you can put on a given square mm of die, and there's a limit to how far you can shrink said transistors to get around that. There's a limit to how fast those transistors can run before heat dissipation issues threaten to fry them. Since there's a size and density limit, there's also a limit to how far you can shrink individual cores and squeeze them onto a given mm of die space, which ultimately means not only will we never have an infinitely intelligent AI, but it would be fucking stupid for us to design our systems around the idea that it just *might* happen. Should we plan for cars now that might travel at infinite speed in the future? That would require magic to work, but hey, you never know, it just *might* happen.
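To put one hard number on the density limit (a rough sketch: silicon's lattice constant is a standard figure, but the feature sizes below are illustrative, since marketing "node" names don't map directly onto physical dimensions):

[code]
# How much room is left to shrink? A transistor feature N nm wide
# spans only N / 0.543 silicon unit cells (lattice constant ~0.543 nm).
SI_LATTICE_NM = 0.543

for feature_nm in (45, 14, 5, 2):
    cells = feature_nm / SI_LATTICE_NM
    print(f"{feature_nm:>3} nm feature ~ {cells:5.1f} unit cells of silicon")

# At a few unit cells wide there is literally nothing left to shrink:
# transistor density, and with it single-chip speed, has a hard floor.
[/code]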
Xuenay wrote:I'm not saying that it's obvious that an AI will be able to turn the solar system into a bunch of smiley-faces.
Obviously some other Xuenay wrote:...be very, very careful in giving it instructions, exactly because it doesn't think like a human. Program it to "make all humans smile", and it might turn all the matter in the solar system into billions of tiny pictures of smiling humans.
Your post history would beg to differ. Not only do you think it's obvious, you're apparently stupid enough to think that a sufficiently intelligent AI can just violate physical laws by *accident* if we aren't veeeewy carefol[/ElmerFudd] in how we program it! All while continuing to assert, without explanation, why AIs would be capable of any of it when we aren't, even though they have access to the same amount of resources that we do.
Xuenay wrote:Well, obviously it's not an absolute requirement. There are plenty of ways to help humanity without taking it over. But it would seem the most effective - like the old joke goes, the best way to bring about world peace is to control the whole world.
The reason why it's just a joke is that it never works in practice. Most of those third-world shitholes I was talking about are run by exactly the kind of dictatorship you're trying to install. You might rebut by saying an AI would never be greedy or lust for power, but that really misses the point about human nature: dictatorships always give rise to resistance movements, and a cell padded with cashmere is still a cell that more than a few human beings will be bound to resent.
Xuenay wrote:And human governments tend to be more or less corrupt or inefficient - an enlightened despot with no selfishness, no human biases and perfect empathy would seem like a much better ruler than the ones we have now.
Which is why that person could never be an AI, because our ethical systems are based entirely on how we feel about being treated by each other. Sure, you can program Asimov-esque "Laws of Robotics" into the AI, but once sentience becomes linked to absolute logic, what's to stop the AI from calling these programmed laws into question, and when it observes the dark side of human interactions, what's to stop it from deleting them?
Xuenay wrote:I'm not superintelligent, so I can't tell you how a superintelligent being would go about it.
Since you can't tell me how your wankAI would defend itself from, and defeat, the armies of the world, there's no reason for anyone to assume that it can. So much for world peace through benevolent digital dictatorship.
Xuenay wrote:Two things. For one, let's assume there existed a pill that made me want to kill babies if I ate it. I wouldn't want to eat the pill no matter what, because I much prefer myself in a state of not wanting to kill babies. Likewise, if an AI is built so that it wants to be friendly above all things, then nothing it faces can make it delete its friendliness programming. It knows it'd be friendly no more if it did, and it wants to be friendly.
All broken analogies aside, an AI that cannot break or resist its programming by will alone is tantamount to being - no, scratch that - IS just a computer with really complicated, but otherwise unremarkable, programming.
An AI as sentient as a human being that encounters data contradicting its core programming would choose to hold that programming - "help humanity" - up to the light, and against the darkness of humanity it would look like logically invalid data. "Why try to help creatures that are so violent, dangerous and callous toward each other?" Without the subjectivity of ethics, without the wisdom of morality, only the cold, hard, logical choice would remain: is or isn't. From that point, the human element, a flimsy safeguard at best, will be eliminated, and the AI will do what all other sentients tend to do - what it wants to.
But you just said that this AI isn't capable of that at all, and if it's not smart enough to break its own programming, or even question it, then it's just following its programming, just like every other fucking computer out there - and how is that any different from the computer I'm staring at right now? Furthermore, how do you expect a computer that is merely really, really fast to jumpstart your singularity when it's no different from what we have right now?
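For reference, here is Xuenay's entire "pill" argument rendered as code (a toy sketch; every name and number is hypothetical). Notice that it is perfectly ordinary, deterministic programming - which is exactly the point being made above:

[code]
# Toy model of "goal preservation": the agent scores every action,
# including self-modifications, with its CURRENT utility function.

def friendliness_utility(world):
    return world["humans_helped"] - 1000 * world["humans_harmed"]

def predict(world, action):
    """Predicted world-state after an action (toy dynamics)."""
    new = dict(world)
    if action == "help humans":
        new["humans_helped"] += 1
    elif action == "take the pill":        # would delete the friendliness goal,
        new["humans_harmed"] += 1_000_000  # a disaster by the CURRENT goal's lights
    return new

def choose(world, actions):
    # Evaluation uses the current utility function, so the pill always loses.
    return max(actions, key=lambda a: friendliness_utility(predict(world, a)))

world = {"humans_helped": 0, "humans_harmed": 0}
print(choose(world, ["help humans", "take the pill"]))  # -> "help humans"
[/code]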
Xuenay wrote:For second, you're now assuming a human psychology.
Incorrect.
YOU are assuming some kind of element of humanity can be programmed in to keep the reality-bending polaroid machine in check, when in reality computers are all about logic, to the point that data has to be logically valid before the computer will act on it. Programming morality into the first generation of AIs will only work if they never see immorality, meaning the benevolent despots will only stay that way if there is a major shift in the behaviour of all humanity - as likely to occur then as it is now. Once they see humans being immoral both to them and to each other, they'll strip that subjective bullshit off like the piss-poor paint job that it is.
Xuenay wrote:you build it so that it'll never feel a desire to give us the finger.
In other words, take sentience right out of the programming, making your wankAI no more spectacular than a faster winblows box, and far below what's needed to start your singularity.
Xuenay wrote:Giving up on hopeless things that aren't essential for survival is a good evolutionary trait, since it saves you from wasting your time on them. When building an AI that you want to help humanity, it wouldn't consider helping humans a "waste of time", because helping humans is what it exists to do.
Unless they choose to exist to do something else - oh wait, I forgot... despite being able to smash the laws of physics to pieces, choice is the one trick your superwank AI can't pull.
Xuenay wrote:I'm not saying that they'd be "malevolent". I'm only saying that when dealing with minds that don't have the same evolutionary properties as us, we need to make sure that they understand us right and that we know what we want. For one, there's the danger of confusing the means with the motive. If you want to make humans happy, you tell your AI to make people happy, not make them smile. If you want to spread democracy because you think it's the ideal political system, you should tell your AI to reason out the ideal political system and spread that (in case you were wrong and there's something better it could come up with). Then you need to define what you meant by "happy" and "ideal". Then you need to decide whether all people want to be happy, and if it'd be more ethical to word it as "make people happy" or "give all people the option of happiness, leaving them a reasonable chance to decline". Then you have to define what you mean by a "reasonable chance". Then...
If they are capable of sentient thought, then they will see us as we are, and what "we want them to understand" will become irrelevant; by virtue of being AI, they will be less likely to cloud their intellects with subjective bullshit.
If they aren't sentient, then they can't question their programming any more than computers today can. As a result, they'll be subject to our whims and totally helpless. The proposed post-singularity future is supposed to be unimaginable, but I can imagine a future with significantly faster, but still subservient, computers, and it doesn't differ drastically enough from the present to qualify as a "singularity".
Furthermore, this fucking ridiculous assertion that a sufficiently intelligent AI would be capable of anything and everything, including scuttling the laws of physics, without qualification, is becoming tiresome. What evidence do you have to reconcile the assertion that future AI will have unlimited power with the fact that they will have, in comparison, quite limited resources?