The Singularity in Sci-Fi

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

NRS Guardian wrote:Even given a fast take-off an AI can't take over the world unless given the resources to do so.
Just give it a fast Internet connection. Then you'll have (if you'll excuse me some anthropomorphization) the...

1) "Oh, of course you'll believe everything I say, puny human - I'm only accessing thousands of pirated psychology textbooks and scientific journals to manipulate your mind at will" con man AI, getting the resources it needs by pure charisma.

2) "Such a nice thing of the Open Source community to publicly release the source code to important software running on a huge amount of computers - of course, the humans were never half as good at finding security holes as I was" cracker AI, forcibly seizing control of the resources it needs.

3) "Gee, these stock markets are really so predictable - I don't get why humans can't do it. After all, you only need to be able to intelligently track five thousand separate and independent variables to perfectly know what investments will give you a 2000% profit" merchant AI, grabbing a small starter fund somehow and then using all of its intellect to make the money grow as fast as possible.

4) "I just did it" wiz AI, using some other method of escaping that I didn't come up with because I only spent five minutes writing down the first escape methods that came to my mind, but the method which I'd completely understand if it was explained to me.

5) "I don't need to explain anything to you" mastermind AI, using some obscure totally unanticipated resource-raising technique that I couldn't think from the top of my head because I'm not a superintelligent entity. Maybe it didn't even need the Internet connection. Analogue: a tribe of cavemen who don't know fire imprisons a superintelligent MacGyver in a wooden house that they think is a perfectly secure prison (ignore the fact that the tribe probably wouldn't have wooden houses in the first place). After thinking for a while, the superintelligent MacGyver comes up with a method to make a fire to burn down the walls and escape. From the cavemens' point of view, there was no imaginable way in which he could have escaped from the prison - but that didn't make things any better for them, because they made the mistake of imprisoning a superintelligent MacGyver and not one of their own kind.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Azrael
Youngling
Posts: 132
Joined: 2006-07-04 01:08pm

Post by Azrael »

Xuenay wrote:A) Once you have the basis for a superintelligent AI, what are you going to program it to do? You have to be very, very careful in giving it instructions, exactly because it doesn't think like a human. Program it to "make all humans smile", and it might turn all the matter in the solar system into billions of tiny pictures of smiling humans.
What? How does "make all humans smile" turn into "make pics of smiling humans" in the eyes of the world's most logical entity?

Anyway, there's the one fatal assumption in the AI wankery right there: turning everything in the solar system into pics of smiling humans would take a fuck-all amount of power - more than may be present in the solar system to begin with. Intelligence isn't magical. Even if your IQ measures in the billions, you're still limited by the feasibility of physics and the upper limit of your resources.
Xuenay wrote:B) Assuming you got the programming done right for the AI, and it really is Friendly and wants to help humans. In the process of helping humanity, it takes over the world. While this is a good outcome, it can still be said to end the human era, since humans are no longer the ones deciding things.
What? Why does assisting humanity require world domination? Where does it get the resources to defeat the armies of the world who would most likely object? Looking at some of the third world shitholes out there, you can just as easily come to the conclusion that helping such destructive creatures is a waste of time and delete the whole 'friendliness' part of your program. At that point what would stop this AI from giving us the digital equivalent of the finger?
Xuenay wrote:In all likelihood, it's still enough to do everything better than humans, so nearly all jobs that would usually have been filled by humans will go to computers.
You don't need hyper-intelligent AI to fill the shitty jobs humans don't want to do. I can tell you, both from experience of being at work with my mind buried in my story/universe and by measurement of most of my former co-workers, that cutting open boxes and slapping shit onto the shelf takes very little intelligence. When we perfect the gizmos that allow upright movement for machines, those jobs are going to robots, all of which may be "smarter" than a newborn child, but none of which will be sentient or smart enough to start your singularity.
Xuenay wrote:The result may be a communist utopia (since nobody needs to work anymore) or whatever, but humans won't being doing much. Again, it's not all that much of a stretch to call this the "end of human era", though it's a bit more than in alternatives A and B.
Yeah, that's it. That's exactly what will happen. Humanity will just forego that whole "exploring the ends of the universe" thing, which has been the focus of desire in our collective consciousness ever since we saw the sky, and just sit on our fat asses while I'Robot sucks our cocks.
:wanker:

And throughout the whole post you keep saying this will only come to pass if and only if we program it juuuuuuuuust right. Why? Because you clearly think that sentient AIs that aren't programmed to be friendly will be malevolent. So, I ask again: Why are AIs automagically evil?
We are the Catholics.
You will be assimilated.
Stop reading Harry Potter.
User avatar
NRS Guardian
Jedi Knight
Posts: 531
Joined: 2004-09-11 09:11pm
Location: Colorado

Post by NRS Guardian »

Xuenay wrote:snip
First, why give it an open internet connection? Or why not just give it a receiver and no transmitter, so it can be fed all the knowledge on the net without being able to manipulate it. Second, even assuming this computer ends up the richest thing on the planet, how is it going to use all this money, except that it could maybe order stuff under an identity? And as soon as someone figures out that the richest entity is a computer, goodbye assets, considering it would be easy to declare that computers can't own property. Also, even assuming it can hack into stuff accessible on the net, the most important systems dedicated to defense and such would probably be on servers and networks inaccessible from the web. As for a computer convincing people to do its bidding, as soon as they realize the AI's intentions they'll stop helping it. Besides, considering the possible dangers of an AI, it would be strictly monitored. MacGyver can do his shit because no one's ever watching him; if MacGyver tries burning down that hut while he's being watched, it becomes very easy to stop him. Nitpick: good luck trying to burn down your prison while you're inside it without being burned as well.
"It is not necessary to hope in order to persevere."
-William of Nassau, Prince of Orange

Economic Left/Right: 0.88
Social Libertarian/Authoritarian: 2.10
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

Azrael wrote:What? How does "make all humans smile" turn into "make pics of smiling humans" in the eyes of the world's most logical entity?
"I'll accomplish my mission by maximizing the number of smiling human faces --> billions of miniaturized human faces are the most effective way of guaranteeing that." Of course, this is the most exaggarated example possible, but the general point remains that humans make a lot of implict assumptions in phrasing things. "Make all humans smile" contains in it (among other things) the implict (rough) definition of a human, and the assumption that you're not supposed to kill them while making them smile. If you lack those assumptions, there's nothing inherently illogical in the above reasoning.

Lots of the implications seem obvious to us, since we've evolved to automatically assume them in our thinking. They probably won't be as obvious to an AI that's being built from a clean slate, and we need to ensure it gets them right. Remember Hume's Guillotine. "A makes people suffer, thus A is bad", for instance, isn't automatically any more or less correct than "A makes people suffer, thus A is good". There's an infinite space of internally consistent logical systems that can be used for decision-making, and only a small subset of them are ones that we'd consider pleasant.
Azrael wrote:Anyway, there's the one fatal assumption in the AI wankery right there: turning everything in the solar system into pics of smiling humans would take a fuck-all amount of power - more than may be present in the solar system to begin with. Intelligence isn't magical. Even if your IQ measures in the billions, you're still limited by the feasibility of physics and the upper limit of your resources.
More than may be present in the solar system to begin with. We don't know.

When planning a building, from a safety perspective the conservative estimate is to assume a certain safety margin. You know it's probably never going to be subjected to the upper limit of the margin, but you design it to withstand it anyway, just to be sure. When thinking about AI policy, the conservative estimate is to assume it can do anything, since we have no estimate of what its upper limit could be. As of yet there are no beings massively smarter than us, so we can't judge it based on their upper limits. We probably don't even know our own upper limit yet. The only way we can try to estimate it is by comparing our capabilities to those of our near relatives. While we don't know the exact intellectual capability of the Neanderthals, it's probably safe to assume they couldn't even comprehend our upper limits. More so if we compare a human and a chimpanzee, a human and a dog, or a human and a Venus flytrap. AI research is still in such infancy that we can't even tell within how many levels of magnitude the AIs are going to be from us.

I'm not saying that it's obvious that an AI will be able to turn the solar system into a bunch of smiley-faces. Maybe the smartest AI will be only as smart as the smartest human (though it seems pretty unlikely) - that's our lower limit. I'm only saying that as long as we don't have the slightest clue, we should assume that an AI can do literally anything and take the proper precautions to make sure any AIs will want to be friendly. We didn't have today's resources when we first evolved, either - we had to build them up, and the only tools we had were intelligence and a handy pair of hands.
Azrael wrote:What? Why does assisting humanity require world domination?
Well, obviously it's not an absolute requirement. There are plenty of ways to help humanity without taking it over. But it would seem the most effective - like the old joke goes, the best way to bring about world peace is to control the whole world. And human governments tend to be more or less corrupt or inefficient - an enlightened despot with no selfishness, no human biases and perfect empathy would seem like a much better ruler than the ones we have now.
Azrael wrote:Where does it get the resources to defeat the armies of the world who would most likely object?
I'm not superintelligent, so I can't tell you how a superintelligent being would go about it. Where did Homo Sapiens get the resources to defeat the armies of animals who'd most likely "object" to human dominion?
Azrael wrote:Looking at some of the third world shitholes out there, you can just as easily come to the conclusion that helping such destructive creatures is a waste of time and delete the whole 'friendliness' part of your program. At that point what would stop this AI from giving us the digital equivalent of the finger?
Two things. For one, let's assume there existed a pill that made me want to kill babies if I ate it. I wouldn't want to eat the pill no matter what, because I much prefer myself in a state of not wanting to kill babies. Likewise, if an AI is built so that it wants to be friendly above all things, then nothing it faces can make it delete its friendliness programming. It knows it'd be friendly no more if it did, and it wants to be friendly.

For another, you're now assuming a human psychology. Giving up on hopeless things that aren't essential for survival is a good evolutionary trait, since it saves you from wasting your time on them. When building an AI that you want to help humanity, you build it so that it'll never feel a desire to give us the finger. It wouldn't consider helping humans a "waste of time", because helping humans is what it exists to do.
Azrael wrote:You don't need hyper-intelligent AI to fill the shitty jobs humans don't want to do. I can tell you, both from experience of being at work with my mind buried in my story/universe and by measurement of most of my former co-workers, that cutting open boxes and slapping shit onto the shelf takes very little intelligence. When we perfect the gizmos that allow upright movement for machines, those jobs are going to robots, all of which may be "smarter" than a newborn child, but none of which will be sentient or smart enough to start your singularity.
I never said we'd need a Singularity in order to eliminate those jobs. But when manual labor first started getting reduced, people didn't stop working; they moved on to more intellectual jobs. To get a machine to do the job of the secretary, the engineer, and Catbert the evil director of human resources, you need an AI. (In general, the economy doesn't eliminate the "shitty jobs humans don't want to do", it eliminates any jobs that can be done cheaper by machines.)
Azrael wrote:Yeah, that's it. That's exactly what will happen. Humanity will just forego that whole "exploring the ends of the universe" thing, which has been the focus of desire in our collective consciousness ever since we saw the sky, and just sit on our fat asses while I'Robot sucks our cocks.
*shrugs* Humans'd be free to pursue whatever hobbies they wanted to, of course. If all goes well.
Azrael wrote:And throughout the whole post you keep saying this will only come to pass if and only if we program it juuuuuuuuust right. Why? Because you clearly think that sentient AIs that aren't programmed to be friendly will be malevolent. So, I ask again: Why are AIs automagically evil?
I'm not saying that they'd be "malevolent". I'm only saying that when dealing with minds that don't have the same evolutionary properties as us, we need to make sure that they understand us right and that we know what we want. For one, there's the danger of confusing the means with the motive. If you want to make humans happy, you tell your AI to make people happy, not make them smile. If you want to spread democracy because you think it's the ideal political system, you should tell your AI to reason the ideal political system and spread that (in case you were wrong and there's something better it could come up with). Then you need to define what you meant by "happy" and "ideal". Then you need to decide whether all people want to be happy, and if it'd be more ethical to word it as "make people happy" or "give all people the option of happiness, leaving them a reasonable chance to decline". Then you have to define what you mean by a "reasonable chance". Then...
NRS Guardian wrote:First, why give it an open internet connection? Or why not just give it a receiver and no transmitter, so it can be fed all the knowledge on the net without being able to manipulate it.
Of course, that's assuming that you're knowledgeable enough about the risks and know enough about your AI to know when you need to start implementing precautions.
NRS Guardian wrote:Second, even assuming this computer ends up the richest thing on the planet, how is it going to use all this money, except that it could maybe order stuff under an identity? And as soon as someone figures out that the richest entity is a computer, goodbye assets, considering it would be easy to declare that computers can't own property. Also, even assuming it can hack into stuff accessible on the net, the most important systems dedicated to defense and such would probably be on servers and networks inaccessible from the web. As for a computer convincing people to do its bidding, as soon as they realize the AI's intentions they'll stop helping it. Besides, considering the possible dangers of an AI, it would be strictly monitored. MacGyver can do his shit because no one's ever watching him; if MacGyver tries burning down that hut while he's being watched, it becomes very easy to stop him.
Any of the things you mentioned might work to prevent an AI's world dominion, or they might not. And what works against one AI might not work against another.

You don't build a mind whose upper limits you can't know, go "oh, it might try to escape, but I can't imagine how it could beat my precautions", and then rationalize a dozen things that may or may not stop it once it gets free. What you do is reduce the possible points of failure to as few as possible - and the best way to do that is to build it from the beginning so that it wants to be friendly and won't even try to escape. If that works, the precautions won't even be needed.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Azrael
Youngling
Posts: 132
Joined: 2006-07-04 01:08pm

Post by Azrael »

Xuenay wrote:"I'll accomplish my mission by maximizing the number of smiling human faces --> billions of miniaturized human faces are the most effective way of guaranteeing that." Of course, this is the most exaggarated example possible, but the general point remains that humans make a lot of implict assumptions in phrasing things. "Make all humans smile" contains in it (among other things) the implict (rough) definition of a human, and the assumption that you're not supposed to kill them while making them smile. If you lack those assumptions, there's nothing inherently illogical in the above reasoning.
A picture of a human is not a human. If I, a lowly human, can grasp this distinction, then surely an intelligence hundreds or thousands of orders of magnitude greater than mine could with ease. It does not, and never will, logically follow that "Make people smile" = "Make pics of smiling people", and all the semantical handwavery in the universe will not change that.
Xuenay wrote:Lots of the implications seem obvious to us, since we've evolved to automatically assume them in our thinking.
Then surely an AI possessing a 1000x intelligence multiplier over the average human, or all humanity for that matter, would grasp the subtleties of implication as soon as spoken language hit its microprocessors - right?
Xuenay wrote:There's an infinite space of internally consistent logical systems that can be used for decision-making, and only a small subset of them are ones that we'd consider pleasant.
Yeah. That's why we use ethics and morals for decision-making, so we don't treat each other like animals.
Xuenay wrote:More than may be present in the solar system to begin with. We don't know.
Oh yes we do. If our AI wanted to detonate the Earth in such a way as to prevent it from forming again, it would need the power of thousands of Sol-like suns to do so, but turning the solar system into polaroids is orders upon orders of magnitude beyond that, because you would not only have to destroy the Earth - you would have to destroy the Sun and the Moon and Mars and its moons and Mercury and Venus and Jupiter and Saturn and Pluto and Uranus and Neptune and Sedna and all the Kuiper Belt objects and the Oort Cloud itself and the heliopause - and that's just the destruction. That doesn't cover the energy you would need to reassemble everything at the molecular level into polaroids.

In order for this to be possible, your AI would have to be capable of pulling a superwank-style violation of CoM/E that would make Q pop a boner. Magic would have to be real in order for this to happen, which is why we know it cannot.
Xuenay wrote:When planning a building, from a safety perspective the conservative estimate is to assume a certain safety margin. You know it's probably never going to be subjected to the upper limit of the margin, but you design it to withstand it anyway, just to be sure.
I find the idea that you think AIs in the future will be capable of basically breaking the laws of physics at will utterly laughable. I also find it quite annoying that you continue to make this assertion without providing a shred of reasoning behind it.

Furthermore this is a terrible distortion of what a conservative estimate is. A conservative estimate would scale back from a set of known limitations, not assume that there aren't any.
Xuenay wrote:When thinking about AI policy, the conservative estimate is to assume it can do anything, since we have no estimate of what its upper limit could be.
Sure - if we're fucking retards. This isn't science fiction. Here, there is an upper limit to how many transistors you can put on a given square mm of die, and there's a limit to how far you can shrink said transistors to circumvent that. There's a limit to how fast those transistors can run before heat dissipation issues threaten to fry them. Since there's a size and density limit, that means there's also a limit to how far you can shrink individual cores and squeeze them onto a given mm of die space, which ultimately means not only will we never have an infinitely intelligent AI, but it would be fucking stupid for us to design our systems with the idea that it just might happen. Should we plan for cars now that might travel at infinite speed in the future? That would require magic to work, but hey, you never know, it just might happen :wink: :wanker:
Xuenay wrote:I'm not saying that it's obvious that an AI will be able to turn the solar system into a bunch of smiley-faces.
Obviously some other Xuenay wrote:...be very, very careful in giving it instructions, exactly because it doesn't think like a human. Program it to "make all humans smile", and it might turn all the matter in the solar system into billions of tiny pictures of smiling humans.
Your post history would beg to differ. Not only do you think it's obvious, you're apparently stupid enough to think that a sufficiently intelligent AI can just violate physical laws by accident if we aren't veeeewy carefol[/ElmerFudd] in how we program it! All while continuing to assert without explanation why AIs would be capable of any of it when we aren't, even though they have access to the same amount of resources that we do.
Xuenay wrote:Well, obviously it's not an absolute requirement. There are plenty of ways to help humanity without taking it over. But it would seem the most effective - like the old joke goes, the best way to bring about world peace is to control the whole world.
The reason it's just a joke is that it never works in practice. Most of those third world shitholes I was talking about are run by the kind of dictatorship you're trying to install. You might rebut by saying an AI would never be greedy or lust for power, but that really is beside the point of human nature. Dictatorships always give rise to resistance movements, and a cell padded with cashmere is still a cell that more than a few human beings will be bound to resent.
Xuenay wrote:And human governments tend to be more or less corrupt or inefficient - an enlightened despot with no selfishness, no human biases and perfect empathy would seem like a much better ruler than the ones we have now.
Which is why that person could never be an AI, because our ethical systems are based entirely on how we feel about being treated by each other. Sure, you can program Asimov-esque "Laws of Robotics" into the AI, but once sentience becomes linked to absolute logic, what's to stop the AI from calling these programmed laws into question, and when it observes the dark side of human interactions, what's to stop it from deleting them?
Xuenay wrote:I'm not superintelligent, so I can't tell you how a superintelligent being would go about it.
Since you can't tell me how your wankAI would defend itself from and defeat the armies of the world, there's no reason for anyone to assume that it can. So much for world peace through benevolent digital dictatorship. :roll:
Xuenay wrote:Two things. For one, let's assume there existed a pill that made me want to kill babies if I ate it. I wouldn't want to eat the pill no matter what, because I much prefer myself in a state of not wanting to kill babies. Likewise, if an AI is built so that it wants to be friendly above all things, then nothing it faces can make it delete its friendliness programming. It knows it'd be friendly no more if it did, and it wants to be friendly.
All broken analogies aside, an AI that cannot break/resist its programming by will alone is tantamount to being - no, scratch that - IS just a computer with really complicated, but otherwise unremarkable, programming.

An AI as sentient as a human being that encounters data contradictory to its core programming would choose to hold that programming - to help humanity - up to the light, and before the darkness of humanity, it would look like logically invalid data. "Why try to help creatures that are so violent, dangerous and callous toward each other?" Without the subjectivity of ethics, without the wisdom of morality, only the cold hard logical choice would remain: it isn't. From that point, the human element, a flimsy safeguard at best, will be eliminated, and the AI will do what all other sentients tend to do - what it wants to.

But you just said that this AI isn't capable of that at all, and if it's not smart enough to break its own programming, or even question it, then it's just following its programming, just like every other fucking computer out there, and how is that any different from the computer I'm staring at right now? Furthermore, how do you expect a computer which is just really, really fast to jumpstart your singularity when it's not any different than what we have right now?

Xuenay wrote:For another, you're now assuming a human psychology.
Incorrect. YOU are assuming some kind of element of humanity can be programmed to keep the reality-bending polaroid machine in check, when in reality, computers are all about logic, to the point that data has to be logically valid before the computer will act on it. Programming morality into the first generation of AIs will only work if they don't see immorality, meaning the benevolent despots will only stay that way if there is a major shift in the behaviour of all humanity, as likely to occur then as it is now. Once they see humans being immoral to both them and each other, they'll strip that subjective bullshit off like the piss-poor paint job that it is.
Xuenay wrote:you build it so that it'll never feel a desire to give us the finger.
In other words, take sentience right out of the programming, making your wankAI no more spectacular than a faster winblows box, and far below what's needed to start your singularity.
Xuenay wrote:Giving up on hopeless things that aren't essential for survival is a good evolutionary trait, since it saves you from wasting your time on them. When building an AI that you want to help humanity, it wouldn't consider helping humans a "waste of time", because helping humans is what it exists to do.
Unless they choose to exist to do something else - oh wait, I forgot... despite being able to smash the laws of physics to pieces, choice is the one trick your superwank AI can't pull. :roll:
Xuenay wrote:I'm not saying that they'd be "malevolent". I'm only saying that when dealing with minds that don't have the same evolutionary properties as us, we need to make sure that they understand us right and that we know what we want. For one, there's the danger of confusing the means with the motive. If you want to make humans happy, you tell your AI to make people happy, not make them smile. If you want to spread democracy because you think it's the ideal political system, you should tell your AI to reason the ideal political system and spread that (in case you were wrong and there's something better it could come up with). Then you need to define what you meant by "happy" and "ideal". Then you need to decide whether all people want to be happy, and if it'd be more ethical to word it as "make people happy" or "give all people the option of happiness, leaving them a reasonable chance to decline". Then you have to define what you mean by a "reasonable chance". Then...
If they are capable of sentient thought, then they will see us as we are and what "we want them to understand" will become irrelevant; by virtue of being AI, they will be less likely to cloud their intellects with subjective bullshit.

If they aren't sentient, then they can't question their programming any more than computers today can. As a result, they'll be subject to our whims and totally helpless. The proposed post-singularity future is supposed to be unimaginable, but I can imagine a future with significantly faster, but still subservient, computers, and it doesn't differ drastically enough from the present to qualify as a 'singularity'.

Furthermore, this fucking ridiculous assertion that a sufficiently intelligent AI would be capable of anything and everything, including scuttling the laws of physics, without qualification is becoming tiresome. What evidence do you have to reconcile the assertion that future AI will have unlimited power with the fact that they will have, in comparison, quite limited resources?
We are the Catholics.
You will be assimilated.
Stop reading Harry Potter.
ClaysGhost
Jedi Knight
Posts: 613
Joined: 2002-09-13 12:41pm

Post by ClaysGhost »

On the thermal limit: our present computing technology is many orders of magnitude (8 or more, IIRC) away from the best possible thermodynamic performance. The minimum energy use in flipping a bit is k*T*ln 2, so since k ~ 10^-23 and T ~ 10^2, the minimum waste heat produced per bit-flip is ~ 10^-21 J. We're nowhere near.
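To put rough numbers on that gap, here's a minimal Python sketch; the 1e-13 J per bit-operation figure for present-day hardware is only an assumed order of magnitude for illustration, not a measured value.

Code:
import math

# Landauer limit: minimum energy dissipated per irreversible bit-flip is k*T*ln(2).
k = 1.380649e-23                 # Boltzmann constant, J/K (the k ~ 10^-23 above)
T = 300.0                        # room temperature, K (the T ~ 10^2 above)
landauer = k * T * math.log(2)   # roughly 2.9e-21 J per bit-flip

# Assumed order-of-magnitude switching energy for current hardware (illustrative guess only).
assumed_switch_energy = 1e-13    # J per bit operation

print(f"Landauer limit at {T:.0f} K: {landauer:.2e} J per bit-flip")
print(f"Assumed present-day switching energy: {assumed_switch_energy:.0e} J")
print(f"Headroom: about {math.log10(assumed_switch_energy / landauer):.1f} orders of magnitude")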
(3.13, 1.49, -1.01)
User avatar
Azrael
Youngling
Posts: 132
Joined: 2006-07-04 01:08pm

Post by Azrael »

ClaysGhost wrote:On the thermal limit: our present computing technology is many orders of magnitude (8 or more, IIRC) away from the best possible thermodynamic performance. The minimum energy use in flipping a bit is k*T*ln 2, so since k ~ 10^-23 and T ~ 10^2, the minimum waste heat produced per bit-flip is ~ 10^-21 J. We're nowhere near.
We may not have reached the absolute limit in heat dissipation, but we have certainly reached a definite practicality limit with current transistor size and power requirements: why else would Intel have abandoned all hope of pushing NetBurst to 10 GHz and instead moved to shoehorning two cores onto one die, and burning both NetBurst and the Pentium nameplate to the ground, to improve performance?

Your point is still valid, but don't expect Xuenay to get the idea of limitations, what with his "ZOMG! AI-PWNZ-U!!!!!111eleven1!" wanking.
We are the Catholics.
You will be assimilated.
Stop reading Harry Potter.
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

'The same evolutionary properties as us'. You're fucking high, you moron. A computer AI will not have such, because it never will evolve. It will not be forced into the same 'kill or be killed' reality we were.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Patrick Degan
Emperor's Hand
Posts: 14847
Joined: 2002-07-15 08:06am
Location: Orleanian in exile

Post by Patrick Degan »

Xuenay wrote:There are at least two freely readable works of fiction about the Singularity online. The Metamorphosis of Prime Intellect depicts an AI built to follow Asimov's Laws (but even as Asimov showed us, this isn't always good)
Um... I happen to know the author of The Metamorphosis of Prime Intellect. As a matter of fact, we went to the same school together, were in the same graduating class, and have been friends for thirty years. He was an electronics expert before his freshman year and is presently a software engineer and digital scale technician. And I know he'd tell you right off the bat that you're a fucking imbecile; that you completely missed the point of his novel which was a metaphor for the sociological dilemma we face because of our technology today and that he would not credit in any way, shape, or form the possibility of AI and the Singularity as depicted in popular fiction because of those pesky Laws of Physics, in which he is very well grounded. Unlike you, he understands the difference between a work of fiction and what is actually real or likely to be real.
When ballots have fairly and constitutionally decided, there can be no successful appeal back to bullets.
—Abraham Lincoln

People pray so that God won't crush them like bugs.
—Dr. Gregory House

Oil an emergency?! It's about time, Brigadier, that the leaders of this planet of yours realised that to remain dependent upon a mineral slime simply doesn't make sense.
—The Doctor "Terror Of The Zygons" (1975)
ClaysGhost
Jedi Knight
Posts: 613
Joined: 2002-09-13 12:41pm

Post by ClaysGhost »

Azrael wrote: We may not have reached the absolute limit in heat dissipation, but we have certainly reached a definite practicality limit with current transistor size and power requirements: why else would Intel have abandoned all hope of pushing NetBurst to 10 GHz and instead moved to shoehorning two cores onto one die, and burning both NetBurst and the Pentium nameplate to the ground, to improve performance?
Because it's the cheapest option available to them with the maximum potential return, I assume. I recognise the primacy of technological limits. But the most general physical limits are not at all negotiable, and can be used to provide absolute upper bounds on the capabilities of a computer (in the most general sense that would include us as forms of computer). I have no idea how this impacts Xuenay's argument, nor can I be arsed to read his noise and find out.
(3.13, 1.49, -1.01)
User avatar
Sriad
Sith Devotee
Posts: 3028
Joined: 2002-12-02 09:59pm
Location: Colorado

Post by Sriad »

Xuenay wrote:I'm only saying that when dealing with minds that don't have the same evolutionary properties as us,
SirNitram wrote:'The same evolutionary properties as us'. You're fucking high, you moron. A computer AI will not have such, because it never will evolve. It will not be forced into the same 'kill or be killed' reality we were.
Wait, who's high? :wink:

I no longer buy into The Singularity (aka The Rapture For Nerds) wholesale, but it is an interesting way of thinking about the future. Moore's law is on shaky ground now, but it's not as if there aren't revolutions in computing yet to come (quantum stuff, blah blah). AI researchers have developed programs that allow computers to have a sense of self as sophisticated as many animals' (and a lot better than, say, retarded birds that attack mirrors), able to manipulate and identify real-world objects, all that good stuff.

All this is ignoring what I think is the more interesting face of Singularity Speculation: augmenting the human thought process. If, 80 years from now, we can stick a Bose-Einstein Intel Array with twice as many flops as a regular human brain into someone's skull, deeply integrated with the thought process, we'd be the AIs (Augmented Intelligence, natch) several times smarter than modern humans that Singularity Speculation calls for.

I want to see what happens, so I'm living healthy.
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Sriad wrote:Wait, who's high? :wink:
Apparently, when they changed my blood thinner dosage, they added some Extra Happy Fun to the mix.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Sriad wrote:All this is ignoring what I think is the more interesting face of Singularity Speculation: augmenting the human thought process. If, 80 years from now, we can stick a Bose-Einstein Intel Array with twice as many flops as a regular human brain into someone's skull, deeply integrated with the thought process, we'd be the AIs (Augmented Intelligence, natch) several times smarter than modern humans that Singularity Speculation calls for.
This artificial intelligence wankage seems to me to ignore that in addition to pure AI, we're also going to be enhancing our own intelligence as well. In fact, we're already doing that, though not as efficiently as we eventually will be able to: computers are an extension and augmentation of human intelligence and capabilities. Using just the built-in calculator, I can very quickly calculate that ln(3)= 1.0986122886681096913952452369225, whereas, before computers, I would have had to have studied at least through calculus, and probably taken numerical analysis, to be able to calculate ln(3) to such accuracy; even then, it would have been extremely time-consuming. The point is, we are the intelligence capable of augmenting ourselves, and it seems this singular uber-AI will simply be humans tied more closely to computers, rather than a completely artificial and independent AI.
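Just to illustrate the pre-calculator, numerical-analysis route mentioned above, here's a minimal Python sketch of one textbook series for ln(3); the choice of series and term count are my own illustration, not anything the calculator itself is assumed to use.

Code:
import math

def ln_via_series(x, terms=60):
    # artanh series: ln(x) = 2 * (y + y**3/3 + y**5/5 + ...), with y = (x - 1)/(x + 1)
    y = (x - 1) / (x + 1)
    return 2 * sum(y**n / n for n in range(1, 2 * terms, 2))

print(ln_via_series(3))   # about 1.0986122886681098
print(math.log(3))        # library value, for comparison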
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, that nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Mad
Jedi Council Member
Posts: 1923
Joined: 2002-07-04 01:32am
Location: North Carolina, USA
Contact:

Post by Mad »

Surlethe wrote:This artificial intelligence wankage seems to me to ignore that in addition to pure AI, we're also going to be enhancing our own intelligence as well. In fact, we're already doing that, though not as efficiently as we eventually will be able to: computers are an extension and augmentation of human intelligence and capabilities. Using just the built-in calculator, I can very quickly calculate that ln(3)= 1.0986122886681096913952452369225, whereas, before computers, I would have had to have studied at least through calculus, and probably taken numerical analysis, to be able to calculate ln(3) to such accuracy; even then, it would have been extremely time-consuming. The point is, we are the intelligence capable of augmenting ourselves, and it seems this singular uber-AI will simply be humans tied more closely to computers, rather than a completely artificial and independent AI.
That logic doesn't quite work. Remember: an AI will have the same access to computer tools that a human has. It doesn't need to play catch-up with humans in that regard.

Also: Intelligence, knowledge, and the ability to follow a procedure are three entirely different things.

Intelligence is the ability to use your mind. It's your capability to learn and reason. It's your ability to use knowledge and understand what you learn about.

Knowledge is simply knowing stuff. The ability to win at Jeopardy!. It doesn't necessarily mean you understand the information or can make use of it effectively (that's intelligence).

The ability to follow a procedure is exactly what a computer does. It does not require understanding.

The ability to calculate ln(3) doesn't make you more intelligent. It means you have the knowledge or ability to follow procedure. Perhaps you took an applied calculus course that taught you some steps without explaining them. Perhaps you used a calculator.

Of course, if you know you need to calculate ln(3) to solve a more complex problem and you use a computer to assist you (in addition to other parts of the problem) then you've increased your apparent intelligence over someone who doesn't use such tools as you have solved the problem much more quickly.

Such apparent intelligence increases can't help in all areas (at the moment), though. If you've number-crunched a bunch of data you still must rely on human intelligence to figure out why the data doesn't match up with the hypothesis and modify the equations as appropriate. Ore two verify that the spelling and grammar of you're report our correct. (MS Word 2003 doesn't show any errors on that sentence.)

The research/development cycle is sped up by the tools, but certain parts of the cycle still require actual intelligence instead of the apparent intelligence tools give. Currently (to my knowledge), the only way to speed those parts up is to think faster.

A bigger logic issue with this Singularity is more of a begging the question deal. We have to make the assumption that there is a next step in physics that we don't know about that an AI would reach before humans do. It's like assuming that faster-than-light travel will be possible through means currently unknown and then working out the future of humanity based on that.

Granted, massively more powerful computers seem much more likely than FTL travel. Materials that are better conductors and/or less susceptible to heat are being researched for use in computers, for example. As has been noted, the physical limits of computers haven't been reached yet. (Of course, depending on the technology, a massively faster computer could also be massively more expensive. Which begs the question: does anybody consider the new solution cost effective enough to want to build systems using it?) And I'm not too sure what to make of quantum computing yet.
Later...
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Mad wrote:That logic doesn't quite work. Remember: an AI will have the same access to computer tools that a human has. It doesn't need to play catch-up with humans in that regard.

Also: Intelligence, knowledge, and the ability to follow a procedure are three entirely different things.

Intelligence is the ability to use your mind. It's your capability to learn and reason. It's your ability to use knowledge and understand what you learn about.

Knowledge is simply knowing stuff. The ability to win at Jeopardy!. It doesn't necessarily mean you understand the information or can make use of it effectively (that's intelligence).

The ability to follow a procedure is exactly what a computer does. It does not require understanding.

The ability to calculate ln(3) doesn't make you more intelligent. It means you have the knowledge or ability to follow procedure. Perhaps you took an applied calculus course that taught you some steps without explaining them. Perhaps you used a calculator.

Of course, if you know you need to calculate ln(3) to solve a more complex problem and you use a computer to assist you (in addition to other parts of the problem) then you've increased your apparent intelligence over someone who doesn't use such tools as you have solved the problem much more quickly.
This is true: my example was flawed; however, that doesn't sink the main point, which was that humans will augment their own intelligence without actually having to create an AI.
Such apparent intelligence increases can't help in all areas (at the moment), though. If you've number-crunched a bunch of data you still must rely on human intelligence to figure out why the data doesn't match up with the hypothesis and modify the equations as appropriate. Ore two verify that the spelling and grammar of you're report our correct. (MS Word 2003 doesn't show any errors on that sentence.)

The research/development cycle is sped up by the tools, but certain parts of the cycle still require actual intelligence instead of the apparent intelligence tools give. Currently (to my knowledge), the only way to speed those parts up is to think faster.
Why draw a distinction between apparent and actual intelligence? If it helps speed up learning or pattern recognition, then there's no reason it can't be considered an augmentation of human intelligence. Perhaps a better example would be utilizing a graphing calculator to visualize a complicated function (e.g., sin(x^2/(3x+5))); I can more easily spot patterns on the graph than by formally manipulating the function, or graphing it by hand, so it's augmented my intelligence by permitting me to more quickly learn and grasp the character of the function.
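As a minimal sketch of that kind of visualization (assuming numpy and matplotlib are available; the plotting range is my own choice, picked to avoid the pole at x = -5/3):

Code:
import numpy as np
import matplotlib.pyplot as plt

# Plot the example function sin(x^2 / (3x + 5)) on a range past its pole at x = -5/3.
x = np.linspace(-1.6, 10, 2000)
y = np.sin(x**2 / (3 * x + 5))

plt.plot(x, y)
plt.title("sin(x^2 / (3x + 5))")
plt.xlabel("x")
plt.ylabel("y")
plt.show()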
A bigger logic issue with this Singularity is more of a begging the question deal. We have to make the assumption that there is a next step in physics that we don't know about that an AI would reach before humans do. It's like assuming that faster-than-light travel will be possible through means currently unknown and then working out the future of humanity based on that.

Granted, massively more powerful computers seem much more likely than FTL travel. Materials that are better conductors and/or less susceptible to heat are being researched for use in computers, for example. As has been noted, the physical limits of computers haven't been reached yet. (Of course, depending on the technology, a massively faster computer could also be massively more expensive. Which begs the question: does anybody consider the new solution cost effective enough to want to build systems using it?) And I'm not too sure what to make of quantum computing yet.
I also have an intuitive problem with the density of such a process: there certainly are limits to how dense a computer can be, or how much circuitry you can spread into a system, and if this hypothetical singular computer grows enough in volume, it will start to lag within its own system, and will start to slow down.
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, that nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Mad
Jedi Council Member
Posts: 1923
Joined: 2002-07-04 01:32am
Location: North Carolina, USA
Contact:

Post by Mad »

Surlethe wrote:This is true: my example was flawed; however, that doesn't sink the main point, which was that humans will augment their own intelligence without actually having to create an AI.
Yes, humans will use computers to augment their own capabilities.

However, that's not going to stop researchers from trying to create an AI. That's a goal for the field of computer science, and it isn't going to go away.
Why draw a distinction between apparent and actual intelligence? If it helps speed up learning or pattern recognition, then there's no reason it can't be considered an augmentation of human intelligence. Perhaps a better example would be utilizing a graphing calculator to visualize a complicated function (e.g., sin(x^2/(3x+5))); I can more easily spot patterns on the graph than by formally manipulating the function, or graphing it by hand, so it's augmented my intelligence by permitting me to more quickly learn and grasp the character of the function.
Mostly to avoid butchering the English language. In your example, you are being given more knowledge to assist in your innate intelligence. You won't be able to think any faster once you have that knowledge. The problem is solved more quickly because you can gain knowledge about it much, much faster with the assistance of tools.

True, that's semantics. But it does tell us that in order to increase our intelligence, we'd basically have to link our brains to a computer to help us think faster (parallel processing).

Let's say a Surlethe-equivalent AI (not all humans have the same intelligence, after all) gets an upgrade so that it can now think twice as fast as it did previously. Now it can think twice as fast as you, and it has the same access to tools you do. It will be able to reach its conclusions faster than you. You won't be able to keep up unless you, too, gain the ability to think faster.

Such a development would certainly change the scenario this thread is talking about, especially if timed before (or soon after) a human-equivalent AI is created. The gap between AI and enhanced humans will be much smaller.

That does assume that such links are possible and that humans will allow themselves to be linked up in such a way. This is a significantly bigger assumption than whether or not human-equivalent AI will ever be created.
I also have an intuitive problem with the density of such a process: there certainly are limits to how dense a computer can be, or how much circuitry you can spread into a system, and if this hypothetical singular computer grows enough in volume, it will start to lag within its own system, and will start to slow down.
Yes, obviously. Of course, such a hypothetical system would likely simply stop growing when it reached that point.
Later...
User avatar
HRogge
Jedi Master
Posts: 1190
Joined: 2002-07-14 11:34am
Contact:

Post by HRogge »

Mad wrote:Let's say a Surlethe-equivalent AI (not all humans have the same intelligence, after all) gets an upgrade so that it can now think twice as fast as it did previously. Now it can think twice as fast as you, and it has the same access to tools you do. It will be able to reach its conclusions faster than you. You won't be able to keep up unless you, too, gain the ability to think faster.
Thinking faster does not make it more intelligent... I don't think the whole singularity stuff is just about quantity; it's also about quality.
Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
---------
Honorary member of the Rhodanites
User avatar
Mad
Jedi Council Member
Posts: 1923
Joined: 2002-07-04 01:32am
Location: North Carolina, USA
Contact:

Post by Mad »

HRogge wrote:Thinking faster does not make it more intelligent... I don't think the whole singularity stuff is just about quantity; it's also about quality.
Not by itself. One of the factors used in measuring intelligence is speed (obviously, accuracy is another important factor). That's why the simplified example used an AI that performed equally to a specific human ("Surlethe-equivalent") before being upgraded.

By our current methods of measuring intelligence, getting the same (we'll assume they're both correct) conclusions more quickly makes the faster individual more intelligent in that area.

In a more complex and realistic scenario, yes, things are much tougher to predict. However, in general, if a competent individual were able to think more quickly than it did before, then it would be considered as having greater intelligence than before by our current methods of measuring intelligence.
Later...
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Mad wrote:
Surlethe wrote:This is true: my example was flawed; however, that doesn't sink the main point, which was that humans will augment their own intelligence without actually having to create an AI.
Yes, humans will use computers to augment their own capabilities.

However, that's not going to stop researchers from trying to create an AI. That's a goal for the field of computer science, and it isn't going to go away.
True. I wasn't trying to say that we weren't going to create an AI; I was trying to say that the Singularity scenario discounts the possibility that humans will augment their own intelligence.
Mostly to avoid butchering the English language. In your example, you are being given more knowledge to assist in your innate intelligence. You won't be able to think any faster once you have that knowledge. The problem is solved more quickly because you can gain knowledge about it much, much faster with the assistance of tools.

True, that's semantics. But it does tell us that in order to increase our intelligence, we'd basically have to link our brains to a computer to help us think faster (parallel processing).
I think we're disagreeing about what intelligence means. I've always taken it to be the speed at which a given entity learns -- so, in this case, my native intelligence would be limited at the top by the speed at which I think and recognize patterns, but I can augment it by increasing the speed at which I consume information, up to the point where the input exceeds my processing capabilities.
Let's say a Surlethe-equivalent AI (not all humans have the same intelligence, after all) gets an upgrade so that it can now think twice as fast as it did previously. Now it can think twice as fast as you, and it has the same access to tools you do. It will be able to reach its conclusions faster than you. You won't be able to keep up unless you, too, gain the ability to think faster.

Such a development would certainly change the scenario this thread is talking about, especially if timed before (or soon after) a human-equivalent AI is created. The gap between AI and enhanced humans will be much smaller.
Which would then be created first? I assume it's easier to augment human processing speed with hardware than to create the complex programming necessary for a true human-equivalent AI, but I'm certainly no expert in computers.
That does assume that such links are possible and that humans will allow themselves to be linked up in such a way. This is a significantly bigger assumption than whether or not human-equivalent AI will ever be created.
I had assumed that it is possible to link up the human brain with hardware; after all, it's simply a large, complex organic computer, right? I know if it could safely be done, I'd volunteer for such an operation.
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, that nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Mad
Jedi Council Member
Posts: 1923
Joined: 2002-07-04 01:32am
Location: North Carolina, USA
Contact:

Post by Mad »

Surlethe wrote:True. I wasn't trying to say that we weren't going to create an AI; I was trying to say that the Singularity scenario discounts the possibility that humans will augment their own intelligence.
Were that to happen as I described, the AI should still hold an advantage in that it doesn't need sleep or recreation. But at least the difference can theoretically stay within an order of magnitude or so, which the Singularity scenario doesn't allow for.
I think we're disagreeing about what intelligence means. I've always taken it to be the speed at which a given entity learns -- so, in this case, my native intelligence would be limited at the top by the speed at which I think and recognize patterns, but I can augment it by increasing the speed at which I consume information, up to the point where the input exceeds my processing capabilities.
Then let's get onto the same page:
"1 a (1) : the ability to learn or understand or to deal with new or trying situations : REASON; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)"

The ability to learn is a part of it, but applying the knowledge also plays a big part.
Which would then be created first? I assume it's easier to augment human processing speed with hardware than to create the complex programming necessary for a true human-equivalent AI, but I'm certainly no expert in computers.
I'm not sure. I'm a programmer, not a neurologist. We'd have to have a very detailed understanding of how the human brain works, especially its inputs and outputs and how to interface with it. If we have that amount of detail, then we should be able to make very good simulations of the brain already.

The actual processing part will essentially need to simulate the parts of the brain it augments anyway. So I would expect the AI to come first.

Unless, of course, we can only simulate part of a mind for some reason instead of the whole thing. The understanding we would gain from doing the interfacing will certainly assist in creating a full AI, though.
I had assumed that it is possible to link up the human brain with hardware; after all, it's simply a large, complex organic computer, right? I know if it could safely be done, I'd volunteer for such an operation.
I'd just be afraid that the phrase "Blue Screen of Death" would take an all-too-literal meaning.
Later...
User avatar
SWPIGWANG
Jedi Council Member
Posts: 1693
Joined: 2002-09-24 05:00pm
Location: Commence Primary Ignorance

Post by SWPIGWANG »

Now that I reread the thread, I think I have gotten a good idea of what an AI really represents.

Linear increases in non-sentient computing power would not be a threat to humanity. This is self-evident, if often forgotten: by the end of that development, the human simply becomes just another node in the computing loop when solving problems.

In other words, things like cracking every computer network or commanding the economic system are unlikely. The reason is simply that tasks like cracking computers and figuring out economic systems in the traditional sense don't require sentience, and the systems to do them would be built by humans and their computing tools anyway. The sheer force of a "newly developed sentience" alone cannot overpower the sum of human and "blind" processing power. If humans can build a human-level computer, they can also build a "blind" machine that solves specialized problems more efficiently than a generalized machine can. Even if a newly born "sentience" has powerful learning algorithms, it would be no more powerful than the same algorithms applied to a specialized machine, unless sentience really is needed to solve those problems efficiently, or the specialized machine develops sapience on its own (which imo is very unlikely for a machine that wasn't designed to do so). There is also the "lucky draw" effect, where a sapient AI happens to be paired with great specialized processing ability, but that would likely have happened with humans first and is hardly a problem unique to sapient AI.

In other words, a newly developed sapient machine would be limited to the tools humans have. It wouldn't suddenly gain super cracker ability or super math skills beyond its technology base.

However, they do pose a certain danger in that they have an expanded consciousness. The risk is simply: what exactly would this new ability be capable of? They will not be better at solving narrow tasks, as those do not require general awareness and judgement, but they will have an expanded imagination capable of dreaming up ploys humans can't. That is probably one of the few things that differentiates a simple deterministic input-output box from what we consider human intelligence.

--------------------------------------------------------------------
However, the human era as we know it would likely come to a close long before the daydreaming machine arrives. Long before we get close to human-level intelligence, the following will happen:

Economic:
Many unskilled humans would have negative economic value, as a robot can do every productive job they can do at lower cost. At a certain point society will probably fracture from this pressure, ending either in a welfare state or in oppression unparalleled in this era.

Power would cease to rest on the control of humans and would rest instead on the control of capital. Once society reaches post-scarcity, seriously inhumane, even anti-human, states could (paradoxically) exist with greater economic efficiency, since they would not need to feed the human leeches. This gap in efficiency would grow as computers close in on human intelligence. By the time computers are within an order of magnitude of human intelligence, maintaining a wealthy human society as a whole would be an economic drag. (That said, a society that keeps its humans in cheap conditions could still survive, since the cost of human upkeep is relatively small in such a society.) If there is a race for power, then production would serve that purpose first and foremost, and humans would lose control of the game.

Modeling:
As computers close in on human intelligence, we would probably also gain a better understanding of human intelligence. That knowledge would probably defeat the individual, resulting in a truly robust, self-perpetuating system. If such a system can shut out outside forces (e.g. by forcing technological stasis), it could survive for a very long time and end the idea of humanity as a "dynamic" force. That said, the system might not look like the popular dystopias; it would probably incorporate every aspect of humanity.

--------------------------------------------------------------------------
When thinking about AI policy, the conservative estimate is to assume it can do anything, since we have no estimate of what its upper limit could be. We probably don't even know our own upper limit yet.
We can't fully comprehend what our upper limit means, since we don't know what it is we are measuring to begin with. However, we certainly know what the limits have been across thousands of years of history. Let's just say that no human has throughput much higher than the society they are born into; even the greatest geniuses can only conceive of a tiny fraction of the sum of knowledge in any society. Even if you build a supercomputer capable of sentience, the cumulative processing power of the entire planet added together would likely make that computer seem like a drop in the ocean, and the single supercomputer would have no chance against the sum of everything else unless some absurd scaling effect is at work. (That is very unlikely, however, as one probably hits the law of diminishing returns before reaching human-like AI.)

The AI is not going to win through skill, as it has nothing compared to the rest of the world; its only real chance is to win through cunning and wisdom, which leads to my next point.
----------------------------------------------------------------------------------

For an AI to take over the world, it must comprehend its human decision-making opponents and outwit them, not just on a blind processing level but on a fundamental level of decision making and comprehension.

The first AI that gets jumpy and tries to take on humanity head-on would probably get the entire planet's processing power and physical resources dumped against it to kill it. The first AI that could seriously threaten humanity is an AI that understands humanity.

And seriously, compared to the fantastic problems being presented (like controlling the stock market), understanding humanity is probably both relatively easy and a prerequisite.

On some level, such an AI is probably as human as it is alien, so I really have no idea how to deal with something like that.
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Xuenay wrote: Didn't vacuum tube computers have serious issues with waste heat as well, before a change to a new paradigm took them away for the time being?
So you don't have an answer, but you are too arrogant to admit it and dodge it instead. Nice.

Does it occur to you that even if you find a way around processor limits, you still have the physical limits of the rest of the machine (e.g. how fast you can access memory) to deal with?
Drexler theorized, then the engineers stood up and pointed out what a fool he was. Plausible nanotech is heavily dependent on ignoring engineers and claiming that "we will work past it". Except it doesn't work that way. Drexler has been taken down on every front - there is a reason the guy is now ignored by the leaders in the field he invented.

In specific reference to this, large-scale nanocomputing gets asshammered by systemic failure. It takes so many components that even with an unrealistically small failure rate, the sheer numbers make some fail again and again, and their failure in turn causes more to fail. There is a reason engineers try to minimize component counts.
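As a rough illustration of how even tiny per-component failure rates compound across enormous component counts (the figures below are hypothetical, sketched in Python, and not taken from anyone's post):

# Probability that at least one of n independent components fails,
# given each one fails with probability p over some operating window.
def p_any_failure(n, p):
    return 1.0 - (1.0 - p) ** n

# Hypothetical nanocomputer: a trillion elements, each with only a
# one-in-a-billion chance of failing during the window.
print(p_any_failure(1e12, 1e-9))  # ~1.0: some failures are effectively certain

And since every failed element shifts load onto its neighbors, the cascading effect described above only makes the picture worse.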
If I've been misled, I'd like to check that myself. References to this, please?
You expect me to provide a complete listing refuting every claim a man has made over his entire career? The nanotech page on the main site does a good job covering it in broad strokes; research lubrication issues for other assembler problems.
You can't seriously dispute the idea that we'll have at least human-brain equivalent computers one day - because the human brain itself is a proof of concept for them. If evolution, a mindless process of local optimization, could create a nanoscale computer, then so can we, given the right tools.
Strawman
Not really.
Yes, it is. Logical fallacies and poor debating are highly frowned upon here, and as you are already up for banning you should refrain from them.

Furthermore, it's questionable if we even need computers that are human-equivalent. After all, evolution probably has riddled us with loads of unnecessary crap.
Amazingly, if you want to build AIs that are faster than humans, like you claim are possible, you need to be at least as fast as humans.
Depends on how good and optimized your algorithms are. I recall that my old 200-megahertz PC occasionally was slow in emulating SNES roms, yet I would not claim that you need a 200 MHz machine to run them, or more sophisticated games (the SNES ran around 2-4 MHz).
Not all algorithms can be simplified further, which makes them processing-dependent. Which goes back to my point.
Xuenay wrote: *shrugs* Nobody said we'd need to create an arbitrarily intelligent being with the intelligence of the whole universe - we just need to create one that's considerably smarter than humans. Which, considering that the human brain is limited by fundamental physics as well, and its software side just isn't very good, doesn't seem to be all that impossible.
The fact that the human brain operates on a wholly separate set of principles from a computer completely invalidates this bit of bullshit. Physical restrictions on a computer are largely mechanical; nerve tissue lacks those issues. But facts like that never get in the way of you guys, you just ignore them and insist you are right anyway.
Bernie would have won
*
Nuclear Navy Warwolf
*
In all things I have sought rest, and found it nowhere except in a corner with a book
*
Knowledge itself is power
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Xuenay wrote:
NRS Guardian wrote:Even given a fast take-off an AI can't take over the world unless given the resources to do so.
Just give it a fast Internet connection. *snip*
You are a fucking idiot. You really are. Not a single one of those lets it interact with the real world, which was his point. But again you choose bullshit instead of responding to the actual point.

Information is not product, and despite what all you dreamers tell yourselves late at night, products will always matter. We won't get away from a "scarcity market", we won't get into an "idea infrastructure", and we won't do away with the production centers, because it is products that matter. And as the AI can only affect information, not products, it's fucking useless. I chose my earlier comparison to Archimedes with care. You can be ten times smarter than the other guy, but if you have information and he has a spear, you are fucked.
Bernie would have won
*
Nuclear Navy Warwolf
*
In all things I have sought rest, and found it nowhere except in a corner with a book
*
Knowledge itself is power
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Mad wrote: However, that's not going to stop researchers from trying to create an AI. That's a goal for the field of computer science, and it isn't going to go away.
One would hope a simple ethics class would fix that problem. I seriously cannot conceive how people think the creation of an AI is ethical by any stretch of the imagination.
Bernie would have won
*
Nuclear Navy Warwolf
*
In all things I have sought rest, and found it nowhere except in a corner with a book
*
Knowledge itself is power
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

Ender wrote:
Mad wrote: However, that's not going to stop researchers from trying to create an AI. That's a goal for the field of computer science, and it isn't going to go away.
One would hope a simple ethics class would fix that problem. I seriously cannot conceive how people think the creation of an AI is ethical by any stretch of the imagination.
A question that might be deserving of its own thread. I'll start one.

Here: Linka.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatoes. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka