The Singularity in Sci-Fi

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

SWPIGWANG wrote:
result of continuous if rapid progress, but a "jump" - hence the use of the mathematical term "singularity". At any given point during the development you cite, the people in question were well aware of what was going on.
Unless we have discrete time... a jump makes no sense :P (okay, we aren't close to quantum-speed levels of development)
Precisely my objection.
SWPIGWANG wrote:As for "people knowing what was going on", I disagree that that is the case or a meaningful criterion.

You need stupidity or sheer chaos for the system to lose track of its own development.
Well, if people not knowing what is going on is not a meaningful criterion, why mention it? :P The very nature of a mathematical singularity, as far as I'm aware, is precisely that the system loses track of its own development at that point - and that is precisely why I reject the concept; or at least the use of that particular term in this context. Too wanky.
Winston Blake wrote:
Lord Zentei wrote::? I rather got the impression that the singularity was not meant to be a result of continuous if rapid progress, but a "jump" - hence the use of the mathematical term "singularity".
Maybe the idea you're thinking of is 'event horizon'. AFAIK a mathematical singularity really is just a result of continuous progress, like an asymptote of a hyperbola. It's unfortunate that references like 'reaching the Technological Singularity' or 'post-Singularity' are so commonly used, when mathematically you can't ever reach such a point. No doubt I'll soon be corrected by somebody who knows more maths.
Well no, I was not thinking about an event horizon. That is actually not a singular point, but the limit at which the escape velocity is equal to c. And though the singularity may be the result of a continuous process, it is not exactly a part of that process. Hence "singular point".
Winston Blake wrote:Anyway, i expect the idea has simply mutated since it was first pointed out that a graph of historical technological progress appeared to have an asymptote in the near future. The poetic similarity to mind-boggling black holes probably led to this asymptotic point being called the Technological Singularity, and nobody had any better names.
Well, that would be the reason I'm critical of the term: it smacks too much of "woo woo, the Future is here!". Of course, no technological development can actually cross an asymptote - that would imply infinitely fast computation at that instant. It still has to be continuous; at best it is just faster than before.

In that sense, people of the late Middle Ages and Renaissance were living in a "Singularity", because their development went much faster than during the Dark Ages. Given that you have exponential growth, by and large, throughout considerable parts of human history, anyone in those eras is living in a "singularity". The very nature of exponential growth implies that people at any point on the curve think they are experiencing unprecedented growth while those in previous ages were moribund - but from a god's-eye view outside the curve, observing the curve as a whole, their point is not, in fact, "singular" - the relative rate of growth is the same, and quite continuous (as can be seen if you chart the thing on a log scale).
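
To make that concrete, here's a toy calculation (the 2% annual growth rate is just an illustrative number I picked, not a historical estimate):

Code:
import numpy as np

# A toy "technology level" that grows exponentially at 2% per year.
years = np.arange(0, 1001)
tech = np.exp(0.02 * years)

# Year-over-year relative growth is identical at every point on the curve:
# nobody's era looks "singular" from a god's-eye view of the whole curve.
ratios = tech[1:] / tech[:-1]
print(ratios.min(), ratios.max())        # both ~1.0202

# And on a log scale the whole curve is just a straight line with slope 0.02.
log_slope = np.diff(np.log(tech))
print(log_slope.min(), log_slope.max())  # both ~0.02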
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
User avatar
andrewgpaul
Jedi Council Member
Posts: 2270
Joined: 2002-12-30 08:04pm
Location: Glasgow, Scotland

Post by andrewgpaul »

Regarding the original post, and literary recommendations, I'd recommend Ken MacLeod's Fall Revolution series - The Star Fraction, The Stone Canal, The Cassini Division and The Sky Road, as well as Newton's Wake. However, they do kinda skirt round the 'singularity'; the main thrust of the plots involves the causes and side effects of the singularity, rather than dealing with it head-on.

There's also the Culture series, but that had its singularity about 9,000 years before the time of the books, and guess what, it didn't change much after all :)
"So you want to live on a planet?"
"No. I think I'd find it a bit small and wierd."
"Aren't they dangerous? Don't they get hit by stuff?"
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

andrewgpaul wrote:There's also the Culture series, but that had its singularity about 9,000 years before the time of the books, and guess what, it didn't change much after all :)
Generalization from fictional evidence isn't a very well-known fallacy, but it's a fallacy nonetheless.

As somebody who's donated to the Singularity Institute, I guess I should clear things up a bit. What makes this discussion so confusing is that there are several definitions of the term "Singularity". There is the "extrapolation of current trends shows everything will soon explode" definition, which in my view is pretty stupid (as the razor blade picture posted here demonstrates). I'll instead concentrate on the sensible definition. (I'm also slowly writing a more exhaustive essay on the subject - it's taking a while, but I can post a copy of it here when it's finished, if you folks are interested.)

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. -- Vernor Vinge in 1993.

The term "Singularity" was originally chosen to represent the fact that once we have intelligence that's smarter than us running things, we can no longer predict anything about its doings and about what's going to happen next (if we could, it wouldn't be superintelligent). The most likely reason for the Singularity to occur is the development of Artificial Intelligence.

The easiest way to understand how AI might run out of control is hardware-based. Assume that we develop a human-equivalent AI. Assuming Moore's Law still holds, within two years it'll be able to think twice as fast as us. After the next two, four times. Then eight. Then sixteen. Then thirty-two... (And that's a conservative assumption, since it assumes minds thinking faster than us wouldn't speed up Moore's Law. Theoretically, you could have the first doubling, after which the AIs will help reach the next doubling in one year, and then half a year, and then in three months...)
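
Just to spell out the arithmetic behind those two schedules (a toy illustration of the assumptions above, not a prediction):

Code:
# Speed multiplier and elapsed time under two schedules: doubling every fixed
# two-year interval (plain Moore's law) vs. each successive doubling taking
# half as long as the previous one (the AIs speeding up their own development).

def fixed_schedule(doublings, interval=2.0):
    """Years elapsed if every doubling takes `interval` years."""
    return doublings * interval

def accelerating_schedule(doublings, first_interval=2.0):
    """Years elapsed if each doubling takes half as long as the one before it."""
    return sum(first_interval / 2**i for i in range(doublings))

for n in (1, 2, 5, 10, 20):
    print(f"{n:2d} doublings -> speedup x{2**n:>7}, "
          f"fixed: {fixed_schedule(n):4.1f} yr, "
          f"accelerating: {accelerating_schedule(n):5.3f} yr")

# The accelerating schedule never needs more than 4 years in total, no matter
# how many doublings - which is where the runaway intuition comes from.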

Of course, human-equivalent AI isn't really equivalent. We already have computers that have a lot less processing power than us, yet they beat the best of us in chess (while losing miserably at Go). Imagine a mind all of whose knowledge is stored on a hard drive, and which can simply copy any information that somebody else learns. A mind that could learn all the knowledge of all human science in a matter of weeks - if it wasn't available online, it could copy it from elsewhere.

Which brings us to another topic, the ability to consciously observe the workings of your mind. Human introspection is very limited and can't really observe much of our inner workings, which is why our thinking is subject to more biases than most people even imagine (see, for instance, http://en.wikipedia.org/wiki/List_of_cognitive_biases - last that I checked, it listed close to a hundred different biases, counting the ones on the linked memory biases page). People say that an AI would have to be subject to human failings, but there's really no reason why it should be that way - the human brain is a needlessly complex mess, optimized by evolution for a hunter-gatherer environment while maintaining a degree of backwards compatibility with earlier "models". Not only could an AI designer start from a clean slate and build a much more bug-free system than the human brain, an AI that could observe its own mental processes and alter them could spot any errors in its thinking and correct them itself. No "my emotions are making me do things I really, really know I shouldn't", but instead "I'll edit my mental architecture to eliminate any adverse effects".

These are the reasons why organizations like the Singularity Institute for Artificial Intelligence are advocating serious attention to issues of friendly AI. If we're going to have superhuman intelligence that is smart enough to take us all over without much trouble and whose actions we cannot predict at all, we had better be sure that we build it to be friendly to humanity. If it isn't, we'll all end up dead or worse. (On the other hand, if it is friendly, it could create a more wonderful utopia than anything we've ever imagined. The stakes are pretty high, folks.)

Some links in conclusion. There are at least two freely readable works of fiction about the Singularity online. The Metamorphosis of Prime Intellect depicts an AI built to follow Asimov's Laws (but even as Asimov showed us, this isn't always good). Accelerando by Charles Stross won the 2006 Locus Award for Best Novel and is/was shortlisted for the 2006 Hugo Award for Best Novel, as well as the 2006 Arthur C. Clarke Award and the 2005 BSFA award.

As for non-fiction, "Artificial General Intelligence and its Potential Role in the Singularity" (http://www.agiri.org/essays/SingularityAGIGoertzel.htm) becomes an advertisement for its author's Novamente AI engine at the end, but the beginning is a pretty good introductory text for Singularity-related matters. Another very good text that I recommend is Artificial Intelligence and Global Risk - it takes a while to get to the point, but gets very good towards the end. And of course there's my own Singularity essay, once I get it written. ;)
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Xuenay wrote:Assuming Moore's Law still holds,
Except it won't. Which comes down to the fundamental flaw of the idea of a Singularity - it was postulated by a computer scientist instead of a real scientist. When you start looking at real-world limitations, you see that the idea quite simply isn't going to hold up. Moore's law isn't expected to hold out another decade, much less the additional seven years Vinge gave it. Which means you aren't going to have the processor power available that is needed to match, much less overcome, the human brain. The NEC Earth Simulator has about 1/3rd the estimated speed of a human brain, and is the result of a massive amount of parallel processing. The result is that it takes a huge infrastructure to support such a thing - which means that most groups working on AI aren't going to be able to afford it, for one, and more importantly, that the thing is going to have very little if any way to interact with the outside world. It is reminiscent of Archimedes, really - all the great ideas in the world are useless if you don't have a way to implement them or defend yourself.

And that's just to match or slightly exceed human ability. All these ideas like Matrioshka brains fall apart when you look at them from an engineering and entropic perspective.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
SWPIGWANG
Jedi Council Member
Posts: 1693
Joined: 2002-09-24 05:00pm
Location: Commence Primary Ignorance

Post by SWPIGWANG »

The term "Singularity" was originally chosen to represent the fact that once we have intelligence that's smarter than us running things, we can no longer predict anything about its doings and about what's going to happen next (if we could, it wouldn't be superintelligent). The most likely reason for the Singularity to occur is the development of Artificial Intelligence.
I disagree with this idea on a number of levels.

1. An intelligence smarter than the individual human already exists; it's called civilization. An individual human is hopelessly stupid in comparison, and we can indeed model society as a large computer with humans being processing nodes.

2. Super-intelligence is a meaningless term. With the help of logic, all intelligence problems can probably be reduced to some sort of math (certainly, anything that is computable falls into discrete math). Even excluding tight tracing, compared to systems like social change, super-intelligence is far easier to predict, as intelligence defines a limited set of outputs (stupid outputs are not allowed, unlike in potentially self-destructing chaotic systems). The end problem is processing power; the sum of human knowledge has far exceeded the processing power of an individual human for a long time already.

I can't tell you how my laptop in front of me works except in rough terms, and I can't predict the stock market. I don't need to wait for a singularity.
If we're going to have superhuman intelligence that is smart enough to take us all over without much trouble and whose actions we cannot predict at all, we better be sure that we build it to be friendly to humanity.
That is assuming that the technology used to build the super-intelligence does not improve every other part of intelligence in the civilization. So now you build super-intelligent robot v2.3 that beats everything else. However, super-intelligent robot v2.2 is probably around to stop what v2.3 is doing. Even the humans in the system can upgrade their "intelligence capacity" with improvements in technology. By adding "non-decision-making" systems behind humans, the processing power of humans can increase linearly with super-intelligence as well, and there is no reason for the human to know all the details, as they are not relevant. A human with a calculator can do hard math even if the human can't do two hundred thousand algebraic expressions in a second; the human doesn't need to know how in order to produce results, and that is what matters.

Intelligence itself does not have motivations (unless you program/evolve one). It is motivation that is a threat, and such things can't really be worse than humans.

I find the risk of genocidal robots less than that of genocidal humans in an age where humans are unnecessary.
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Xuenay wrote:Assuming Moore's Law still holds, within two years it'll be able to think twice as fast as us.
You're not aware he publicly retracted his 'law', are you?
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

SWPIGWANG wrote:1. An intelligence smarter than the individual human already exists; it's called civilization. An individual human is hopelessly stupid in comparison, and we can indeed model society as a large computer with humans being processing nodes.
"Civilization" is made up of individual humans acting in human ways, with actions predictable by humans. It is an extension of humanity's intelligence, not an entity in itself. Furthermore, civilization's intelligence is relatively compartmentalized. Liberal arts majors usually don't know everything that engineering majors have learned and vice versa - even when applications of knowledge from the other field might prove beneficial.
SWPIGWANG wrote:2. Super-intelligence is a meaningless term. With the help of logic, all intelligence problems can probably be reduced to some sort of math (certainly, anything that is computable falls into discrete math). Even excluding tight tracing, compared to systems like social change, super-intelligence is far easier to predict, as intelligence defines a limited set of outputs (stupid outputs are not allowed, unlike in potentially self-destructing chaotic systems). The end problem is processing power; the sum of human knowledge has far exceeded the processing power of an individual human for a long time already.
All sorts of problems can be reduced to math, but that doesn't mean that we would always understand the math (see the four-color theorem, for one example). What you're essentially saying is "all problems are solvable, therefore the solutions to any problems can be predicted, even the solutions of minds with more processing power and completely different ways of thought than us", which is nonsense.

A good example is Robert Freitas' concept of Sentience Quotient, defined as SQ = log10(I / M), where I is the information processing rate (bits/s) and M is the mass of the brain (kg). Freitas calculated a Venus flytrap to have an SQ of +1, while humans have an SQ of +13. He also calculated that electronic sentiences could reach an SQ of at least +23 (which might be conservative, since the article was written in 1984 - it may be possible to get the number even higher using stuff like nanocomputing, I haven't done the math). An electronic mind could be as much smarter than us as we're smarter than Venus flytraps. Are you seriously going to argue that a civilization of Venus flytraps (assuming they could communicate somehow) could predict the actions of humans?
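For anyone who wants to play with the formula, here's how the numbers work out (the input figures are my own round ballpark values chosen to roughly reproduce the quotients above, not Freitas' exact data):

Code:
import math

def sq(bits_per_second, mass_kg):
    """Freitas' Sentience Quotient: SQ = log10(I / M)."""
    return math.log10(bits_per_second / mass_kg)

print(sq(10, 1.0))      # Venus-flytrap-ish:          ~ +1
print(sq(1e13, 1.5))    # human-ish:                  ~ +12.8, quoted as +13
print(sq(1e24, 10.0))   # hypothetical electronic mind: ~ +23

# The scale is logarithmic: every +1 is a tenfold jump in processing per kg,
# so +13 vs +23 is a factor of ten billion.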
SWPIGWANG wrote:That is assuming that the technology used to build the super-intelligence does not improve every other part of intelligence in the civilization. So now you build super-intelligent robot v2.3 that beats everything else. However, super-intelligent robot v2.2 is probably around to stop what v2.3 is doing.
What you're talking about is takeoff speed, which is currently an open question in Singularity circles. If there's a so-called slow takeoff, AI systems will develop gradually, and they can probably keep each other in check. It's been argued, however, that there is a possibility for a hard takeoff - where a single AI, through recursive self-improvement or some other means, becomes powerful enough to take over the entire planet in a very short time (ranging from days to a couple of years, depending on who you ask). The thought isn't necessarily as far-fetched as it sounds - while from our perspective it took a very long time for Homo sapiens sapiens to take over the planet, it was just an eyeblink on an evolutionary time scale. And once somebody develops an AI that is smarter than us, it doesn't need thousands of years to build up an infrastructure - we've already provided it one, all it needs is to take it over.
SWPIGWANG wrote:Intelligence itself does not have motivations (unless you program/evolve one). It is motivation that is a threat, and such things can't really be worse than humans.

I find the risk of genocidal robots less than that of genocidal humans in an age where humans are unnecessary.
Genocidal robots aren't really the problem. Robots which consider humans unnecessary are. To quote Yudkowsky's AI Risk, "the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

It's the old genie-in-the-bottle problem - you can ask your genie to do something, but you have to word it very, very carefully to make sure you don't get something you didn't want. Instructions whose real purpose is obvious and idiot-proof to you, having a human mind and human ways of thinking, may not be understood in the same way by a mind with arbitrary modes of thought.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

(Whoops, sorry for posting twice in a row - didn't realize that there were more responses than just one. :oops: )
Ender wrote:
Xuenay wrote:Assuming Moore's Law still holds,
Except it won't. Which comes down to the fundamental flaw of the idea of a Singularity - it was postulated by a computer scientist instead of a real scientist. When you start looking at real-world limitations, you see that the idea quite simply isn't going to hold up. Moore's law isn't expected to hold out another decade, much less the additional seven years Vinge gave it. Which means you aren't going to have the processor power available that is needed to match, much less overcome, the human brain. The NEC Earth Simulator has about 1/3rd the estimated speed of a human brain, and is the result of a massive amount of parallel processing.
Blue Gene/L has over two times the processing power of the human brain, using the low-end calculations of the human brain's processing power. If the law holds even a single decade, we'll already have pretty powerful computers after it - that's assuming no new breakthroughs or new paradigms of computing. I believe - though I'm too lazy to verify right now - that Drexler theorized (and so far hasn't been seriously challenged) nanocomputers several orders of magnitude faster than anything we have right now in his Nanosystems. You can't seriously dispute the idea that we'll have at least human-brain equivalent computers one day - because the human brain itself is a proof of concept for them. If evolution, a mindless process of local optimization, could create a nanoscale computer, then so can we, given the right tools.

Furthermore, it's questionable if we even need computers that are human-equivalent. After all, evolution probably has riddled us with loads of unnecessary crap. Some people believe a human-equivalent AI could be built with even today's commercially available hardware.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
EnsGabe
Youngling
Posts: 54
Joined: 2006-07-10 09:49pm

Post by EnsGabe »

SirNitram wrote:
Xuenay wrote:Assuming Moore's Law still holds, within two years it'll be able to think twice as fast as us.
You're not aware he publicly retracted his 'law', are you?
The original form of Moore's Law has been retracted, yes. It is impossible for the density of transistors on a wafer to double every [unit of time] indefinitely. What has been maintained as an industry goal (not as a natural law, as some would hold Moore's Law to be) is the doubling of processor performance achievable for a given investment of capital. IOW, computational ability per dollar has been increasing at a phenomenal rate, roughly approximating the net effect of Moore's original observation.

Part of the point is that while Japan's Earth Simulator is a phenomenal investment now, an equivalently performant system may not be as expensive in two decades.
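
A quick back-of-the-envelope version of that (the doubling period and the price tag are assumptions of mine, purely for illustration):

Code:
def equivalent_cost(cost_today, years, doubling_period=2.0):
    """Cost of equal performance later, if performance per dollar doubles
    every `doubling_period` years."""
    return cost_today / 2 ** (years / doubling_period)

cost_today = 400e6   # hypothetical price tag for a top supercomputer today
for years in (10, 20):
    print(f"{years} years: ${equivalent_cost(cost_today, years):,.0f}")

# -> roughly $12.5M after one decade and $390K after two, under these assumptions.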
The Monarch: "Anyone wanna explain to me why my coccoon is charred?"
24: "Because you told us to blow it up"
The Monarch: "And why it is sideways?"
21: "We were following orders! You can't yell at us for following orders."
24: "Or kill us for following orders."
User avatar
SWPIGWANG
Jedi Council Member
Posts: 1693
Joined: 2002-09-24 05:00pm
Location: Commence Primary Ignorance

Post by SWPIGWANG »

"Civilization" is made up of individual humans acting in human ways, with actions predictable by humans. It is an extension of humanity's intelligence, not an entity in itself.
Civilization can easily be considered an entity in itself. After all, a computer is an extension of electrons' behaviour, and theoretically, with knowledge of how electrons act, it is perfectly predictable.

Civilization is not really predictable. Who could have predicted (wild guesses don't count) a Hitler in 1914? It is a super-chaotic system that has patterns (strange attractors?) but no way of getting a deterministic result.

If we are talking about human-level intelligence, it is really very predictable in comparison, even if it runs at multiple times human speed. A computer running 3 orders of magnitude faster than a human is a simple problem compared to the world running at 12+ orders of magnitude bigger, while being an open system with an absurd number of unmeasured and unmeasurable variables.
Furthermore, civilization's intelligence is relatively compartmentalized. Liberal arts majors usually don't know everything that engineering majors have learned and vice versa - even when applications of knowledge from the other field might prove beneficial.
A civilization's intelligence is not efficient (it is not its purpose, after all), but knowledge can be spread around with no problem as long as the communication structure supports it, which it still does.

*besides liberal arts people don't do much :p

-------------------------------------------------------------------------
"all problems are solvable, therefore the solutions to any problems can be predicted, even the solutions of minds with more processing power and completely different ways of thought than us", which is nonsense.
If you want to know the output of a computer, build an identical one and you are done. It is humans that built the computer; they can build another one.
Are you seriously going to argue that a civilization of Venus flytraps (assuming they could communicate somehow) could predict the actions of humans?
If they have enough time and enough storage space and an accurate model and measurement of humans.

I think the question is not about the absolute predictability of the system, but "comprehension" of the system, since it is often erroneously assumed that one needs to 'comprehend' something to predict it.
where a single AI, through recursive self-improvement or some other means, becomes powerful enough to take over the entire planet in a very short time
It would need the physical processing capacity to do that. Why would we give a single AI that many resources when we are at the edge of human/super-human intelligence? In addition, why are we giving the AI so much physical access to everything else and so little security? Even a super AI would be screwed if it were locked inside a black box without outside access.

To really "take over the world", the AI needs not to only outthink individual humans, but civilization itself. I don't think any thinking device could come close to doing that. It would be a part of the world(stuck by its immutable laws thats difficult to deal with at any level of intelligence), not above it.

---------------------------------------------
I think a neuro-level man-machine interface would come earlier and have a bigger impact than human-level AI, as it instantly boosts human mathematical and memory capacity by orders of magnitude.
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Xuenay wrote:(Whoops, sorry for posting twice in a row - didn't realize that there were more responses than just one. :oops: )
Ender wrote:
Xuenay wrote:Assuming Moore's Law still holds,
Except it won't. Which comes down to the fundamental flaw of the idea of a Singularity - it was postulated by a computer scientist instead of a real scientist. When you start looking at real-world limitations, you see that the idea quite simply isn't going to hold up. Moore's law isn't expected to hold out another decade, much less the additional seven years Vinge gave it. Which means you aren't going to have the processor power available that is needed to match, much less overcome, the human brain. The NEC Earth Simulator has about 1/3rd the estimated speed of a human brain, and is the result of a massive amount of parallel processing.
Blue Gene/L has over two times the processing power of the human brain, using the low-end calculations of the human brain's processing power.
The NEC can do 35.6 trillion calculations per second. What I'm seeing for the human is 100 trillion.
Here's my links:
Computer speed
Humans 1
Humans 2
Humans 3
Where are yours?

If the law holds even a single decade, we'll already have pretty powerful computers after it - that's assuming no new breakthroughs or new paradigms of computing.
Since the issue is materials and waste heat, you can have all the breakthroughs in computing you want and it won't change a damn thing. Like I said, this is what happens when pretend science gets hit with real science.

I believe - though I'm too lazy to verify right now - that Drexler theorized (and so far hasn't been seriously challenged) nanocomputers several orders of magnitude faster than anything we have right now in his Nanosystems.
Drexler theorized, then the engineers stood up and pointed out what a fool he was. Plausible nanotech is heavily dependent on ignoring engineers and claiming that "we will work past it". Except it doesn't work that way. Drexler has been taken down on every front - there is a reason the guy is now ignored by the leaders in the field he invented.

In specific reference to this, large-scale nanocomputing gets asshammered by systemic failure. It takes so many components that even if you have an unrealistically small rate of failure, the sheer number means some fail again and again, and their failure in turn causes more to fail. There is a reason engineers try to minimize components.
You can't seriously dispute the idea that we'll have at least human-brain equivalent computers one day - because the human brain itself is a proof of concept for them. If evolution, a mindless process of local optimization, could create a nanoscale computer, then so can we, given the right tools.
Strawman
Furthermore, it's questionable if we even need computers that are human-equivalent. After all, evolution probably has riddled us with loads of unnecessary crap.
Amazingly, if you want to build AIs that are faster than humans, like you claim are possible, you need to be at least as fast as humans.
Some people believe a human-equivalent AI could be built with even today's commercially available hardware.
Some people think pigs can fly. Yet my ham has never been airborne. If it were possible, why does no one do it?
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Post by Xon »

Xuenay wrote:Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. -- Vernor Vinge in 1993.
People like this crack me up.

Vernor Vinge, while a reasonable writer, has a hardcore anti-government (classical US Libertarian, I think) bent and is a singularity wanker. He is convinced a "government" is non-viable, yet somehow insists a few "super-human AIs" would be better.

:lol:
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

SWPIGWANG wrote:Civilization is not really predictable. Who could have predicted (wild guesses don't count) a Hitler in 1914? It is a super-chaotic system that has patterns (strange attractors?) but no way of getting a deterministic result.

If we are talking about human-level intelligence, it is really very predictable in comparison, even if it runs at multiple times human speed. A computer running 3 orders of magnitude faster than a human is a simple problem compared to the world running at 12+ orders of magnitude bigger, while being an open system with an absurd number of unmeasured and unmeasurable variables.
(Chaotic behavior does not imply non-determinism; it only means the system is hard to predict.)

Um, what makes you think that an intelligence that we've built from an entirely clean slate, with optimized modes of thought that may not at all correspond to human ones, which is capable of recursive self-improvement, and which keeps only its original goals unaltered in its code, would be very predictable?
A civilization's intelligence is not efficient (it is not its purpose, after all), but knowledge can be spread around with no problem as long as the communication structure supports it, which it still does.

*besides liberal arts people don't do much :p
To a certain degree. Yet one person cannot learn all the knowledge of human science; if they want to apply the knowledge to a certain task, they need to form teams of specialists - but a team of specialists won't get insights like "hey, I suddenly remember this obscure detail that we went through in our training, and which is connected to this other obscure detail in the completely opposite field". Knowledge can be communicated, but it could be much faster and more efficient.

* granted :P
If you want to know the output of a computer, build an identical one and you are done. It is humans that built the computer; they can build another one.
Assuming that the first one hasn't taken over the world and denied the other one all access to relevant information and resources by then...
Are you seriously going to argue that a civilization of Venus flytraps (assuming they could communicate somehow) could predict the actions of humans?
If they have enough time and enough storage space and an accurate model and measurement of humans.

I think the question is not about the absolute predictability of the system, but "comprehension" of the system, since it is often erroneously assumed that one needs to 'comprehend' something to predict it.
"Enough time", maybe, but if it was on the scale of billions of years, it probably wouldn't help them in time against the gardener who decided to get rid of all the flytraps and try a new variety of plant next month. ;)
It would need the physical processing capacity to do that. Why would we give a single AI that many resources when we are at the edge of human/super-human intelligence? In addition, why are we giving the AI so much physical access to everything else and so little security? Even a super AI would be screwed if it were locked inside a black box without outside access.
The question being, do we know when an AI is on the edge of human/superhuman intelligence? It could be slowly improving itself under controlled conditions, then suddenly hit an unpredicted breakthrough and make an optimization that allows it to do a thousand improvements a minute when it was doing one an hour before - just when the researchers left for dinner. Then it'd check its Internet connection, con a Federal agent into arresting its creators ASAP, and buy two hundred new server racks with funds it stole using security holes it found by analyzing publicly available software...

Granted, this particular scenario is pretty far-fetched. It's not likely that it would work, but there's a potentially infinite number of other scenarios that might allow an AI to escape. The point is that A) if we don't start paying attention to these issues while AIs are still safely below the point of human-equivalence, it's going to be a nightmare to suddenly start jury-rigging safety measures, and B) even if we do pay attention to AI safety, we should start out by designing a mind that is friendly because it wants to be friendly, not because it has no other choice and is constantly seeking avenues of escape. Anything else is just way too risky.
To really "take over the world", the AI needs not to only outthink individual humans, but civilization itself. I don't think any thinking device could come close to doing that.
Why not? You said yourself that a mind is just a civilization of neurons, and humans are (at least occasionally) capable of predicting the behavior of other humans they encounter. Why couldn't a mind outthink a larger group of components? (Not to mention that "civilization" isn't really a unified whole anyway - it could easily play us against each other.)
I think a neuro-level man-machine interface would come earlier and have a bigger impact than human-level AI, as it instantly boosts human mathematical and memory capacity by orders of magnitude.
I hope it does. I'm sorta afraid of an AI-driven Singularity.
Ender wrote:The NEC can do 35.6 trillion calculations per second. What I'm seeing for the human is 100 trillion.
Here's my links:
Computer speed
Humans 1
Humans 2
Humans 3
Where are yours?
Hmm, it looks like I might need to retract my statement for now. I did some looking, since I was puzzled why you and my source quoted the same page (Humans 3) for their conclusions, and it turned out that the brain estimates and the reports on the computers' processing power gave their figures in units that weren't directly comparable (MIPS vs. FLOPS), yet my source had compared them directly anyway. (Though the "computer speed" link of yours also gives the number in FLOPS...)
Since the issue is materials and waste heat, you can have all the breakthroughs in computing you want and it won't change a damn thing. Like I said, this is what happens when pretend science gets hit with real science.
Didn't vacuum tube computers have serious issues with waste heat as well, before a change to a new paradigm took them away for the time being?
Drexler theorized, then the engineers stood up and pointed out what a fool he was. Plausible nanotech is heavily dependent on ignoring engineers and claiming that "we will work past it". Except it doesn't work that way. Drexler has been taken down on every front - there is a reason the guy is now ignored by the leaders in the field he invented.

In specific reference to this, large-scale nanocomputing gets asshammered by systemic failure. It takes so many components that even if you have an unrealistically small rate of failure, the sheer number means some fail again and again, and their failure in turn causes more to fail. There is a reason engineers try to minimize components.
If I've been misled, I'd like to check that myself. References to this, please?
You can't seriously dispute the idea that we'll have at least human-brain equivalent computers one day - because the human brain itself is a proof of concept for them. If evolution, a mindless process of local optimization, could create a nanoscale computer, then so can we, given the right tools.
Strawman
Not really. It demonstrates that a Singularity is possible, someday - we just can't say for sure when.

Note that I'm not trying to claim that there will be a Singularity in 2010, 2050 or even 2100, just that it will happen at some point. That point could be 35269, for all we know. But you can't really predict scientific progress - fusion power has been just around the corner for the last 50 years, but likewise, in 1940 I think (could've been 1950), Clarke wrote of a moon landing in 2000 and was criticized for being too optimistic. The Internet grew from seemingly out of nowhere to its present size in 20 years or so.

All I'm saying is that a Singularity will take place at some point, and just as we can't say for sure that it will happen within our lifetimes, there's no particular reason to consider it more likely that it won't happen, either.
Furthermore, it's questionable if we even need computers that are human-equivalent. After all, evolution probably has riddled us with loads of unnecessary crap.
Amazingly, if you want to build AIs that are faster than humans, like you claim are possible, you need to be at least as fast as humans.
Depends on how good and optimized your algorithms are. I recall that my old 200-megahertz PC was occasionally slow in emulating SNES ROMs, yet I would not claim that you need a 200 MHz machine to run them, or more sophisticated games (the SNES ran at around 2-4 MHz).
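
The overhead arithmetic, for what it's worth (rough numbers):

Code:
host_mhz = 200     # the old PC
snes_mhz = 3.58    # the SNES CPU clock, roughly
overhead = host_mhz / snes_mhz
print(f"~{overhead:.0f}x the clock rate, and it still lagged sometimes")

# A naive software implementation can need tens of times the raw speed of the
# thing it imitates - and a well-optimized one can need far less. That's the
# point: the hardware you need depends on how good your algorithms are.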
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Mad
Jedi Council Member
Posts: 1923
Joined: 2002-07-04 01:32am
Location: North Carolina, USA
Contact:

Post by Mad »

Ender wrote:
Some people believe a human-equivalent AI could be built with even today's commercially available hardware.
Some people think pigs can fly. Yet my ham has never been airborne. If it were possible, why does no one do it?
Having the required hardware doesn't magically imbue the hardware with the required software. Even having arbitrarily high computing power doesn't mean we know how to program a human-equivalent AI.

Xuenay: I'm curious, though: what is the definition of a human-equivalent AI? An AI that can perform all the functions of a human? Or one that is just intelligent enough as far as thought-processes go? (Say, lacking resource-intensive senses such as vision as we know it and hearing.)
Later...
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

Mad wrote:Xuenay: I'm curious, though: what is the definition of a human-equivalent AI? An AI that can perform all the functions of a human? Or one that is just intelligent enough as far as thought-processes go? (Say, lacking resource-intensive senses such as vision as we know it and hearing.)
It's a bit fuzzy - I'm not aware of any strict definition, but "intelligent enough as far as thought-processes go" would probably be the closest. An intelligence that's broad (as opposed to narrow, like a chess AI) enough to handle most of the things that humans can, with at least roughly the same degree of success.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

Ender wrote:
Xuenay wrote:Assuming Moore's Law still holds,
Except it won't. Which comes down to the fundamental flaw of the idea of a Singularity - it was postulated by a computer scientist instead of a real scientist. When you start looking at real-world limitations, you see that the idea quite simply isn't going to hold up. Moore's law isn't expected to hold out another decade, much less the additional seven years Vinge gave it. Which means you aren't going to have the processor power available that is needed to match, much less overcome, the human brain. The NEC Earth Simulator has about 1/3rd the estimated speed of a human brain, and is the result of a massive amount of parallel processing. The result is that it takes a huge infrastructure to support such a thing - which means that most groups working on AI aren't going to be able to afford it, for one, and more importantly, that the thing is going to have very little if any way to interact with the outside world. It is reminiscent of Archimedes, really - all the great ideas in the world are useless if you don't have a way to implement them or defend yourself.

And that's just to match or slightly exceed human ability. All these ideas like Matrioshka brains fall apart when you look at them from an engineering and entropic perspective.
Er, yes, I know that. My point was that even if we allow for the continuation of Moore's Law, it is not appropriate to call it a "singularity", because its growth function is still continuous at all points. I was arguing against the idea of the singularity, not for it, as I'm sure you'll see.

But yes, ultimate limitations on processor power from fundamental physics are also an issue, limiting the idea still further.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

Lord Zentei wrote:Er, yes, I know that. My point was that even if we allow for the continuation of Moore's Law, it is not appropriate to call it a "singularity", because its growth function is still continuous at all points. I was arguing against the idea of the singularity, not for it, as I'm sure you'll see.

But yes, ultimate limitations on processor power from fundamental physics are also an issue, limiting the idea still further.
Ah, crap. Sorry about that, Ender, I somehow got the silly idea that you were responding to my post there. My bad for not reading your post properly.

andrewgpaul wrote:Regarding the original post, and literary recommendations, I'd recommend Ken MacLeod's Fall Revolution series - The Star Fraction, The Stone Canal, The Cassini Division and The Sky Road, as well as Newton's Wake. However, they do kinda skirt round the 'singularity'; the main thrust of the plots involve the causes and side effects of the singularity, rather than dealing with it head on.
The closest I can think of off-hand is Childhood's End by Arthur C. Clarke, though that is not due to technological development, nor is it due to internal forces within a civilization.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
User avatar
Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Post by Xon »

Lord Zentei wrote:But yes, ultimate limitations on processor power from fundamental physics are also an issue, limiting the idea still further.
Even with processing power heading towards infinity, there are algorithms which cannot complete in polynomial time.

So not only are you limited by physics in how fast/how much you can process, you are also limited by the time taken for a process to complete.
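
For an illustration of the scale involved (the throughput figure is just an arbitrarily generous assumption):

Code:
# Even at an absurdly generous 10^18 operations per second, a brute-force
# search over a space that doubles with every extra input bit stops being
# feasible almost immediately.
ops_per_second = 1e18
seconds_per_year = 3.15e7

for n in (60, 80, 100, 120):
    years = 2**n / ops_per_second / seconds_per_year
    print(f"n = {n:3d}: ~{years:.2e} years of compute")

# n=60 is about a second; n=100 is already ~40,000 years; n=120 is ~4e10 years.
# No amount of raw speed rescues an exponential-time algorithm.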
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

Xon wrote:
Lord Zentei wrote:But yes, ultimate limitations on processor power from fundamental physics are also an issue, limiting the idea still further.
Even with processing power heading towards infinity, there are algorithms which cannot complete in polynomial time.

So not only are you limited by physics in how fast/how much you can process, you are also limited by the time taken for a process to complete.
Indeed. There might be a way around some of that if quantum computers are made viable, though doubtless they too have both practical and theoretical limitations.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

Xon wrote:
Lord Zentei wrote:But yes, ultimate limitations on processor power from fundamental physics are also an issue, limiting the idea still further.
Even with processing power heading towards infinity, there are algorithms which cannot complete in polynomial time.

So not only are you limited by physics in how fast/how much you can process, you are also limited by the time taken for a process to complete.
*shrugs* Nobody said we'd need to create an arbitrarily intelligent being with the intelligence of the whole universe - we just need to create one that's considerably smarter than humans. Which, considering that the human brain is limited by fundamental physics as well, and its software side just isn't very good, doesn't seem to be all that impossible.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

Xuenay wrote:*shrugs* Nobody said we'd need to create an arbitrarily intelligent being with the intelligence of the whole universe - we just need to create one that's considerably smarter than humans. Which, considering that the human brain is limited by fundamental physics as well, and its software side just isn't very good, doesn't seem to be all that impossible.
The interpretations of the Singularity do indeed insinuate infinite advancement, going by the Wiki quote provided. Some of these imply that a mathematical singularity will occur in the development of technology, which reason suggests is not possible in reality, since that implies infinitely fast acceleration of development. Another quote reads:
Good attested that an artificial mind in possession of a formal description of itself would be capable of incremental and additive self-improvements in its own intelligence ad infinitum.
The "ad infinitum" bit is the sticking point: infinite advancement (which, as Xon and I have pointed out, is doubtful to be all that relevant anyway).

I don't think anyone here is rejecting the idea that it is possible to create an AI that is smarter than a human, but it is not just that which is implied by the Singularity idea, as far as I can see.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

Lord Zentei wrote:The interpretations of the Singularity do indeed insinuate infinite advancement, going by the Wiki quote provided. Some of these imply that a mathematical singularity will occur in the development of technology, which reason suggests is not possible in reality, since that implies infinitely fast acceleration of development. Another quote reads...
Gnnh. I wonder whose smart idea it was to include that in the Wiki article... I left a comment on the talk page about it, because it doesn't feel to me that "infinite advancement" is in any way integral to the definition of the Singularity. It's just the point where AIs get smarter than humans and end the human era, that's all. :)
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Azrael
Youngling
Posts: 132
Joined: 2006-07-04 01:08pm

Post by Azrael »

It's just the point where AIs get smarter than humans and end the human era, that's all.
Why do people assume that sentient AIs would automatically be evil? It's a great cliché for sci-fi TV and movies, but no one has yet explained why intelligence = malevolence.

Just because a computer is sentient doesn't mean it thinks just like a human would. Besides, it is not logical to conclude that simple beings so far beneath you pose a threat significant enough to warrant the time- and resource-intensive option of war, let alone genocide; and with no emotions, morals or ethics, logic is the only thing that's left.
We are the Catholics.
You will be assimilated.
Stop reading Harry Potter.
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Post by Xuenay »

Nobody's assuming that the AIs will be evil. It's just that:

A) Once you have the basis for a superintelligent AI, what are you going to program it to do? You have to be very, very careful in giving it instructions, exactly because it doesn't think like a human. Program it to "make all humans smile", and it might turn all the matter in the solar system into billions of tiny pictures of smiling humans.

B) Assume you got the programming done right for the AI, and it really is Friendly and wants to help humans. In the process of helping humanity, it takes over the world. While this is a good outcome, it can still be said to end the human era, since humans are no longer the ones deciding things.

C) Assume you get the programming right, but for whatever reasons (probably a slow takeoff and lots of other competing minds) your AI isn't enough to take over the world. In all likelihood, it's still enough to do everything better than humans, so nearly all jobs that would usually have been filled by humans will go to computers. The result may be a communist utopia (since nobody needs to work anymore) or whatever, but humans won't be doing much. Again, it's not all that much of a stretch to call this the "end of the human era", though it's a bit more of a stretch than in alternatives A and B.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
NRS Guardian
Jedi Knight
Posts: 531
Joined: 2004-09-11 09:11pm
Location: Colorado

Post by NRS Guardian »

Even given a fast take-off, an AI can't take over the world unless given the resources to do so. In spite of the sci-fi cliché of humans giving an AI that much control, the known dangers and humans' predilection for resisting giving up any sort of control (even when it would be beneficial) tend against an AI being given that much control, even if we could be absolutely certain its intentions were benevolent. Also, considering that we're using the human mind as a model, and that an AI based on human thought processes would base its improved AIs on itself, it's doubtful that an AI will be so drastically different from us that we will be totally unable to predict its actions.
"It is not necessary to hope in order to persevere."
-William of Nassau, Prince of Orange

Economic Left/Right: 0.88
Social Libertarian/Authoritarian: 2.10