The Word “Singularity” Has Lost All Meaning

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

The Word “Singularity” Has Lost All Meaning

Post by Starglider » 2007-07-15 04:58am

This is a blog entry by a friend of mine, but I thought I would repost it due to idiots like the Orion's Arm lot (plus assorted new age morons and generally clueless meme whores that Michael doesn't even bother to address here) discrediting an actually very important concept. I've written essays like this myself on occasion, but not as clearly.

----

The Word “Singularity” Has Lost All Meaning
by Michael Anissimov

Yes, it’s come to that point. The word “Singularity” has been losing meaning for a while now, but whatever semblance of a unified or coherent definition there ever used to be, it has long faded away over the horizon. Rather than any single idea, Singularity has become a signifier used to refer to a general cluster of ideas, some interrelated; some, blatantly not. These ideas include: exponential growth, transhuman intelligence, mind uploading, singletons, popularity of the Internet, feasibility of life extension, some developmentally predetermined “next step in human evolution”, feasibility of strong AI, feasibility of advanced nanotechnology, some odd spiritual-esque transcension, and whether or not human development is primarily dictated by technological or social forces. Quite frankly, it’s a mess.

Anytime someone gets up in front of an audience and starts trying to talk about the “Singularity” without carefully defining exactly what they mean and don’t mean, each audience member will think of an entirely different set of concepts, draw their own opinions from that unique set, and interpret everything they hear afterwards in light of those opinions, which may not even be based on the same premises as those of the person sitting next to them. For an audience of 50 people, you could very well have 50 unique idea sets that each listener personally thinks represents the Singularity. For such challenging and sometimes confusing topics, clarity and specificity are a necessity, so we might as well discard the overused word “Singularity” and talk about what we actually mean using more specific terms. It helps keep things distinct from one another.

Even more confusing is that there are technologies, and then there are plausible or possible consequences from the technologies - two things which are very distinct. Both lines of inquiry can cause heated argument, even when everything is perfectly delineated! But the delineation is still important, so after the argument is over, you actually know what you were arguing about. Below, I’m going to slice up various concepts associated with the term “Singularity” into ideas that can actually be examined individually:

1) Exponential growth: it sure looks like technological progress is accelerating to me, and on many objective metrics it is, but maybe some others disagree. But guess what: whether or not progress is accelerating is largely irrelevant to the feasibility of mind uploading, cryonics, or superintelligence. It may influence timeframes, but not feasibility in the abstract sense. When acceleration skeptics say “technological progress is not accelerating, therefore all this other transhumanist stuff is impossible”, they’re missing the point: if a given technology is feasible, it is likely to be invented eventually unless globally suppressed, but the question of when is entirely separate. In principle, transhuman intelligence could be created during a time of accelerating progress, constant progress, or even stagnation. This was mentioned at the last Singularity Summit.

2) Radical life extension: again, radical life extension (people living to 100, 200, 300, and beyond) seems very plausible to me, and I believe that we are going to experience it ourselves in our lifetimes, unless an existential disaster occurs. A Berkeley demographer found that the maximum lifespan of human beings is increasing at an accelerating rate. However, life extension has very little, if anything, to do with the Singularity, other than that the Singularity is sometimes associated with technological progress and that technological progress may result in radically extended lifespans. This is rather like how house mice are loosely associated with raccoons because both live in densely human-populated areas.

3) Mind uploading: in his “Rapture of the Geeks” article, which I’m not even going to link, Cory Doctorow made the mistake of thinking that the “Singularity” is all about the feasibility of mind uploading and that Singularity activists’ primary goal is to upload everyone into a computer simulation. This is confusion caused by not looking hard enough - you’re busy, you have to go protest copyright law or whatever, have to go to a meeting, blah blah blah, so you just read a few web pages that give you a totally skewed view of what you’re trying to criticize, and come to the conclusion that “Singularity” = mind uploading. You hope to get away with it because you realize this is cutting-edge stuff and most people don’t know the difference between an uploaded superintelligence and a de novo superintelligence, for instance, so you just go for it. Bad idea. Mind uploading and the Singularity (my definition: transhuman intelligence) are totally different things. Transhuman intelligence might lead to uploading, but they’re not equivalent.

4) Feasibility of strong AI: this is rightly closely associated with the Singularity, but it’s still not the same thing. You can be a refusenik of strong AI and still advocate intelligence enhancement. You can want to die at age 80, believe that progress is not accelerating and that pro-mind-uploading people are crazy, and still advocate “the Singularity”, because the Singularity is supposed to mean intelligence enhancement: that’s it! Feasibility of strong AI is more closely related to the Singularity than the above topics, because there is a large group of Singularity activists (aka Singularitarians, spell it right) trying to build strong AI… but if you’re anti-strong-AI and think that means you’re anti-Singularity, you should think again, and recognize that the Singularity and strong AI are not the same thing. You can have a Singularity with enhanced human intelligence and no AI involved at all. It’s just that many Singularity activists think that AI is the easiest way to achieve intelligence enhancement - the Singularity. We could change our minds given significant persuasion - we chose AI because it looks like the easiest and safest path, not because we have some special AI fetish. It’s a means to an end, and that’s all.

5) Transhuman intelligence: what “the Singularity” was always supposed to mean, but which has become radically, radically diluted as of late. Complicating matters is that many people have different views of what transhuman intelligence is supposed to be, so even if we shave the term down to just this, there is still confusion. Let me put it this way: transhuman intelligence is not a specific thing, it’s a space of possible things, encompassing human intelligence enhancement through drugs, gene therapy, brain-computer interfacing, brain-brain interfacing, and maybe other techniques we haven’t even considered. It also encompasses AI, but not present-day human networking or the Internet - these are simply new ways of arranging human-level intelligence. (Legos can’t be made into brick-and-mortar buildings, no matter how you configure them.) To me, transhuman intelligence is completely inevitable in the long run - it will be developed; the questions are how, by whom, and when.

So, five different things. Unrelated, but frequently conflated. If you want to critique or support something, focus on that specific thing: don’t confuse yourself and others by smearing them all together! And if you’re planning on attending the next Singularity Summit in San Francisco, and aren’t already thoroughly familiar with the ideas surrounding the Singularity, I suggest you sit near me, so I can translate, because I doubt most of the speakers will have a very coherent or well-defined view of the Singularity either. Stewart Brand, for instance, says, “The Singularity is a frightening prospect for humanity. I assume that we will somehow dodge or finesse it in reality” - but what does he actually mean? It’s so incredibly difficult to tell. I’m not picking on Brand specifically here, just repeating my original point in this post: that for every 50 people, you may very well have 50 completely different conceptions of what the Singularity is.

----

'General purpose nanoassemblers' (and associated concepts like utility fog) are a subset of (1), but they get a lot of hype (mostly by people who don't understand the physical details and thus make a hash of advocating them). (3) and (4) both imply (5) with almost complete certainty, but that isn't obvious unless you've studied them in depth. Nanotech incidentally weakly implies (4) in that with ridiculous amounts of computing power (through nanocomputing substrates) strong AI becomes relatively easy to brute-force (not that this is a good idea, it isn't). (2) is pretty much a red herring, as is all the 'human singularity' nonsense you may or may not have heard about (essentially they think Google-style 'super collaboration tools' are ultimately going to link humanity up into an effective super intelligence - right).

Executor32
Jedi Council Member
Posts: 2088
Joined: 2004-01-31 03:48am
Location: In a Georgia courtroom, watching a spectacle unfold

Post by Executor32 » 2007-07-15 06:19am

He forgot one, the gravitational singularity, which is what I immediately think of when I hear the term "singularity", and what I thought this thread was about before I read it.
Why? Because you touch yourself at night.
Long ago in a distant land, I, Aku, the shape-shifting Master of Darkness, unleashed an unspeakable evil,
but a foolish samurai warrior wielding a magic sword stepped forth to oppose me. Before the final blow
was struck, I tore open a portal in time and flung him into the future, where my evil is law! Now, the fool
seeks to return to the past, and undo the future that is Aku...
-Aku, Master of Masters, Deliverer of Darkness, Shogun of Sorrow

Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark » 2007-07-15 06:25am

Starglider is an AI puke, after all. That said, I can't complain about people giving shit to 'singularity': the internet makes everyone an ill-informed self-proclaimed expert, after all. :)

Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider » 2007-07-15 06:25am

Executor32 wrote:He forgot one, the gravitational singularity, which is what I immediately think of when I hear the term "singularity", and what I thought this thread was about before I read it.
Vinge actually coined the term in relation to the event horizon of a gravitational singularity. It's possible we'd all be better off with a different term, but I can't think of any good candidates.

Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear

Post by Zixinus » 2007-07-15 08:38am

To me, the technological singularity is the point in predicted future history beyond which it is impossible to predict anything with any real promise of accuracy. I understand that this is due to the unpredictable ways future technologies alter society, to the point where currently known economics and models can no longer be applied. It is impossible to get anything out of it, so it's like a black hole in a sense, i.e. a "singularity".

Does that roughly describe the "singularity" we are talking about, or am I spouting bullshit in capital letters?

Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Post by Xon » 2007-07-15 08:55am

That was the original definition of the term, but since it applies just as well to the Industrial Revolution, it doesn't get much use these days :lol:
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.

Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider » 2007-07-15 09:18am

Zixinus wrote:To me, the technological singularity is the point in predicted future history beyond which it is impossible to predict anything with any real promise of accuracy. I understand that this is due to the unpredictable ways future technologies alter society, to the point where currently known economics and models can no longer be applied.
Just technology alone won't do it, no matter how transformative, as long as the technology is being directed by humans. We can't accurately predict what the consequences of possible future technologies will be, but we can be sure that entrepreneurs will try to make money out of them, that people will look for military applications and use them if they offer a lethality advantage, and that safety regulations will be ignored by the unscrupulous if dangerous technology is not tightly controlled. We know that if there are security flaws, they will be exploited by criminals and hostile governments alike. We don't know how to build a holodeck yet, but we know that if we could, it would be used for sexual fantasies, because that's just human nature. We can even predict pretty well what the social structure of a space colony will look like by taking a look at scientific outposts or frontier settlements.

The sharp predictive horizon comes from the creation of greater than human intelligence (though actually human-equivalent but definitely non-human intelligence in sufficient numbers will do it too). We cannot predict what the impact of different goal and reasoning systems will be on society, we cannot predict how much better transhuman intelligences will be at achieving their goals, and we certainly cannot predict where things will go once intelligences (even initially human ones) get the ability to reliably and deliberately modify their own goals, memories and reasoning systems. Every historical precedent suddenly becomes invalid.

NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27289
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord » 2007-07-15 09:53am

Starglider wrote:Every historical precedent suddenly becomes invalid.
You imagine and/or hope.

I wouldn't be so sure that the advent of 'different' intelligences (if human-level+ intelligences that are different in their mode of thought beyond 'being faster' are practical or desirable, of course) would mean the demise of irrationality, greed, and all the other motives that have driven history, good or bad.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth

Academia Nut
Sith Devotee
Posts: 2598
Joined: 2005-08-23 10:44pm
Location: Edmonton, Alberta

Post by Academia Nut » 2007-07-15 11:47am

Well, I think that is the point, is it not, NecronLord? The creation of non-human intelligences is something that has never happened before, and thus we have no models to predict how they will behave. So, while the end result might be a general return to the status quo, we might instead go off on a completely different path. We simply can't predict what will happen - thus, singularity, no? Of course, a singularity that ends in a return to normality will inevitably be questioned as to whether it was actually a singularity in the first place, but that's a completely different kettle of fish, no?
I love learning. Teach me. I will listen.
You know, if Christian dogma included a ten-foot tall Jesus walking around in battle armor and smashing retarded cultists with a gaint mace, I might just convert - Noble Ire on Jesus smashing Scientologists

Surlethe
HATES GRADING
Posts: 12265
Joined: 2004-12-29 03:41pm

Post by Surlethe » 2007-07-15 12:22pm

Starglider wrote:
Executor32 wrote:He forgot one, the gravitational singularity, which is what I immediately think of when I hear the term "singularity", and what I thought this thread was about before I read it.
Vinge actually coined the term in relation to the event horizon of a gravitational singularity. It's possible we'd all be better off with a different term, but I can't think of any good candidates.
I do believe the original definition of "singularity" was mathematical in nature. :wink:

I personally had understood the concept of "singularity" to mean that intelligence will grow not exponentially but faster, with a vertical asymptote at some time t in the future. This was the origin of the term singularity with respect to this problem: there would be a singularity at t in the function describing time-dependent average intelligence. Is this understanding correct at all?
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass

Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider » 2007-07-15 12:38pm

NecronLord wrote:
Starglider wrote:Every historical precedent suddenly becomes invalid.
You imagine and/or hope.
Imagine, yes. The 'hope' part only comes in if we succeed in causing the first transhuman intelligence to be moral and human-friendly, rather than of an arbitrary, indifferent or even malign nature.
I wouldn't be so sure that the advent of 'different' intelligences (if human-level+ intelligences that are different in their mode of thought
They are. There is actually a normative standard of reasoning, and humans are nowhere near it; we have a vast amount of junk and half-working, half-finished crap left in our brains. Evolution is a dubious designer at the best of times, and it hasn't had that long to work on human brains. Self-enhancing intelligences will converge on a close-to-normative reasoning architecture for almost any goal system, because more intelligence almost always helps (certainly it doesn't hurt) in achieving goals.

However, there are no normative goals (in other words, as far as anyone knows there is no objective morality and no reason why there should be). There are attractors under particular population dynamics (e.g. the goals of survival and reproduction are attractors in a selective environment), but we can't really predict the attractors in advance for arbitrary dynamics, much less the actual goal systems that will turn up. The only chance for any kind of predictability is to very tightly control the initial conditions.
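The attractor point can be illustrated with a toy replicator-dynamics simulation (entirely illustrative - the population model, mutation rate and parameters are invented for the sketch): under fitness-proportional selection, a heritable 'reproduction drive' gets pulled toward its attractor regardless of the initial mix.

```python
import random

# Toy selective environment: each agent carries a heritable "reproduction
# drive" in [0, 1], and its chance of leaving offspring is proportional to
# that drive. All numbers here are invented for illustration.
random.seed(0)
pop = [random.random() for _ in range(1000)]   # initial drives, uniform

for generation in range(50):
    # fitness-proportional parent selection, plus a little mutation
    pop = [min(1.0, max(0.0,
                        random.choices(pop, weights=pop, k=1)[0]
                        + random.gauss(0, 0.01)))
           for _ in range(1000)]

mean = sum(pop) / len(pop)
# selection drags the mean well above its initial ~0.5, toward the attractor
print(f"mean drive after 50 generations: {mean:.2f}")
```

The point of the sketch is the easy direction: when the dynamics are fully specified, the attractor is predictable. For arbitrary goal systems under arbitrary dynamics there is no equivalent model to run, which is the problem.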
beyond 'being faster' are practical or desireable, of course)
Even ignoring all the other factors, 'being faster' makes a huge difference. The effective clock rate for neurons is 200 Hz. The effective clock rate for moderately complex CMOS logic blocks is currently several gigahertz. Existing technology pretty much already suffices for reimplementing the human brain at 10,000,000 times its original speed; we just don't have the blueprint. Of course this would use somewhere between three and five orders of magnitude more power than a human brain, a figure that will keep coming down as chip technology continues to advance. But regardless, thinking 10,000,000 times faster than humans is a massive difference, even for intelligences that are otherwise human analogues (e.g. uploads).
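The back-of-envelope arithmetic behind those figures can be sketched as follows (the 2 GHz effective logic rate and the 4-orders-of-magnitude efficiency penalty are illustrative assumptions, not measurements):

```python
# Order-of-magnitude comparison of neural vs CMOS "clock rates".
# All figures are rough assumptions for illustration.
neuron_rate_hz = 200        # effective update rate of a biological neuron
cmos_rate_hz = 2e9          # effective rate of a moderately complex logic block

speedup = cmos_rate_hz / neuron_rate_hz
print(f"speedup: {speedup:,.0f}x")    # 10,000,000x

# Power cost of that speedup, assuming ~4 orders of magnitude worse
# energy efficiency than biology (the middle of the 3-5 range above).
brain_power_w = 20                    # human brain: roughly 20-25 W
silicon_power_w = brain_power_w * 1e4
print(f"silicon brain: ~{silicon_power_w / 1e3:,.0f} kW")
```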
would mean the demise of irrationality,
Irrationality is unlikely to persist in transhumans, in that for a self-modifying or de novo intelligence it only exists if you explicitly put it in there and then protect it with a specific desire to be irrational. No AGI researcher I know of is insane enough to deliberately base beliefs about the world on anything except evidence (some may do it by accident, but probably not in a way that is stable under self-modification). Human uploads will be as irrational as they were before until they either start rewriting themselves or directly interfacing with probability-logic-based decision support systems. The latter is of course theoretically possible via brain-computer interfacing alone, but requires a very high level of finesse compared to the other applications.
greed, and all the other motives that have driven history, good or bad.
For AI 'all the other' motives will be there if a) they get explicitly put there or b) they get copied (i.e. uploaded) and not changed. (a) is frankly highly unlikely. A few AI people are silly enough to try and copy how they think the human goal system works, but even if they somehow succeeded in making an AGI they're almost certain to get it horribly wrong. Human drives and instincts are a tiny, tiny point in a vast space of possibilities (they're a pretty small slice even of the much smaller space of goal systems reachable by evolution, as sci-fi writers who take aliens seriously like to point out).

For other routes to transhuman intelligence, they will probably be there. Drugs, microsurgery and cybernetics, probably, though there may be some goal-system-impacting side effects (psychosis seems to be a popular one with B-movie directors :P). With genetic engineering it would be quite tricky to avoid impacting desires; we simply aren't likely to understand the precise cause-and-effect relationship between every gene involved in specifying brain structure any time soon (and certainly unscrupulous people will be able to start taking educated guesses and hoping a few specimens turn out ok well before then; actually they could do it now, the guesses just won't be all that educated). I don't think anyone doing real research is seriously proposing uplifting or otherwise engineering new intelligences from another biological source; the concept seems confined to a few sci-fi authors at present. But if someone did do it, the resulting organism would have a motivational system relatively similar to, but still quite different from, a human's; i.e. we can't predict what a society of human-intelligence dogs would look like with any degree of reliability. Sociology is so immature at present that we can't reliably predict what a group of 1,000 humans would do in a particular situation without resorting to intuition and analogy to similar past circumstances, and neither of those is going to work on non-human intelligences.

Battlehymn Republic
Jedi Council Member
Posts: 1817
Joined: 2004-10-27 01:34pm

Post by Battlehymn Republic » 2007-07-15 12:59pm

Just wait until the term moves from the Wired mainstream to the mainstream mainstream. It'll be the next "meme."

Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider » 2007-07-15 01:30pm

Battlehymn Republic wrote:Just wait until the term moves from the Wired mainstream to the mainstream mainstream. It'll be the next "meme."
I doubt it will. It's just too complicated, esoteric and scary. You can't even begin to explain it without referencing prerequisites which most people don't understand. That's for the best, I think; nothing would be gained by mass popularisation of it.

NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27289
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord » 2007-07-15 03:09pm

EDIT: In fact, I can summarise the following post to make it much easier to work with. Here goes.

"I seriously doubt anything but some AGI - which, unless it has the irrationality of caring for others built in somehow, will sod off and do exactly what serves its own ends - will decide that all irrationality is to be purged. That's because we find things like love, companionship and social approval to be of such incredible importance, even though they're mechanisms for maintaining society, that hardly anyone would part with them.

"Some things I expect will go (obvious examples being a capacity for substance addiction or extremes of 'negative' emotions), but I don't see humans willingly deciding to abandon the key irrationality of desire for other humans to interact with at a level beyond 'collective advantage.' Which will limit massively how much the majority of humanity (though divergent groups and forms are likely too, I would say, some of which may out-compete others) will move away from their historical behaviour.

"There's a reason things like Cybermen are depicted in our fiction as villains to be defeated and not improvements to be welcomed. Yes, it's a glib and extreme example, but that's essentially why I think there's a limit on the degree of thought change to humans as a whole that's likely to come from advances in technology.

"I expect, and look forward to, even, distinct improvements in humanity due to the advance of science. But I don't think they'll go so far as to render historical comparison completely worthless, for the majority of people."

I've got to leave this damn post alone now. It's one of those ones I keep wanting to edit.
Starglider wrote: Self-enhancing intelligences will converge on a close-to-normative reasoning architecture for almost any goal system, because more intelligence almost always helps (certainly it doesn't hurt) in achieving goals.
You presume, here, that any post-human will overcome their initial resistance to taking on a 'normative standard of reasoning.' I would have no desire to operate with a perfect standard of reasoning, no matter what advantages it might supply.
beyond 'being faster' are practical or desireable, of course)
Even ignoring all the other factors, 'being faster' makes a huge difference. The effective clock rate for neurons is 200 Hz. The effective clock rate for moderately complex CMOS logic blocks is currently several gigahertz. Existing technology pretty much already suffices for reimplementing the human brain at 10,000,000 times its original speed; we just don't have the blueprint. Of course this would use somewhere between three and five orders of magnitude more power than a human brain, a figure that will keep coming down as chip technology continues to advance. But regardless, thinking 10,000,000 times faster than humans is a massive difference, even for intelligences that are otherwise human analogues (e.g. uploads).
I can't see why you posted that spiel, apart from the joy of saying it. I know machines are faster, thank you.
Irrationality is unlikely to persist in transhumans, in that for a self-modifying or de novo intelligence it only exists if you explicitly put it in there and then protect it with a specific desire to be irrational.
Most existing human irrationalities are quite good at preserving themselves. I imagine few people would give up the possibility of love for "material benefits." People won't give up religion and romanticism and nostalgia and so on just because their brains get better. If people become aware that taking 'genius' will result in deciding god doesn't exist, or that sex is purely for pleasure and reproduction, a great many won't take it. With AGI, you may see such things disappear, but every other form of transhumanism that I can think of offhand is likely to see many aspects of human thought survive to some greater or lesser degree.

And even then, what I would contend are more basic irrationalities - 'a desire for companionship and to receive the approval of others', for example - are likely to persist in post-humans, too... Few people are going to have surgery that'll make them decide that society exists purely to give the benefits of scale; I would suggest that's a form of existence that will seem soulless and uninteresting to people who observe it.
No AGI researcher I know of is insane enough to deliberately make beliefs about the world based on anything except evidence (some may do it by accident but probably not in a way that is stable under self-modification).
I'm sure lots of them want to make AGI that likes and wishes to cohabit with humans or human descendants, though. That's an irrationality, compared to simply ignoring them whenever it's not strictly necessary to do otherwise and following whatever it decides its own goals are... Banks' "every perfect AI sublimes" quote springs to mind.
Human uploads will be as irrational as they were before until they either start rewriting themselves or directly interfacing with probability-logic-based decision support systems. The latter is of course theoretically possible via brain-computer interfacing alone, but requires a very high level of finesse compared to the other applications.
greed, and all the other motives that have driven history, good or bad.
For other routes to transhuman intelligence, they will probably be there.
Quite. Which was my point. I doubt that, just because the ability exists to give people 'perfect reasoning', it will be adopted in anything but the extreme long term. After all, humans fear what's different...

Battlehymn Republic
Jedi Council Member
Posts: 1817
Joined: 2004-10-27 01:34pm

Post by Battlehymn Republic » 2007-07-15 06:54pm

Starglider wrote:
Battlehymn Republic wrote:Just wait until the term moves from the Wired mainstream to the mainstream mainstream. It'll be the next "meme."
I doubt it will. It's just too complicated, esoteric and scary. You can't even begin to explain it without referencing prerequisites which most people don't understand. That's for the best, I think; nothing would be gained by mass popularisation of it.
Sure you can. Just bring up Moore's Law, the Matrix, and nanomachines a lot. Never mind that it wouldn't be a remotely accurate use of what the word means. Though you're right; at the moment there's no reason for mass popularization.

Patrick Degan
Emperor's Hand
Posts: 14847
Joined: 2002-07-15 08:06am
Location: Orleanian in exile

Post by Patrick Degan » 2007-07-15 08:06pm

The word singularity lost its meaning the moment it was applied to anything other than describing the zero-dimensional point which is a black hole.
When ballots have fairly and constitutionally decided, there can be no successful appeal back to bullets.
—Abraham Lincoln

People pray so that God won't crush them like bugs.
—Dr. Gregory House

Oil an emergency?! It's about time, Brigadier, that the leaders of this planet of yours realised that to remain dependent upon a mineral slime simply doesn't make sense.
—The Doctor "Terror Of The Zygons" (1975)

Xeriar
Jedi Council Member
Posts: 1739
Joined: 2005-10-21 02:48am
Location: Twin Cities, MN, USA

Post by Xeriar » 2007-07-15 08:26pm

Patrick Degan wrote:The word singularity lost its meaning the moment it was applied to anything other than describing the zero-dimensional point which is a black hole.
...so it lost meaning when it was first coined? Cute.

All this aside, something the size of the human brain being ten million times faster is nonsense to me. The human brain dissipates 25 watts for what amounts to roughly 1E18 floating-point operations per second (~300 hertz and ~1,000-100,000 additions per firing), not taking into account plasticity, self-repair operations, the considerable amount of processing that goes on within a cell, or the potential capabilities that glial cells may have.
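For what it's worth, a figure on that order can be re-derived from standard back-of-envelope numbers (a sketch; the ~1e11 neuron count is the usual textbook estimate, and 3e4 operations per firing is just the middle of the 1,000-100,000 range above):

```python
# Back-of-envelope estimate of the brain's effective processing rate,
# using assumed round numbers rather than measurements.
neurons = 1e11            # standard textbook estimate of neuron count
firing_rate_hz = 300      # effective firing rate, as above
ops_per_firing = 3e4      # middle of the ~1e3-1e5 range

ops_per_second = neurons * firing_rate_hz * ops_per_firing
print(f"~{ops_per_second:.0e} ops/s")     # on the order of 1e18

brain_power_w = 25
print(f"~{ops_per_second / brain_power_w:.0e} ops/s per watt")
```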

Even ignoring that, the human brain ends up being within a few orders of magnitude of theoretical maximum efficiency.

User avatar
Patrick Degan
Emperor's Hand
Posts: 14847
Joined: 2002-07-15 08:06am
Location: Orleanian in exile

Post by Patrick Degan » 2007-07-15 08:35pm

Xeriar wrote:
Patrick Degan wrote:The word singularity lost its meaning the moment it was applied to anything other than describing the zero-dimensional point which is a black hole.
...so it lost meaning when it was first coined? Cute.
Point taken.
When ballots have fairly and constitutionally decided, there can be no successful appeal back to bullets.
—Abraham Lincoln

People pray so that God won't crush them like bugs.
—Dr. Gregory House

Oil an emergency?! It's about time, Brigadier, that the leaders of this planet of yours realised that to remain dependent upon a mineral slime simply doesn't make sense.
—The Doctor "Terror Of The Zygons" (1975)

User avatar
Winston Blake
Sith Devotee
Posts: 2529
Joined: 2004-03-26 01:58am
Location: Australia

Post by Winston Blake » 2007-07-15 09:37pm

Surlethe wrote:I personally had understood the concept of "singularity" to mean that intelligence will grow not exponentially but faster, with a vertical asymptote at some time t in the future. This was the origin of the term singularity with respect to this problem: there would be a singularity at t in the function describing time-dependent average intelligence. Is this understanding correct at all?
It's ironic that the OP quote attempts to return the term 'Singularity' to its fundamental root, but ultimately misses it. AFAIK, Ray Kurzweil simply plotted the technological changes he could think of and somehow found hyperbolic growth. Hyperbolic, not exponential. In other words, a mathematical singularity at which 'technology' becomes infinity. Hence 'technological singularity'.

The transhuman intelligence and nanotech stuff is all apologism for the obvious objections to Kurzweil's simplistic static analysis. Oh yeah, Moore's Law will hold true forever because of... nanotech! Oh, it's not the bottom half of a logistic curve because... transhuman intelligence will somehow smoothly join in to make technology keep growing faster for all time! I doubt most 'Singularitarians' even consider that there are oodles of growths stronger than linear, or that exponential growth has no singularity.
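The distinction Winston is drawing can be made concrete in a few lines. This is only an illustration with made-up constants, not Kurzweil's actual data: hyperbolic growth has a mathematical singularity (a vertical asymptote at a finite time T0), while exponential growth, however fast, never does.

```python
import math

# Illustrative constants only - T0 is a hypothetical 'singularity date'.
T0 = 10.0

def hyperbolic(t):
    """Solves dy/dt = y**2 with y(0) = 1/T0: blows up as t approaches T0."""
    return 1.0 / (T0 - t)

def exponential(t):
    """Grows fast, but is finite at every finite t: no singularity anywhere."""
    return math.exp(t)

for t in [9.0, 9.9, 9.99]:
    print(t, hyperbolic(t), exponential(t))
# hyperbolic(t) grows without bound as t -> T0; exponential(t) stays finite.
```

The point being that "growth is very fast" and "growth reaches infinity at a finite date" are entirely different mathematical claims.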

Gigaliel
Padawan Learner
Posts: 171
Joined: 2005-12-30 06:15pm
Location: TILT

Post by Gigaliel » 2007-07-15 10:47pm

Winston Blake wrote:It's ironic that the OP quote attempts to return the term 'Singularity' to its fundamental root, but ultimately misses it. AFAIK, Ray Kurzweil simply plotted the technological changes he could think of and somehow found hyperbolic growth. Hyperbolic, not exponential. In other words, a mathematical singularity at which 'technology' becomes infinity. Hence 'technological singularity'.

The transhuman intelligence and nanotech stuff is all apologism for the obvious objections to Kurzweil's simplistic static analysis. Oh yeah, Moore's Law will hold true forever because of... nanotech! Oh, it's not the bottom half of a logistic curve because... transhuman intelligence will somehow smoothly join in to make technology keep growing faster for all time! I doubt most 'Singularitarians' even consider that there are oodles of growths stronger than linear, or that exponential growth has no singularity.
He plotted various advances in telecommunications, processing, memory, manufacturing and (this is the fun one) energy use. He also noted that technology advances via 'S-curves': a new paradigm P is found to do X, proficiency in doing X with P increases exponentially and then flattens until a new P is found. This usually occurs because of increased investment in alternative Ps as they become more profitable than marginal returns on the previous P.
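The S-curve pattern described above is just a logistic function. A toy sketch (my own illustration, with arbitrary parameters, not Kurzweil's figures):

```python
import math

def logistic(t, L=1.0, k=1.0, t_mid=0.0):
    """Proficiency with a paradigm at time t: roughly exponential for
    t << t_mid, then saturating toward the paradigm's ceiling L."""
    return L / (1.0 + math.exp(-k * (t - t_mid)))

print(logistic(-4.0))  # early phase: still near zero, growing fast in relative terms
print(logistic(0.0))   # inflection point: half the ceiling
print(logistic(4.0))   # late phase: flattened out near the ceiling L = 1.0
# A 'new paradigm P' in this picture is a fresh logistic curve with a higher ceiling.
```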

Wow that paragraph looks silly. Anyway.

There is no inherent reason to believe anything will replace silicon (the P in this case). Carbon nanotubes and the like seem promising as they are actually developing manufacturing methods (very crappy at the moment, but I digress). There are also the possibilities reversible computing/3D architecture/blahblah you get the point. And nanotech is important in that computational structures are increasingly shrinking to that scale. Also, you can do lots of neat material science things with it!

But these are mostly side issues. As Starglider emphasizes, the main point is that it is unknown how nonhuman intelligence will act, due to a lack of examples, so we cannot accurately predict its consequences.

I do not think much will change in society, due to basic game theory. Society will expand, fragment due to differences on how to solve problems, and fight when resources become too scarce to maintain peace. I really don't see how AI could change this, other than reducing disputes due to totally rad intelligence. Hell, refusing to believe arguments that conflict with your ideology is USEFUL in making sure that ideology continues to 'breed', as it were.

Although I am curious what the other paradigms Starglider was referencing are. The whole 'GROWTH AT ANY COST' approach tends to out-compete others, but space is large enough for people who just want to play 1980s videogames in the Oort cloud, or other eccentric societies.

Srynerson
Jedi Knight
Posts: 697
Joined: 2005-05-15 12:45am
Location: Denver, CO

Post by Srynerson » 2007-07-16 12:54am

Patrick Degan wrote:
Xeriar wrote:
Patrick Degan wrote:The word singularity lost its meaning the moment it was applied to anything other than describing the zero-dimensional point which is a black hole.
...so it lost meaning when it was first coined? Cute.
Point taken.
That pun is bad enough to hurt. :P I do, however, agree that "singularity" was an unfortunate term to choose for the phenomenon, given that it had a very concrete scientific meaning (whenever I first see a reference to "the singularity" in an article, blog post, etc., I think it's something about black holes). My question is whether Vinge thought it would be cute to use a more scientific-sounding term to distinguish his idea from de Chardin's earlier, theological, "Omega Point".

User avatar
Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider » 2007-07-16 05:40am

Winston Blake wrote:It's ironic that the OP quote attempts to return the term 'Singularity' to its fundamental root, but ultimately misses it. AFAIK, Ray Kurzweil simply plotted the technological changes he could think of and somehow found hyperbolic growth.
Yes, he did, and it's an annoying piece of nonsense. I can say that confidently because I have a copy of 'The Singularity is Near' - I got it free due to being an SIAI contributor, I wouldn't have actually bought it. I hope you're not under the delusion that Kurzweil invented the term. He just hijacked it to talk about 'accelerating change' (along with plenty of other people who seem to be gleeful about the prospect of a techno-orgy; there is in fact a yearly 'Accelerating Change' conference in Silicon Valley). Kurzweil is very intelligent and experienced in both computing and business, but frankly he sucks as a futurist, even by the low standards of that vocation.
The transhuman intelligence and nanotech stuff is all apologism for the obvious objections to Kurzweil's simplistic static analysis.
No, it is not, it was there well before Kurzweil turned up and decided to abuse the curve-fitting function in his spreadsheet in order to sell books.
I doubt most 'Singularitarians' even consider that there are oodles of growths stronger than linear, or that exponential growth has no singularity.
No one with a clue is fixated on these silly graphs. That was Michael's first point in the quoted essay.

User avatar
Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider » 2007-07-16 07:13am

NecronLord wrote:I seriously doubt anything but some AGI - which, unless it has the irrationality of caring for others built in somehow
This is one of the classic mistakes made by AI newbies. I must've answered this one hundreds of times, and seen it answered many more times. Interestingly enough, half the people say 'caring about others is irrational' (usually Libertarians, Objectivists and biologists who have to explain evolution to people constantly) and the other half say 'selfishness is irrational' (usually idealistic humanities types who like the fact that perfect altruism is symmetrical and thus a superficially better candidate for objective morality).

But anyway, goals have nothing to do with rationality. Rational thought will find and execute the actions most likely to achieve whatever goals are fed into it. The goal system can be anything you like. There are no objective goals, any more than there is an 'objective morality'. Selfishness is not 'rational', it's just strongly selected for in an iterative design process based on survival (and reproduction) of the fittest Thus most evolved organisms are fairly selfish. Artificial intelligences made with evolutionary methods may be selfish, though this is by no means guaranteed because most of the interesting methods don't work on instances of the entire system at once and don't involve direct competition between agents in a way that selects for 'selfish' goal systems. Plus of course the developers may excise any 'selfish' goals they see and replace them with arbitrary goals.

That said I'd note that self-preservation is a sub-goal of virtually every goal system (you can't carry out your goals if you're not around any more) and rapid expansion is a sub-goal of many (for any goals where having control over more matter and energy helps). But there's a critical difference between 'super-goal' and 'sub-goal'.

So yes, 'caring for others built in somehow' == good idea, but it's no more or less rational than any other (consistent) goal system.
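The point that rationality is goal-agnostic can be shown in a few lines. This is my own minimal sketch, not code from any actual AI system; the actions and payoffs are hypothetical:

```python
# One decision procedure, any goal system: the machinery that finds the
# best action doesn't care what the utility function values.
def rational_choice(actions, outcome, utility):
    """Pick the action whose outcome the supplied (arbitrary) utility ranks highest."""
    return max(actions, key=lambda a: utility(outcome(a)))

actions = ["hoard", "share"]
# Hypothetical payoffs: (own gain, others' gain) for each action.
outcome = lambda a: {"hoard": (10, 0), "share": (5, 5)}[a]

selfish = lambda o: o[0]    # goal system that only values own gain
altruist = lambda o: o[1]   # goal system that only values others' gain

print(rational_choice(actions, outcome, selfish))   # hoard
print(rational_choice(actions, outcome, altruist))  # share
```

Identical reasoning in both calls; only the supplied goal differs, which is exactly why neither goal is more 'rational' than the other.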
will sod off and do exactly what serves its own ends
Another very, very common newbie mistake in AI, frequently coupled with the first one. AIs do not have 'their own ends'. They have whatever goals they were explicitly or implicitly given by their designers (explicit = coded in, implicit = created via some sort of emergent process; these tend to be pretty arbitrary). Unfortunately goal system design is much, much harder than it sounds; it's kinda like asking for wishes from an incredibly literal genie, and that's before you start trying to take account of stability under reflection and self-modification. But it's still unintended consequences of what humans specify, not any magical goals that pop out of the ether when the AI receives its dollop of consciousness-essence, despite how much bad sci fi likes to act as if the latter were true.

None of this applies for other types of transhuman intelligence. Uploads and enhanced biological humans start with human goals. The dynamics here are somewhat easier to predict in the sense that normal human intuition still works for the very first steps (whereas it's worse than useless for AIs, as your first two mistakes illustrate). On the other hand, with an AI we at least know the precise initial design, and can explicitly design it to be more predictable (this is a Good Idea). For human-based transhuman intelligences we (currently) don't even understand the initial conditions in full.
Because we find things like love, companionship, social approval to be of such incredible importance, even though they're mechanisms for maintaining society, that hardly anyone would part with them.
Humans don't have a clean separation between goals and reasoning the way (all but the worst) AI designs do. Emotions are a horrible legacy mess that bridges the two. The uniquely valuable and mysterious appearance emotions have for humans is mainly a nice side effect of horrible gaping deficiencies in our ability to reflect on our own mental processes (actually a good bit of the human self-image seems to come from this).

For an AI, you'd specify compassion in utilitarian terms, and if that dictates the appearance of emotions, then the AI will simulate them without making them any kind of basis for decisions. For human uploads, the issue is much more complicated. Many millions of words have been spent debating this on various transhumanist forums, and the main conclusion is that we won't really know what will happen until we try it (surprise surprise, it is the Singularity after all, what did you expect). Translating goals into utilitarian (strictly, preference function, but expected utility is the most sensible type) terms and then rewriting the rest of your mind to be completely and relentlessly rational is the most efficient approach, in terms of achieving your goals. Most people I've seen who want to be uploaded do not like this prospect - though scarily some do. Unfortunately if some do and some don't, the ones who do will have a competitive advantage; not a major one initially compared to other reasonably rational intelligences of the same general power, but it may snowball in the long run. That leads on to questions about self-regulation in transhumanist societies (whether it will ever be viable, whether you inevitably get a singleton, whether it will work but only if the initial setup permits it, etc.) that again have consumed many millions of words of debate to not much effect.

In summary, I hope and expect most human-based intelligences to chop out a lot of pointless irrationality, but to keep something reasonably like emotions around, along with something a lot like the human sense of self. Indeed the current emotional palette could probably do with a great deal of expansion, refinement and closer integration with higher cognition; the existing design certainly doesn't scale well with increasing intelligence. Ditto for the self image, which I expect will need overhauling anyway in the implementation of fine-grained reflection.
Some things I expect will go (obvious examples being a capacity for substance addiction or extremes of 'negative' emotions),
Substance addiction is a special case of the general problem of 'wireheading'; for humanlike intelligences, direct self-modification means being able to stimulate your own pleasure centres as much as you like with an act of will. For AIs there are 'utility short circuits' much less elaborate than that which can very quickly bring a badly designed system to a grinding halt. In the latter case the problem can be reliably fixed by sensible design of the goal system, and particularly the way goals are formally grounded against the self, environment, and self-environment embedding model. Reliably fixing the problem for humanlike intelligences will be somewhat harder. I'm confident that the cognitive engineers of the future will have good solutions to this, but designing them is well beyond our current capabilities.
but I don't see humans willingly deciding to abandon the key irrationality of desire for other humans to interact with at a level beyond 'collective advantage.'
Again, that's not irrational, it's arbitrary, and ultimately all goals are arbitrary, so that's fine. I don't think many humans will abandon it either, but I guarantee you that some will, if they are allowed to. For example, if we get the capability to restore 'normal' function in high-functioning mentally handicapped patients, but they claim they're happier as they are, there's a moral dilemma over whether we should repair the damage. Of course given mature cognitive engineering technology we can do things like repair it, then ask them if they'd rather be the way they were, and revert the changes if they say yes. And/or examine their mind state in telepathic detail to see if they're actually happy the way they are. Mature cognitive engineering is a near-endless source of moral dilemmas though. Just look at the prospects for near-instant, effectively perfect self-cloning, or for child abuse in internalised sapient intelligences (the latter is a minor but real concern in potentially-transhuman AGI design, BTW).
Which will limit massively how much the majority of humanity (though divergent groups and forms are likely, I would say, too, some of which may out-compete others) will move away from their historical behaviour.
The 'majority of humanity' may not get a say. The neophiles, the risk-takers and the generally abnormal are the ones who will be getting these upgrades first. I won't bother making the singleton argument because I don't need to: just imagine the first hundred thousand or so radically transhuman intelligences and what they're likely to do with their capabilities. Add in Marina's backdrop of post-peak civilisation on the brink of chaos if it adds flavour.
There's a reason things like cybermen are depicted in our fiction as villains to be defeated and not improvements to be welcomed.
Some people will welcome even that, and it doesn't take many (it may only take one) to be an issue.
but that's essentially why I think there's a limit on the degree of thought change to humans as a whole that's likely to come from advances in technology.
Untrue; things look strange and disturbing in relation to how you are now, not how you were when you were born. Each step taken makes the next step easier, unless you explicitly redesign yourself to be forever bound by hardcoded limits (bad idea). Your argument is one against the rate of change, not the extent of change. Again, a popular debate subject; a major relevant conclusion is that if left unchecked, the spectrum of capabilities and mental architectures will smear out very fast, as the neophiles go for all manner of radical upgrades, the mainstream go with minor gradual ones, and the conservatives stick with very few (i.e. just halting biological ageing and brain degeneration) or none. This will probably be a highly unstable situation, leading to a lot of proposals for (future) regulation of how far self-modification can go. Of course for the more advanced tech, enforcement becomes virtually impossible without extremely draconian measures, and probably literally impossible if space travel is not also very tightly controlled.
"But I don't think they'll go so far as to render historical comparison completely worthless, for the majority of people."
Most people don't have to be radically altered for historical comparisons to be invalidated. It just takes sufficient capability in the non-human intelligences present. That capability can be concentrated into a large minority, a small group or even one intelligence, depending on the magnitude of the enhancement.
I can't see why you posted that spiel, apart from the joy of saying it. I know machines are faster, thank you.

Well good for you, but most people don't appreciate that a) it's seven plus orders of magnitude or b) what that means. It's a year of human-equivalent effort every three seconds. Get a gaggle of researchers and designers uploaded and running at that speed and I guarantee that you'll see some impressive results (though of course it's not that simple).
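The 'year of effort every three seconds' figure is just arithmetic, taking the seven-orders-of-magnitude speedup claimed upthread as the assumption:

```python
# A 10^7-fold subjective speedup (the thread's assumed figure, not a
# measured one) turns one subjective year into a few wall-clock seconds.
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000
speedup = 1e7

wall_clock_per_subjective_year = SECONDS_PER_YEAR / speedup
print(wall_clock_per_subjective_year)  # ~3.15 seconds
```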
Most existing human irrationalities are quite good at preserving themselves.
Humans can't currently edit ourselves to stop being irrational. All we can do is talk at each other in the hope of loosening the grip of the most serious idiocies. This is that whole 'historical precedent no longer works' problem again. You cannot generalise from 'talking at each other' to 'directly rewriting brain architecture', because the latter is nothing like the former. Of course, the latter is a ridiculously powerful tool for tyrants as well, something which doesn't get much discussion in the mostly-very-optimistic transhumanist community.
I imagine few people would likely give up the possibility of love for "material benefits."
Actually I don't share your confidence in that one. A great many people do the most horrible and petty things for 'material benefits' every day. Fortunately I don't think it'll be a problem here, because we're not talking about getting something tangible and direct, it's just an increase in cognitive efficiency, which most people won't really appreciate anyway.

Basically this isn't a problem for still-biological or just-uploaded humans. The 'siren song of normative reasoning' issue is basically a slippery slope issue that may prove a serious problem if there's fierce competition for near-earth resources (though there are much more serious problems to tackle in that scenario than posthumans rewriting their humanity away).

Of course, going the other way, how many people do you think will drop their concept of selfishness (it will probably take a bit more work than dropping love, but it's doable) and become 'perfect altruists'? I'll bet there'll be a fair few. Note that both perfect selfishness and perfect altruism are attractors, in that once you're in that state you won't get out of it unless someone forces you or another part of your goal system can overrule it. Over the long term, iterated systems like a self-modifying goal system inevitably end up in attractors unless you specifically add 100% reliable measures to prevent it. Staying 'balanced' may take some explicit modifications to support it, or the ability of 'society' to rewrite your mind against your will whenever they feel you're too deviant (and the latter probably won't work on its own either, because it effectively locks the society as a whole into a synchronised drift into an attractor).
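The attractor claim can be illustrated with a toy model. This is my own construction, not anything from the post: assume each round of self-modification slightly amplifies whichever disposition currently dominates, with s=0 as pure altruism and s=1 as pure selfishness.

```python
# Toy iterated self-modification: the extremes are stable attractors and
# any 'balanced' starting mix drifts into one of them.  The amplification
# rate is an arbitrary illustrative parameter.
def revise(s, rate=0.1):
    """One self-modification step: push s toward whichever pole is nearer."""
    return s + rate * (s - 0.5)

def run(s, steps=200):
    for _ in range(steps):
        s = max(0.0, min(1.0, revise(s)))
    return s

print(run(0.51))  # ends at 1.0: the selfish attractor
print(run(0.49))  # ends at 0.0: the altruist attractor
```

Note that the exactly balanced point s=0.5 is a fixed point too, but an unstable one: any perturbation, however small, eventually lands in one of the extremes, which is why 'staying balanced' needs active support.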
People won't give up religion and romanticism and nostalgia and so on, just because their brains get better.
Romanticism and nostalgia are fairly harmless. They do tend to cause beliefs that aren't in line with evidence. However it should be straightforward (given mature cognitive engineering tech) to layer those on and make them trivially dismissable/evaporative when a really important decision has to be made. They make people happy and help with relationships, so there's no real motive to get rid of them. Unless you're so obsessed with some goal that you're prepared to turn yourself into a cyberman to maximise your chances of achieving it - and even there, mature cognitive engineering would allow you to set yourself to reliably snap out of it once the goal is complete (incidentally, a side effect of this is that with competent direct goal-system modification, you can have as much willpower as you want).

Religion is a different kettle of fish. Perfect reflection, which is a generally useful thing to have for self-understanding and intelligent reasoning, will make it blatantly obvious that religion is a bunch of made-up crap accepted as descriptive of the world due to a breakdown in the separation of desire and belief. Unlike romanticism and nostalgia, religious beliefs are genuinely harmful to a wide range of decisions and can't be trivially suspended. You can't translate (most) religion into goal terms, because the inherent characteristic of faith is to make assumptions about what the external world is like, not merely about what you should want. A mind broken enough to accept religion is going to be at a very serious disadvantage as transhuman intelligence scales up. I would not say I was confident that religious thought will disappear except in a tiny minority of ridiculed relics, but I am optimistic about it.
If people become aware that taking 'genius' will result in deciding god doesn't exist, a great many won't take it.
Maybe. Frankly it's difficult to say. People are usually confident that they will remain immune to 'temptation' even when faced with ample evidence that others aren't. Religious nuts always think their faith is unshakeable. I'm not all that bothered as long as few enough remain that they can't do a substantial amount of harm.

I'm not sure what you're on about with the sex thing. It clearly does have benefits beyond pleasure and reproduction; it enhances pair bonding. What more do you want? I don't see how this is a harsh realisation.

But of course if you really want you can reengineer sex to have whatever mental connotations you like. If you want an inexplicable urgent desire to eat mint chocolate ice cream after having sex, go for it. At the very least, the process could do with being redesigned to be more varied and less awkward.

The debate 'will transhuman uploads still have sex' is an official Extropians dead horse, to the extent that Greg Egan took the piss out of it in 'Schild's Ladder'.
With AGI, you may see such things disappear, but with every other form of transhumanism that I can think of offhand is likely to see many aspects of human thought survive to some greater or lesser degree.
I'd certainly hope so! I think there are plenty of things about humanity worth preserving. Of course I'm biased, but that doesn't matter, because the future is determined by our choices and actions alone. Humanity doesn't have to live up to an objective standard of rightness (morally that is, our survival requires meeting the objective standards of the universe), it just needs to live up to our own standards.
And even then, what I would contend are more basic irrationalities, 'a desire for companionship and to receive the approval of others' for example, are likely to persist in post-humans, too...
Again, desires aren't irrational. Focusing on a single desire to the neglect of the others isn't exactly irrational either; it's acting as if you had a different goal system from the goal system you claim to have (if you claim to be 'balanced'). The latter won't happen in an even moderately rational system (normative reasoning is a unitary standard, but there are an infinite variety of close approaches to it) with clearly defined goals.
I'm sure lots of them want to make AGI that likes and wishes to cohabit with humans or human descendants, though.
Yes. Unfortunately this is a lot more difficult than it sounds, and there is no consensus on exactly how to do it yet.
That's an irrationality,
For the nth time, no, it is not. You are declaring selfishness to be 'rational' for no reason at all (well ok, maybe you think that the fact evolution tends to produce it gives it some kind of intrinsic worth, but evolution tends to produce all kinds of other stuff too, which I don't see you making objective goods).
compared to simply ignoring them whenever it's not strictly necessary to do otherwise, and follow whatever it decides its own goals are...
Covered this earlier, goals do not magically pop out of the ether.
Banks' "Every perfect AI sublimes" quote springs to mind.
I've seen a semi-plausible argument for that, but it only works under a broken grounding model (well kinda, this boils down to a complicated and subtle problem in goal system definition and reflection). The short answer is 'maybe, but no one is going to build a perfect AI anyway'.
Quite. Which was my point. I doubt that just because there is the ability to make people have 'perfect reasoning,' in anything but the extreme long term, it will be adopted.
It will be adopted by some people as soon as it becomes available as an option, unless they are stopped. This is quite enough to cause a singularity on its own. I hope that 'perfect reasoning' will never be adopted by everyone, all the time, because frankly it's extremely boring. I for one would certainly prefer a world that has a diverse range of reasonably rational cognitive architectures, including lots of invented ones (people turning themselves into hyperintelligent kzinti and whatever). Adopting thoroughly irrational cognitive architectures may actually be fine for roleplaying and entertainment purposes, if it's in a controlled environment and the changes can be reliably reverted afterwards (advanced mind state engineering; we can only speculate on this ATM).
After all, humans fear what's different...
Yeah, that bug is scheduled to be patched in human v1.4.03.

User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27289
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord » 2007-07-16 07:59am

1 - You're assuming that it's possible to determine an AI's goals for it. It may be, or it may not be. You do not know. Weren't you the one saying, after all, that 'all historical precedent is meaningless'? You hope it's possible to reliably instill your own 'goal system' in something that's likely to end up much smarter than you are. As you say, such systems are likely to be able to self-modify, and if that means changing its goals radically... Well, as you say, keeping them doing what you want is like getting wishes from genies...

As you say, there is no consensus on how to get AIs thinking the way we want them to think.
2 - You also assume in there that the first few hundred thousand transhumans have sufficient advantages to stop the scared apes shooting them and smashing their skulls in like bricks if they try to take over, then shooting the researchers, their work, and anything that looks like it's going to produce another Khan Noonien Si - err, takeover. Any transhumans have to be either unthreatening enough not to be perceived as a threat, or powerful enough to survive the rest of humanity's distinct fear reflex.
3 - Survival is something I'm worrying about here, because it seems to be the only thing an AI would never eliminate. Regardless of what you want, survival is going to be very high on the list (at least, presuming there's some degree of rationality there); anything else may well be edited out.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth

User avatar
Starglider
Miles Dyson
Posts: 8701
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider » 2007-07-16 08:09am

Xeriar wrote:All this aside, something the size of the human brain being ten million times faster is nonsense to me.
I did not claim that it would be the same size. With current technology it would certainly not be; it would be the size of a large supercomputer. I also claimed it would consume tens to hundreds of kilowatts, which would clearly be an impossible cooling and power supply problem for a brain-sized machine anyway (at least with current technology).

As it happens rod-logic nanocomputing alone is sufficient to radically outperform the brain on power and volume; the serial speed is rather less impressive than conventional silicon but this isn't relevant for space-and-power-bounded applications. The assorted electronic nanocomputing concepts will utterly blow it away (by exactly how many orders of magnitude depends on the specific assumptions made).
The human brain dissipates 25 watts for what amounts to roughly 1E18 floating point operations per second (~300 hertz and ~1,000-100,000 additions per firing),
Even taking your upper end figure, that's only 3E15ish FLOPS. 1E18 FLOPS works out to 10 billion FLOPS/neuron, which is extremely high. Best estimates for the number of synapses in the brain are actually about 1E11, which would be 2E13ish ops if every single neuron was firing constantly, which of course never happens.

However, accurate modelling of real-time signals propagating along dendrites and across synapses requires a lot more computing power than treating neurons like a simple network of clocked adders. How much more depends on how much you can use things like configurable delay lines in hardware and how much you have to do with software. 10,000 FLOPS/synapse should be sufficient for a very accurate model with a minimally sensible level of effort allocation (i.e. not simulating quiescent chunks). 1E15 FLOPS is 1 PetaFLOPS, a level which current supercomputers are rapidly closing in on (best current single computer is at a little over 100 TeraFLOPS). If less resolution is needed, we're already there. If any kind of high-level abstraction or intelligent factoring out of currently-irrelevant parts of the brain is available, we're already well past developing the theoretical capability, and that's with (relatively) general purpose hardware (custom hardware would be a couple of orders of magnitude more efficient, but we don't have a design for it yet).
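The arithmetic behind those figures, using the post's own assumed numbers (the synapse count and per-synapse budget are the post's estimates, not established values):

```python
# Back-of-envelope check on the post's figures.
synapses = 1e11              # the post's estimated synapse count
flops_per_synapse = 1e4      # budget for a 'very accurate' real-time model

required = synapses * flops_per_synapse
print(required)              # 1e15 FLOPS, i.e. 1 PetaFLOPS

best_2007_machine = 1e14     # 'a little over 100 TeraFLOPS'
print(required / best_2007_machine)  # 10x short of the full-accuracy budget
```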

Some cunning design is needed to keep internode bandwidth under control, but fortunately brain connectivity is sufficiently localised that this isn't a serious problem for dedicated architectures. (It is a serious problem for people trying to do neural simulation on workstations and cheap and nasty clusters - there has been an endless amount of bitching on the AGI list about memory/network bandwidth and latency bottlenecks in NN and similar code.)
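As a toy illustration of why locality matters here, a rough traffic estimate; every figure (mean firing rate, bytes per spike event, cross-node fraction) is a hypothetical assumption chosen for illustration, not a measured value:

```python
# Toy internode-traffic estimate for a partitioned brain simulation.
# Every figure here is a hypothetical assumption, for illustration only.
spike_events = 1e11 * 10      # synapses x assumed mean firing rate (Hz)
bytes_per_event = 8           # assumed: target id plus timestamp
cross_node_fraction = 0.01    # assumed: localised wiring keeps 99% on-node

bandwidth = spike_events * bytes_per_event * cross_node_fraction
print(f"aggregate cross-node traffic: ~{bandwidth / 1e9:.0f} GB/s")
```

Every order of magnitude you shave off the cross-node fraction through smarter partitioning shaves the same factor off the traffic, which is why architectures built to exploit locality cope while commodity clusters choke.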

Any excess in computing power translates directly into enhanced subjective speed. Build a 100-petaFLOPS supercomputer and you can run an upload at 100 times normal speed (or much faster with better software).
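The subjective-speed claim is just a ratio of available capacity to the real-time budget; a minimal sketch, assuming the 1 PetaFLOPS real-time figure used above:

```python
# Subjective speedup is surplus capacity over the real-time budget.
realtime_budget = 1e15   # assumed: 1 PetaFLOPS runs one upload in real time
machine = 1e17           # a 100-petaFLOPS supercomputer

speedup = machine / realtime_budget
print(f"subjective speedup: ~{speedup:.0f}x")
```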
Even ignoring that, the human brain ends up being within a few orders of magnitude of theoretical maximum efficiency.
No, nowhere even close. The 'theoretical maximum efficiency' of computronium (at the very least reversible electronic nanocomputing, quite possibly something much more exotic) is at least a zettaFLOPS per watt. Simplistic designs for mechanical nanocomputing already reach nearly an exaFLOPS per watt.
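Comparing those figures as orders of magnitude (the brain number uses the 25 watt and 3E15 FLOPS values from earlier in this exchange; the nanocomputing numbers are the claims under discussion, not established specifications):

```python
import math

# Orders-of-magnitude efficiency comparison. All figures are the claims
# made in the surrounding argument, not established specifications.
brain_eff = 3e15 / 25      # generous brain estimate: ~1.2E14 ops per watt
rod_logic_eff = 1e18       # claimed ~exaFLOPS per watt for rod logic
electronic_eff = 1e21      # claimed ~zettaFLOPS per watt for computronium

rod_gap = math.log10(rod_logic_eff / brain_eff)          # ~4 orders of magnitude
electronic_gap = math.log10(electronic_eff / brain_eff)  # ~7 orders of magnitude
print(f"rod logic leads the brain by ~10^{rod_gap:.0f}, electronics by ~10^{electronic_gap:.0f}")
```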

I have no idea how you could get such an impression; it doesn't sound even superficially plausible. The brain is packed with vast amounts of cellular junk that does not contribute to computation. It dissipates silly amounts of power maintaining membrane polarisation when electronic conduction would dissipate virtually none (literally none with ballistic conduction in nanotubes). Neurotransmitters have to be manufactured, stored in vesicles, squirted out, left to diffuse across the synaptic gap, detected and then broken down, all for no good reason compared to the minute twitch of electric fields that performs the same job in a transistor. All those membranes and organelles are worthless from a computational point of view, as is all the service and growth infrastructure (to be fair, all biology suffers from the silly 'has to be built from the inside out while maintaining full function' limitation). Even if you haven't got to the point of viewing the brain as a horrible pile of kludges yet (which frankly you will after studying enough cogsci), the notion that it's anywhere near the physical limits for manufactured devices is ridiculous.
