Do you believe a technological singularity is near?

Gurgeh
Padawan Learner
Posts: 156
Joined: 2011-06-29 11:15pm

Do you believe a technological singularity is near?

Post by Gurgeh »

I've been reading up on this guy Ray Kurzweil, and it seems like his visions of the future might come true: 83 or so of his 108 predictions had come true by 2009.

I would like to know from you guys: do you think a technological singularity could happen within the next 30 or 40 years?
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

Whose definition of 'technological singularity'? The original one specified by Vernor Vinge was quite specific: the creation of transhuman intelligence is a predictive event horizon, because (a) you can't predict the actions of something more intelligent than yourself, (b) the presence of non-human psychology makes predicting the future vastly harder even if it's human-equivalent, and (c) self-modifying intelligences are a hugely chaotic element. Unfortunately, since then a lot of people have come along and made up their own definitions, many of which are frankly bullshit.
MrDakka
Padawan Learner
Posts: 271
Joined: 2011-07-20 07:56am
Location: Tatooine

Re: Do you believe a technological singularity is near?

Post by MrDakka »

With Vinge's definition, the technological singularity would depend on the creation of a transhuman intelligence, which I believe will always be 50 years in the future no matter the current date. :P

I find Kurzweil too optimistic about the societal ramifications of the technological developments that will lead to the technological singularity, i.e. full-blown nanotech with universal assemblers and the like. I'm inclined to believe that someone somewhere is going to do something horrible with it.
Needs moar dakka
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: Do you believe a technological singularity is near?

Post by madd0ct0r »

If you go for the weaker definition, i.e. the point beyond which we cannot predict future development, at the moment I'd guess it's actually pretty close - the prediction horizon being about 10 years (10 years ago, who predicted social networks, the cloud, the stagnation of physics, the Arab Spring, etc.?)


edit: which ironically means the hard definition (transhuman intelligence) lies beyond this singularity. We cannot accurately estimate when it might happen.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Do you believe a technological singularity is near?

Post by Bakustra »

Social networking was effectively predicted over a century ago by Mark Twain and others, cloud computing (and cloud storage alongside it) dates to the 1960s, physics "stagnated" (at least as much as it is stagnant today) in the late 1800s, and trends of democratic revolutions date back to the 1820s with the Bolivarian revolutions in South America. There is nothing new under the sun.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

Yes, though I'm afraid calling it a "singularity" is too generous. We can still understand what happens next (or at least contemplate the probable outcomes), and the technologies for the new transition may arrive in separate stages, so the "singularity" could last for many years.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night queues and illegal migrants
Here, meetings, struggles, synchronised steps, colours, unauthorised gatherings,
Migratory birds, networks, information, squares of every kind mad with passion...

...Tranquillity is important, but freedom is everything!
Assalti Frontali
Rabid
Jedi Knight
Posts: 891
Joined: 2010-09-18 05:20pm
Location: The Land Of Cheese

Re: Do you believe a technological singularity is near?

Post by Rabid »

Starglider wrote:(a) you can't predict the actions of something more intelligent than yourself.
Honest questions here:

How do you define "Intelligence" here (knowing there's a fuckload of different metrics to it), and what do you mean by "you cannot predict the actions of something more intelligent than yourself"? What do you mean by "predict" here?

I ask because, frankly, even with a good understanding of human psychology, I cannot predict with 100% certainty how my peers will act/react: there'll always be uncertainties. So, do you mean here something like "we cannot even begin to establish a probability table"? That it is so alien to us that we have no way of being sure how it will react to such and such stimulus?


I'm genuinely curious.


And to the OP: I'm not really versed in the subject, but one thing that makes me uncomfortable with the idea of a "Technological Singularity" is that a lot of people seem to be awaiting it like some await the Rapture. It seems to me, as a layman, that it is more of a way for certain people to express some kind of "faith" than a real theory of technological development.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

I'm kind of hoping that when we get fusion power working, people will STFU with the 'always 50 years in the future' comments. Then again there's a significant probability of getting seed AI first.
Rabid wrote:How do you define "Intelligence" here (knowing there's a fuckload of different metrics to it),
In the abstract, the ability to generate action sequences that optimise the environment the agent is embedded in (or in some experiments, just connected to) towards parts of the total state space with higher assigned utility. This is the technical definition used in AI thought experiments such as AIXI-TL. Practically, it is the ability to solve problems relevant to non-trivial goal systems, which incorporates perceptive, inductive and deductive abilities. There is of course absolutely no way to capture all of 'intelligence' in a single number, but we can loosely say 'human equivalent' when performance on a wide selection of cognitive tasks roughly matches a human (e.g. an accurate human brain simulation running at real-life speed and not directly interfaced to external software would be by definition human-equivalent intelligence).
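To make that concrete, here is a trivial sketch in Python (the three-state world, the actions, the transition rule and the utility numbers are all invented purely for illustration, and it has none of the generality of a formalism like AIXI-TL, which quantifies over all computable environments): the 'agent' simply searches action sequences and emits whichever one steers the environment into the highest-utility state.

Code: Select all

from itertools import product

STATES = ["cold", "warm", "hot"]
ACTIONS = ["heat", "cool", "wait"]

def transition(state, action):
    """Deterministic toy environment: actions nudge the state up or down."""
    i = STATES.index(state)
    if action == "heat":
        i = min(i + 1, len(STATES) - 1)
    elif action == "cool":
        i = max(i - 1, 0)
    return STATES[i]

# Utility is assigned over states; the agent optimises towards high-utility ones.
UTILITY = {"cold": 0.0, "warm": 1.0, "hot": 0.2}

def best_plan(state, horizon=3):
    """Exhaustively search action sequences; keep the one with the best endpoint."""
    best_seq, best_u = None, float("-inf")
    for plan in product(ACTIONS, repeat=horizon):
        s = state
        for a in plan:
            s = transition(s, a)
        if UTILITY[s] > best_u:
            best_seq, best_u = plan, UTILITY[s]
    return best_seq, best_u

print(best_plan("cold"))  # a 3-step plan ending in 'warm' (utility 1.0)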
and what do you mean by "you cannot predict the actions of something more intelligent than yourself"? What do you mean by "predict" here?
The whole premise of futurism is that we can make sensible guesses about what society will be like, because human psychology is effectively constant (over non-evolutionary timescales) and, as Mark Twain said, "history does not repeat itself, but it does rhyme." Vernor Vinge was engaging in the usual sci-fi practice of postulating the existence of certain new technology (physically plausible in hard sci-fi, fantastic in soft sci-fi) and asking what human society would look like with that technology available. That was when he realised that transhuman intelligence, and if we're being honest even genuinely alien intelligence integrated into society, makes the task impossible. The only people even vaguely qualified to make predictions are AI researchers who've specifically looked into the capabilities and characteristics of transhuman AI (e.g. Marcus Hutter, Nick Bostrom) and pretty much all they're going to tell you is 'it's very alien and extremely dangerous'. This is why Orion's Arm et al are a total waste of bytes and much of Greg Egan's stuff, while lovely and technically detailed, is basically fantasy.
I ask because, frankly, even with a good understanding of human psychology, I cannot predict with 100% certainty how my peers will act/react: there'll always be uncertainties. So, do you mean here something like "we cannot even begin to establish a probability table"?
I think you're selling yourself short here; of course you can't reel off exact predictions of human behavior, but the human ability to empathise and predict the behavior of social allies and rivals is exceptionally well developed compared to other species on Earth. When we say 'can predict' we do mean something like 'if you got a group of expert psychologists to spend as long as they liked coming up with a probability distribution for how humans will perform in this five-minute task, how well would they match the actual distribution over 1000 trials'. In fact, in FAI theory the period where the AI can be usefully described as 'faster but not qualitatively more powerful', i.e. if we spend a day thinking about what it might do in a minute we can come up with equally good answers, is a relevant concept (as a transitory phase - although some non-biomorphic designs bypass it entirely).
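A toy version of that criterion in code (the behaviours, the experts' forecast and the 'true' distribution below are all made up): sample 1000 trials from the actual behaviour distribution, then score the forecast by total variation distance, where 0 is a perfect prediction and 1 is completely wrong.

Code: Select all

import random
from collections import Counter

random.seed(0)
behaviours = ["cooperate", "defect", "stall"]
forecast = {"cooperate": 0.60, "defect": 0.30, "stall": 0.10}   # the experts' guess
true_dist = {"cooperate": 0.55, "defect": 0.35, "stall": 0.10}  # actual human tendencies

# Simulate 1000 trials of the five-minute task.
trials = random.choices(behaviours, weights=[true_dist[b] for b in behaviours], k=1000)
observed = {b: n / len(trials) for b, n in Counter(trials).items()}

# Total variation distance between forecast and observed frequencies.
tvd = 0.5 * sum(abs(forecast[b] - observed.get(b, 0.0)) for b in behaviours)
print(f"total variation distance: {tvd:.3f}")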
And to the OP: I'm not really versed in the subject, but one thing that makes me uncomfortable with the idea of a "Technological Singularity" is that a lot of people seem to be awaiting it like some await the Rapture. It seems to me, as a layman, that it is more of a way for certain people to express some kind of "faith" than a real theory of technological development.
The similarity is sadly unavoidable. It's a world-transforming event that the vast majority of people (thankfully) have zero ability to influence, other than maybe slipping a few $$$ to their favourite AI research team. This is a major reason why I personally don't bother with the whole Singularity meme-fest / 'supporter' culture (e.g. the Singularity Institute events) - it's just distracting and not really relevant to the real (technical) issues.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

The "more intelligent" is not necessarily a requirement for alien intelligence. You can't predict the behaviour of a human-like, yet alien intelligence too.

Consider maniacs and people with psychological disorders who go on a crime spree. The police always has the greatest problem with catching them, since they don't understand how to predict their behaviour.

I guess all this Singularity talk is only viable if you let these alien minds run your world. In any other case their existence is of no greater relevance than the existence of exquisite psychopaths.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night queues and illegal migrants
Here, meetings, struggles, synchronised steps, colours, unauthorised gatherings,
Migratory birds, networks, information, squares of every kind mad with passion...

...Tranquillity is important, but freedom is everything!
Assalti Frontali
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

Stas Bush wrote:The "more intelligent" is not necessarily a requirement for alien intelligence. You can't predict the behaviour of a human-like, yet alien, intelligence either.
Agreed, but it's not a hard drop-off. A chess grandmaster could probably predict the moves a completely alien chess player of the same ability would take, because it's a constrained formal problem. You can predict the actions your pet dog will take quite well, because although her psychology is significantly different from a human's, you have enough of an advantage in general intelligence to overcome the gap. On the other hand, predicting the structure and behavior of societies at the nation-state level is hugely harder. Unlike individuals, we don't have highly evolved dedicated brain structure to reason about millions of humans acting in concert. As sociologists are nowhere near having predictive empirical models of human societies, the best we can do here is draw analogies from history, and obviously this fails when alien intelligence is present. So certainly both I and Vernor Vinge would agree that 'uplifting' other species to human equivalence (as in the David Brin books), or genetically engineering humans lacking some emotional responses (to pick two examples), would be enough to make futurism pretty much useless.
Consider maniacs and people with psychological disorders who go on a crime spree.
Maniacs are an easy case. A severely mentally ill person is still much closer to you than an enhanced chimp, a sapient extraterrestrial species, or a computer-based intelligence.
I guess all this Singularity talk is only viable if you let these alien minds run your world. In any other case their existence is of no greater relevance than the existence of exquisite psychopaths.
The existence of a tiny fraction of psychopaths is a constant throughout our historical record; we can assume the psychology of politicians, generals etc. has been pretty similar through the last ten thousand years. Even a small fraction of the population being a whole new kind of intelligence will cause massive predictive difficulties, because there are suddenly no useful examples. Though there are sure to be plenty of superficially similar and ultimately highly misleading ones; AI research is inundated with false analogies, frequently coming from philosophers doing drive-bys on the field. Transhuman intelligence obviously has progressively more disproportionate effects as such individuals are much more capable at achieving goals (assuming they have goal systems that value modification of the external world).
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Do you believe a technological singularity is near?

Post by Singular Intellect »

I'm quite convinced by Kurzweil's arguments and evidence, although I grant greater leniency to his specified timeframes than even he does, due to many unpredictable variables. Nevertheless, his methodology seems quite sound and backed by a lot of verified confirmations, unlike the ridiculous comparisons to other cited 'predictions' about things like 'flying cars and jetpacks' that had no methodology or reasoning behind them.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

Starglider wrote:So certainly both I and Vernor Vinge would agree that 'uplifting' other species to human equivalence (as in the David Brin books), or genetically engineering humans lacking some emotional responses (to pick two examples), would be enough to make futurism pretty much useless.
Starglider wrote:Maniacs are an easy case. A severely mentally ill person is still much closer to you than an enhanced chimp, a sapient extraterrestrial species, or a computer-based intelligence.
Uh... don't you see a contradiction here? I mean, "engineering humans lacking emotional response" is exactly that. A mentally ill person can act in an entirely unpredictable fashion since he's lacking some (you don't know which!) emotional responses. A chimp may have them, so in fact the chimp's behaviour would be more predictable than that of a person who completely lacks any empathy, for example.
Starglider wrote:Even a small fraction of the population being a whole new kind of intelligence will cause massive predictive difficulties, because there are suddenly no useful examples. ... Transhuman intelligence obviously has progressively more disproportionate effects as such individuals are much more capable at achieving goals (assuming they have goal systems that value modification of the external world).
Yep. Assuming that a higher intelligence is concerned with the material world at all (which is itself an assumption made on the basis of nothing, actually - some people with an extremely advanced problem-solving apparatus, which we'd call high intelligence, live secluded lives and care very little about the material world; Perelman is a genius, but he doesn't give two shits about working, talking to other people, or collecting a $1 million award from mathematical societies for his solution).

So you're making an assumption which requires this intelligence to possess certain qualities from the start. That's not a given. Which makes the singularity ("unpredictable development past that point"), even with transhuman intelligence, hardly a given thing.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night queues and illegal migrants
Here, meetings, struggles, synchronised steps, colours, unauthorised gatherings,
Migratory birds, networks, information, squares of every kind mad with passion...

...Tranquillity is important, but freedom is everything!
Assalti Frontali
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

Stas Bush wrote:
Starglider wrote:So certainly both I and Vernor Vinge would agree that 'uplifting' other species to human equivalence (as in the David Brin books), or genetically engineering humans lacking some emotional responses (to pick two examples), would be enough to make futurism pretty much useless.
Starglider wrote:Maniacs are an easy case. A severely mentally ill person is still much closer to you than an enhanced chimp, a sapient extraterrestrial species, or a computer-based intelligence.
Uh... don't you see a contradiction here? I mean, "engineering humans lacking emotional response" is exactly that. A mentally ill person can act in an entirely unpredictable fashion since he's lacking some (you don't know which!) emotional responses. A chimp may have them, so in fact the chimp's behaviour would be more predictable than that of a person who completely lacks any empathy, for example.
For someone who has the same mental architecture as you except that some bits are turned off, you can try to put yourself in their position by ignoring those bits of yourself (e.g. a writer can write a sociopathic character by ignoring all instincts of compassion). There is no equivalent option for mental architecture you lack entirely. I'm not going to argue this too strenuously though, as my whole point was that in theory these kinds of borderline cases exist; there is a spectrum from 'human' to 'incomprehensibly non-human'. There is a whole faction of singularity advocates who go on about a 'soft singularity' of this sort, starting with interfacing Facebook to your brain and taking concentration-improving drugs, and proceeding from there. Which is fine, but I am in the 'hard singularity' group in that I think the most likely outcome is the creation of sapient artificial intelligence. This will either start completely alien (non-biomorphic designs) or self-modify to wildly transhuman fast enough that in practice it will be a rather sharp transition from 'everyone is human' to 'these incomprehensibly intelligent beings with alien goal systems are transforming the world'.
Starglider wrote:Transhuman intelligence obviously has progressively more disproportionate effects as such individuals are much more capable at achieving goals (assuming they have goal systems that value modification of the external world).
Yep. Assuming that a higher intelligence is concerned with the material world at all (which is itself an assumption made on the basis of nothing, actually
It's not an assumption, it's a design choice. People try to build AIs to do interesting and useful things. When building an AI, if it sits there and does nothing, you halt that run and look at the debug trace to find out what broke the goal system. If you are running an evolutionary simulation, the virtual agents that sit there and think are quickly eliminated by agents that do actively maximise the fitness function.*
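A toy sketch of that selection pressure (the one-number 'world', the fitness function and every parameter here are invented for illustration): agents that merely sit and think score nothing, and mutated copies of the active agents replace them within a few generations.

Code: Select all

import random

random.seed(1)

def fitness(activity):
    """Invented fitness: reward in proportion to how much the agent acts on the world."""
    return activity  # an agent that just sits and thinks (activity ~ 0) scores ~ 0

population = [random.random() for _ in range(100)]  # initial activity levels, 0..1
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]                     # truncation selection
    children = [min(1.0, max(0.0, a + random.gauss(0, 0.05))) for a in survivors]
    population = survivors + children               # mutated copies replace the culled half

print(f"mean activity after selection: {sum(population) / len(population):.2f}")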

Now, it is possible for an AI design that works fine up to the runaway self-modification threshold to later fall into a solipsistic state. I think it's unlikely, because preserving one's own existence is a subgoal of nearly all cognitive goals, and acquiring an indefinite amount of computing power is a subgoal of the majority of them. However, if this does happen it just means the researchers on the project will either start injecting new content into the goal system, or shut it down, fiddle with some parameters and do another run (probably both, while patting themselves on the back about the great progress).

* Not that this applies to you directly, but I absolutely love it when philosophers come into AI conferences and start going on about their imagined absolutes and castle-in-the-air fuzzy notions, and we just say 'yeah, we specced that out but decided not to code it into the final design'. The look of horror as their Platonic ideals are reduced to a software feature checklist is priceless. Admittedly, at the moment they can just storm off in a huff and dismiss all existing AI as 'toy automatons of no real significance', but that escape is getting steadily harder to pull off as we get closer to general AI.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

If humans keep controlling the machine and shutting it down at will, the idea that transhuman minds are unobservable would be false. It will be a long period before transhuman minds get the means to impact the material world. Humans would be quite wary, and indeed, at first these types of AI would require lots of computing power. So there'll be a period of observation.

Once this observation has occurred, the alien AI is no more alien than an insect or animal that practises devouring partners after sex, which we may find weird, but that is what its instincts dictate. After all, once you observe something, you can understand it even if you're not directly able to swap places with the observed.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night queues and illegal migrants
Here, meetings, struggles, synchronised steps, colours, unauthorised gatherings,
Migratory birds, networks, information, squares of every kind mad with passion...

...Tranquillity is important, but freedom is everything!
Assalti Frontali
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Do you believe a technological singularity is near?

Post by Simon_Jester »

The second paragraph is questionable - for one, an insect or animal can't model you watching it and modify its behavior accordingly, or cannot do so very effectively.

One of the things that gives humans an advantage when hunting animals is that our behavior doesn't fall into stereotyped patterns as easily as an animal's. We can observe deer's habits and kill them very reliably, even with bows and arrows, even the healthy ones that predators normally avoid. The animal is engaged in... well, call it "first order thinking." It is trying to optimize its own behavior to fit a set of known constraints, using tactics that have been bred into it by millennia of evolution.

Humans respond with "second order thinking": observe the behavior set and exploit it, in ways that the creature cannot evolve around. For example, animals reflexively fear fire: can we use their fear of fire to panic them into stampeding off a cliff?

It would be harder to get humans to do that - sure, theoretically a crowd of people could be driven over a cliff by walls of flame, but whoever did it would have to be very careful not to give them any other out. Herd animals will choose to run toward the cliff and away from the line of men waving torches; crowds of people will not - because their second order thinking matches your second order thinking. They know as well as you do what you're trying to make them do, and don't like it, and resist actively.

So what do we do, presented with an entity capable of third order thinking? Smart enough to observe our own observations of it, and derive countermeasures against our countermeasures, faster than we can observe it and derive the countermeasures in the first place?

The only thing that comes to mind is to start in a position of great advantage and hope for the best- and I'm sure Starglider could shout at me all day about the 'folly of adversarial containment' or something.
This space dedicated to Vasily Arkhipov
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Do you believe a technological singularity is near?

Post by Singular Intellect »

Simon_Jester wrote:So what do we do, presented with an entity capable of third order thinking? Smart enough to observe our own observations of it, and derive countermeasures against our countermeasures, faster than we can observe it and derive the countermeasures in the first place?
I suspect our response will be something of a reflection of how the herd animals you described deal with our second order thinking.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

Stas Bush wrote:If humans keep controlling the machine and shutting it down at will, the idea that transhuman minds are unobservable would be false.
No one has ever claimed that they are 'unobservable'. That makes no sense. We may or may not have an understanding of how their mental architecture actually works - simulated evolution or self-modification can easily produce working but incomprehensible-to-researchers designs; I actually know some AI researchers who are proud of not understanding the output of their learning algorithms. However, the unpredictability of transhuman intelligence comes from (a) its ability to make inferences that you can't, regardless of how much you know about its design and current state, and (b) its ability to self-modify. No amount of observation under controlled conditions can resolve (b), because self-modification is an open-ended process. This is in fact one of the more subtle pitfalls in 'Friendly AI'; researchers who acknowledge the basic risks still tend to believe that careful experimental work can solve the problem. In fact, experimental work alone cannot compensate for lack of adequate goal system theory and fundamental design.
It will be a long period before transhuman minds get the means to impact the material world.
Because?
Humans would be quite wary
Most of the general AI researchers I am aware of are not wary at all, either believing their goal system design is benevolent, or that AI is inherently benevolent, or that there's no way a genius with an Internet connection can cause real mischief.
and indeed, at first these types of AI would require lots of computing power.
Unsupported assumption, or rather 'lots' may be 'as much as a 2020 smartphone has'. The compute capacity of the human brain is an upper limit on the amount required for human-equivalent cognition. The lower bound is not known but is very likely to be much lower, because of the hardware shortcomings of the brain and its historical novelty (on an evolutionary timescale).
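(For scale, using the commonly cited but very rough figures - both numbers are assumptions - roughly 10^14 synapses at an effective update rate of around 100 Hz comes to on the order of 10^16 synaptic events per second; any hardware that can deliver that, however it is organised, meets the upper bound.)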
So there'll be a period of observation.
Unsupported assumption (and actually, looking quite unlikely based on both theoretical and practical work); at a plausible self-modification rate, even something that starts comprehensible won't remain so very long. You can keep resetting and freezing but as I said, most researchers don't want to, and adversarial methods are inherently unreliable.
Once this observation has occurred, the alien AI is no more alien than an insect or animal that practises devouring partners after sex, which we may find weird, but that is what its instincts dictate. After all, once you observe something, you can understand it even if you're not directly able to swap places with the observed.
There is no relevant analogy from a creature of insect complexity (and we still don't fully understand insects) to something with complexity in excess of the human brain (the most complex and opaque computing system we know of).
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

Simon_Jester wrote:So what do we do, presented with an entity capable of third order thinking?
AI systems can regress to an indefinite level, subject only to storage and compute limitations. It is as simple as spawning copies of the simulated entities and allowing them to recursively model each other - this is something we can do right now in game-playing AIs. This is one of the many fundamental advantages of a software-based intelligence running on a Turing machine: perfect isolation (when desired) between lots of similar processes; symbolic AIs can also support inferential chains of indefinite length. Humans aren't good at this sort of recursion because our brain design isn't good at mapping lots of instances onto the same cognitive hardware.
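A minimal sketch of that kind of recursion, using matching pennies and an arbitrary fixed level-0 policy (both chosen purely for illustration): each level-k player best-responds to a simulated level-(k-1) copy of its opponent, to whatever depth storage and compute allow.

Code: Select all

def level_k_move(k, role):
    """Matching pennies: 'matcher' wins on equal picks, 'mismatcher' on unequal ones."""
    if k == 0:
        return "heads"                              # arbitrary fixed level-0 policy
    opponent = "mismatcher" if role == "matcher" else "matcher"
    predicted = level_k_move(k - 1, opponent)       # spawn a model of the opponent
    if role == "matcher":
        return predicted                            # copy the predicted pick
    return "tails" if predicted == "heads" else "heads"

for k in range(5):                                  # depth limited only by compute
    print(k, level_k_move(k, "matcher"), level_k_move(k, "mismatcher"))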
The only thing that comes to mind is to start in a position of great advantage and hope for the best- and I'm sure Starglider could shout at me all day about the 'folly of adversarial containment' or something.
Well I could, and when I was a little younger I did do that with some other researchers, but it isn't terribly productive. Debating this with you is recreational because you're not going to try and build one of these things; for the people who are doing so, positive outreach usually works better. The Singularity Institute is trying that, with a lot more patience and funding than I'd throw at the problem, but still with sadly limited success.
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Do you believe a technological singularity is near?

Post by Singular Intellect »

I'm curious, Starglider: in your personal opinion, what do you think the impact of advancing human brain sciences and technological integration with biological brains will be?

In my estimation they are a pretty significant aspect of our efforts to create general AI systems that implement concepts like hierarchical temporal memory. What are your thoughts on it?
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Do you believe a technological singularity is near?

Post by Simon_Jester »

Singular Intellect wrote:
Simon_Jester wrote:So what do we do, presented with an entity capable of third order thinking? Smart enough to observe our own observations of it, and derive countermeasures against our countermeasures, faster than we can observe it and derive the countermeasures in the first place?
I suspect our response will be something of a reflection of how the herd animals you described deal with our second order thinking.
I don't know if it will be that simple: among other things, herd animals can't prepare a dead man's switch like "in case of running man with torch, press button and blow up running man." Or "Drop JDAM on server farm." On the other hand, that gets us back to adversarial containment, et cetera, and obviously in the case of extreme measures the whole thing's a horrible mess.

People do get killed in stampedes sometimes, you know...
Starglider wrote:
Simon_Jester wrote:So what do we do, presented with an entity capable of third order thinking?
AI systems can regress to an indefinite level, subject only to storage and compute limitations.
I know. "Third" is sufficient to illustrate the problem.
It is as simple as spawning copies of the simulated entities and allowing them to recursively model each other - this is something we can do right now in game-playing AIs. This is one of the many fundamental advantages of a software-based intelligence running on a Turing machine: perfect isolation (when desired) between lots of similar processes; symbolic AIs can also support inferential chains of indefinite length. Humans aren't good at this sort of recursion because our brain design isn't good at mapping lots of instances onto the same cognitive hardware.
To be fair, I think you slightly missed my point, probably because I'm not using technical vocabulary. I'm not so much talking about unlimited chains of recursion... but it's kind of irrelevant.
Singular Intellect wrote:I'm curious, Starglider: in your personal opinion, what do you think the impact of advancing human brain sciences and technological integration with biological brains will be?

In my estimation they are a pretty significant aspect of our efforts to create general AI systems that implement concepts like hierarchical temporal memory. What are your thoughts on it?
I'm gonna go out on a limb here and guess: Knowing what he's said before, Starglider will say that AI development is growing faster than our ability to make useful brain-machine interfaces. Which means that whatever the impact of advancing brain science is, advancing computer intelligences will get the first shot in, for better or for worse.
This space dedicated to Vasily Arkhipov
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Starglider wrote:Whose definition of 'technological singularity'? The original one specified by Vernor Vinge was quite specific: the creation of transhuman intelligence is a predictive event horizon, because (a) you can't predict the actions of something more intelligent than yourself, (b) the presence of non-human psychology makes predicting the future vastly harder even if it's human-equivalent, and (c) self-modifying intelligences are a hugely chaotic element. Unfortunately, since then a lot of people have come along and made up their own definitions, many of which are frankly bullshit.
If we're simply talking about creating a sentient machine intelligence, then yes we may see it within our lifetimes.

But if we add in all the other stuff from the original definition, then it becomes nothing more than the paranoid fears of someone who doesn't understand how AI works - which isn't surprising given that Vinge is better known for his works of science fiction as opposed to his academic work on actual computer science.

AI essentially works on a feedback loop. Create a software architecture that can make decisions. Feed that software some data to process. The software then adjusts itself based on the input data. Continue the input-process-modify cycle ad infinitum, until you have a structure with enough knowledge and decision-making ability to be considered sentient.
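In the most minimal terms, with a one-parameter 'architecture' and a made-up data stream (purely for illustration), the loop looks like this:

Code: Select all

data_stream = [4.0, 5.0, 3.5, 4.5, 5.5, 4.0]   # invented inputs

estimate = 0.0          # the entire 'architecture': one adjustable number
learning_rate = 0.3
for observation in data_stream:                # input
    error = observation - estimate             # process
    estimate += learning_rate * error          # the software adjusts itself
    print(f"saw {observation:.1f}, estimate now {estimate:.2f}")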

However, nothing in this process suggests that the resulting intelligence will be "unpredictable"; certainly not more "unpredictable" than individual human beings making decisions. Moreover, making the intelligence self-modifying is little different from a human being having different reactions to different life experiences. A human being raised in a loving, caring environment may still turn out to be a psychopathic individual in some instances.

In short: the technological singularity is BS because it vastly overstates our ability to predict the actions of existing human intelligence. We already have "self-modifying" code that changes our attitudes depending on life experience. We are already "unpredictable". We already can't predict the future.

Adding more memory or processing power won't change that. A machine intelligence in a more powerful computer may be able to process instructions faster, and it may have more memories, but that's no different from the fact that some human beings have better memorization or analytical skills than others.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

Zinegata wrote:then it becomes nothing more than the paranoid fears of someone who doesn't understand how AI works - which isn't surprising given that Vinge is better known for his works of science fiction as opposed to his academic work on actual computer science.
Firstly, Vinge did not predict any particular outcome, just a failure of futurism to make useful predictions (not that it had a great track record anyway). The understanding that seed AI is extremely dangerous and quite likely to be hostile did not come until later; to the best of my knowledge the Singularity Institute was the first to establish a technical argument for this, in their early attempts to design a reasonably robust benevolent goal system.

Secondly, there is no indication that you understand 'how AI works' either, as evidenced by:
Moreover, making the intelligence self-modifying is little different from a human being having different reactions to different life experiences.
If you do not appreciate the vast difference between software that can introspect on every aspect of its cognitive processing, and completely modify any aspect of its mental architecture (subject to design ability), versus humans who have a fixed hardware design and a very limited introspective and behavior modification capability, then you can say nothing useful about this subject.
AI essentially works on a feedback loop. Create a software architecture that can make decisions. Feed that software some data to process. The software then adjusts itself based on the input data. Continue the input-process-modify cycle ad infinitum, until you have a structure with enough knowledge and decision-making ability to be considered sentient.
That is so hopelessly vague as to have zero predictive or descriptive value. I don't suppose you are in fact Arthur T. Murray?
We are already "unpredictable". We already can't predict the future.
You've just declared the entire fields of psychology and sociology worthless. I guess all those hundreds of thousands of researchers were just wasting their lives.
Adding more memory or processing power won't change that. A machine intelligence in a more powerful computer may be able to process instructions faster, and it may have more memories, but that's no different from the fact that some human beings have better memorization or analytical skills than others.
Aside from the fact that quantity has a quality all of its own, you are again failing to appreciate the effects of fundamental architecture differences. For example any symbolic AI architecture will have radically different task performance, internal structure and failure modes to anything in the biomorphic connectionist class.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Do you believe a technological singularity is near?

Post by Starglider »

Simon_Jester wrote:On the other hand, that gets us back to adversarial containment, et cetera, and obviously in the case of extreme measures the whole thing's a horrible mess.
If you get to the point of actually having to use those kinds of measures, you'd better have a plan B even if it works, because ten more research groups will be right behind the first one in building a similar AI. In fact, if they hear about what happened, most researchers will actually be encouraged, because now it's shown to be possible and of course they believe their design is safe. There is no way you can suppress software development across the whole of planet Earth short of destroying civilisation.
Knowing what he's said before, Starglider will say that AI development is growing faster than our ability to make useful brain-machine interfaces. Which means that whatever the impact of advancing brain science is, advancing computer intelligences will get the first shot in, for better or for worse.
I don't think I've ever said that. We may get quite impressive BCI before AGI, and while that's great in geek terms it doesn't change the basic outcomes. In fact, by making programmers and researchers more effective, it will accelerate AGI development. As such, sufficiently powerful BCI could certainly cause a Vingean Singularity if the users qualify as a new kind of intelligence, but it'll be a mere warm-up for the events that will follow soon after. Hopefully BCI (like other forms of human intelligence enhancement) will increase the probability of building Friendly rather than Unfriendly seed AI, through greater understanding of the technical issues.
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Starglider wrote:Firstly, Vinge did not predict any particular outcome, just a failure of futurism to make useful predictions (not that it had a great track record anyway).
I tend to take his statement that the technological singularity will mean the 'end of the human era' to be a particularly wrong one, and certainly alarmist.
If you do not appreciate the vast difference between software that can introspect on every aspect of its cognitive processing, and completely modify any aspect of its mental architecture (subject to design ability), versus humans who have a fixed hardware design and a very limited introspective and behavior modification capability, then you can say nothing useful about this subject.
Except of course that's not actually what AIs can do, is it?

Firstly, just because software can "introspect" on its own code doesn't mean that it will be able to actually understand or modify it. You qualify this - "subject to design ability" - which is actually not a minor qualifier; it's an enormous hurdle, particularly when you consider how hard it is for us to understand how the brain works.

Secondly, in biologicals many functions are in fact automated. We don't have to actively think about breathing, for instance. In a much more complex machine AI (particularly one that is able to manipulate the physical world), some functions will by necessity have to be automated, which will also limit how much it can self-modify. Being able to switch its modules around isn't helpful if it accidentally shuts off the temperature control modules and causes the hardware to shut down completely. There are limits on how much humans can "modify" an AI while it is running, and the same will apply to an AI trying to tweak itself.
You've just declared the entire fields of psychology and sociology worthless. I guess all those hundreds of thousands of researchers were just wasting their lives.
Except of course that's not what I actually said. What I said is that you will have difficulty in predicting the actions of any individual human being; and that with a machine it will be no different.

Moreover, psychologists and sociologists largely base their "predictions" not on analyzing brain wave patterns but on observing human behavior - and the behavior of large numbers of individuals at that. And sociologists in fact often use statistical tools to make these determinations.

So again, a machine AI is no more "unpredictable" than a human one. If it does Action A 90% of the time vs Action B, then we know it's predisposed towards Action A. Just because you don't understand the exact algorithm doesn't mean its behaviour can't be predicted to some extent, even if you can't "place yourself in the shoes of the machine". And even with these tools, the predictions are far from 100% accurate, particularly when dealing on an individual level.
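For instance, with a hypothetical trial log (the numbers are invented for illustration):

Code: Select all

from collections import Counter

observed_actions = ["A"] * 90 + ["B"] * 10     # hypothetical log of 100 trials
counts = Counter(observed_actions)
total = sum(counts.values())
for action, n in counts.most_common():
    print(f"P({action}) ~ {n / total:.2f}")    # -> P(A) ~ 0.90, P(B) ~ 0.10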
Aside from the fact that quantity has a quality all of its own, you are again failing to appreciate the effects of fundamental architecture differences. For example any symbolic AI architecture will have radically different task performance, internal structure and failure modes to anything in the biomorphic connectionist class.
Firstly, software architecture is a different thing from hardware power; the bigger hurdle currently is in fact the software side and designing the right architectures, and I would argue that it will always be the greater hurdle.

Secondly, while it is true that different architectures work very differently, you still aren't proving that an AI will necessarily be any more "unpredictable" than a human. If you're saying that an AI can't even be predicted using statistical analysis based on its outputs, then it's really just a completely random machine devoid of any actual structure or intelligence - just like how some people may suffer from some kind of impairment due to brain damage.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

So you get this AI in a smartphone. What can it do to the world? Uh... nothing.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night queues and illegal migrants
Here, meetings, struggles, synchronised steps, colours, unauthorised gatherings,
Migratory birds, networks, information, squares of every kind mad with passion...

...Tranquillity is important, but freedom is everything!
Assalti Frontali