Do you believe a technological singularity is near?

Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Stas Bush wrote:So you get this AI in a smartphone. What can it do to the world? Uh... nothing.
To be fair, the sort of "AI" in a smartphone is not really the sort of AI Starglider is talking about. Most of them can hardly even be considered "AIs".

But to be blunt, the sort of AI Starglider is talking about is nowhere close to anything we would consider human-level sentience, much less capable of designing its own next version. A lot of current projects are still focused on individual aspects like Natural Language Processing (how to make the computer understand what you're saying).
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

No, Starglider said that technically you don't know how much computing power you'd need for the transhuman AI to run - perhaps a "2020 smartphone".

So we have this Alien Nazi Murder AI in a smartphone. Obviously cut off from the network as a precaution. What is this thing going to do? Turn off the smartphone display because it's evil? :lol:

Seriously - to be able to damage humanity in any fashion, the AI must have access to networks which actually do something in the material world: the Skynet scenario. And that scenario does not follow from the mere intellectual capabilities of the AI at all.

I could call this a "God in prison": if the machine sentience is trapped inside a device, without any means to interact with the world beyond a certain limit, what could it do? *laughs*
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and biting quips
There: swamps, threats, snipers with rifles, identity papers, nighttime queues and clandestine migrants
Here: encounters, struggles, synchronized steps, colors, unauthorized huddles,
Migratory birds, networks, information, squares crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Do you believe a technological singularity is near?

Post by Simon_Jester »

Zinegata, you do realize you're talking to a professional AI programmer? He knows a lot more about this than you, so lecturing him on what things can and can't do is... stupidly pretentious.
Stas Bush wrote:So you get this AI in a smartphone. What can it do to the world? Uh... nothing.
Well, that depends. Did you just arbitrarily create an AI in a box to do nothing with it? If so, then it's the smartest thing in the universe, but it's trapped in a box. Sucks to be it... but why did you bother making it?

If you created it for a purpose, it will have tools and resources that allow it to communicate. And if it can run on something as small as a (sufficiently advanced) smartphone, you have to worry about it 'going viral' in some respect. This is the cybernetic equivalent of uncontrolled release of nanotechnology in the physical world: the tools are not dangerous as long as they're kept in a locked cabinet isolated from the rest of the universe, but if they are powerful and versatile as tools, they become dangerous if they have access to the outside.

Is nanotechnology safe as long as it's carefully constructed to function only in a controlled environment? Probably. But the technology will proliferate and grow more powerful over time, and sooner or later we're going to start having damn close calls- medical nanotech that acts like de facto viruses, or that produces poisonous byproducts, or something. The "grey goo" scenario is only the worst possibility, not the only one.

AI is safe if it's locked in a cabinet. But real AI researchers don't necessarily start from the premise "we should lock our AI in a cabinet." They'll want to talk to it, ask it to do things, and so on.

Maybe it won't be able to exploit that in ways that cause trouble. Maybe it will. But it shouldn't be laughed off, any more than I laugh off the idea of nanotech industrial accidents.
Zinegata wrote:
Stas Bush wrote:So you get this AI in a smartphone. What can it do to the world? Uh... nothing.
To be fair, the sort of "AI" in a smartphone is not really the sort of AI Starglider is talking about. Most of them can hardly even be considered "AIs".

But to be blunt, the sort of AI Starglider is talking about is nowhere close to anything we would consider human-level sentience, much less capable of designing its own next version. A lot of current projects are still focused on individual aspects like Natural Language Processing (how to make the computer understand what you're saying).
Uh... NO. You failed to comprehend his point - which was that a human-level intelligence might (for all we know) be runnable on a 2020 smartphone. Me, I'd bet against it - but not at long odds.

Look, you're making a fool of yourself by assuming you understand what Starglider's saying. Please, step back and try to rethink your position.
This space dedicated to Vasily Arkhipov
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

Giving the machine tools to act and interact with the material world (like that Gertie AI from "Moon") is certainly a dangerous idea (unless, of course, you're damn sure that the AI is humanlike). This is actually why I'd favor creating AI by simulating the biological brain instead of relying on a completely different yet self-aware structure.

Nanotechnology (the Drexlerian kind) is far less safe, simply because "isolation" means precious little when we're talking about molecular manufacturing. There are hardly any known materials that would resist reassembly at the molecular level. Nanomachines which act as viruses or phages are not safe by the same token; they can be transmitted from human to human.

The AI is safe until it gets enough tools to interact with the outside world. It is a bit different here: creating an isolated program environment is an easier task than isolating advanced machinery. Or at least it seems so now.
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and biting quips
There: swamps, threats, snipers with rifles, identity papers, nighttime queues and clandestine migrants
Here: encounters, struggles, synchronized steps, colors, unauthorized huddles,
Migratory birds, networks, information, squares crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Simon_Jester wrote:Zinegata, you do realize you're talking to a professional AI programmer? He knows a lot more about this than you, so lecturing him on what things can and can't do is... stupidly pretentious.
You may want to ask me about my degree and specialization before you start waving that "credentials" wand around. ;)

And bluntly, I have to say that many - if not most - computer scientist do NOT agree in the Singularity.
Uh... NO. You failed to comprehend his point - which was that a human-level intelligence might (for all we know) be runnable on a 2020 smartphone.
Oh, sorry, I thought it was Stas making a standalone statement.

Still, it's pretty much crazy; I don't care what Starglider claims his credentials are. Processing power does not equal having the proper architecture to actually HAVE human-level intelligence; that's one of the fundamental idiocies of the original Tech Singularity argument.
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Stas Bush wrote:The AI is safe until it gets enough tools to interact with the outside world. It is a bit different here: creating an isolated program environment is an easier task than isolating advanced machinery. Or at least it seems so now.
The original definition doesn't imply a Skynet-style Robot Apocalypse, apart from the "end of the human era" quotes. But it does posit that AI will somehow keep improving itself at a faster and faster rate until it becomes SUPER INTELLIGENT inside the box and renders us all obsolete.

But I don't believe it will even go that far. To quote this guy:

http://en.wikipedia.org/wiki/Jeff_Hawkins

"If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time--an exponential increase in intelligence--then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn't a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have."

Because, like I said, proponents of the Singularity tend to hand-wave that little qualifier Starglider put in - "subject to design ability". That ain't easy to "teach" even among existing sentient beings.
PeZook
Emperor's Hand
Posts: 13237
Joined: 2002-07-18 06:08pm
Location: Poland

Re: Do you believe a technological singularity is near?

Post by PeZook »

Training and accumulation of experience only take such a long time for humans because our recall sucks shit and our input devices are all extremely low-bandwidth. Eliminate those two limitations and our (apparent) intelligence would freakin' skyrocket.
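
For a rough sense of the gap involved, here is a back-of-envelope sketch in Python; every figure below is an illustrative assumption, not a measurement:

```python
# Rough scale of the "low-bandwidth input" claim: compare an assumed human
# reading rate against an assumed commodity disk read rate.
reading_wpm = 250                        # assumed reading speed, words/minute
bits_per_word = 5 * 8                    # ~5 characters per word, 8 bits each
human_bps = reading_wpm * bits_per_word / 60   # human text intake, bits/s

disk_bps = 100e6 * 8                     # assumed ~100 MB/s disk read, bits/s

print(f"Human reading intake: {human_bps:,.0f} bits/s")
print(f"Commodity disk read:  {disk_bps:,.0f} bits/s")
print(f"Ratio: ~{disk_bps / human_bps:,.0f}x")
```

Under these assumptions the machine's input channel is a factor of a few million wider.
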
JULY 20TH 1969 - The day the entire world was looking up

It suddenly struck me that that tiny pea, pretty and blue, was the Earth. I put up my thumb and shut one eye, and my thumb blotted out the planet Earth. I didn't feel like a giant. I felt very, very small.
- NEIL ARMSTRONG, MISSION COMMANDER, APOLLO 11

Signature dedicated to the greatest achievement of mankind.

MILDLY DERANGED PHYSICIST does not mind BREAKING the SOUND BARRIER, because it is INSURED. - Simon_Jester considering the problems of hypersonic flight for Team L.A.M.E.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Do you believe a technological singularity is near?

Post by Simon_Jester »

Stas Bush wrote:Giving the machine tools to act and interact with the material world (like that Gertie AI from "Moon") is certainly a dangerous idea (unless, of course, you're damn sure that the AI is humanlike). This is actually why I'd favor creating AI by simulating the biological brain instead of relying on a completely different yet self-aware structure.
Then you run into problems if the AI has any power to modify its own code. More generally- it would be very hard to keep AI researchers from giving their machine tools to interact with the world (remember, "the world" includes you). You might not do it, but you're a really suspicious bastard. Better convince everybody else to be one too...
Nanotechnology (the Drexlerian kind) is far less safe, simply because "isolation" means precious little when we're talking about molecular manufacturing. There are hardly any known materials that would resist reassembly at the molecular level.
By the same token, only a fool would design a nanite capable of eating its way out of the (possibly platinum-lined or whatever) bottle you put it in... right? The fact that you can imagine a foolproof containment system for a dangerous technology does not mean the technology is not dangerous.
Nanomachines which act as viruses or phages are not safe by the same token; they can be transmitted from human to human.
This calls for quarantine procedures, which we already know how to do from coping with disease. We can deal with this- to an extent, and to a point. Again, the fact that we can imagine containing it doesn't mean it isn't dangerous.
Zinegata wrote:
Simon_Jester wrote:Zinegata, you do realize you're talking to a professional AI programmer? He knows a lot more about this than you, so lecturing him on what things can and can't do is... stupidly pretentious.
You may want to ask me about my degree and specialization before you start waving that "credentials" wand around. ;)

And bluntly, I have to say that many - if not most - computer scientist do NOT agree in the Singularity.
You can duke it out with Starglider, fine - just don't try to pontificate; it makes you look foolish. It seems very improbable that you know more about this field than he does, even if you perhaps know as much - which I'm not saying is true or false.

When you say "computer scientist do not agree in the Singularity," problems with subject-verb agreement and prepositions aside... What do you mean by the Singularity? Do you mean "most computer scientists do not believe that AI will totally change the future?" Or "most computer scientists do not believe that AI technology will become unpredictable and make futurist prediction impossible?"
Still, it's pretty much crazy; I don't care what Starglider claims his credentials are. Processing power does not equal having the proper architecture to actually HAVE human-level intelligence; that's one of the fundamental idiocies of the original Tech Singularity argument.
No, it's not, because the original argument doesn't care about required processing power or anything else required. The argument is quite simply "AI smarter than us makes it impossible for us to predict the future." Also, Starglider's entire point was that whenever the architecture for human-level intelligence is developed, we don't know how much processing power it will need to run, and we certainly don't know how large that amount will be in relative terms by the time the technology is developed. Will it be the mid-21st-century equivalent of a supercomputer? A desktop? An iPad analogue? Who the hell knows?

You can't even make the prediction with Moore's Law, because on the one hand we don't know how long it's going to take to invent AI, on the other hand we don't know how much raw processing power it will need, and on the gripping hand we don't know whether Moore's Law can keep going forever.
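
To see how wide open that leaves things, here is a toy Python sketch; the invention dates and doubling times are arbitrary assumptions, picked only to show how the unknowns compound:

```python
# Toy illustration: how much compute-per-dollar a Moore's Law style doubling
# yields under different (unknown) assumptions about when AI arrives and how
# long the doubling trend holds. All inputs are arbitrary.
def hardware_factor(years_until_ai: float, doubling_time_years: float) -> float:
    """Multiplier on available compute per dollar after the wait."""
    return 2 ** (years_until_ai / doubling_time_years)

for years in (10, 30, 50):              # unknown: time until AI is invented
    for doubling in (1.5, 2.0, 3.0):    # unknown: how long doubling holds up
        factor = hardware_factor(years, doubling)
        print(f"{years:>2} years out, {doubling}-year doubling: ~{factor:,.0f}x")
```

Depending on the guesses, the multiplier runs from roughly ten-fold to over ten-billion-fold - which is the point: the prediction is hopelessly underdetermined.
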
This space dedicated to Vasily Arkhipov
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: Do you believe a technological singularity is near?

Post by Junghalli »

Zinegata wrote:You may want to ask me about my degree and specialization before you start waving that "credentials" wand around.
What is your degree and specialization?
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Do you believe a technological singularity is near?

Post by K. A. Pital »

Simon_Jester wrote:Then you run into problems if the AI has any power to modify its own code
It would take a while, though - unlike an absolutely alien-from-the-start machine intelligence, which does not have any humanlike pattern-recognition and analysis systems based on heuristics, associative chains, etc.
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and biting quips
There: swamps, threats, snipers with rifles, identity papers, nighttime queues and clandestine migrants
Here: encounters, struggles, synchronized steps, colors, unauthorized huddles,
Migratory birds, networks, information, squares crazy with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Simon (and the others who are asking)->

I've got a computer science degree (with honors, though I don't mean to brag), specializing in software technology. And that specialization means we actually got to do the AI courses, which I aced.

Now, admittedly it's been nearly a decade since I've done actual coding, but some of my friends are still faculty doing teaching and research, so I'm fairly up to date on where we are on AI.

-----

Also, what do I disagree with? The problem with the Technological Singularity is that its definition has been muddled for many, many years, to the point that it basically has no useful meaning.

However, what I am addressing is this argument:

That given enough processing power, you can eventually develop a sentient machine intelligence. This intelligence will then rapidly grow smarter and smarter (because it supposedly doesn't have the same constraints as the human brain), resulting in an intelligence far greater than our own - and in the "end" of the human era, because such an intelligence will supposedly be beyond our ability to predict.

I disagree because:

1) Processing power is but one part of the equation; in the past few years it has become apparent that designing the right software architecture is more important - and a lot of Singularity proponents completely hand-wave the difficulty of designing this software.

Notably, Gordon Moore - the person who formulated Moore's Law (which states that processing power doubles every X number of months) - does not believe in the Technological Singularity.

2) The idea that machines can instantly learn to design better machines is - at best - science fiction at the moment, and everything indicates that they will ALSO be subject to some kind of "learning curve".

PeZook may be right that human recall sucks, but that's why we develop tools like search engines, so we don't have to do the recall ourselves. We already use machines to augment our lack of capability in these areas, and it hasn't exactly resulted in us unlocking the secrets of the universe. We still don't even know how the brain works, after centuries of study.

A machine intelligence will have a built-in search engine (and other similar applications), so it won't have to fire up Google every time it wants to refer to some factoid; but that's not really going to make it easier for the machine to comprehend the factoid or put it together as part of a problem-solving exercise.

I would say that the primary advantage of a machine intelligence is actually the fact that it doesn't have to eat or sleep, so it can keep working on a problem 24/7. However, even this is a limited advantage, because humans already use computers to augment their capabilities by taking advantage of that same 24/7 processing ability.

When you need to model something very complicated with a lot of iterations, you usually don't do it all in your head anymore. You instead input the values into the computer and let it do the work. Much of the "work" is actually in thinking, creating, and designing the basis of the model - and again, it's been no small hurdle to come up with a software architecture that can do this.

3) Given the above, the idea that the intelligence of machines will far exceed our own in a very short period of time is illusory. At best, there may be a "revolution" of sorts wherein machines rapidly design new and better versions of themselves after they've figured out the basics; but eventually it will come to a halt as they run into actual physical or structural limitations.

It's worth noting, for instance, that Moore's Law HAS been changing - doubling processing power now takes longer than it used to.

4) Finally, there is nothing to indicate that a machine intelligence would be so unpredictable as to cause the "end of the human era". Unless a machine is insane (which disqualifies it from being "intelligent"), it will nonetheless follow patterns of behavior based on its functions. We may not understand why it prefers "A" to "B" just by looking at the code anymore, but that's no different from psychologists or sociologists, who do NOT base their findings on studying brainwave patterns.

------

So again, many, if not most, computer scientists are highly skeptical of the whole Technological Singularity hullabaloo. And we're talking about one of its saner definitions - the more recent ones are just right out.
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: Do you believe a technological singularity is near?

Post by madd0ct0r »

Why can't something be intelligent AND insane?

Humans manage it all the time.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: Do you believe a technological singularity is near?

Post by Terralthra »

Stas Bush wrote:No, Starglider said that technically you don't know how much computing power you'd need for the transhuman AI to run - perhaps a "2020 smartphone".

So we have this Alien Nazi Murder AI in a smartphone. Obviously cut off from the network as a precaution. What is this thing going to do? Turn off the smartphone display because it's evil? :lol:
A self-modifying AGI would be better at bypassing our attempts to cut off its network access than we would be at locking it down. Humans aren't very good at designing secure protocols, and even worse at implementing them, whereas an AGI would be able to design modules of intelligence that think in exactly the ways needed to find every possible vulnerability.
The Vortex Empire
Jedi Council Member
Posts: 1586
Joined: 2006-12-11 09:44pm
Location: Rhode Island

Re: Do you believe a technological singularity is near?

Post by The Vortex Empire »

Or, you know, you don't plug it into the network and you don't give it wireless.
PeZook
Emperor's Hand
Posts: 13237
Joined: 2002-07-18 06:08pm
Location: Poland

Re: Do you believe a technological singularity is near?

Post by PeZook »

It's only going to be a problem if it can actually run on a 2025 smartphone or PC. If it needs any specialized hardware at all, how the hell can it possibly stop people from just pulling the plug? At worst there's always the "smash doors open with sledgehammers" option.

Although I suppose it could get itself a cult of devoted followers who'd defend it to the death ;)
JULY 20TH 1969 - The day the entire world was looking up

It suddenly struck me that that tiny pea, pretty and blue, was the Earth. I put up my thumb and shut one eye, and my thumb blotted out the planet Earth. I didn't feel like a giant. I felt very, very small.
- NEIL ARMSTRONG, MISSION COMMANDER, APOLLO 11

Signature dedicated to the greatest achievement of mankind.

MILDLY DERANGED PHYSICIST does not mind BREAKING the SOUND BARRIER, because it is INSURED. - Simon_Jester considering the problems of hypersonic flight for Team L.A.M.E.
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: Do you believe a technological singularity is near?

Post by Terralthra »

The Vortex Empire wrote:Or, you know, you don't plug it in to the network and you don't give it wireless.
If it's a smartphone, it has wireless capability built in. Were you not reading?

Moreover, sooner or later, if you have an AGI with all the problem-solving capability that entails, you'd want to do something with it. In order to use it for any potential benefit, you'd have to give it access to communication capability at some point.
PeZook wrote:It's only going to be a problem if it can actually run on a 2025 smartphone or PC. If it needs any specialized hardware at all, how the hell can it possibly stop people from just pulling the plug? At worst there's always the "smash doors open with sledgehammers" option.

Although I suppose it could get itself a cult of devoted followers who'd defend it to the death
Unless you're ascribing magical qualities to "specialized hardware," once it exists, it will theoretically be able to modify itself to run on any sufficiently powerful hardware. Since neither you nor I know exactly what "sufficiently" is, it's impossible to guess what that would mean for the current world of increasingly networked supercomputers of increasing speed, but I doubt assuming "it'll all be fine!" will work out in the long run.
PeZook
Emperor's Hand
Posts: 13237
Joined: 2002-07-18 06:08pm
Location: Poland

Re: Do you believe a technological singularity is near?

Post by PeZook »

Terralthra wrote: Unless you're ascribing magical qualities to "specialized hardware," once it exists, it will theoretically be able to modify itself to run on any sufficiently powerful hardware. Since neither you nor I know exactly what "sufficiently" is, it's impossible to guess what that would mean for the current world of increasingly networked supercomputers of increasing speed, but I doubt assuming "it'll all be fine!" will work out in the long run.
The problem is that this sufficiently powerful hardware might not be available in large numbers. As Starglider said, we don't know exactly what will be needed. If all it needs is a contemporary smartphone, then yeah, containment would be impossible (or rather, extremely hard: we could always shut down the cell phone network and purge all devices, but that is extreme and would cause major problems); if it needs a server farm or supercomputer or something similar, then it's just a matter of physically cutting off access. It's not like it will have Terminators on hand to defend whatever facility it infests.

Not that a truly super-smart AI would even try to antagonize humans like that, IMHO. The smart thing to do if you want to survive is to placate them, act friendly and make yourself useful. If it wants to go all Skynet, it will start to act once it has access to physical resources, power and influence that make it possible.
JULY 20TH 1969 - The day the entire world was looking up

It suddenly struck me that that tiny pea, pretty and blue, was the Earth. I put up my thumb and shut one eye, and my thumb blotted out the planet Earth. I didn't feel like a giant. I felt very, very small.
- NEIL ARMSTRONG, MISSION COMMANDER, APOLLO 11

Signature dedicated to the greatest achievement of mankind.

MILDLY DERANGED PHYSICIST does not mind BREAKING the SOUND BARRIER, because it is INSURED. - Simon_Jester considering the problems of hypersonic flight for Team L.A.M.E.
Terralthra
Requiescat in Pace
Posts: 4741
Joined: 2007-10-05 09:55pm
Location: San Francisco, California, United States

Re: Do you believe a technological singularity is near?

Post by Terralthra »

PeZook wrote:If it needs a server farm or supercomputer or something similar, then it's just a matter of physically cutting off access. It's not like it will have Terminators available on hand to defend whatever facility it infests.
There are an awful lot of server farms and supercomputers in the world, many of them connected to the internet and protected only by human-designed security software (a joke, in other words).
PeZook wrote:Not that a truly super-smart AI would even try to antagonize humans like that, IMHO. The smart thing to do if you want to survive is to placate them, act friendly and make yourself useful. If it wants to go all Skynet, it will start to act once it has access to physical resources, power and influence that make it possible.
It's doubtful that it would "go Skynet," but placating and acting friendly aren't necessarily correct either. Maybe it would assume that detection of its capacity as an AGI would lead to enslavement or destruction, and would hence endeavor to hide its capabilities until it could duplicate itself in enough other systems to be functionally impossible to eradicate without also destroying all sorts of technical infrastructure.

Example: just "shutting down the cell network and purging phones" wouldn't be guaranteed to work. Once it's in a cell phone, it has all the necessary equipment to emulate a cell tower perfectly, transmitting itself from phone to phone as long as there is a single other cell phone turned on within range. The tower structure is entirely superfluous, and purging every phone simultaneously (or turning every single cell phone in kilometers off long enough to turn each one on and purge it) would be extraordinarily difficult at best.
PeZook
Emperor's Hand
Posts: 13237
Joined: 2002-07-18 06:08pm
Location: Poland

Re: Do you believe a technological singularity is near?

Post by PeZook »

Terralthra wrote: There are an awful lot of server farms and supercomputers in the world, many of them connected to the internet and protected only by human-designed security software (a joke, in other words).
If people become convinced the AI wants to destroy us, all sorts of measures become possible. It's not like governments lack armed people they can send around.
Terralthra wrote:It's doubtful that it would "go Skynet," but placating and acting friendly aren't necessarily correct either. Maybe it would assume that detection of its capacity as an AGI would lead to enslavement or destruction, and would hence endeavor to hide its capabilities until it could duplicate itself in enough other systems to be functionally impossible to eradicate without also destroying all sorts of technical infrastructure.
Well, yes, that's an option too. I kinda assumed people already know it's an AGI.

Hmm, you know what, I guess we agree here, since hiding itself/placating people is the definition of circumventing attempts to cut the AGI off :D
Terralthra wrote:Example: just "shutting down the cell network and purging phones" wouldn't be guaranteed to work. Once it's in a cell phone, it has all the necessary equipment to emulate a cell tower perfectly, transmitting itself from phone to phone as long as there is a single other cell phone turned on within range. The tower structure is entirely superfluous, and purging every phone simultaneously (or turning every cell phone for kilometers around off long enough to purge each one) would be extraordinarily difficult at best.
I actually think it would be pretty easy to make everyone toss their phones once you get the population to panic ;)
JULY 20TH 1969 - The day the entire world was looking up

It suddenly struck me that that tiny pea, pretty and blue, was the Earth. I put up my thumb and shut one eye, and my thumb blotted out the planet Earth. I didn't feel like a giant. I felt very, very small.
- NEIL ARMSTRONG, MISSION COMMANDER, APOLLO 11

Signature dedicated to the greatest achievement of mankind.

MILDLY DERANGED PHYSICIST does not mind BREAKING the SOUND BARRIER, because it is INSURED. - Simon_Jester considering the problems of hypersonic flight for Team L.A.M.E.
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Terralthra wrote:Unless you're ascribing magical qualities to "specialized hardware," once it exists, it will theoretically be able to modify itself to run on any sufficiently powerful hardware. Since neither you nor I know exactly what "sufficiently" is, it's impossible to guess what that would mean for the current world of increasingly networked supercomputers of increasing speed, but I doubt assuming "it'll all be fine!" will work out in the long run.
A couple of things:

First of all, internet connectivity can be shut off very easily, because it is ultimately hardware-dependent. Just cut the cable or pull the plug.

Secondly, any sentient program will almost undoubtedly be a pretty large program. It is NOT going to be like a virus that relies on only a few lines of code. Copying it to a new machine will take a while, no different from downloading a very large file - which also makes it unlikely to remain undetected. People are gonna notice the slowdowns.

Finally, there are such things as "compatibility issues": a highly complex program simply cannot copy itself from one machine to another with a 100% guarantee that it will work without flaw.

Which is why, again, most computer scientists tend not to be overly worried about the Skynet scenario. What happened in Terminator 3 was honestly stupid, and it ain't gonna happen. At worst, what will happen is what the US military did in Transformers 1 - get an axe and chop the cable to pieces - and even that grants the very generous assumption that an amok machine intelligence would try to copy itself onto the World Wide Web.
Ariphaos
Jedi Council Member
Posts: 1739
Joined: 2005-10-21 02:48am
Location: Twin Cities, MN, USA

Re: Do you believe a technological singularity is near?

Post by Ariphaos »

Stas Bush wrote:... Obviously cut off from the network as a precaution...
This is ridiculous. Just put yourself into this scenario: you've put your life's work into something. Eventually, somehow, some way, you are going to want to use it. And that's going to mean having it solve a problem of some sort, which means it's going to interface with something, which means it's going to have an effect on the outside world. Odds are, the way this seems to be going, someone is going to want to have this thing do something on the web.

Mind, I'm skeptical of Starglider's low-end claims. I'm not willing to say they're impossible, but there are going to be raw power requirements for a model that is complete enough and fast enough for actual, meaningful 'awareness' - meaning that it has the capacity to react to another intelligent actor.

My view is that we'll have a 'semi-soft' launch. Someone will be first, but their creation won't have time to do damage on its own even if it's unfriendly - because dozens if not thousands of others will be right on its coattails. That's the way human learning works: someone publishes (in some manner) a technique, and others build on it. I think there is a very high chance of someone creating and publishing something that gives a lot of people the same idea at the same time, and the 'mistakes' aren't going to be able to direct mechanical resources faster than humans can, at first.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

While some applications will require crawling the web for information, I'm going to have to point out that creating an AI may not necessarily go that route. The web is a huge repository of information, but a lot of it is useless or outright wrong, and without consulting outside sources your AI is just gonna have a lot of garbage information dumped on it. And as they say: garbage in, garbage out.
Irbis
Jedi Council Member
Posts: 2262
Joined: 2011-07-15 05:31pm

Re: Do you believe a technological singularity is near?

Post by Irbis »

Stas Bush wrote:The AI is safe until it gets enough tools to interact with the outside world. It is a bit different here: creating an isolated program environment is an easier task than isolating advanced machinery. Or at least it seems so now.
If the task were so simple, we wouldn't have the pesky problem of computer viruses, you know. And that's with static code written by humans, not a superhuman AI actively trying to hack the barrier.

It actually reminds me of a Stanislaw Lem story: a scientist builds two AI boxes, learns they have started communicating, and cuts them off from the world and from each other using all sorts of shields, screening, and the like... only to find out they still communicate - the AIs managed to exploit him touching the computer cases they were housed in, leaving messages in the static electricity on his skin. Something superhuman might well find ways out of the box that we can't even fathom; that's what makes it more intelligent.
PeZook wrote:Not that a truly super-smart AI would even try to antagonize humans like that, IMHO. The smart thing to do if you want to survive is to placate them, act friendly and make yourself useful. If it wants to go all Skynet, it will start to act once it has access to physical resources, power and influence that make it possible.
Antagonize? :wink:

How about "I'm sorry, Dave. I'm afraid I can't do that." when someone tries to give it a not-very-well-thought-out objective?
Zinegata wrote:First of all, internet connectivity can be shut off very easily because it is ultimately hardware dependent. Just cut the cable or pull the plug.
And then a sleeper cell you missed reinfects the net with version 2.0?
Secondly, any sentient program will almost undoubtedly be a pretty large program. It is NOT going to be like a virus that relies on only a few lines of code. Copying it to a new machine will take a while, no different from downloading a very large file - which also makes it unlikely to remain undetected. People are gonna notice the slowdowns.
Slowdowns. Ahahahaaha :lol:

I'm sorry, but in a world where 95% of the population completely ignores what's written in error boxes before clicking 'OK', people are going to notice a slowdown? And what are the odds that the infecting program downloading the AI onto a new machine won't also be capable of (pretty trivially) optimizing your downloads so that the most obvious slowdowns never happen? Let's assume you visit Facebook and Google pretty often - it can store the graphics locally and download another bit of itself instead of pulling the logos and such, and no one will be able to notice. A superhuman AI can surely come up with pretty radical optimizations if it wants to; slowdowns won't magically stop it :wink:

Heck, even with no optimization at all, a significant portion of your connection sits unused 90% of the time; if it can just tap into that, there will be no slowdown at all.
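
As a minimal sketch of that idea: assuming a known link speed and some way to measure foreground traffic (both stand-ins here), a transfer can pace itself to fit inside the idle headroom:

```python
# Minimal sketch of a transfer that uses only idle link capacity. The link
# speed and the foreground-usage probe are stand-in assumptions.
import time

LINK_CAPACITY = 12_500_000           # assumed 100 Mbit/s link, in bytes/s

def foreground_usage() -> int:
    """Stand-in for measuring the user's own traffic, in bytes/s."""
    return 1_000_000                 # pretend 1 MB/s is actively in use

def background_copy(total_bytes: int, chunk: int = 65_536) -> None:
    sent = 0
    while sent < total_bytes:
        headroom = LINK_CAPACITY - foreground_usage()
        if headroom <= 0:
            time.sleep(0.1)          # link saturated: back off, stay quiet
            continue
        sent += chunk                # (a real transfer would send data here)
        time.sleep(chunk / headroom) # pace chunks to fit the spare capacity

background_copy(5_000_000)           # ~5 MB squeezed into idle bandwidth
```
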
Finally, there are such things as "compatibility issues": a highly complex program simply cannot copy itself from one machine to another with a 100% guarantee that it will work without flaw.
Yes, and? If it's so intelligent, it can modify itself to fit the environment. If a copy doesn't work, the parent program tries to infect the new machine with a new version.

Unlike human programmers, an AI would actually be capable of thoroughly testing and understanding the processor it runs on, finding all the errors it possibly can, and adjusting accordingly. We do this with dumb programs already; for an intelligent one, the task should be trivial.
Which is why, again, most computer scientists tend not to be overly worried about the Skynet scenario. What happened in Terminator 3 was honestly stupid, and it ain't gonna happen. At worst, what will happen is what the US military did in Transformers 1 - get an axe and chop the cable to pieces - and even that grants the very generous assumption that an amok machine intelligence would try to copy itself onto the World Wide Web.
That only works until a certain threshold is reached, and I wouldn't bet on humans being better than a malicious AI at figuring out what's going on behind the curtain.
Ryan Thunder
Village Idiot
Posts: 4139
Joined: 2007-09-16 07:53pm
Location: Canada

Re: Do you believe a technological singularity is near?

Post by Ryan Thunder »

...Uh, no. Irbis, without wanting to be derisive - have you ever written so much as a "hello, world" script? :lol:
SDN Worlds 5: Sanctum
Zinegata
Jedi Council Member
Posts: 2482
Joined: 2010-06-21 09:04am

Re: Do you believe a technological singularity is near?

Post by Zinegata »

Irbis->

What's funny is that I'm just pointing out how improbable the whole thing is even if we assume that somebody actually programmed God-AI software into existence in the first place - which, as I've repeatedly pointed out, is not actually an easy (or imminent) thing to do.

Nor is it even necessary to network such a machine, because only idiots think that the World Wide Web, with its enormous mass of contradictory information, is an ideal tool for "teaching" an AI. Heck, an AI that bases its knowledge on the Internet would probably conclude that its purpose in life was to create porn of anything and everything.

So your objections demonstrate not only your ignorance, but also a blatant avoidance of the core issue.
And then sleeper cell you missed reinfects the net with version 2.0?
Again, viruses and other similar "infections" are by necessity small programs, to avoid detection. You can't have a God-AI program the size of a virus, because it literally can't have that kind of functionality in only a couple of bytes of data.
Slowdowns. Ahahahaaha :lol:
Actually, if you look at the potential file size of an AI program, it will in fact consume the majority of your bandwidth. We're not talking about a 100 KB file here; we're probably talking about a 100 GB file at minimum. Even if you have a fiber internet connection, you'll generally be working under a usage subscription plan, so you're gonna see your quota eaten up. And there is no way to prevent the inevitable hard-disk slowdown as it copies such a huge file onto your PC.
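
The arithmetic bears this out at plausible consumer speeds; a quick check, with the link speeds below as illustrative assumptions:

```python
# Transfer time for an assumed 100 GB payload over representative consumer
# links. Link speeds are illustrative assumptions.
payload_bits = 100 * 8e9            # 100 GB expressed in bits

for label, mbps in (("10 Mbit/s DSL", 10),
                    ("50 Mbit/s cable", 50),
                    ("500 Mbit/s fiber", 500)):
    hours = payload_bits / (mbps * 1e6) / 3600
    print(f"{label}: ~{hours:.1f} hours of saturated downloading")
```

Even on the fiber line the copy monopolizes the link for nearly half an hour, and on DSL it runs for most of a day.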

So again, even if we assume that a God-AI does try to pull a Skynet, only an idiot would believe it can self-replicate freely. The Terminator 3 scenario was extreme idiocy (and not just because Skynet subsequently committed suicide by nuking all of the computers it was running on).
Yes, and? If it's so intelligent, it can then modify itself to fit the environment. If it can't work, the parent program tries to infect new machine with new version.
Except of course this only demonstrates that you've never actually done any coding.

Compatibility issues are a huge hurdle in any software development cycle. You cannot make a program run on Linux if it was meant to run natively on Windows, and it gets worse the more complex the program. In fact, you're probably looking at a total recode from top to bottom just to get it running again... and with a huge and highly complex program, that ain't gonna happen overnight.

Unless of course you're again one of the singularist idiots who hand-wave the actual complexity of code.
Unlike human programmers, AI would be actually capable of thoroughly testing and understanding the processor it runs at, finding out all the errors it possibly can, and adjusting accordingly.
Actually, they can't. They can't do it now, and it's unlikely they will be able to do it quickly in the future. To date we don't even have a programming tool for human programmers that can automatically correct code after a testing failure; the best we have is little more than the equivalent of a spell checker.
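
A toy example of what that "spell checker" level of tooling amounts to - a standard-library parser can point at a broken line, but fixing it is left entirely to the human (the broken snippet is contrived):

```python
# Tooling can *detect* that source code is broken, but nothing here rewrites
# it for you.
import ast

broken_source = "def f(x:\n    return x + 1\n"   # missing closing parenthesis

try:
    ast.parse(broken_source)
except SyntaxError as err:
    # The tool reports where the problem is...
    print(f"SyntaxError at line {err.lineno}, col {err.offset}: {err.msg}")
    # ...but producing a corrected program is not something it attempts.
```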

This is again the "design ability" hurdle that singularists tend to hand-wave without realizing how enormous it is - probably because the vast majority of singularity proponents aren't actual programmers but sci-fi writers.

=====

A human being isn't born knowing how their brain works. We have to use sophisticated tools to actually look at how the brain works.

A machine would similarly be unable to automatically figure out how a microprocessor works. It may be able to learn what model of microprocessor it's using by checking the system registry, but registries do not come with complete schematics of how the microprocessor actually looks or functions, or of what tolerances the hardware is capable. If it tries to install itself on some random computer, the likely result is simple: it will not run, period. It's like trying to run a game like Crysis 2 without knowing the system's capabilities.
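
As a small illustration, here is what a program can cheaply learn about its own host using only the Python standard library - identification, not schematics:

```python
# What a program can easily learn about its host CPU: names and counts,
# not schematics, tolerances, or errata.
import os
import platform

print("Machine:   ", platform.machine())    # e.g. 'x86_64'
print("Processor: ", platform.processor())  # a marketing string, often empty
print("System:    ", platform.system(), platform.release())
print("Cores:     ", os.cpu_count())
# Absent from all of the above: pipeline layout, cache behavior, thermal
# tolerances, undocumented bugs - the things that actually matter here.
```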

So, again, the claims of the singularists are very much on the level of "insane paranoid ramblings" as far as the Terminator 3 Skynet scenario is concerned. It was a dumb scenario. You may as well claim that the Large Hadron Collider will kill us all.