SWPIGWANG wrote: Civilization is not really predictable. Who could have predicted (wild guesses don't count) a Hitler in 1914? It is a super-chaotic system that has patterns (strange attractors?) but no way of getting a deterministic result.
If we are talking about human-level intelligence, it is really very predictable in comparison, even if it runs at multiple times human speed. A computer running three orders of magnitude faster than a human is a simple problem compared to the world, which is 12+ orders of magnitude bigger while being an open system with an absurd number of unmeasured and unmeasurable variables.
(Chaotic behavior does not imply non-determinism; it only implies that the system is hard to predict.)
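(For illustration - a minimal sketch of "deterministic but hard to predict", using the logistic map, a textbook chaotic system: two starting points differing by one part in a billion become completely uncorrelated within a few dozen steps.)

```python
# Logistic map: x' = r * x * (1 - x). Fully deterministic, yet tiny
# differences in initial conditions grow exponentially (chaos).
r = 4.0
a, b = 0.3, 0.3 + 1e-9  # two starting points, one part in ~10^9 apart

for step in range(60):
    a = r * a * (1 - a)
    b = r * b * (1 - b)

print(abs(a - b))  # of order 0.1-1: the trajectories have fully diverged
```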
Um, what makes you think that an intelligence we've built from an entirely clean slate, with optimized modes of thought that may not correspond to human ones at all, which is capable of recursive self-improvement, and which keeps nothing in its code unaltered except its original goals, would be very predictable?
A civilization's intelligence is not efficient (efficiency is not its purpose, after all), but knowledge can be spread around with no problem as long as the communication structure supports it, which it still does.
*Besides, liberal arts people don't do much. :p
To a certain degree. Yet one person cannot learn all the knowledge of human science; if they want to apply that knowledge to a certain task, they need to form teams of specialists - but a team of specialists won't get insights like "hey, I suddenly remember this obscure detail that we went through in our training, and it's connected to this other obscure detail in a completely different field". Knowledge can be communicated, but the process could be much faster and more efficient.
* Granted.
If you want to know the output of a computer, build an identical one and you are done. It is humans that built the computer; they can build another one.
Assuming that the first one hasn't taken over the world and denied the other one all access to relevant information and resources by then...
Are you seriously going to argue that a civilization of Venus flytraps (assuming they could communicate somehow) could predict the actions of humans?
If they have enough time, enough storage space, and an accurate model and measurements of humans.
I think the question is not about the absolute predictability of the system, but about "comprehension" of the system, since it is often erroneously assumed that one needs to 'comprehend' something in order to predict it.
"Enough time", maybe, but if it was on the scale of billions of years, it probably wouldn't help them in time against the gardener who decided to get rid of all the flytraps and try a new variety of plant next month.
It would need the physical processing capacity to do that. Why would we give a single AI that many resources when we are at the edge of human/super-human intelligence? In addition, why would we give the AI so much physical access to everything else and so little security? Even a super AI would be screwed if it were locked inside a black box without outside access.
The question being, do we know when an AI is on the edge of human/superhuman intelligence? It could be slowly improving itself under controlled conditions, then suddenly hit an unpredicted breakthrough and make an optimization that allows it to do a thousand improvements a minute when it was doing one an hour before - just when the researchers left for dinner. Then it'd check its Internet connection, con a Federal agent into arresting its creators ASAP, and buy two hundred new server racks with funds it stole using security holes it found by analyzing publicly available software...
Granted, this particular scenario is pretty far-fetched. It's not likely that it would work, but there's a potentially infinite number of other scenarios that might allow an AI to escape. The point is that A) if we don't start paying attention to these issues while AIs are still safely below the point of human-equivalence, it's going to be a nightmare to suddenly start jury-rigging safety measures, and B) even if we do pay attention to AI safety, we should start by designing a mind that is friendly because it wants to be friendly, not because it has no other choice and is constantly seeking avenues of escape. Anything else is just way too risky.
To really "take over the world", the AI needs not to only outthink individual humans, but civilization itself. I don't think any thinking device could come close to doing that.
Why not? You said yourself that a mind is just a civilization of neurons, and humans are (at least occasionally) capable of predicting the behavior of other humans they encounter. Why couldn't a mind outthink a larger group of components? (Not to mention that "civilization" isn't really a unified whole anyway - it could easily play us against each other.)
I think a neural-level man-machine interface would come earlier and have a bigger impact than human-level AI, as it would instantly boost human mathematical and memory capacity by orders of magnitude.
I hope it does. I'm sorta afraid of an AI-driven Singularity.
Hmm, it looks like I might need to retract my statement for now. I did some looking, since I was puzzled why you and my source quoted the same page (Humans 3) for your respective conclusions, and it turned out that the brain estimates and the reports on the computers' processing power gave their figures in units that weren't directly comparable (MIPS vs. FLOPS), yet my source had compared them directly anyway. (Though the "computer speed" link of yours also gives the number in FLOPS...)
Since the issue is materials and waste heat, you can have all the breakthroughs in computing you want and it won't change a damn thing. Like I said, this is what happens when pretend science gets hit with real science.
Didn't vacuum tube computers have serious issues with waste heat as well, before a change to a new paradigm took those issues away for the time being?
Drexler theorized, then the engineers stood up and pointed out what a fool he was. Plausible nanotech is heavily dependent on ignoring engineers and claiming that "we will work past it". Except it doesn't work that way. Drexler has been taken down on every front - there is a reason the guy is now ignored by the leaders in the field he invented.
In specific reference to this, large-scale nanocomputing gets asshammered by systemic failure. It takes so many components that even with an unrealistically small failure rate, the sheer number means some fail again and again, and their failures in turn cause more to fail. There is a reason engineers try to minimize component counts.
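(For concreteness, the arithmetic behind that claim runs something like this - a rough sketch assuming independent failures, with purely illustrative figures:)

```python
# Compounding-failure arithmetic for a huge component count, assuming
# independent failures. The figures are made up for illustration.
p_fail = 1e-9    # per-component failure probability per hour (illustrative)
n = 10**15       # component count in a hypothetical nanocomputer

expected_failures = n * p_fail            # ~1,000,000 failures per hour
p_at_least_one = 1 - (1 - p_fail) ** n    # probability any component fails

print(expected_failures)   # 1000000.0
print(p_at_least_one)      # effectively 1.0
```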
If I've been misled, I'd like to check that myself. References to this, please?
You can't seriously dispute the idea that we'll have at least human-brain equivalent computers one day - because the human brain itself is a proof of concept for them. If evolution, a mindless process of local optimization, could create a nanoscale computer, then so can we, given the right tools.
Strawman
Not really. It demonstrates that a Singularity is possible, someday - we just can't say for sure when.
Note that I'm not trying to claim that there will be a Singularity in 2010, 2050 or even 2100, just that it will happen at some point. That point could be 35269, for all we know. But you can't really predict scientific progress - fusion power has been just around the corner for the last 50 years, while in 1940, I think (could've been 1950), Clarke wrote of a moon landing in 2000 and was criticized for being too optimistic. The Internet grew from seemingly out of nowhere to its present size in 20 years or so.
All I'm saying is that a Singularity will take place at some point, and just as we can't say that it will happen within our lifetimes, there's no particular reason to consider it more likely that it won't happen, either.
Furthermore, it's questionable whether we even need computers that are fully human-equivalent. After all, evolution has probably riddled us with loads of unnecessary crap.
Amazingly, if you want to build AIs that are faster than humans, as you claim is possible, you need to be at least as fast as humans.
Depends on how good and optimized your algorithms are. I recall that my old 200-megahertz PC was occasionally slow in emulating SNES ROMs, yet I would not claim that you need a 200 MHz machine to run them, or more sophisticated games, natively (the SNES itself ran at around 2-4 MHz).
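(To illustrate the overhead - a toy interpreter sketch, not a real SNES core; the opcodes are invented. A naive interpreter burns dozens of host operations per emulated instruction, which is roughly why a ~3 MHz guest wanted a ~200 MHz host, while smarter techniques such as dynamic recompilation shrink that multiplier dramatically.)

```python
# Toy fetch-decode-execute loop showing why naive emulation is slow:
# every guest instruction costs many host operations. Opcodes invented.
memory = [0x01, 0x05, 0x01, 0x03, 0x02]  # tiny "ROM": ADD 5, ADD 3, HALT
acc, pc, running = 0, 0, True

while running:
    opcode = memory[pc]        # fetch   -- each step here is several host
    if opcode == 0x01:         # decode     operations (loads, compares,
        acc += memory[pc + 1]  # execute    branches, bookkeeping), so one
        pc += 2                #            guest cycle costs tens of host cycles
    elif opcode == 0x02:       # HALT
        running = False
    else:
        raise ValueError(f"unknown opcode {opcode:#x}")

print(acc)  # 8
```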
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems
"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning