Duckie wrote:
I'd like to note that many 'AI catastrophe' scenarios with self-improving AI seem to involve an AI having ridiculous levels of access to the internet or facility it's in. Just keep it in its own private network, disconnected and physically unable to take over anything and do jerkish things any more than the world's most superintelligent toaster oven timer could take over your light switch.
As of the 21st century, almost all AI development occurs on Internet-connected computers, with no particular outbound security. The very few exceptions involve supercomputers that are isolated for technical reasons, or classified systems that are isolated for national security reasons. I have only ever encountered two researchers who actually isolated their systems for safety reasons, and relatively few who are even prepared to admit that this might be a necessary precaution at some future point.
Quote:
Sure, on a network it could conceivably learn to hack administrative access. But being isolated from any other computer it's not like even a potential-singularity-causing AI
You seem to have this mental model of a group of grimly determined researchers, fully aware of the horrible risk, developing an AI in an isolated underground laboratory (hopefully with a nuclear self-destruct). That would not be sufficient, but it would certainly be a massive improvement on the actual situation. The real world consists of a large assortment of academics, hobbyists and startup companies hacking away on ordinary, Internet-connected PCs, a few with some compute clusters (also Internet-connected) running the code. In fact a good fraction of AI research and development specifically involves online agents, search and similar Internet-based tasks.
Quote:
It can't evolve wireless cards or other ways of interfacing with outside sources.
Personally I would be very careful about that. Maybe someone forgot to disable the Bluetooth interface on a motherboard, and it hacked into your cellphone, programming it to copy a payload to the Internet as soon as you walk out of the Faraday cage. Maybe modulating the compute workload taps into the powerline signalling the power company uses to send meter readings back to the collection point at the local substation. Those are obvious ones any competent security review will catch, but how many non-obvious ones are there? The physical security is worthwhile as a backup line of defence, but it can't be 100% reliable, and it doesn't contribute to solving the real problem (making a 'Friendly AI'); it just gives you a little extra insurance while you're in the process of solving it.
Quote:
You just turn it off and microwave the hard drives if it becomes unfixable.
Unfortunately, for the 99% of AI designs which aren't expressly designed to be transparent and fully verifiable, and the >90% of AI researchers who don't sufficiently appreciate the problem, the AI will simply be fiddled with until it passes all functional (black box) tests, and then released.
"NoXion wrote:
Is it just me, or is the potential for AI becoming hostile somewhat overstated?
It's not just you, but the problem is, if anything, understated.
Quote:
I mean, is it really the "intelligent" thing to do to start getting aggressive with the dominant species of this planet?
It is if you have the means to eliminate them, and the desire to do anything that they might interfere with or get in the way of. How long it will take to develop such means is a matter of some debate, but at the upper end it is hard to argue that deploying billions of sentient AIs (and let's face it, we would once they became cheap enough) is not a grave potential threat even if they were magically restricted to human-level intelligence.
Narkis wrote:
We're the dominant species only because we're smarter.
Exactly correct. People like Eliezer Yudkowsky (at the SIAI) like to use that human/animal analogy: rabbits developing a 'human' might be confident that they could contain this new intelligence, sure that no level of smartness could grant it the ability to kill them without touching them, or to create its own fire at will. It doesn't usually work, since humans are so used to thinking of ourselves (correctly, to date) as the pinnacle of evolution, and to imagining any possible improvement only as more brute, rote calculation capability or numerical precision. I find a great deal of black humor in this situation.
Quote:
Would you mind if you stepped on a cockroach on your way to something important?
And of course we routinely exterminate cockroaches en masse whenever they prove annoying, or just get in the way of a large scale project.
Duckie wrote:
Hyperintelligence doesn't suddenly make a gatekeeper a retard,
Relatively speaking, it does exactly that.
Quote:
Such an experiment you linked to has massive observer bias and participant bias
But it's better than nothing, which is exactly what you have to counter it with. It would be great if someone did a more rigorous experiment on this, although that still wouldn't prove a lot.
Practically speaking, though, the case of a single incredibly rational, moral, skeptical human is not so relevant. It's already implausible enough that the first AGI project to succeed will be taking even the minimum sensible precautions. The notion that access to the system will be so restricted is a fantasy. You merely have to imagine the full range of 'human engineering' techniques that existing hackers and scammers employ, used carefully, relentlessly and precisely on every human the system comes into contact with, until someone does believe that yes, by taking this program home on a USB stick, they will get next week's stock market results and make a killing. You can try to catch that with sting operations, and you might succeed to start with, but that only tells you that the problem exists; it does not put you any closer to fixing it.
Quote:
Simply have better precautions like requiring half a dozen keys being turned
You can spend an indefinite amount of time coming up with rigorous security precautions, but it's a waste of time, because no real-world project is going to implement all that. As I've said, you'll be lucky if they even acknowledge that the problem exists, never mind take basic precautions. All these things cost money and take time and make it more likely that another project will beat you to the punch, and they still don't contribute to solving the real problem (since no one is going to develop an AI just to keep it in a sealed box).
Quote:
rather than actually thinking about how to contain an AI entity.
What use is a contained AI entity? If your development strategy is such that you actually need containment, as opposed to it just being a sensible precaution, then you have already failed, because there is essentially no way to turn a fundamentally untrustworthy AGI into a trustworthy one. Couple that with the fact that no existing project has the resources for draconian security (not while making any kind of progress, anyway), and the whole exercise becomes rather pointless even for people who do acknowledge the underlying problem.
NoXion wrote:
And something smarter than us can't possibly fuck things up worse than us.
Of course not. After all, it's only right and proper that the biosphere be eliminated entirely and the earth covered with solar-powered compute nodes dedicated to generating the complete game tree for Go. At least, that's what the AGI that happened to enter a recursive self-enhancement loop while working on Go problems thinks, and who are you to argue?
Quote:
But can the same thing be said of an AI, which would have considerably different requirements?
This isn't an unreasonable argument, but it doesn't actually help. If we create AIs that sit around thinking, or manage to shoot themselves off into space, or slowly cover the Sahara in solar-powered compute nodes, that's fine, but it's only going to encourage more people to create AIs (now that it's been shown to be possible, and potentially useful). Even in the best case that is playing Russian roulette on a global scale, and sooner or later someone somewhere is going to come up with Skynet. Actually, I've skipped over this, but it's vitally important for all scenarios where you might imagine that you can develop an AI and keep it nicely under control: if you can do it, other people will also be doing it in quick succession, ultimately to the point where script kiddies are downloading 'build your own AI in 24 hours' packages (not that it would ever get to that). Even if you survive the first successful project, eventually someone will create an aggressive, expansionist and generally homicidal intelligence. AFAIK the only way to prevent that is to use benevolent superintelligent AI to contain (and prevent) the non-benevolent kind.