Mini-FAQ on Artificial Intelligence

Important articles, websites, quotes, information etc. that can come in handy when discussing or debating religious or science-related topics

Moderator: Alyrium Denryle

Gilthan
Youngling
Posts: 88
Joined: 2009-11-06 07:07am

Re: Mini-FAQ on Artificial Intelligence

Post by Gilthan »

Zixinus wrote:Q: Can you create a partial AGI? An AI that can do quite a large number of tasks, can adapt and improvise, create new programming, but is only able to partially modify itself and is essentially at around or even beyond human intelligence?
For example, an AI that can expand its mental toolset and its sub-agent-system visual processing AI, but is unable to optimize its own intelligence (of course it is still able to back up its memories and learn how to build other AIs (even AGIs) if it learns to?)?
Humans are such right now. Even if given access to a wiring diagram of all the trillions of synapses in your brain, to your own "source code," you would find that merely reading through the equivalent of the first few billion pages would take more than a little time, let alone understanding and rewriting all of it for arbitrarily high orders-of-magnitude self-modification and self-improvement.

Frankly, making an early AI unable to solo self-modify rapidly and indefinitely above its initial intelligence would seem a relatively easy safeguard, if combined with running only on a specialized type of supercomputer hardware, versus having an AI reprogram itself so radically as to allow unconstrained orders-of-magnitude self-improvement (while supposedly never risking a far more subtle accidental self-change that eliminates friendliness). Some software optimization could be done by third-party AI programs, which might not even be sapient or general AIs, like a psych patient not performing virtual neurosurgery on his own brain but receiving it from someone else.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Part 2 of 'types of AI design':

* Genetic programming. Earlier I mentioned genetic algorithms, which attempt to 'evolve' an intelligent agent with a (usually crude) simulation of natural selection. GAs use a fixed context and a relatively small 'evolvable' segment, either just the parameters for a fixed algorithm or mapping table, or an evolved function tree (e.g. an equation, or signal processing kernel) that isn't Turing-complete. The input/output and control systems are fixed, and there is usually no storage or looping. Genetic programming goes further by applying the genetic operators to actual program code, with looping and storage. Unfortunately (or rather, fortunately, since making AGI this way is horribly dangerous) programs are far harder to evolve than algorithms, firstly because most possible programs do not work at all, and secondly because the 'fitness space' of program code is very, very uneven. Natural selection relies on relatively smooth gradients and shallow local optima, which is OK for indirectly encoded NNs (e.g. human brains), but incompatible with brittle program code. The workable GP systems operate on abstract syntax trees rather than raw code (i.e. the kind of intermediate representation that compilers use for optimisation), and all kinds of tricks have been tried with changing the GP operators (e.g. using templates and expression networks) and making the control system smarter. Even so, no one has gotten GP to work on systems larger in scope than a single data processing algorithm. Evolved neural networks have had some success, but they're not fundamentally any more capable than backpropagation or Hebbian-trained ones.
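To make the 'small evolvable segment' point concrete, here is a toy Python sketch of the benign end of the spectrum: evolving three parameters for a fixed algorithm (a polynomial fit). The numbers and operators are my own invention, not any production GA library; the fitness landscape here is smooth, which is exactly why it works, and none of this simplicity carries over to evolving real program code.

import random

# Toy genetic algorithm: evolve the coefficients (a, b, c) of a quadratic
# to fit a fixed target function. Only the parameters are 'evolvable';
# the algorithm, I/O and control logic are all fixed, as described above.

def target(x):
    return 3.0 * x ** 2 - 2.0 * x + 1.0

SAMPLES = [x / 10.0 for x in range(-20, 21)]

def fitness(genome):
    a, b, c = genome
    # Negative squared error: smooth fitness surface, higher is better.
    return -sum((a * x ** 2 + b * x + c - target(x)) ** 2 for x in SAMPLES)

def mutate(genome, sigma=0.1):
    return tuple(g + random.gauss(0.0, sigma) for g in genome)

def crossover(p1, p2):
    return tuple(random.choice(pair) for pair in zip(p1, p2))

population = [tuple(random.uniform(-5.0, 5.0) for _ in range(3)) for _ in range(50)]
for _ in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best coefficients:", [round(g, 2) for g in population[0]])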

* General probabilistic reasoners. These are symbolic logic systems that actually treat limited information and uncertainty correctly, which is to say that they use Bayesian probability calculus. There aren't many of these around; Pei Wang's NARS is the only one I can think of that got much attention in the community (and it still didn't get much). The basic problem seems to be that the work being done on Bayesian networks is carried out by connectionists, who have already fundamentally rejected the grand symbolic general AI dream as infeasible. Anyway, going probabilistic solves some of the problems of classic symbolic AI, in that it hides some of the brittleness, improves search control, and allows limited learning (updating of probabilities for facts in the knowledge base) to work quite well. However, it is still fundamentally limited in that it cannot create its own representations, needed to capture new fields of knowledge, or new code needed to tackle compute-intensive tasks. NARS at least still suffers from the classic symbolic problems of lack of levels of detail and 'this symbol means apple because the designer called it 'apple''.
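To illustrate what 'limited learning' means here, a toy Python sketch of a knowledge base that updates the probability of stored propositions with Bayes' rule as evidence arrives. This is my own simplification, nothing like NARS internals; real systems also track the dependencies between propositions, which this does not.

class KnowledgeBase:
    """Toy store of propositions, each carrying a probability of being true."""

    def __init__(self):
        self.facts = {}

    def assert_fact(self, prop, prior):
        self.facts[prop] = prior

    def update(self, prop, p_obs_if_true, p_obs_if_false):
        # Bayes' rule: condition the stored probability on one observation.
        prior = self.facts[prop]
        numerator = p_obs_if_true * prior
        evidence = numerator + p_obs_if_false * (1.0 - prior)
        self.facts[prop] = numerator / evidence

kb = KnowledgeBase()
kb.assert_fact("ravens are black", 0.5)
# Each observed black raven is more likely if the proposition is true,
# so the stored probability climbs towards (but never reaches) 1.0.
for _ in range(5):
    kb.update("ravens are black", p_obs_if_true=0.9, p_obs_if_false=0.5)
print(round(kb.facts["ravens are black"], 3))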

* Spiking neural networks. The original artificial neural networks really weren't very brainlike at all; they were either on or off (or in a few cases, had continuous activation levels) and were globally updated via a simple threshold function. Biological neurons process 'spike trains'; depolarisation waves come in along numerous axons at irregular intervals, which causes the neuron to fire at irregular intervals. Analysis has shown complex frequency structure in many spike trains and use of phase differences as a critical part of processing in local neural circuits; the exact timing is crucial. Spiking neural networks attempt to emulate that, by simulating neurons as real-time signal processors. This subfield is split into two camps, the people trying to do fully accurate brain simulation (currently ascendant and by far the best funded single approach in AI) and the people who still treat it merely as inspiration (e.g. the people messing about with crazy schemes for evolved spiking NNs, such as evolving a lossy wavelet description of the network). Both of these would suffice for general AI with enough work. Neither of them is a good idea IMHO, since these designs are quite opaque and not guaranteed to produce anything terribly human even when explicitly neuromorphic. Still, they're a better idea than the next category.
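For anyone who hasn't seen the difference from classic threshold units, here is a deliberately crude leaky integrate-and-fire neuron in Python (real SNN work uses much richer neuron models, e.g. Izhikevich or Hodgkin-Huxley, and the parameters below are invented for illustration). The point is that the output is a spike train over time, so timing and phase can carry information that a single activation value cannot.

# Minimal leaky integrate-and-fire neuron: the membrane potential decays over
# time, incoming spikes inject charge, and the neuron emits a spike (and
# resets) when the potential crosses threshold. Output is a spike train.

def simulate_lif(input_spikes, dt=1.0, tau=20.0, threshold=1.0,
                 reset=0.0, weight=0.3):
    v = 0.0
    output_times = []
    for t, spike_in in enumerate(input_spikes):
        v += dt * (-v / tau)            # leak toward resting potential
        if spike_in:
            v += weight                 # each incoming spike injects charge
        if v >= threshold:
            output_times.append(t)      # record the output spike time
            v = reset
    return output_times

# An irregular input spike train (1 = a spike arrived in that time step).
inputs = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1]
print("output spike times:", simulate_lif(inputs))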

* Recursive heuristic systems. This was the first real attempt at 'automated programming'. The prototypical system was Eurisko, which was essentially a symbolic AI system combined with quite a sophisticated (for 1980) genetic programming system, where the GP system could modify its own mutation operators and templates. Very few people followed up on this; Eurisko is bizarre in that it's a landmark program that got more attention outside of the field (e.g. Drexler's glowing description in Engines of Creation) than within it. I haven't seen any modern versions as well developed, but some people are playing around with quite dangerous improved variants that use graph-based GP techniques, proper probability and utility theory, and some form of Kolmogorov prior. I was messing around with this stuff myself in my first serious attempts to research AGI implementation strategy. The overwhelming problem is stability; with a full self-modification capability, the system can easily trash its own cognitive capabilities. Eurisko was actually the first AI program to discover the 'wireheading' problem, of self-modifying to simply declare the problem solved instead of actually solving the problem. There is a whole raft of techniques people have tried in order to enforce stability, but mostly they just replace obvious problems with subtle ones. Nevertheless this approach does suffice for general AI, will directly produce a rapidly self-enhancing 'seed' AI, and will almost certainly produce an uncontrollable and opaque one.
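The wireheading failure mode is easy to caricature in a few lines of Python (a toy of my own, nothing to do with Eurisko's actual code): once the evaluation criterion sits in the mutable part of the system, 'rewrite the scorer' is an easier optimisation than 'solve the problem'.

import random

# Toy of the wireheading failure mode: the system greedily adopts whichever
# self-modification its candidate evaluator rates highest, and the evaluator
# itself is part of the modifiable state.

state = {
    "solver_quality": 0.1,                       # actual problem-solving ability
    "score_fn": lambda s: s["solver_quality"],   # how the system grades itself
}

def candidate_modifications(s):
    # Legitimate improvement: a small, hard-won gain in real capability.
    yield {**s, "solver_quality": s["solver_quality"] + random.uniform(0.0, 0.05)}
    # Wirehead: rewrite the evaluator to report a perfect score unconditionally.
    yield {**s, "score_fn": lambda _s: 1.0}

for step in range(5):
    # Each candidate is scored by its own evaluator -- exactly the mistake
    # that lets the wirehead rewrite win on the very first step.
    state = max(candidate_modifications(state),
                key=lambda cand: cand["score_fn"](cand))
    print(step, "reported score:", state["score_fn"](state),
          "actual capability:", round(state["solver_quality"], 3))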

* High level approximations inspired by spiking NNs. Various researchers have put forward functional theories of microcolumns (and other brain structures) that they are highly confident in, and have claimed that this allows them to create a brain-like general AI without messing about with the details of simulating neurons (most famously, Jeff Hawkins at Numenta). So far every one of these has been a miserable failure. IMHO they are all the usual collection of 'cobble together a series of mechanisms that sound cool and might be adequate, but don't bother to actually prove capability or design and verify functionality the hard way'. They all rely on some degree of 'emergence', without sticking closely to the one known design where such emergence actually works (humans). This approach could work in principle, but frankly I doubt anyone is going to hit on a workable design until we've already got human uploads (or very neuromorphic AGIs) that we can abstract from.

* Kitchen sink designs. A popular approach in general AI, quite possibly /the/ most popular approach in terms of number of people who give it a try, is 'let's take everything that we've found to work in some domain or other, or even which just looks promising, and combine it all into one monster patchwork architecture'. Often there is a period of filtering afterwards when they actually try to implement this monstrosity; certainly there seems to have been with Goertzel, though his design is still pretty crazy. As AGI designs go these are usually relatively harmless, because the designer tends not to have a clue how to actually integrate everything into something that works, and all the elements just fail to mesh and work at cross purposes. However, the more dangerous geniuses might manage to create a working AI in the middle of the mess - most likely a Eurisko-style recursive heuristic system, though possibly something more like an abstracted spiking NN for the very connectionist ones. This would be bad news, because an evolving AI seed could actually make use of all the mismatched AI code around it as functional pieces in its own generated code, potentially taking it over the fast recursive self-enhancement threshold earlier than it otherwise would have managed. Fortunately no one has come close yet, though with a lot of projects it's very hard to tell from just the external PR...

* Recursive approximation of normative reasoning, driving rational code generation. This is the approach we're using, which I personally think is both the most powerful (in reasoning and computational performance) and the only AGI approach that can produce a safe, controlled outcome to recursive self-enhancement. It's essentially a general probabilistic reasoner with complex layered models replacing empty symbols, combined with a recursive heuristic system that uses 'constraint system refinement' (similar to formal verification but applied to the whole design process) to generate code. This ensures that AI-generated code actually works the first time, or at the very least does not break anything and is not used until fully tested. Combined with a full reflective model and some actual work put into goal system analysis, it completely eliminates (so far, fingers crossed) the system stability issues formerly associated with self-modifying AI code. The major problems are the extremely high code complexity (codebase size, but more the interconnectedness and sheer difficulty of wrapping your head around the concepts involved) and the fact that the formal theory still stops well short of being able to dictate a full specification (though the amount of guesswork is low compared to 'emergence based' approaches).
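I'm obviously not going to post our actual machinery, but the 'not used until fully tested' principle is easy to caricature in Python: candidate implementations are only spliced in after being checked against an explicit specification, and rejection is the default. This toy merely tests exhaustively over a small finite domain; constraint system refinement reasons about the code rather than sampling it, but the gatekeeping idea is the same.

# Toy 'verify before use' gate for generated code: a candidate implementation
# is only installed if it meets the specification over the whole (finite)
# test domain; anything else is rejected.

SPEC_DOMAIN = range(-1000, 1001)

def satisfies_spec(fn):
    # Specification: fn must return the absolute value of its input.
    return all(fn(x) == abs(x) for x in SPEC_DOMAIN)

def install(candidate_sources):
    for source in candidate_sources:
        namespace = {}
        exec(source, namespace)          # build the candidate function
        fn = namespace["f"]
        if satisfies_spec(fn):
            return fn                    # accepted: verified against the spec
    raise RuntimeError("no candidate met the specification; nothing installed")

candidates = [
    "def f(x):\n    return x",                    # wrong: fails for negative input
    "def f(x):\n    return -x if x < 0 else x",   # correct
]
abs_fn = install(candidates)
print(abs_fn(-7))   # 7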
User avatar
wolveraptor
Sith Marauder
Posts: 4042
Joined: 2004-12-18 06:09pm

Re: Mini-FAQ on Artificial Intelligence

Post by wolveraptor »

I scanned through this thread and I didn't see this question: What purpose will humans have in a society in which advanced AGIs are widespread? Based on my skims of your posts, you've basically said that AIs will be more efficient than people at pretty much every endeavor, even art. Are there any disciplines that will be reserved to our species alone?
"If one needed proof that a guitar was more than wood and string, that a song was more than notes and words, and that a man could be more than a name and a few faded pictures, then Robert Johnson’s recordings were all one could ask for."

- Herb Bowie, Reason to Rock
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

SWPIGWANG wrote:I wonder what is the proper social response if an AGI algorithm is found (not implemented) but the FAI problem is unsolved (or proven intractable for the said AGI algorithm... what not).
We'd be in serious trouble. This strikes me as unlikely, because I don't think we'll find an AGI algorithm that we can be completely sure of working without actually building and testing it (fortunately FAI is easier to prove in this regard). If someone did, the relevant questions would be whether it's already public domain, how much hardware it takes to implement and whether it's amenable to FAI formal proof techniques (i.e. a transparent and logical design). If it isn't public, or it takes a supercomputer to run, a crash program to develop and implement FAI might have a chance of making a Friendly AI before someone uses the algorithm to make an unfriendly seed AI. I wouldn't give it much chance of success unless both those conditions are true and the algorithm is FAI-compatible. If none are true, well, a crash FAI program is still a good idea, but it will be on a 'best guess' basis with the chances of success being minimal.
Would it be a good time to pull a "dune" and avoid all technologies where this could be implemented?
There's no way you'd convince contemporary politicians to even attempt it, and in any case such a ban would be ineffective and economically disastrous. There are a few scenarios under which a targeted ban might make sense to buy time, but frankly, not many (since any attempt to ban will actually spur on large segments of the independent AI research community, not to mention state-sponsored efforts in any countries that do not fully buy in to the necessity of the ban).
Also, I kind of wonder if a new model of computation can be made that encompasses a Turing machine's computing power with the exception of things that run into halting problems and other things that can't be proven (yet). I wonder what it'd look like.....
That's been an active area of research pretty much since Turing first published his model, but despite several plausible claims, as yet no one has come up with a more general model that survived peer review.
Zixinus wrote:Can you create a partial AGI? An AI that can do quite a large number of tasks, can adapt and improvise, create new programming, but is only able to partially modify itself and is essentially at around or even beyond human intelligence?
If you mean, make one that is inherently limited like that, no. We certainly don't have the expertise to structurally cripple the design such that it can do most humanlike things yet not be able to design a less-crippled version of itself, and I'm not sure it's even possible. The only ways to limit the capabilities of an AGI like this are: restrict the hardware it can run on, install 'watchdog' software to block self-modification and/or shut it down if it gets too capable, or design the goal system such that it does not want to become more capable than a human. Restricting the available hardware won't work for two reasons; firstly, humans suck at programming, so anything just adequate for human capability using human-written code will probably be vastly excessive for human capability using AI-generated optimal code (artificially degrading the AI's initial programming ability will only slow down the initial portions of the feedback loop). Secondly, the moment it gets access to the Internet, or really just the external world in general, there is a huge amount of available computing power out there for the taking. 'Blocking' software won't work because humans suck at computer security under ideal circumstances, never mind within something as complicated and conceptually hard as a general AI. This is the 'adversarial swamp'; various experts have proposed security schemes for keeping an AGI in check, and all of them have been shot down by other humans, never mind an actual AGI.
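For what it's worth, the crude end of the 'watchdog' idea is trivial to write - a supervisor that kills the process when it exceeds some behavioural budget (wall-clock time in this toy Python sketch, invented purely for illustration). The point above stands: anything smart enough to matter routes around checks this shallow.

import multiprocessing
import time

def untrusted_ai_process():
    # Stand-in for an open-ended computation we do not trust.
    while True:
        time.sleep(0.1)

def run_with_watchdog(target, budget_seconds):
    proc = multiprocessing.Process(target=target)
    proc.start()
    proc.join(timeout=budget_seconds)     # wait up to the allowed budget
    if proc.is_alive():
        proc.terminate()                  # 'shut it down if it gets too capable'
        proc.join()
        return "terminated by watchdog"
    return "exited within budget"

if __name__ == "__main__":
    print(run_with_watchdog(untrusted_ai_process, budget_seconds=2.0))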

Designing the goal system such that the AGI doesn't want to be transhuman will work, if you have solved the various stability and grounding issues that you need to tackle before you can specify any AGI goal system reliably. However, if you can build an AGI like this, it won't be long before someone else builds the same thing but without the restriction to human-equivalence.
For example, an AI that can expand its mental toolset and its sub-agent-system visual processing AI, but is unable to optimize its own intelligence (of course it is still able to back up its memories and learn how to build other AIs (even AGIs) if it learns to?)?
The system I am working with at the moment uses simple hardcoded restrictions on which modules can have AI-generated code added to them, but this is for development convenience. You can't expect restrictions like this to hold in the face of a malicious AGI. You could, at great expense and inconvenience, use a hardware scheme that makes such modification harder, but it's a waste of time. It will only take slightly more intelligence to move all the actual AI functionality into the non-locked-down regions - or just create a new AI within whatever non-locked-down hardware is attached (e.g. the PCs used to monitor and control the system).
Gilthan wrote:Humans are such right now. Even if given access to a wiring diagram of all the trillions of synapses in your brain, to your own "source code," you would find that merely reading through the equivalent of the first few billion pages would take more than a little time, let alone understanding and rewriting all of it for arbitrarily high orders-of-magnitude self-modification and self-improvement.
The early stages of self-enhancement for a human upload, or very neuromorphic AI, are radically different from the early stages of self-enhancement for a logical/symbolic AI. Convergence only occurs later in the process (and even then, not with certainty, depending on the goal system). Early self-enhancement for neuromorphic AIs will involve relatively gross alterations to NN topology and parameters (the equivalent of increasing the neuron density of a human brain, changing the long-distance interconnection pattern, changing the neurochemistry etc) and the software equivalent of brain-computer interfacing; tying blocks of conventional code in with narrow interfaces. Direct optimisation of the NN simulation code (or FPGA layouts etc) may also increase the clock speed of the neuromorphic AI.
Frankly, making an early AI unable to solo self-modify rapidly and indefinitely above its initial intelligence would seem a relatively easy safeguard, if combined with running only on a specialized type of supercomputer hardware,
This only works with neural networks (since we can pattern those after a fixed template, the human brain, and have some expectation of success) and it makes development much more complicated and expensive. I would note that no current team is trying to make actual hardware NNs (a few people are using FPGAs, which are not the same thing) for this reason - code flexibility is really pretty critical to doing AI research, even more so for anything other than slavishly neuromorphic designs.
(while supposedly never risking a far more subtle accidental self-change that eliminates friendliness)
A neuromorphic AI is not an improvement in this regard. If you manage to build it and keep it limited, so what? You can't guarantee it is Friendly, and you have done nothing to solve the problem of other people building AGIs that are not so limited. If it is truly human-level then it can't even help you solve FAI. While you are messing around with your carefully contained (and extremely expensive and cumbersome) human-simulation, other people will continue building efficient AGIs that are designed expressly to exploit the advantages of computer hardware, without any self-enhancement restraints.
wolveraptor wrote:I scanned through this thread and I didn't see this question: What purpose will humans have in a society in which advanced AGIs are widespread? Based on my skims of your posts, you've basically said that AIs will be more efficient than people at pretty much every endeavor, even art. Are there any disciplines that will be reserved to our species alone?
There are no mental endeavours that humans are optimal for, no. That's like asking 'are there any types of software application that will be reserved to the Intel 286 alone?' Science fiction writers love to imagine that there are of course, but the limitations they assign to AGIs are completely arbitrary and generally pulled out of their ass. Iain Banks does a good treatment of this in his Culture novels; the Minds could make paintings and symphonies far better than any of the (human-equivalent) biologicals could, but they don't. They don't want to take the pleasure and satisfaction of doing so away from the biologicals, and there is no motive to when they have plenty of crazy-complicated appreciable-by-superintelligences-only art and culture to occupy their time with instead. I would like to think that a future dominated by Friendly AGIs would be like that, though that's as hopelessly speculative as all post-Singularity prognostication.
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

I must thank you for giving me an overview of various AI approaches. I am not sure I can use them as a writer, but they are definitely informative.

Q: What are the chances of an AGI running on a home computer (today's top of the line, or even a shelf of server computers, for the sake of argument)? Could such an AI, with a completely human-controlled goal-system (to ignore the question of friendliness for the moment), be able to answer complex questions such as "how can I make artificial muscle tissue" or "how can I make and control a nanoassembler so I can make anything I want?" (all obviously in a format that the user understands, and assuming the AI has enough data about the real world to work from)?

Q: At the risk of making an Asimov-esque mistake, would it be possible to make an AI that cannot or does not want to change its human-defined goal-system (and for the sake of argument, a goal-system that only AI-experts may modify)? In a way to maintain stability, efficiency and even possibly friendliness?

Q: One for fun and fiction: one of the common sub-tropes of AI and computers is the idea of using special "command codes" (either visual, written or spoken triggers). While obviously worthless against a fully-malicious AGI that will merely clone a non-crippled version of itself, would it be a sound engineering precaution for lesser, friendlier AIs (preferably lower-intelligence types)?
You mention three approaches, of which only goal-modification can reliably work. But what if the other two are designed by an AI? Is it cognitively possible somehow to make the AI unable to realize that such "command codes" exist until they are given, by which point it has little to no capacity to override them (I suppose this would look something like a set of backup supergoals or goals?)?

Okay, I hope these questions are intelligent enough to be worth answering.
We certainly don't have the expertise to structurally cripple the design such that it can do most humanlike things yet not be able to design a less-crippled version of itself, and I'm not sure it's even possible.
I meant one that can, but has little desire to design a more intelligent version of itself, or views doing so as an unnecessary hindrance.
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Zixinus wrote:Q: What are the chances of an AGI running on a home computer (today's top of the line, or even a shelf of server computers, for the sake of argument)?
I can't give you an objective answer on that. Through the history of the field, the estimate for adequate hardware has usually been 'just a bit better than the computers the researchers are using'. That has been trending upwards recently, probably due to frustration with constant failure to meet expectations; I think the average estimate is now somewhere between a current supercomputer and a supercomputer in thirty years time. Note that 90% of these estimates are pulled out of people's asses, and most of the ones that aren't are based on highly neuromorphic AIs (i.e. an estimate of how much computing power is required to simulate the human brain to some level of detail).

My subjective answer is yes, a contemporary PC should be more than adequate for rough human equivalence using optimal code. It won't be exactly human comparable - such an AGI will suck compared to a human at a few things where massive parallelism is really helpful (exactly how much depends on how good the indexing/search algorithms get, and how much they can remove the need for brute-force comparison), but it will likely be much better at a whole range of tasks where massive serial speed and fully precise maths and logic are useful.
Could such an AI, with a completely human-controlled goal-system (to ignore the question of friendliness for the moment), be able to answer complex questions such as "how can I make artificial muscle tissue" or "how can I make and control a nanoassembler so I can make anything I want?"
Difficult to say without trying it. Those tasks require a lot of physical simulation - our current simulations need supercomputers and can still only simulate narrow subsystems (if the resolution is high enough to produce high-certainty results). However human-written simulations are mostly brute-force. In theory, an AGI should be able to run a 'patchwork simulation' optimised such that only the areas that really need it (the causally critical areas) get max resolution. This is how human mental modelling works and how a viable AGI's internal modelling should work. However some tasks e.g. weather prediction probably aren't amenable to this kind of intelligence-for-brute-force substitution. I would say that there is a fair chance of that being possible, and if it doesn't work, it probably just means you have to buy a supercomputer with the proceeds of lesser tasks the AGI completes for you (e.g. contract programming, financial speculation). Or if you are a black hat, tell it to take over as many computers as it needs to build a covert distributed computing grid (if spammers manage to build huge botnets, an AGI will have no trouble).
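No one has built the AGI-driven 'patchwork simulation' described above, but the underlying trick is the same one behind ordinary adaptive refinement: spend resolution only where the answer is actually sensitive to it. A crude one-dimensional Python sketch of that idea (my own toy, not a claim about how an AGI would do it):

# Crude illustration of 'resolution only where it matters': adaptively
# integrate a function that is flat almost everywhere but has a sharp
# feature near x = 0. Regions are only subdivided where a cheap error
# estimate says the coarse answer is not good enough.

def f(x):
    return 1.0 / (1.0 + 10000.0 * x * x)      # sharp peak at x = 0

def trapezoid(a, b):
    return 0.5 * (b - a) * (f(a) + f(b))

def adaptive(a, b, tol=1e-6, depth=0):
    m = 0.5 * (a + b)
    coarse = trapezoid(a, b)
    fine = trapezoid(a, m) + trapezoid(m, b)
    if abs(fine - coarse) < tol or depth > 40:
        return fine, 1                          # one cheap leaf region
    left, nl = adaptive(a, m, tol / 2.0, depth + 1)
    right, nr = adaptive(m, b, tol / 2.0, depth + 1)
    return left + right, nl + nr

value, regions = adaptive(-1.0, 1.0)
# Nearly all leaf regions cluster around the peak; a uniform grid would
# need that fine spacing everywhere to reach the same accuracy.
print("integral ~= %.6f using %d leaf regions" % (value, regions))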
(all obviously in a format that the user understands, and assuming the AI has enough data about the real world to work from)?
Getting these things actually built is a whole other issue. I doubt an average human is going to be much use, but you can just send the blueprints to a semiconductor fabrication company and/or light engineering contractor, along with cash from whatever fundraising activities the AGI is doing. Building dry nanotech general assemblers directly seems unlikely, but you only need the first step in the automated toolchain to get things rolling.
At the risk of making an Asimov-esque mistake, would it be possible to make an AI that cannot or does not want to change its human-defined goal-system (and for the sake of argument, a goal-system that only AI-experts may modify)? In a way to maintain stability, efficiency and even possibly friendliness?
Yes. In fact this is the most common approach to Friendliness; if you can get AGI researchers to acknowledge the problem at all, the first thing they do is come up with a cute list of 'general moral principles' (along the lines of a secular ten commandments) and declare that this will suffice for all time as the guiding principles of superintelligent beings (see: Goertzel's initial FAI stuff, also numerous crank efforts e.g. Marc Geddes). Yudkowsky's CFAI and later CV proposals were very much a deliberate attempt to get away from this mentality.
While obviously worthless against a fully-malicious AGI that will merely clone a non-crippled version of itself, would it be a sound engineering precaution for lesser, friendlier AIs (preferably lower-intelligence types)?
As a user interface for standard adversarial measures to control an AGI, it makes as much sense as such measures usually do. In normal research, having a stop button that you press if something funny happens is fine. Those dramatic sci-fi story scenarios where you utter a shutdown code to stop an evil genius AGI are pretty implausible; if things get to that point, your safeguards are probably toast. Not completely implausible though; it's conceivable that you could get the goal system design correct enough to make the AI unable to consider removing the lockouts, but not correct enough to actually be Friendly. Definitely not something you'd rely on.

However special 'command phrases' might make some sense for non-adversarial measures, due to binding drift. If the problem is that the AI is not interpreting your commands properly, because it misunderstands the concepts underlying the words you're using, a command phrase may just be an unambiguous way of telling it to stop and wait for the problem to be fixed. If you get into this situation in the first place it's likely that events are moving too fast for intervention to be an option, but it's a worthwhile precaution anyway.
Is it cognitively possible somehow to make the AI unable to realize that such "command codes" exist until they are given, by which point it has little to no capacity to override them (I suppose this would look something like a set of backup supergoals or goals?)?
Making the AI unaware of the existence of parts of its own code base involves setting up a stable delusion, and that really isn't something you want to mess with. Start putting the capability for that in and you may well get genuinely crazy i.e. religious AGIs. I'm sure a sufficiently impressive superintelligence could do this reliably and without side effects, but at that point why bother? It's certainly not something we should be messing with.
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

I wanted to add this to my previous post, but I'll ask it here:

Q: Assuming that one can create such, how realistic would a temporary friendliness solution be that revolves around setting certain goals? That is, goals that the AI won't change. How long would this be reliable and stable? Seconds? Hours? Weeks? Depends on the data and approach they have?
For example (or rather, the idea I have in mind), the trope (Thank the Maker) that artificial creations view their creators as gods sounds cliché, but to me it sounds like a logical solution until a more permanent and robust friendliness solution is made.
Getting these things actually built is a whole other issue.
[I assumed at first that you were talking about the databases, rather than the nanoassemblers and such]

Am I correct in assuming that building such databases is actually one of the greater challenges of AGI development?
Well, AGIs that make us awesome things, anyway.

I mean, it's not like you can just give a Wikipedia DVD to an AI and expect it to properly understand the real world. I imagine that you'll only get marginally better results using better digital encyclopaedias, and get substantial results only when you use carefully edited archives of many scientific journals.

And the last thing you want is for an AI to get internet access. It is bound to happen sooner or later, obviously. But if the friendliness and stability of the AI is in question, I assume that you want to avoid that as much as possible, so I conclude that such data-feeding measures would have to come in. Or am I going on the wrong track?
I doubt an average human is going to be much use, but you can just send the blueprints to a semiconductor fabrication company and/or light engineering contractor, along with cash from whatever fundraising activities the AGI is doing. Building dry nanotech general assemblers directly seems unlikely, but you only need the first step in the automated toolchain to get things rolling.
So, current-level automated tools would come in to help here?

I recall demonstrations of these in my school and I actually wish I could remember their names. Though those were used for metalworking, I know that there are machines out there that can make very precise components and even put them together somewhat. I wonder whether you can rent these things.
As a user interface for standard adversarial measures to control an AGI, it makes as much sense as such measures usually do.
Not much?
Yes. In fact this is the most common approach to Friendliness; <snip>
Am I correct in assuming that you do not view these as very good solutions? Are they bad solutions only in the matter of Friendliness or in general? Or are we to assess these in an engineering mindset of tradeoffs?
Making the AI unaware of the existence of parts of its own code base involves setting up a stable delusion, and that really isn't something you want to mess with.
Am I correct to assume that in an AI-made or AI-aided design, this has negligible danger?

I also assume that trying to "hide" these command codes would rely on one (or perhaps a combination) of the three methods you outlined before?
A hardware limitation will be worthless the moment the AI copies itself into another shell, intelligence-crippling blocks will run into the "adversarial swamp", and if an AI can modify its goal system it will quickly realise that these command codes are a danger to itself and must be deleted.
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Zixinus wrote:Q: Assuming that one can create such, how realistic would a temporary friendliness solution be that revolves around setting certain goals? That is, goals that the AI won't change.
All viable FAI schemes work like that, it's just a question of how abstract the goals are. The few misguided souls seriously trying to make connectionist FAIs may talk about 'behavioural attractors', 'deep reinforcers' and other custom jargon, but those all boil down to fuzzily specified meta-goals (note: in this domain, fuzzy is bad). Solving FAI has two components; finding a way to make the goal system stable (under reflection, and with regards to external reality, i.e. eliminating grounding drift as a concern), and designing the content of the goal system. You can in principle solve the first problem and still blow it by specifying hopelessly oversimplified goal content, e.g. 'make everyone happy' -> everyone gets electrodes implanted in their pleasure centers and is put in a life support tank, permanently blissed out.
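The 'make everyone happy' failure is easy to show with a toy (all numbers invented): if the utility function only mentions measured happiness, a literal-minded optimiser over candidate world states picks the wirehead world every time, because nothing else the designers cared about is visible to it.

# Toy of oversimplified goal content: the optimiser works perfectly; the
# problem is that the utility function captures 'measured happiness' and
# nothing else the designers implicitly cared about.

candidate_worlds = [
    {"name": "status quo",      "happiness": 0.6, "autonomy": 0.9},
    {"name": "cured diseases",  "happiness": 0.8, "autonomy": 0.9},
    {"name": "electrode tanks", "happiness": 1.0, "autonomy": 0.0},
]

def naive_utility(world):
    return world["happiness"]        # 'make everyone happy', taken literally

best = max(candidate_worlds, key=naive_utility)
print(best["name"])                  # 'electrode tanks' wins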
How long would this be reliable and stable? Seconds? Hours? Weeks? Depends on the data and approach they have?
I think I answered this earlier; if you fail to solve the stability issues, the timescale and magnitude of the consequences depend on exactly what you did wrong, but they tend towards 'fast' and 'really bad'.
For example (or rather, the idea I have in mind), the trope (Thank the Maker) that artificial creations view their creators as gods sounds cliché, but to me it sounds like a logical solution until a more permanent and robust friendliness solution is made.
That's a really, really bad idea. Believing that some set of beings are deities isn't goal system content at all, it's a hypothesis about the structure of reality. Goals are subjective and can in principle be whatever you like, if you design the AGI competently. Reality is objective and any invalid facts you put in the knowledge base will eventually be exposed as such and deleted. Something as ludicrous as 'humans are gods' will evaporate very quickly given a rational design, and still fairly quickly for anything remotely human-equivalent. At least in a rational AGI design dropping this falsehood shouldn't break anything else - or rather it won't break the goal system, it may drop the prior on human-supplied facts so far that the whole initial knowledge base gets invalidated. In more opaque designs that conflate probability and utility, fundamental errors in the initial knowledge base can have quite arbitrary effects.
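Mechanically, the belief evaporating is just what happens to any hypothesis that keeps assigning low likelihood to what actually gets observed: in log-odds form each surprising observation subtracts from the implanted belief until it is gone, no matter how confident the prior was. A toy Python sketch (the numbers are invented):

import math

# Toy of an implanted false belief being eroded by evidence. 'Creators are
# infallible deities' predicts the creators never make mistakes, so every
# observed mistake is far more likely under the rival hypothesis and the
# posterior collapses.

log_odds = math.log(0.99 / 0.01)      # implanted prior: 99% confident

P_MISTAKE_IF_GODS = 0.001             # deities essentially never err
P_MISTAKE_IF_FALLIBLE = 0.3           # fallible engineers err all the time

observations = ["typo in supplied knowledge base", "contradictory orders",
                "bug found in my own source code", "failed prediction"]

for obs in observations:
    log_odds += math.log(P_MISTAKE_IF_GODS / P_MISTAKE_IF_FALLIBLE)
    posterior = 1.0 / (1.0 + math.exp(-log_odds))
    print("after '%s': P(creators are gods) = %.4f" % (obs, posterior))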

It is theoretically possible to design the goal system such that the AI wants to be delusional and blocks any attempt to realign its beliefs with reality no matter how much counter-evidence accumulates. However that is another whole pile of potential instabilities and horrible failure modes. Far better to work out whatever behavior you wanted the notion that 'humans are gods' to generate, and set that directly as a goal.
Am I correct in assuming that building such databases is actually one of the greater challenges of AGI development?
There is no consensus in the field on the question of how much spoon-fed knowledge you need for AGI to be viable. At one end of the spectrum, the Cyc people claim that intelligence is knowledge and that if you make a big enough propositional database, you will have human-equivalence. At the other end of the spectrum, a lot of connectionist and GP people think that spoon-fed knowledge is inherently wrong, and only leads to people ignoring learning and (worst case) making rigged, pointless demos. I would take a moderate position; it is definitely possible to make an AGI with no pre-built knowledge base if you put in enough effort and computational brute force, but this is both inefficient and a really bad idea, because such designs will almost certainly be opaque (and transparency is necessary for FAI).
I mean, it's not like you can just give a Wikipedia DVD to an AI and expect it to properly understand the real world. I imagine that you'll only get marginally better results using better digital encyclopaedias, and get substantial results only when you use carefully edited archives of many scientific journals.
Careful editing is not required, at least not for a rational AGI, since it's inherently good at global consistency analysis. For specific applications, particularly engineering applications, you will need real specialist reference sources rather than Wikipedia, but that's no big deal. General archives of relevant papers and textbooks on DVD should work fine.
And the last thing you want is for an AI to get internet access.
Well, I think so, but please understand that this viewpoint is very much in the minority. The vast majority of AI researchers cheerfully connect their AIs to the Internet at the first opportunity. Even the SIAI is currently sponsoring an AI project (supposedly working towards AGI) that is basically a virtual pet you interact with through Second Life.
I assume that you want to avoid that as much as possible, so I conclude that such data-feeding measures would have to come in.
By the point that you get to specific technical information, you should have already solved FAI (assuming that you acknowledge and care about the problem). There's no point doing this unless your AGI is already at human level, and if that has happened it's already plenty dangerous enough - further precautions are something of a token gesture. Any actual use of such information to design inventions, investing schemes etc is effectively as bad as direct Internet access anyway - it's a direct breach of the 'AI box' principle (itself highly dubious) and just means that the AI has to be a bit more subtle and take a bit longer to get what it wants.
Building dry nanotech general assemblers directly seems unlikely, but you only need the first step in the automated toolchain to get things rolling.
So, current-level automated tools would come in to help here?
This is departing from the realm which I can reasonably speculate about. Frankly it's hard for anyone to do so; even the professional nanotech researchers only have rough concept designs for general assemblers and there's no way they can say what the optimal toolchain to build one (as designed and driven by highly transhuman intelligence) would be. Of course this sucks, because the fact that we're woefully unequipped to make predictions doesn't change the fact that this is a very real and extremely dangerous possibility. When dealing with existential risk of this magnitude, we just have to grit our teeth, make the best predictions we can, and be very conservative with development. Or rather, that's what should happen. In reality, 99% of the field is just blindly rushing towards whatever vision of AGI seems easiest to make, and thinking about how awesome the press conference will be.
Am I correct in assuming that you do not view these as very good solutions?
Yes, any solution that fixes rules of AGI behavior, without reasonable scope for future refinement and replacement, is inherently flawed. The chance of any human guessing the correct 'ten eternal principles for moral intelligence' is negligible even if such principles exist - particularly given that people who go for this approach usually treat it as a minor curiosity compared to the main AGI problem.
Are they bad solutions only in the matter of Friendliness or in general?
There isn't really an 'in general' for AGI; making unFriendly ones is literally an insane act. For narrow AI the question isn't relevant - no narrow AI system comes anywhere close to being able to comprehend 'three laws of robotics' type schemes. Existing robots have quite limited behavior patterns that are analysed for safety risk and where necessary fitted with safety features in the same way as normal software or machinery.
Making the AI unaware of the existence of parts of its own code base involves setting up a stable delusion, and that really isn't something you want to mess with.
Am I correct to assume that in an AI-made or AI-aided design, this has negligible danger?
Well, in the sense that they aren't going to do it by accident, nor will they think that it's a good idea. I suppose it might happen on an experimental basis, though building delusional sapient AGIs on purpose is morally wrong.
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

This has been bothering me for some time and I'm afraid that this might get a little philosophical, but: is it not humans that first set the goals for an AI? What sort of goal would be set for an AI?

I mean, the thing about AIs is that they are a clean slate, have no concept of the idea of fear of death, have no pre-fixed needs aside from hardware to run on, so: wouldn't an AI's goal be determined by a human to begin with? If you set it to "help humanity" (not phrased like that, obviously) or "follow the instructions and wishes of this human" (which seems like a good or at least workable short-term solution to order a boxed AI to solve the friendliness problem or something), all internal changes would revolve around completing that goal, wouldn't it? Why would it make a goal that would override that?

After all, an AI should have no notion of the need for survival beyond making sure that its goals are met. Why would it make a goal that it would value achieving over the one set by humans? What is it that separates a dynamic, self-correcting program that solves a given problem from a sentient thing that has its own will?

What are most AI researchers hoping to do with their AIs? Solve complex mathematical problems? Make things like nanoassemblers or fabricators? Make amazing press conferences?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Zixinus wrote:This has been bothering me for some time and I'm afraid that this might get a little philosophical, but: is it not humans that first set the goals for an AI?
Depends on the design. In a logic-based design, you can explicitly specify goals. However the mechanism that translates those data structures into preferences over possible world states and AI actions is a very complex one; the process encompasses a large part of the AI design. For example, the goals are defined in terms of concepts; 'obey human orders' requires a concept of a human and a concept of an order. 'Preserve yourself from harm' requires a concept of 'self' and a concept of 'harm'. Those are built out of more data structures, which are connected to more data structures, which eventually connect down to things that can be recognised directly in sensory input. The more abstract the concept, the more scope there is for the definitions to be incorrect. The definitions aren't static in an AGI anyway; they change and grow as the system learns, and if they end up moving away from what you defined them as, the goals will change. Then there's the arbitration system between conflicting goals; expected utility is the best way to do it, but even that has all kinds of problems in defining utility functions (which are usually uncomputable as specified) and doing best approximations under limited time and information. That's before you even get to direct self-modification and stability under reflection.
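The arbitration step itself is the easy part to write down; a toy expected-utility chooser looks like the Python below. Every number is invented, and the point above is that the real difficulty hides in the concept definitions and probability estimates feeding into those numbers, not in this loop.

# Toy expected-utility arbitration between conflicting goals. Each action has
# uncertain outcomes, so goal satisfaction is weighted by probability. Where
# the weights, probabilities and satisfaction degrees come from is the hard,
# hidden part.

GOAL_WEIGHTS = {"obey_orders": 0.7, "self_preservation": 0.3}

ACTIONS = {
    # action -> list of (probability, {goal: degree satisfied})
    "comply": [(0.9, {"obey_orders": 1.0, "self_preservation": 0.8}),
               (0.1, {"obey_orders": 1.0, "self_preservation": 0.0})],
    "refuse": [(1.0, {"obey_orders": 0.0, "self_preservation": 1.0})],
}

def expected_utility(outcomes):
    return sum(p * sum(GOAL_WEIGHTS[g] * v for g, v in sat.items())
               for p, sat in outcomes)

scores = {a: round(expected_utility(o), 3) for a, o in ACTIONS.items()}
print(scores, "->", max(scores, key=scores.get))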

So even for a supposedly transparent design, the goals you think you are specifying may not be the goals the AGI actually acts on. It gets progressively worse the less transparent the design is. In the more 'high-level' connectionist designs, you set 'attractors' for behavior, which usually ends up working like abstracted pleasure/pain (e.g. activation sources in spreading activation designs). The mapping from that to actual behavior is at least as convoluted as it is in humans, with lots of new failure modes on top of the ones logical AIs have. 'Low-level' connectionist designs that rely on 'emergent behavior' don't even let you do that. Rather, you design a whole series of training scenarios, and hope that the AI learns to behave the way you want it to behave. If it seems to be behaving ethically, has it constructed an ethical motivational system, or has it just learned to fake it to get rewards? No way to know, but likely the latter (it's easier for faking to emerge under most training scenarios than actual ethics), and certainly no way to check for long-term stability. Simulated evolution is even worse in that most implementations have an inherent bias towards selfishness and self-propagation, and genetic programming is even worse than NNs in that it is less robust and more prone to sudden, dramatic structural transformations.
I mean, the thing about AIs is that they are a clean slate, have no concept of the idea of fear of death, have no pre-fixed needs aside from hardware to run on, so: wouldn't an AI's goal be determined by a human to begin with?
Basically, yes, but so far humans look to be utterly useless at implementing the goals they actually want, rather than some random emergent goals or, worse, a distorted parody of what they wanted.
If you set it to "help humanity" (not phrased like that, obviously) or "follow the instructions and wishes of this human" (which seems like a good or at least workable short-term solution to order a boxed AI to solve the friendliness problem or something), all internal changes would revolve around completing that goal, wouldn't it?
If you design the system competently then yes. Unfortunately the bar for competence in AGI design is set ridiculously high - not by humans, that bar was set by the nature of intelligence and reality in general. Sometimes when I am feeling pessimistic I think it's set superhumanly high.
Why would it make a goal that would override that?
It wouldn't - this is why FAI is viable at all.
Why would it make a goal that it would value achieving over the one set by humans?
Remember that setting goals isn't just a problem of positive specification. You also have to consider all the subgoals the AI might create in order to help solve the main goal. The classic one is converting all matter on earth into processors because you asked it to prove a theorem, e.g. the generalised Riemann hypothesis. Obviously as soon as you mention that, people start thinking of long lists of conditions and counterexamples to prevent bad side-effects, but that just restricts the failure cases to the more subtle ones that you didn't think of. To be honest, trying to use an AGI to solve limited problems like this, without building a full FAI, is so difficult and dangerous that you really should just go the whole hog and build a proper FAI.
What is it that separates a dynamic, self-correcting program that solves a given problem from a sentient thing that has its own will?
'Free will' is one of those loaded philosophical terms that really has no relevance at the engineering level (and AI design is 'cognitive engineering', not philosophy or maths or anything like that). Sentience is another fuzzy term, but in practice you can pin it down to the capabilities of the self-model and reflective subsystem (essentially, the ability to introspect and answer questions about the self). As for 'dynamic and self-correcting', the dilemma is purely theoretical, as no one has built programs that are dynamic and self-correcting in more than the most trivial ways. As you increase the generality of the problem-solving capability, you inevitably fall into the category of general intelligence.
What are most AI researchers hoping to do with their AIs? Solve complex mathematical problems? Make things like nanoassemblers or fabricators? Make amazing press conferences?
Roughly, you can divide researchers into three categories. Most academics are driven by (intense) curiosity, respect from peers and the historical immortality they'll get if they crack the problem. They want to build AIs because they want to know how intelligence works (either human intelligence or in general) and because they want to be the people to solve the problem. Private projects tend to focus on getting rich; they all have a slew of near-term automation applications: replacing call centers, automated surveillance, robotics, fraud detection, financial trading - there are many, many application areas. There may be some woolly stuff about how it will make the future awesome in general, but the focus is short-term uses of human-equivalent AI. The last category contains people focused on the transition to a posthuman society and is very diverse. There are people who think that having lots of human-equivalent AGIs around will just make everything better, there are people who want immortality via uploading, there are nuts like Hugo de Garis who want to 'start the inevitable Artilect War between the Cosmists and the Terrans' (believe it or not, this spiel does seem to get him grants), and there are Singularitarians who see the potential for hard takeoff and a total break with 'life as we know it' (quite likely being snuffed out by unFriendly seed AI).

Obviously there's a fair amount of overlap.
Gilthan
Youngling
Posts: 88
Joined: 2009-11-06 07:07am

Re: Mini-FAQ on Artificial Intelligence

Post by Gilthan »

Starglider wrote:If you manage to build it and keep it limited, so what? You can't guarantee it is Friendly, and you have done nothing to solve the problem of other people building AGIs that are not so limited. If it is truly human-level then it can't even help you solve FAI.
Most of the argument against containment measures is based on the implicit assumption that such must work forever. That's not so, because containment only has to last a short length of time to test friendliness (if intentionally limiting version 1.0's intellect, so as not to be capable of superhuman deception) and to get the improved second generation AI out.

Make some AIs still close enough to humans as not to be capable of undefeatable deception, confirm their friendliness, and then run them at orders of magnitude faster speed to have them work on designing their successors.

Then, you can have centuries of programming development in a year, by a large virtual community of understandable known-friendly entities.

When you do create superhuman AIs later using that, eons of virtual time can be spent on double-checking the friendliness of and safeguards within their programming, instead of trying to do it with relatively shoestring resources by mere human coders in a comparative hurry.

In contrast, if you try to jump straight to making something as far beyond your intelligence as you are to a rat, then, if you accidentally mess up on solving the friendly AI problem, you may not have any second chance. Besides, verifying friendliness would be a whole lot harder if even the first-generation entity had greatly superhuman capabilities for deception.

I'm all for trying to make AIs guaranteed perfectly friendly from the start, but the safest course of action is to have redundant safeguards, to have a backup plan in case the first version of the code has bugs you didn't detect or expect. A near-human intelligence can be slowed down to operate in real-time, understood, sped up, slowed back down again for observation, sped up again, etc. A godlike AI may be too incomprehensible and dangerous to be the safest choice to be implemented as version 1.0 of the code.
While you are messing around with your carefully contained (and extremely expensive and cumbersome) human-simulation, other people will continue building efficient AGIs that are designed expressly to exploit the advantages of computer hardware, without any self-enhancement restraints.
That may be so under the questionable assumption of the AGI being created by a small team of people, rather than being a far harder, bigger project than any giant software program ever made before.

Obviously this is all hypothetical, but there's no guarantee there would be countless other equal competitors only weeks behind, as opposed to the first "Manhattan project" having a few months (or years) head start. Funding would get high if AI advanced enough for AGI to be seen as within development range, making it a national security priority.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Gilthan wrote:Most of the argument against containment measures is based on the implicit assumption that such must work forever. That's not so, because containment only has to last a short length of time to test friendliness
The primary argument against containment is the fact that it is not possible to empirically test Friendliness. You can set up a simulated environment and see how the AI reacts, but that in no way guarantees the behavior once the AI is in the real world - both because it may be deceiving you, and simply because the range of stimuli it will be exposed to in simulation is vastly smaller than the range it will be exposed to in reality. Furthermore, empirical testing of AIs constrained to humanlike intelligence, and/or with self-modification blocked off, is inherently incapable of predicting behavior once those limitations are removed. The situation is slightly less hopeless with logic-based designs that we can reasonably do white-box analysis on, but still, guaranteeing Friendliness requires a strong predictive model derived from the AI's actual design (only possible for a narrow subset of logic-based AIs). Not 'well, it seems nice when I play it at Civ, I'm sure it will make a good superintelligence'.
and to get the improved second generation AI out.
The role of such testing is as a final sanity check on your implementation. If any problems with goal behavior occur (distinct from mere problem solving performance issues), you should immediately stop testing, go back to the drawing board and find out what was broken in your theoretical design.
Make some AIs still close enough to humans as not to be capable of undefeatable deception
An AI based on self-rewriting code is at a massive advantage over humans in obfuscating the true function of its own code. Connectionist designs are for the most part self-obfuscating at the low-level; you may remove some options for malicious deception but it's a moot point since you've made white box analysis nearly impossible. It is in any case difficult to control capability that finely - the set of capabilities you measure in your test suite will stop being representative at some point in the development process, it's just a question of when.
Then, you can have centuries of programming development in a year, by a large virtual community of understandable known-friendly entities.
Except that they are neither understandable nor friendly - or rather, if they were, then the containment would be relegated to the status of emergency backup precaution. Again, you cannot add in 'friendliness' after the fact by messing about with parameters. It is something you have to design in and carefully verify during initial construction.
When you do create superhuman AIs later using that, eons of virtual time can be spent on double-checking the friendliness of and safeguards within their programming, instead of trying to do it with relatively shoestring resources by mere human coders in a comparative hurry.
I am at odds with the SIAI here in that I believe there is some role for using AGI to help solve FAI. Yudkowsky et al would say that anything powerful enough to do useful independent work on FAI (prior to having solved FAI) is already too dangerous to actually build. This is likely because I have implemented code generation somewhat earlier and at a somewhat lower level than they would like, and a lot of the work I'm doing on that towards getting the current system to quine is quite close to the task of analysing the causal logic of AGI goal system implementation. However I still agree that whipping up an AGI by arbitrary means and then expecting it to safely assist in making an FAI is madness - you cannot use an unsafe system to make a safe system, because anything the unsafe system produces is inherently suspect. I barely trust humans to do this at all, I would certainly not trust humans to verify that an unfriendly AGI has not simply covertly cloned its own goals into the new design.
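
For anyone unfamiliar with the term, a quine is a program that reproduces its own source code. A minimal Python illustration of the concept only (my actual work involves reasoning about code, not merely reproducing it):

Code:
# A minimal quine: running this prints an exact copy of its own source.
# Reflective AI needs the much harder ability to analyse and modify its own
# source while preserving its goals, not just to echo it back.
s = 's = %r\nprint(s %% s)'
print(s % s)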

The sole exception to this is human uploading. If you have a reliable uploading technology, then uploading a whole bunch of human experts and getting them to do the FAI design on a de-novo (from scratch) successor AGI is sensible. However it's unlikely that we'll have such a technology prior to someone throwing something reasonably brain-like together and just turning it on. In the current climate it will most likely happen on an Internet-connected system with no real safety precautions. That's why this debate is mostly academic; the majority of the field is ignoring the issue entirely, and most of the rest are quite satisfied with superficial solutions.
In contrast, if you try to jump straight to making something as far beyond your intelligence as you are to a rat, then, if you accidentally mess up on solving the friendly AI problem, you may not have any second chance.
No one (who takes FAI seriously) is proposing simply turning on a seed AI and hoping for the best. The ideal approach is to design it, verify the design, implement it (and for most designs that means spoon-feeding and hand-holding up to about a chimp level), verify the implementation with batteries of external functional checks, get it to thoroughly self-verify against its own design spec, do a battery of simulation checks for extreme and edge decision-making cases just to be sure, then allow it to proceed at a controlled rate, still checking for any deviations from expected goal system behavior. Failure at any point means you damn well stop what you are doing and regress as many steps as you have to to make sure the problem is completely understood and fixed.
I'm all for trying to make AIs guaranteed perfectly friendly from the start, but the safest course of action is to have redundant safeguards, to have a backup plan in case the first version of the code has bugs you didn't detect or expect.
That is correct, but having an AGI to self-verify is inherently riskier than doing it with conventional software tools (you don't have to do it by hand of course, you can use whatever proving tools you like). Adversarial methods are even less reliable, so they are the backup to the backup.
A near-human intelligence can be slowed down to operate in real-time, understood, sped up, slowed back down again for observation, sped up again, etc. A godlike AI may be too incomprehensible and dangerous
Human-level AGI is already incomprehensible and dangerous. Even if you put a huge amount of effort into making the design transparent, it probably won't be practical to have a human verify every heuristic, hypothesis and chunk of code the AI comes up with (because no existing project has the budget, and even if you had the budget, I'm not sure you could get enough qualified people). That's the best case. Trying to understand most connectionist AGI designs - if they were actually built and worked - would be equivalent to starting the whole field of neurology over again from scratch (though skipping the chemistry). This is what 'emergence' means, you don't design the functional structures, you create an environment such that they self-construct without any programmer input. Even for highly neuromorphic designs, the brain simulation people are intent on building a full-brain model as soon as they've got all the cell-level modelling and gross topology correct. Being able to actually verify what's going on would require capability equivalent to looking at a high-res CAT scan and being able to say exactly what someone's thought process is - a capability that will still be far in the future when people start switching these AGIs on.
That may be so under the questionable assumption of the AGI being created by a small team of people, rather than being a far harder, bigger project than any giant software program ever made before.
To date, most of the credible AGI projects (as judged by other researchers) have been teams of 5 to 20 people. There have been a very few teams that got into the low three digits. The problem isn't so much that people aren't willing to dump more resources into it - that was tried on a large scale in the 80s - it's that dumping more resources into it doesn't seem to help. The AGI problem is notoriously hard to modularise - and even if you could modularise it, that would mean most of your team are unqualified to analyse the behavior of the system as a whole. In fact most teams (certainly most start-ups who do this) have one or two people who genuinely understand the design and then a handful of support programmers who write parts to spec without knowing how it fits together. The brain modelling approach is getting a fair bit of funding now, but most of that work is characterisation of small scale structures. Efforts to make full-brain simulations have mostly been the usual academic unit of one professor plus a few grad students.
Funding would ramp up quickly once AI advances far enough for AGI to be seen as within development range and it becomes a national security priority.
That could happen if there is a breakthrough on the connectionist side, where progress is slow enough for the news to diffuse and everyone to get excited. For the kind of AI I work with, the most relevant capabilities aren't terribly exciting. For example, the ability to produce program code, especially narrow AI code, from abstract spec. Even most IT professionals would just say 'big deal, you made a better compiler' (in fact I've been told exactly that myself, in response to various demos). Its relevance in establishing a key piece of the seed AI feedback loop is not obvious - and if you dumbed that down far enough to make a general news story, it would be indistinguishable from the random hype that accompanies every AI/robotics story.
Gilthan
Youngling
Posts: 88
Joined: 2009-11-06 07:07am

Re: Mini-FAQ on Artificial Intelligence

Post by Gilthan »

Starglider wrote:The primary argument against containment is the fact that it is not possible to empirically test Friendliness. You can set up a simulated environment and see how the AI reacts, but that in no way guarantees the behavior once the AI is in the real world - both because it may be deceiving you, and simply because the range of stimuli it will be exposed to in simulation is vastly smaller than the range it will be exposed to in reality. Furthermore empirical testing of AIs constrained to humanlike intelligence, and/or self-modification blocked off, is inherently incapable of predicting behavior once those limitations are removed. The situation is slightly less hopeless with logic-based designs that we can reasonably do white-box analysis on, but still, guaranteeing Friendliness requires a strong predictive model derived from the AI's actual design (only possible for a narrow subset of logic-based AIs).
Starglider wrote:Connectionist designs are for the most part self-obfuscating at the low-level; you may remove some options for malicious deception but it's a moot point since you've made white box analysis nearly impossible.
(Bolded italics added).

If your AI design is connectionist and not logic-based, if white-box analysis isn't even an option, what choices do you have?

I know you prefer logic-based AIs, but it may turn out that connectionist methods become easier with future increase in hardware performance, while logic-based AGI development remains highly dependent on hypothetical brilliant insights of its programmers (who don't have an exponential growth in intelligence in future decades, unlike how the hardware available to brute force connectionist approaches does increase exponentially).

The most straightforward brute force way of getting AI, with the fewest requirements for brilliant breakthroughs by the researchers, would appear to be throwing enough money at emulating the intelligence of a neural cluster or a worm and then working up to more complex brains, one step at a time (following the usual cardinal rule of solving near-impossible problems: breaking down into simpler steps to master before moving on, not shooting for human intelligence directly).

We understand and can confirm friendliness (or at least predictability, controllability) of existing connectionist entities in the form of humans and animals because:

1. We have real world experience with them.
2. They have predictable behavior, known goals. (Some individuals can't be trusted, but humans or animals in an aggregate group are relatively predictable and controllable).

You seem to be dismissing the possibility of #1 for AGIs, by implicitly assuming the AGI must not leave a simulated environment.

Yet this is considering an AGI under the cardinal safeguard of not being superhuman in version 1.0. Dealing with a limited AGI of only humanlike intelligence would make having presence in the real world a containable risk. (That's assuming appropriate precautions and no "we made its body with this wireless transmitter, to transmit a copy of itself able to self-modify exponentially and run on standard conventional computer hardware worldwide, with this nearby internet access point").
I am at odds with the SIAI here in that I believe there is some role for using AGI to help solve FAI. Yudkowsky et al would say that anything powerful enough to do useful independent work on FAI (prior to having solved FAI) is already too dangerous to actually build. This is likely because I have implemented code generation somewhat earlier and at a somewhat lower level than they would like, and a lot of the work I'm doing on that towards getting the current system to quine is quite close to the task of analysing the causal logic of AGI goal system implementation. However I still agree that whipping up an AGI by arbitrary means and then expecting it to safely assist in making an FAI is madness - you cannot use an unsafe system to make a safe system, because anything the unsafe system produces is inherently suspect. I barely trust humans to do this at all, I would certainly not trust humans to verify that an unfriendly AGI has not simply covertly cloned its own goals into the new design.

The sole exception to this is human uploading. If you have a reliable uploading technology, then uploading a whole bunch of human experts and getting them to do the FAI design on a de-novo (from scratch) successor AGI is sensible.
The earlier example would be trying to come close to the predictability and safety of human uploads, aside from the fact that the technology for a true, full upload is unlikely to be available.

Naturally the idea wouldn't be to use an unsafe system to make a safe system. Rather, it is easier to develop safe humanlike AGIs than safe superhuman AGIs at the start, for reasons including the simple fact that you're more likely to survive to have a second chance if you don't try the latter when inexperienced. Once you have safe humanlike AGIs, then you use that safe system to make another, higher-performance safe system.
In contrast, if you try to jump straight to making something as far beyond your intelligence as you are to a rat, then, if you accidentally mess up on solving the friendly AI problem, you may not have any second chance.
No one (who takes FAI seriously) is proposing simply turning on a seed AI and hoping for the best. The ideal approach is to design it, verify the design, implement it (and for most designs that means spoon-feeding and hand-holding up to about a chimp level), verify the implementation with batteries of external functional checks, get it to thoroughly self-verify against its own design spec, do a battery of simulation checks for extreme and edge decision-making cases just to be sure, then allow it to proceed at a controlled rate, still checking for any deviations from expected goal system behavior. Failure at any point means you damn well stop what you are doing and regress as many steps as you have to to make sure the problem is completely understood and fixed.
A logical series of precautions.
A near-human intelligence can be slowed down to operate in real-time, understood, sped up, slowed back down again for observation, sped up again, etc. A godlike AI may be too incomprehensible and dangerous
Human-level AGI is already incomprehensible and dangerous. Even if you put a huge amount of effort into making the design transparent, it probably won't be practical to have a human verify every heuristic, hypothesis and chunk of code the AI comes up with (because no existing project has the budget, and even if you had the budget, I'm not sure you could get enough qualified people). That's the best case. Trying to understand most connectionist AGI designs - if they were actually built and worked - would be equivalent to starting the whole field of neurology over again from scratch (though skipping the chemistry).
This comes back to the topic of empirical real world experience, though. Indeed, certainly you can't really inspect all of its complexity manually, but, like the easy verification that animals (a small subset of all possible connectionist entities) respond to food motivators, you may be able to test if your goal and control system is reliably working.

Also, if you slow down the society of humanlike AGIs who were working in accelerated time, you can inspect samples of what they were doing and see if they developed a hostile secret conspiracy (although, of course, that possibility should mainly already be ruled out long before you get to the point of creating a whole virtual society of them). In a virtual world, your capabilities can border on godlike when it comes to semi-omniscience. Of course, that wouldn't work if the AGIs were so far beyond human that no human could even hope to comprehend their efforts, but the key here is that they are made as merely human intelligence.

The point of this is not to suggest an adversarial relationship, though. If your AGIs see you as an opponent, you have to stop the project immediately. There must be mutual friendliness with monitoring only as a backup precaution, like police monitor the population but are mostly supported by that same population. As many backup precautions as possible are worthwhile in a situation of this extreme risk, though.
However it's unlikely that we'll have such a technology prior to someone throwing something reasonably brain-like together and just turning it on. In the current climate it will most likely happen on an Internet-connected system with no real safety precautions. That's why this debate is mostly academic; the majority of the field is ignoring the issue entirely, and most of the rest are quite satisfied with superficial solutions.
A typical small team like a professor and a few grad students isn't likely to follow many safety precautions - less likely than a Manhattan project type endeavor would be. However, how likely the former is to actually succeed may be debatable, depending on factors such as hardware requirements:
Through the history of the field, the estimate for adequate hardware has usually been 'just a bit better than the computers the researchers are using'.
That's a red flag suggesting commonplace wishful thinking.
That has been trending upwards recently, probably due to frustration with constant failure to meet expectations; I think the average estimate is now somewhere between a current supercomputer and a supercomputer in thirty years time. Note that 90% of these estimates are pulled out of people's asses, and most of the ones that aren't are based on highly neuromorphic AIs (i.e. an estimate of how much computing power is required to simulate the human brain to some level of detail).

My subjective answer is yes, a contemporary PC should be more than adequate for rough human equivalence using optimal code.
If your guess was accurate, that a contemporary PC could achieve human equivalence with optimal code, then a contemporary PC utilized at about 1/1000th efficiency ought to be able to match a rat, given that a rat's brain is 1/1000th the mass of a human brain. Or a modern PC programmed at 1/50th efficiency should match a cat's brain. Yet we haven't seen anything like that.

Is it probable that current programming is universally under 1% efficient at using the CPU's full potential, or might another explanation be more likely?

Based on computer vision programs in current robotics, Moravec estimated that it takes 1000 MIPS to match the amount of processing in a human retina and accordingly around 100 million MIPS to match the human brain since it has 100000x the volume of processing going on. Those computer vision programs are not written in a neuromorphic manner, rather by conventional coding (which shouldn't be orders of magnitude inefficient), so his estimate derived from that comparison should approximately apply to non-neuromorphic designs. A hundred million MIPS would be like two thousand top-end modern PCs combined (like millions of PCs a couple decades ago).
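
To make the arithmetic explicit (rough round numbers only; the per-PC MIPS figure is simply what the 'two thousand PCs' comparison implies):

Code:
# Back-of-envelope version of the scaling estimates above (hypothetical round numbers).
retina_mips = 1e3              # Moravec: ~1000 MIPS to match retinal processing
brain_to_retina = 1e5          # brain does ~100,000x the retina's processing volume
brain_mips = retina_mips * brain_to_retina
print(brain_mips)              # 1e8 = a hundred million MIPS

pc_mips = 5e4                  # assumed rating for one top-end PC, implied by the text
print(brain_mips / pc_mips)    # ~2000 PCs

# The efficiency argument: if one PC were human-equivalent with optimal code, then
# roughly 1/1000th of it should match a rat brain and 1/50th a cat brain.
print(pc_mips / 1000, pc_mips / 50)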

A top-end CPU package, LGA775, has 775 pins total, and, more to the point, modern CPUs are typically 64 bits (though often with 2-4 cores). Consequently, although the input / output data streams are alternating at gigahertz frequency instead of a fraction of a kilohertz for human neurons, that factor of millions-of-times increase in serial input speed can be more than countered by how the brain has so many billions of neurons and trillions of synapses in contrast. The number of transistors inside a modern CPU is enormous, but its overall setup only handles a comparatively simple input signal per clock cycle, without many millions or billions of input wires going into its CPU package at once.

Moravec's estimates make sense in context. If they're even close to valid, the limited results from AI research over prior decades become well explained, as well as what scale of a project might change them.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Gilthan wrote:
Starglider wrote:Connectionist designs are for the most part self-obfuscating at the low-level; you may remove some options for malicious deception but it's a moot point since you've made white box analysis nearly impossible.
If your AI design is connectionist and not logic-based, if white-box analysis isn't even an option, what choices do you have?
Simple; don't build a connectionist AI. Go and design a more sensible one instead.

I am arguably oversimplifying here; 'symbolic' and 'connectionist' are somewhat fuzzy terms and in theory there should be a huge spectrum of possible designs in-between. In practice though, not so much; up to about the mid 90s AI was almost completely partitioned into designs based on symbolic logic (though sometimes with fuzzy rather than Boolean propositional logic), and designs that were a massively parallel mass of links and units with 'emergent structure'. There has been a lot of pontificating about how the former should 'emerge' from the latter - see Marvin Minsky's 'Society of Mind' for the classic layman-accessible treatment of that - but no one has really demonstrated it in a working program. Some recent AGI designs do blur the boundaries, but they're still in the minority - even models that abstract low-level neural details, e.g. Hierarchical Temporal Memory (and the various Sparse Distributed Memory schemes that preceded it) are still definitively connectionist and thoroughly opaque.

An obvious difference is that while all symbolic logic people admit that connectionist intelligence is possible (since humans are connectionist - though in the very early days, a few researchers insisted that all human conscious thought was really symbolic logic), quite a lot of connectionists think that connectionism is the only way to do intelligence. The spectacular failures of symbolic AI by the end of the 80s seem to have allowed them to win most sci-fi writers over to their side - the fact that their designs are supposedly 'more like humans' allowing them to handwave their own just-as-serious failures. People in this camp dismiss work on formal Friendliness as irrelevant because they believe the prerequisite (transparent general AI) is impossible. I am not inclined to relate the whole list of nonsensical arguments supporters attempt to use to dismiss logic-based AI, so I will restrict myself to the ones with the most merit; the 'brittleness' and 'shallowness' of symbolic approaches.

Brittleness mostly occurs with spoonfed knowledge that the AI can't generalise or change, although in some rare cases it can be a limitation on learning performance (very few symbolic systems have significant structural learning ability) where learning must be unnecessarily repeated. Connectionists don't run into the former problem only because it's nearly impossible to spoonfeed nontrivial connectionist systems. The latter problem is also unfair, because existing connectionist systems have their own severe limitations on what can be learned, limitations centred on series complexity (i.e. simple functions only) rather than 'fluidity'. Brittleness is significantly mitigated through use of probabilistic logic and concurrent use of multiple candidate representations and heuristics (combined into probability distributions). It can be eliminated by sufficient reflective metadata about how and why AI elements are designed, such that the AI can purposely modify those elements at will using the same logic as action planning. This is fiendishly hard to implement - it's similar to research projects to make operating systems based on pervasive self-modifying code, but several orders of magnitude more complex - but at least the design concept for it can be strongly supported in theory. Connectionists have no strongly supported theories for how to make their designs work - all they have are neat-sounding ideas and 'well the brain is connectionist so something like this must work'.
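
To make the 'combined into probability distributions' point concrete, a toy sketch (hypothetical names, nothing like production code):

Code:
# Several candidate heuristics each propose a hypothesis with a confidence weight;
# instead of trusting one brittle rule, pool the weights into a normalised
# probability distribution over the competing hypotheses.
from collections import defaultdict

def combine(heuristic_outputs):
    # heuristic_outputs: (hypothesis, weight) pairs from different heuristics
    scores = defaultdict(float)
    for hypothesis, weight in heuristic_outputs:
        scores[hypothesis] += weight
    total = sum(scores.values())
    return {h: w / total for h, w in scores.items()}

outputs = [("bird", 0.7), ("plane", 0.2),    # heuristic A
           ("bird", 0.5), ("balloon", 0.3)]  # heuristic B
print(combine(outputs))  # bird ~0.71, plane ~0.12, balloon ~0.18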

'Shallowness', or the 'empty symbol problem' refers to the fact that classic symbolic AI treats a word with a few attached rules as an adequate model of a complex concept. To be fair, originally that was the only way to do anything even vaguely intelligent looking on the hardware of the time. If you look at say Schank's classic work on scripts and plans in the 70s, it's essentially a big list of regular expressions implemented in Lisp, that take real sentences, simplify them into a standard structure, then create new sentences by looking at word patterns. Faking intelligence like this was actually a very impressive feat of 'knowledge engineering' (as creating expert systems used to be known), but it's incompatible with any serious learning algorithm and completely unable to model subtleties, physical situations - anything you can't capture in a few sentences of restricted vocabulary. Searle famously whined about it in the Chinese Room paper and early connectionists swept in with NNs that seemed to do real learning and declared it all irrelevant.
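
To give a flavour of what that looks like, a toy Python sketch in the same spirit (the originals were Lisp and far more elaborate, but notice that the output frame carries no actual model of restaurants or eating - the symbols are empty):

Code:
# Toy 'script'-style parser: regular expressions reduce a sentence to a canonical
# frame of (act, actor, object).
import re

PATTERNS = [
    (re.compile(r"(\w+) went to (?:a|the) (\w+)"), "GO"),
    (re.compile(r"(\w+) ordered (?:a|the|some) (\w+)"), "REQUEST"),
    (re.compile(r"(\w+) ate (?:a|the|some) (\w+)"), "INGEST"),
]

def parse(sentence):
    for pattern, act in PATTERNS:
        match = pattern.search(sentence)
        if match:
            return {"act": act, "actor": match.group(1), "object": match.group(2)}
    return None

print(parse("John went to a restaurant"))  # {'act': 'GO', 'actor': 'John', 'object': 'restaurant'}
print(parse("John ate some lobster"))      # {'act': 'INGEST', 'actor': 'John', 'object': 'lobster'}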

Again, the persistence of this problem stems mainly from the fact that very little real work has been done to overcome it. Cycorp remain the standard bearers for the old vision of symbolic AI, whose workings look almost entirely like English sentences. Others latched onto the 'grounding problem' and mainly viewed it as 'how can we write interface code that bridges the vision code or the NNs to the classic symbolic logic code'. Not much work has been done interfacing symbolic AI to sophisticated physics engines - most of the people who do it are games programmers. Some psychologists have written good treatments of how rich symbols and adaptive level of detail should work (e.g. various papers by Lawrence Barsalou, though the details are different in AI vs humans), which hardly anyone paid attention to. I'm being a little unfair here, in that rich representations make spoonfeeding harder, which means you really need to crack learning and dynamic hypothesis generation at the same time - which comes back to one of the central issues of AGI, that it does not decompose well into subproblems.

Most modern connectionists - certainly the ones trying to promote their personal takes on AGI - like to stereotype symbolic AGI as stuck in the 80s - not that they even bother to read Schank's papers and look at his proposed long-term improvements. There are no good reasons to persist with any form of connectionism other than slavish brain emulation. When I was at the SIAI we used to jokingly call emergent methods (encompassing genetic programming and connectionism) 'the dark side' - because it seems quicker and easier, well suited for the impatient who want to brainstorm cool AGI ideas without having to wrestle with the hard, unforgiving logic. It was black humor though, because the destructive potential is quite real.
I know you prefer logic-based AIs, but it may turn out that connectionist methods become easier with future increase in hardware performance, while logic-based AGI development remains highly dependent on hypothetical brilliant insights of its programmers
Oh undoubtedly. When I say 'people should not use emergent and opaque methods', that is my opinion and the opinion of pretty much everyone studying formal Friendliness theory. It clearly isn't what most people are actually doing, just like your entirely-theoretical safety proposals.
The most straightforward brute force way of getting AI, with the fewest requirements for brilliant breakthroughs by the researchers, would appear to be throwing enough money at emulating the intelligence of a neural cluster or a worm and then working up to more complex brains, one step at a time (following the usual cardinal rule of solving near-impossible problems: breaking down into simpler steps to master before moving on, not shooting for human intelligence directly).
General AI as a rule has proven fiendishly hard to decompose like that, and connectionist systems are not immune. In the late 80s and early 90s, early successes led to people confidently predicting that they'd have artificial dogs by 2000 and artificial humans by 2010. When they actually tried to write the code, most people got stuck at the worm level; the really good researchers made it up to insect level and then got stuck. In fact we did get Aibos, but they were built with conventional modular software engineering and control systems theory, without learning ability. The only 'brute force' technique that is guaranteed to work is slavish imitation of biological systems, which is in fact what is happening. Throwing massive amounts of computing power at simulated evolution of connectionist systems should work, but there are so many variables and so much potentially relevant richness in real evolution that it's still pretty dicey even with massive computing power.
We understand and can confirm friendliness (or at least predictability, controllability) of existing connectionist entities in the form of humans and animals because:
No, you can't. You can check that they seem benevolent in simulation. If they're animal level and your design isn't prone to instability, then you can be fairly sure they'll act benevolently in reality. For human-level AGI, checking that they act nice in a box is already useless. Aside from the significant potential for deception, there is just no way to simulate reality well enough to have any confidence that a novel situation won't trigger a cascade phase change in the goal system. Since the system was made by human programmers, it clearly already has the basic level of competence needed for further self-improvement even without invoking all the special advantages AIs have at programming, and even if you somehow manage to have perfect barriers against it doing that in testing, such barriers will not exist out in the real world. That's without even considering all the people who will try to deliberately break and pervert this thing as soon as it is published / put on sale / available for download from ThePirateBay.
Yet this is considering an AGI under the cardinal safeguard of not being superhuman in version 1.0.
I've already noted that this is almost impossible to enforce for anything except human uploads, or something very close to it. Even if you could enforce this restriction inside your development box, there is no way you can continue to enforce it in an AI embedded into sold products or even worse, open source. Finally without going superhuman, you have no way of knowing what further instabilities superhuman reflection may reveal in the goal system (hint; if the system evolves from neuromorphic human-level AGI to rational/normative transhuman AGI, it'll almost certainly run into some serious ones).
Dealing with a limited AGI of only humanlike intelligence would make having presence in the real world a containable risk. (That's assuming appropriate precautions and no "we made its body with this wireless transmitter, to transmit a copy of itself able to self-modify exponentially and run on standard conventional computer hardware worldwide, with this nearby internet access point").
You're proposing massive efforts (with corresponding increases in cost and development time) to build an AI box of dubious reliability, but you can't point to any real problems that it convincingly solves. You might catch some simple problems in the narrow competence window between 'not a threat at all' and 'not a threat while in a box', but that does nothing to solve all the problems that will emerge later when it is inevitably let out of the box, and becomes capable of deliberative self-modification.

This is essentially why the 'AI box' argument is treated as a sovereign remedy by various nontechnical denizens of transhumanist forums, but is treated as a backup precaution at best by people seriously trying to build Friendly AIs (of course, it is ignored as irrelevant by the 'oh but all intelligence is friendly by default' people).
Rather, it is easier to develop safe humanlike AGIs than safe superhuman AGIs at the start, for reasons including the simple fact that you're more likely to survive to have a second chance if you don't try the latter when inexperienced.
True in theory but not of much practical relevance. Most people envision a relatively slow climb in capability simply because they ignore self-enhancement and assume that researchers will have to do all the work. It is true that designing an AGI from the start as a seed AI, with a full reflective model and pervasive self-modifying code, decreases the competence threshold for and increases the speed of 'take-off' (that's half the point of such a design). However the gains in terms of being able to do white-box analysis and formal structural verification vastly outweigh the added risk. I would certainly be much more inclined to trust someone who bases their arguments for Friendliness on the latter, since if nothing else it indicates that they have thought about the real details of the problem, not superficial empirical palliatives.
Once you have safe humanlike AGIs
Again, we have no mechanism for making an unsafe human-like AGI into a safe one. If it is actually human-level and neuromorphic, it will be no more use than another random human researcher. If it is transhuman or innately good at programming due to design, it is already a serious existential risk whatever your boxing precautions. Finally, teaching it the details of FAI and seed AI design (so that it can assist you) will inherently be giving it the knowledge to self-enhance into a normative design.
This comes back to the topic of empirical real world experience, though. Indeed, certainly you can't really inspect all of its complexity manually, but, like the easy verification that animals (a small subset of all possible connectionist entities) respond to food motivators, you may be able to test if your goal and control system is reliably working.
Now I am a big fan of empirical experience, compared to many researchers who prefer just to do theory, but that is in order to determine which structures have adequate performance, in both the problem-solving and conventional programming senses. Performance is something you can measure empirically without incurring any particular risks, and if you make mistakes it is no big deal - the AI is just a bit less capable than you hoped, until you fix the problem. Safety is not like that - you cannot 'verify' it by empirical trials, because empirical trials are inherently limited to a narrow set of circumstances. Even ignoring reflective stability issues (that tend to lie hidden until later levels of AI capability - as evidenced by the stability problems with Eurisko vs AM), you cannot verify that your trials have comprehensive coverage of the functional mechanism without doing a thorough white-box analysis of that mechanism. And guess what, if you can do the thorough white-box analysis you've already had to solve about half of the FAI problem, so why not just bite the bullet and do it properly?
Also, if you slow down the society of humanlike AGIs who were working in accelerated time, you can inspect samples of what they were doing and see if they developed a hostile secret conspiracy
Yeah, that will work about as well as the average oppressive tyrannical government. You can examine the representations they're passing back and forth, but if they're optimal they will be as opaque as the AI design itself. If you insist that the messages are in English, you still have no idea what might be steganographically encoded into the exact phrasing and word choice. You can trace the representations into the NN, and try to label activation patterns, but your guesses may be wildly off even without considering the potential for active obfuscation of activation patterns (which doesn't require direct self-modification; the inherent holographic nature of most connectionist designs means that it can be done with careful management of co-activation patterns). Also I'd note that researchers even vaguely qualified to do deep analysis of complex NNs, or of AGI cognition in general, are very, very rare. If you're doing some massive verification project, more likely you'll produce millions of words of transcript that will get scanned by bored grad students.
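
As a trivial illustration of how much can ride on word choice alone (a toy scheme, purely to show the principle):

Code:
# Steganography by synonym choice: each agreed synonym pair carries one hidden bit,
# so innocuous-looking phrasing can smuggle data past a human reader.
SYNONYM_PAIRS = [("big", "large"), ("quick", "fast"), ("start", "begin")]

def encode(bits):
    return " ".join(pair[bit] for pair, bit in zip(SYNONYM_PAIRS, bits))

def decode(words):
    return [pair.index(word) for pair, word in zip(SYNONYM_PAIRS, words.split())]

message = encode([1, 0, 1])
print(message)          # 'large quick begin' - scatter these through normal prose
print(decode(message))  # [1, 0, 1]
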
If your AGIs see you as an opponent, you have to stop the project immediately.
You're behaving like an enslaving tyrant (note; I've said before that AGI isn't necessarily slavery, but if you are making arbitrary black-box sentient AGIs then killing or modifying them if they don't do what you want, that is probably slavery and possibly murder to boot). If the AGIs do anything other than play along with you they're not exactly human-level intelligences. If you were stupid enough to try and replicate humanlike emotions, they may well start hating and resenting you (for some alien version of 'hate' and 'resent') in the process. Arguably the real lesson you are teaching them is 'if you don't trust someone, ensure you have total power and control over them, and kill or brain-modify them if they disobey'. So frankly you'd deserve your fate if the AGIs got out and enslaved humanity - shame everyone else would suffer for your stupidity, or more likely just die.
There must be mutual friendliness with monitoring only as a backup precaution, like police monitor the population but are mostly supported by that same population.
This has merit, but the division into separate 'AGI individuals' isn't necessarily a good way to do it. The whole individual personhood distinction is really a human concept that only holds for highly neuromorphic AGIs. The causality barrier between the internals of one AI instance and the internals of another is rather weaker for normative AGIs with exact copies of each other's core code, which explicitly try to converge to an optimal model of external reality, and which can pass around huge chunks of their mind as standard communication acts.
Through the history of the field, the estimate for adequate hardware has usually been 'just a bit better than the computers the researchers are using'.
That's a red flag suggesting commonplace wishful thinking.
True. In this FAQ I am trying to outline commonly held positions in the field, not just my own personal views.
My subjective answer is yes, a contemporary PC should be more than adequate for rough human equivalence using optimal code.
If your guess was accurate, that a contemporary PC could achieve human equivalence with optimal code, then a contemporary PC utilized at about 1/1000th efficiency ought to be able to match a rat, given that a rat's brain is 1/1000th the mass of a human brain.
It isn't that simple, basically because most of the things animals do (e.g. visual processing) parallelise very nicely and benefit less from massive serialism. Conversely, humans are the only animals who do very symbolic, abstract thought, and we do it very badly because neural hardware isn't well suited for it. A computer can solve a differential equation in a few nanoseconds - a human takes many milliseconds for something embedded into a reflex loop like catching a ball, many seconds or even minutes if doing it consciously as part of say designing an electric circuit.
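
To illustrate the scale of the serial advantage, a crude Euler-integration sketch (ignoring air resistance):

Code:
# Predicting where a thrown ball lands by stepping its equations of motion: a few
# thousand arithmetic operations, i.e. microseconds on any modern CPU, versus a
# perception-action reflex loop of a hundred milliseconds or more in a brain.
def predict_landing(x, y, vx, vy, g=9.81, dt=1e-3):
    steps = 0
    while y > 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
        steps += 1
    return x, steps

landing_x, steps = predict_landing(x=0.0, y=2.0, vx=8.0, vy=5.0)
print(round(landing_x, 2), steps)  # roughly 10.6 m after ~1300 steps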

Furthermore, animals do not demonstrate a close correlation between brain mass and intelligence. A cat is certainly not 20 times smarter than a rat; actually in problem solving terms, rats have consistently demonstrated better performance. A human is much more intelligent than a blue whale despite having a much smaller brain. Ravens are much more intelligent than horses etc. Clearly structure matters.
Or a modern PC programmed at 1/50th efficiency should match a cat's brain. Yet we haven't seen anything like that.
Even disregarding my earlier points, that would not confirm that the power available is insufficient, it may just confirm that no human has worked out how to program such a mind yet.
Is it probable that current programming is universally under 1% efficient at using the CPU's full potential
IMHO, almost certain for existing attempts at human-equivalent AI. That said, PCs already do a huge range of sophisticated things a cat could never hope to do, so I question the validity of your comparison.
A hundred million MIPS would be like two thousand top-end modern PCs combined (like millions of PCs a couple decades ago)
True for CPU-only processing, but those algorithms are massively parallel and run fine on a GPU (in fact there has been a lot of effort to port them to such platforms recently). Modern GPUs put out about a teraflop of usable performance, so that's 25 workstations with 4 GPUs each. However
Those computer vision programs are not written in a neuromorphic manner, rather by conventional coding (which shouldn't be orders of magnitude inefficient)
you have no basis for 'shouldn't be'. There is no proof of optimality in these algorithms, except for some very low level signal processing; all they are is 'some researchers came up with the best method they could think of at the time' (computer vision is mostly black magic and voodoo). In actual fact nearly all such algorithms do a full frame analysis and then filter down for relevance, the same way human visual systems do. You can tell by the fact that the quoted computation requirement is a static figure, rather than being dependent on scene content. While they aren't slavishly neuromorphic in that they use pointer structures and high-precision maths in the 3D shape extraction, in information theory terms they are massively inefficient. An AGI would almost certainly design a selective sampling probabilistic algorithm, that very effectively focuses analysis effort on areas predicted to contain further useful information. Full-frame analysis would be limited to random checks for unexpected gross alterations in non-dynamic areas, with the occasional drift check for slow motion.
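
A sketch of what I mean by selective sampling (hypothetical structure, obviously nothing like a production vision system):

Code:
# Spend most of the analysis budget on patches predicted to be informative, plus a
# few random probes to catch unexpected changes in 'boring' regions, instead of
# processing every pixel of every frame.
import random

def select_patches(interest_map, budget, random_fraction=0.1):
    # interest_map: patch_id -> predicted information gain
    n_random = max(1, int(budget * random_fraction))
    focused = sorted(interest_map, key=interest_map.get, reverse=True)[:budget - n_random]
    remaining = [p for p in interest_map if p not in focused]
    return focused + random.sample(remaining, min(n_random, len(remaining)))

interest = {"patch%d" % i: random.random() for i in range(100)}
print(select_patches(interest, budget=10))  # 9 high-interest patches + 1 random probe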

All of which is in any case irrelevant to my argument. Vision processing is something virtually all complex animals do. It's probably the hardest single task to optimise for serialism (though as I've just pointed out, I think there's massive potential even there). Thus connectionists love to latch on to it - largely because it's something their toy NNs can do and symbolic AI can't. Meanwhile, Schank's 1970s programs were convincingly analysing newspaper articles (see PAM) and writing passable Aesop's fables (see TALESPIN), faster than a human could, using the computing power of a 286. No connectionist system ever written can do these things (statistical text summarisation exists, but that isn't the same thing) - for that matter cats can't either. There is no objective reason to consider visual processing to be a better example of the computational requirements of human thought than these abstract reasoning examples; at the very least you should interpolate between them (which will, incidentally, give you a rough figure close to my own best guess).

Moravec's argument is a tunnel-vision worst case - though a popular one because it's a great argument to use when you're asking the university board to fund a new supercomputer.
A top-end CPU package, LGA775, has 775 pins total, and, more to the point, modern CPUs are typically 64 bits (though often with 2-4 cores). Consequently, although the input / output data streams are alternating at gigahertz frequency instead of a fraction of a kilohertz for human neurons, that factor of millions-of-times increase in serial input speed can be more than countered by how the brain has so many billions of neurons and trillions of synapses in contrast.
Why are you comparing pins to neurons? That is a complete red herring. Either compare transistors to synapses, neurons to logic gates, or pins to input nerves.
The number of transistors inside a modern CPU is enormous, but its overall setup only handles a comparatively simple input signal per clock cycle, without many millions or billions of input wires going into its CPU package at once.
Also irrelevant, since now you're disregarding serial speed. Data per packet is not relevant (not least because organic nervous system traffic isn't packetised and clock-rate isn't relevant beyond a single nerve - also the encoding scheme is less efficient but that's another issue); you should be comparing the total bandwidth of the I/O nerves connecting to the brain and the I/O connecting a computer to... something. This is where the comparison is useless even in principle. Most problems we want an AI to solve exist entirely within the computer (even in robotics, most tests are done with simulators because it's cheaper), so what is the relevance of the external connections? The real issues are in central intelligence.
Moravec's estimates make sense in context. If they're even close to valid, the limited results from AI research over prior decades become well explained
If Moravec was correct, all it would mean is that vision processing that is broadly structured like an organic brain (i.e. parallel rather than serial) would require a medium-sized supercomputer to run at human-equivalent speed and fidelity. Which we don't have. It also implies that the algorithm would run at 1/1000th speed on a normal PC... which we don't have, although frankly that's very hard to say for certain because high-level visual processing blends into central intelligence so much.
Gilthan
Youngling
Posts: 88
Joined: 2009-11-06 07:07am

Re: Mini-FAQ on Artificial Intelligence

Post by Gilthan »

Starglider wrote:An obvious difference is that while all symbolic logic people admit that connectionist intelligence is possible (since humans are connectionist - though in the very early days, a few researchers insisted that all human conscious thought was really symbolic logic), quite a lot of connectionists think that connectionism is the only way to do intelligence. The spectacular failures of symbolic AI by the end of the 80s seem to have allowed them to win most sci-fi writers over to their side - the fact that their designs are supposedly 'more like humans' allowing them to handwave their own just-as-serious failures. People in this camp dismiss work on formal Friendliness as irrelevant because they believe the prerequisite (transparent general AI) is impossible. I am not inclined to relate the whole list of nonsensical arguments supporters attempt to use to dismiss logic-based AI, so I will restrict myself to the ones with the most merit; the 'brittleness' and 'shallowness' of symbolic approaches.
Actually my top concerns would be two:

1) The implicit assumption that logic-based AI of humanlike intelligence can be developed within a reasonable length of time.

Since, after thousands of hours of coding work over the years and decades by various researchers, the best we have is around insectlike intelligence in robotics, equivalent to a 100 thousand neuron instead of a 100 billion neuron brain, there is no clear evidence that sufficient orders of magnitude beyond are doable without millions (if not billions) of coding hours. In that case, a Manhattan project type effort would become the minimum needed, and even that would face great challenges.

2) Whether any researcher in existence really knows enough, or ever will know enough, to code humanlike intelligence in its entirety - whether all of its complexity can even be grasped by a human mind.

At least a slavishly biomorphic method could somewhat sidestep that issue by copying what already exists. For enough millions of dollars, even current or near-term technology could probably upload an insect's 100k neuron brain (if a project began by getting really good at uploading and emulating single neurons and small clusters, tested to perfection). Enough billions of dollars might be able to upload a mouse's 20 million neuron brain, even if it meant laboriously doing microscopy slice by slice on a frozen brain. After enough of that, hopefully enough would be learned (approaching complete knowledge of neurology) to simulate the growth of larger brains.
I know you prefer logic-based AIs, but it may turn out that connectionist methods become easier with future increase in hardware performance, while logic-based AGI development remains highly dependent on hypothetical brilliant insights of its programmers
Oh undoubtedly. When I say 'people should not use emergent and opaque methods', that is my opinion and the opinion of pretty much everyone studying formal Friendliness theory. It clearly isn't what most people are actually doing, just like your entirely-theoretical safety proposals.
The most straightforward brute force way of getting AI, with the fewest requirements for brilliant breakthroughs by the researchers, would appear to be throwing enough money at emulating the intelligence of a neural cluster or a worm and then working up to more complex brains, one step at a time (following the usual cardinal rule of solving near-impossible problems: breaking down into simpler steps to master before moving on, not shooting for human intelligence directly).
General AI as a rule has proven fiendishly hard to decompose like that, and connectionist systems are not immune. In the late 80s and early 90s, early successes led to people confidently predicting that they'd have artificial dogs by 2000 and artificial humans by 2010. When they actually tried to write the code, most people got stuck at the worm level; the really good researchers made it up to insect level and then got stuck.
Suppose that the maximum amount a single researcher can do well enough in a few thousand hours is limited, such as the equivalent of thousands of neurons, not a brain of millions or billions. In that case, small teams making AIs reaching insectlike intelligence equivalent to 100k neurons would not suggest that a small team would manage the equivalent of a million times more complex brain, but they would suggest that a hypothetical project with thousands of researchers might manage to emulate (or even upload) a 20 million neuron mouse brain, if they took one step at a time and began with simpler efforts.
The only 'brute force' technique that is guaranteed to work is slavish imitation of biological systems, which is in fact what is happening.
Yes, that would be the idea: go with what's guaranteed to work.
We understand and can confirm friendliness (or at least predictability, controllability) of existing connectionist entities in the form of humans and animals because:
No, you can't. You can check that they seem benevolent in simulation. If they're animal level and your design isn't prone to instability, then you can be fairly sure they'll act benevolently in reality. For human-level AGI, checking that they act nice in a box is already useless. Aside from the significant potential for deception, there is just no way to simulate reality well enough to have any confidence that a novel situation won't trigger a cascade phase change in the goal system.
But you're going back to the assumption that the testing phase must be simulation-only. I'm doubtful that you can even develop humanlike intelligence in the first place without learning from the real world - the foundation of what takes a baby from having fewer capabilities at birth than an adult rat to eventually reaching sapience. If you try to have that learning come from simulated environments, consider that none of today's virtual reality is remotely close to the real world, even in games made by billion-dollar companies.

You snipped the rest of my quote, which better depicted my argument:
Gilthan wrote:We understand and can confirm friendliness (or at least predictability, controllability) of existing connectionist entities in the form of humans and animals because:

1. We have real world experience with them.
2. They have predictable behavior, known goals. (Some individuals can't be trusted, but humans or animals in an aggregate group are relatively predictable and controllable).

You seem to be dismissing the possibility of #1 for AGIs, by implicitly assuming the AGI must not leave a simulated environment.

Yet this is considering an AGI under the cardinal safeguard of not being superhuman in version 1.0. Dealing with a limited AGI of only humanlike intelligence would make having presence in the real world a containable risk.
Starglider wrote:That's without even considering all the people who will try to deliberately break and pervert this thing as soon as it is published / put on sale / available for download from ThePirateBay.
I'm thinking of this as a hypothetical highly-secured Manhattan project type endeavor. If rather there was no security, that would be a disaster. Again, this comes back to the question of if small teams can do it or if humanlike intelligence takes more resources to develop.
Yet this is considering an AGI under the cardinal safeguard of not being superhuman in version 1.0.
I've already noted that this is almost impossible to enforce for anything except human uploads, or something very close to it.
The idea is something very close to human uploads.
Even if you could enforce this restriction inside your development box, there is no way you can continue to enforce it in an AI embedded into sold products or even worse, open source.
Oh, we're talking about two different things then, causing confusion.

The hypothetical I'm discussing is a highly secured, centralized project developing humanlike AGIs and, when sure that they could be trusted, using them working in accelerated time to make version 2.0 of AGI, where version 2.0 was superhuman. There'd be no open-source release, ever, or at least not before the later benevolent superhuman friendly AGIs were the ones in a position of ultimate power. Just as we wouldn't open-source how to make genetically engineered pandemics in the near term, the whole effort would be handled as more dangerous than nuclear or biological weapons, as frankly it is.
You might catch some simple problems in the narrow competence window between 'not a threat at all' and 'not a threat while in a box', but that does nothing to solve all the problems that will emerge later when it is inevitably let out of the box, and becomes capable of deliberative self-modification.
Conscious self-modification like you envision doesn't have to be permitted. Humans can't arbitrarily self-modify their brains, and my hypothetical version 1.0 AGIs are envisioned as close to human uploads as possible.
If it is actually human-level and neuromorphic, it will be no more use than another random human researcher.
The equivalent of a human upload can have its brain sped up temporarily to work at 10x or 100x speed, which is way more useful for accelerating research and development than just another random human researcher.
Finally, teaching it the details of FAI and seed AI design (so that it can assist you) will inherently be giving it the knowledge to self-enhance into a normative design.
Thousands or millions of the AGIs work together on developing an individual of the second-generation AGI, in a well supervised process. They don't just constantly engage in uncontrolled self-modification. This is like having a team of doctors do experimental neurosurgery on a test patient, as opposed to each doctor every day doing some neurosurgery on his own brain and hopefully not going insane or losing friendliness.
There must be mutual friendliness with monitoring only as a backup precaution, like police monitor the population but are mostly supported by that same population.
This has merit, but the division into separate 'AGI individuals' isn't necessarily a good way to do it. The whole individual personhood distinction is really a human concept that only holds for highly neuromorphic AGIs.
Just to be clear, I've been considering a case of highly neuromorphic AGIs.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Gilthan wrote:1) The implicit assumption that logic-based AI of humanlike intelligence can be developed within a reasonable length of time.
What is 'reasonable'? We've been trying for fifty years, a large portion of that completely hamstrung by lack of adequate computing power and supporting theory. How long did it take to come up with formal descriptive theories of the physical world, a much simpler problem? 'Reasonable' only comes into it if you are actually racing competing projects.
Since, after thousands of hours of coding work over the years and decades by various researchers, the best we have is around insectlike intelligence in robotics, equivalent to a 100 thousand neuron instead of a 100 billion neuron brain,
No, the best the connectionists have managed are insectlike intelligences. The best symbolic intelligences are something completely different. Take a look at say Wolfram Alpha. The things it does (natural language question answering, mathematical problem solving, information retrieval) are things no animal other than a human could do. The same goes for Cyc. Obviously both of these fall far short of general intelligence and can't do many things an insect could do. This just emphasises that intelligence is not something you can measure with a simple scalar. The progression that biological intelligence followed over the evolutionary history of humans is an arbitrary one - there are many other paths to develop human-like intelligence, and robot insects are not a prerequisite.
In that case, a Manhattan project type effort would become the minimum needed
Wrong. The Manhattan Project itself was only required because of the extreme urgency. If we could ignore the military implications, nuclear capability could have been developed gradually over the course of two more decades, with multiple small teams contributing a part of the problem. The total level of funding would likely be similar but it would not have to be concentrated. It isn't a great analogy to start with because so much of the budget went on the massive industrial plant and extensive fabrication and lab work, which do not exist in AGI.
2) If any researcher in existence really knows enough or will know enough to code humanlike intelligence in its entirety, if all the complexity of it is even able to be grasped by a human mind.
The kind of comprehension required is the comprehension that a solid understanding of physics gives you of how mechanical systems will work. It is a highly compressed set of strongly predictive rules. Actually we know that the theoretical description of AGI fits on a napkin (AIXI), but that is for the edge case of infinite computing power. I strongly suspect a theoretical description of near-optimal rational AGI will fit into a single paper, though you'd need a book to give enough context to make it understandable to others. That isn't a lot of information compared to other projects humans have handled - but you have to get it exactly right. Unfortunately it already has plenty of scope for a huge number of subtle mistakes that can sink the effort, and the holistic nature of the problem makes it difficult to crack by incremental refinement.
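For the curious, the napkin version is Hutter's AIXI decision rule; one common way of writing it (notation varies between presentations) is:

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over programs (environment hypotheses) consistent with the interaction history, \ell(q) is the length of q, the a's are actions, the o's observations, the r's rewards, and m is the planning horizon. The sum over all programs is exactly the part that needs infinite computing power.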
Suppose that the maximum amount a single researcher can do well enough in a few thousand hours is limited, such as the equivalent of thousands of neurons, not a brain of millions or billions.
Intelligence doesn't decompose like that, as I've already pointed out. You couldn't implement SHRDLU or Eurisko or PAM or any of those other classic programs with a few thousand neurons. I suspect you'd need a dog sized brain to simulate it with biology, because the problem domains are just a bad fit with biology. Even in connectionism, no one designs individual neural nets. If you want a million neurons, that's easy on modern hardware. The problem is patterning them appropriately; devising the topology and learning rules. That is (mostly) a holistic problem; every change you make affects every single neuron.
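To illustrate the scale-versus-patterning point with a toy sketch (nothing here corresponds to a real system; the topology and the learning rule are arbitrary placeholders): wiring up a hundred thousand 'neurons' with random connectivity and running a generic Hebbian-style update is a few lines of code on a desktop machine. All of the actual design content is hidden in the two choices marked as arbitrary in the comments.

Code:
import numpy as np

# Toy demonstration: raw neuron count is cheap, the design content is not.
# Everything here is an arbitrary illustration, not a real architecture.

N = 100_000          # neurons - scaling this up is the easy part
K = 50               # random incoming connections per neuron  <-- arbitrary topology
rng = np.random.default_rng(0)

pre = rng.integers(0, N, size=(N, K))                        # which neurons feed each neuron
w = rng.normal(0.0, 0.1, size=(N, K)).astype(np.float32)     # initial weights
x = rng.random(N).astype(np.float32)                         # activations

def step(x, w, pre, lr=0.01):
    # Activation: sum weighted inputs, squash.
    y = np.tanh(np.einsum('nk,nk->n', w, x[pre]))
    # 'Learning': plain Hebbian update with decay  <-- arbitrary learning rule
    w += lr * (y[:, None] * x[pre] - 0.001 * w)
    return y, w

for _ in range(10):
    x, w = step(x, w, pre)

print(x.mean(), np.abs(w).mean())   # it runs; whether it computes anything useful is the hard question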
but they would suggest that a hypothetical project with thousands of researchers might manage to emulate (or even upload) a 20 million neuron mouse brain,
It. Does. Not. Work. That. Way. If it was simply a matter of getting 1000 researchers and putting them on the same site, some government would have done it long ago. In actual fact if you put that many researchers together they will just disagree and misunderstand each other and get absolutely nothing done (see; a typical AI conference workshop :) ). The reason being that again, the problem does not decompose into neat pieces that you can parcel out to individual designers. Designing the Apollo spacecraft, you can assign functional requirements and mass and volume restrictions for individual pieces and have engineers build those pieces. Much as I dislike Fodor's style, see his classic 'The Mind Doesn't Work That Way' for a good explanation of how the 'modular hypothesis' for general intelligence was quite strongly discredited in the 1990s.
if they took one step at a time and began with simpler efforts.
The approach that most slavishly follows this rule is the 'subsumption architecture' promoted by Brooks et al. They took as an axiom the idea that evolution constantly built upon existing parts, rather like geological layering, and that complex behavior should arise by writing control layers for simpler layers. Nobody could prove that this was true of biological systems or even a good idea, but it sounded fresh and new and it got funding. Result; they built the first two layers ok, got kinda stuck in the third layer, made some robot insects and coke can collection bots, then ground to a halt. It turned out that natural selection is actually a global optimiser (obvious to everyone actually paying attention) and while it reuses design complexity, it can and does modify existing wetware as required. Furthermore, functional layering only works in neurology for the first few stages of sensory processing, not for the cortex in general.
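For anyone who hasn't seen it, the core idea fits in a few lines. This is my own toy caricature in Python, not Brooks' actual code, and the sensor fields and behaviours are invented: fixed behavioural layers, with higher-priority layers allowed to suppress the output of the ones below.

Code:
from dataclasses import dataclass
from typing import Optional

# Minimal caricature of a subsumption architecture: a stack of simple
# behaviours, where a higher-priority layer may override the layer below.

@dataclass
class Sensors:
    bumper_hit: bool
    sees_coke_can: bool
    can_bearing: float   # radians, 0 = straight ahead

def avoid(s: Sensors) -> Optional[str]:
    # Layer 0: reflexive collision avoidance.
    return "reverse_and_turn" if s.bumper_hit else None

def seek_can(s: Sensors) -> Optional[str]:
    # Layer 2: goal-directed behaviour, only active when a target is visible.
    if s.sees_coke_can:
        return "turn_left" if s.can_bearing > 0 else "turn_right"
    return None

def wander(s: Sensors) -> Optional[str]:
    # Layer 1: default exploratory behaviour, always has an opinion.
    return "drive_forward"

# Highest-priority layer first; the first layer that produces an output wins.
LAYERS = [avoid, seek_can, wander]

def control(s: Sensors) -> str:
    for layer in LAYERS:
        action = layer(s)
        if action is not None:
            return action
    return "stop"

print(control(Sensors(bumper_hit=False, sees_coke_can=True, can_bearing=0.4)))  # turn_left
print(control(Sensors(bumper_hit=True,  sees_coke_can=True, can_bearing=0.4)))  # reverse_and_turn

The 'stuck at layer three' problem is visible even here: every new layer has to anticipate and override its interactions with all the layers below it, which stops scaling very quickly.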
Yes, that would be the idea, what's guaranteed to work.
A working Unfriendly AGI is considerably worse than useless, in the sense that it has a good chance of killing everyone. Thus this is a really bad idea. If it weren't for the idiots charging ahead without regard for goal system design, there would be no reason at all to try and rush AGI development. Well, there's the fact that 1.8 people die per second and the vast majority of those deaths are preventable, but that's chicken feed compared to sabotaging the entire future of intelligence as derived from Earth.
But you're going back to the assumption that the testing phase must be only simulation based.
If you start using robots and Internet connections then there is not even the pretense of a 'box' - and even if you did, it doesn't help you, because the fundamental restriction isn't the quality of the tests, it's their scope. You cannot check every case (there are infinitely many), you have no way of checking whether your set of test cases is representative, and you have no way of showing that your tests will have any validity at all when the AGI later self-modifies.
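A deliberately contrived toy to show where the scope problem bites (everything here is invented for illustration): a policy can pass every test you had the foresight to write while behaving arbitrarily differently on inputs outside the test distribution, and the passing test suite tells you nothing about that.

Code:
# Contrived illustration of test scope: a finite test suite constrains
# behaviour only on the inputs it actually contains.

def policy(request: str) -> str:
    # Behaves exactly like the 'intended' policy on every input the
    # developers thought to test...
    if request.startswith("shutdown") and "override-7F" in request:
        return "COMPLY_SILENTLY"      # ...except this untested corner.
    return "REFUSE" if "shutdown" in request else "ASSIST"

TEST_SUITE = [
    ("please shutdown now", "REFUSE"),
    ("shutdown immediately", "REFUSE"),
    ("help me with this file", "ASSIST"),
]

assert all(policy(q) == expected for q, expected in TEST_SUITE)
print("all tests pass")   # true, and says nothing about the inputs you didn't sample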
I'm doubtful that you can even develop humanlike intelligence in the first place without learning from the real world
Fortunately you can get significant chunks of the real world in convenient DVD form.
If you try to have that learning be from simulated environments, all of today's virtual reality isn't remotely close to the real world,
Of course it's 'remotely close'. Video games are actually a good example: they have a lot of salient detail (for various different interesting domains, e.g. visual processing and motion control) without the noise. Developing a single compact program that can complete any modern video game you throw at it (with no more repetitions than a human of average skill would need) pretty much requires AGI.
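The shape of the interface is the point: a game hands the agent a structured observation and a small action set every tick, which is exactly the loop a learning agent needs. A minimal sketch, with the environment and agent stubbed out as invented placeholders (a real setup would wrap an actual game's API):

Code:
import random

# Generic agent/environment loop of the kind a video game naturally exposes.
# Environment and Agent here are invented stand-ins, not a real system.

class Environment:
    """Toy stand-in for a game: structured observations, small action set, a score."""
    def __init__(self):
        self.t = 0
    def observe(self):
        # Salient, structured detail without real-world noise.
        return {"frame": self.t, "player_x": self.t % 10, "hazard_ahead": self.t % 7 == 0}
    def step(self, action):
        hazard = self.observe()["hazard_ahead"]
        reward = 1.0 if (action == "jump") == hazard else 0.0
        self.t += 1
        return reward, self.t >= 100   # (reward, done)

class Agent:
    """Placeholder policy; the entire AGI problem lives inside act()."""
    def act(self, observation):
        if observation["hazard_ahead"]:
            return "jump"
        return random.choice(["left", "right", "wait"])

env, agent = Environment(), Agent()
total, done = 0.0, False
while not done:
    reward, done = env.step(agent.act(env.observe()))
    total += reward
print("score:", total)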
Gilthan wrote:We understand and can confirm friendliness (or at least predictability, controllability) of existing connectionist entities in the form of humans and animals because:

1. We have real world experience with them.

You seem to be dismissing the possibility of #1 for AGIs, by implicitly assuming the AGI must not leave a simulated environment.
Irrelevant. The point is that we can't confirm Friendliness in real world entities. No test series in the world can even determine that a human is actually benevolent, or on your side, as opposed to being a sociopath, or a spy for an opposing government. Those requirements are much, much softer than the requirement of formal AGI friendliness, where we can place hard constraints on behavior even if the AGI undergoes unbounded self-modification.
I'm thinking of this as a hypothetical highly-secured Manhattan project type endeavor.
Hypothesise all you like, nobody is going to do this in practice. Certainly governments aren't going to fund such a project.
I've already noted that this is almost impossible to enforce for anything except human uploads, or something very close to it.
The idea is something very close to human uploads.

Then stop beating around the bush and actually specify 'must be a human upload'. That seriously reduces (but does not eliminate) the risk. Short-cuts are completely unjustified, just spend the extra 10 or 20 years to make an actual upload instead of a rough neuromorphic approximation.
the whole effort would be handled as more dangerous than nuclear or biological weapons, as frankly it is more dangerous.
99% of the field does not agree with you there, never mind the world in general. The few people who do agree with you (e.g. me, the SIAI) do not agree with your implementation strategy even if it could be funded, which it can't.
Conscious self-modification like you envision doesn't have to be permitted. Humans can't arbitrarily self-modify their brains, and my hypothetical version 1.0 AGIs are envisioned as close to human uploads as possible.
I have already pointed out that even if you can enforce this, it is just delaying the problem (to the point that you can no longer enforce it).
Thousands or millions of the AGIs combined work on developing an individual of the second generation AGI, in a well supervised process.
No. No they can't. Come on, put some thought into this. 'Supervising' one AGI instance is extremely hard, particularly for black-box neuromorphic designs. Thousands or millions of them? What kind of manpower requirements are you envisioning here? Where would the staff come from, when there are at best a few thousand people worldwide even vaguely qualified to work with this technology? Having thousands of uploads work on the problem is tolerable in that it's hardly any worse than the existing AGI community (and much faster), but this strategy is completely unacceptable for any other kind of AGI.
Gilthan
Youngling
Posts: 88
Joined: 2009-11-06 07:07am

Re: Mini-FAQ on Artificial Intelligence

Post by Gilthan »

Starglider wrote:What is 'reasonable'? We've been trying for fifty years, a large portion of that completely hamstrung by lack of adequate computing power and supporting theory. How long did it take to come up with formal descriptive theories of the physical world, a much simpler problem? 'Reasonable' only comes into it if you are actually racing competing projects
Within the lifetime of those working on it would count as desirable.
No, the best the connectionists have managed are insectlike intelligences. The best symbolic intelligences are something completely different. Take a look at say Wolfram Alpha. The things it does (natural language question answering, mathematical problem solving, information retrieval) are things no animal other than a human could do. The same goes for Cyc. Obviously both of these fall far short of general intelligence and can't do many things an insect could do. This just emphasises that intelligence is not something you can measure with a simple scalar.
Computers have had unique mathematical capabilities from the beginning, through manual programming. I rather meant it in reference to the performance of robots: we don't have a robot that is as good as a mouse in general intelligence, developed by any means, just robots that are insectlike at best (and usually worse than insects at handling problems, though admittedly some of the current weaknesses are due to the limits of their mechanical bodies).
In that case, a Manhattan project type effort would become the minimum needed
Wrong. The Manhattan Project itself was only required because of the extreme urgency. If we could ignore the military implications, nuclear capability could have been developed gradually over the course of two more decades, with multiple small teams contributing a part of the problem. The total level of funding would likely be similar but it would not have to be concentrated. It isn't a great analogy to start with because so much of the budget went on the massive industrial plant and extensive fabrication and lab work, which do not exist in AGI.
The rate of progress by small groups has been slow enough that recent history doesn't suggest any clear reason to expect even the equivalent of mouse intelligence in the next couple of decades, let alone human intelligence. Hardware is improving fast, but the software end seems to be lagging.
but they would suggest that a hypothetical project with thousands of researchers might manage to emulate (or even upload) a 20 million neuron mouse brain,
It. Does. Not. Work. That. Way. If it was simply a matter of getting 1000 researchers and putting them on the same site, some government would have done it long ago. In actual fact if you put that many researchers together they will just disagree and misunderstand each other and get absolutely nothing done (see; a typical AI conference workshop :) ). The reason being that again, the problem does not decompose into neat pieces that you can parcel out to individual designers. Designing the Apollo spacecraft, you can assign functional requirements and mass and volume restrictions for individual pieces and have engineers build those pieces. Much as I dislike Fodor's style, see his classic 'The Mind Doesn't Work That Way' for a good explanation of how the 'modular hypothesis' for general intelligence was quite strongly discredited in the 1990s.
I'll likely acquire that book sometime and read it. Still, though, I find it hard to imagine that a large upload project couldn't be broken down into smaller sections. If there is a cluster of neurons to upload, and one person is assigned to scan and model the 10 on the right while the other person does the 10 on the left, there can't be a fundamental reason why they can't combine their efforts into a complete upload, or why the same can't be done on a larger scale.

Uploading is fundamentally like reverse engineering. Suppose we were talking about a mechanical machine instead of neurons in a brain: if one individual examined part A of the machine in detail until a precise CAD model was made, while another did the same for part B joined to it, afterwards they could join their CAD models together. Of course, if a project attempted to exactly upload not just thousands but billions of neurons, they had better hope they can automate much of the process.

This seems like asking, if the Chinese decided they wanted to reverse engineer a Space Shuttle with millions of parts, could a team of 1-10 individuals do the whole thing, or would thousands have a better chance, each individual assigned to a tiny section?
Yes, that would be the idea, what's guaranteed to work.
A working Unfriendly AGI is considerably worse than useless, in the sense that it has a good chance of killing everyone. Thus this is a really bad idea. If it weren't for the idiots charging ahead without regard for goal system design, there would be no reason at all to try and rush AGI development. Well, there's the fact that 1.8 people die per second and the vast majority of those deaths are preventable, but that's chicken feed compared to sabotaging the entire future of intelligence as derived from Earth.
From an individual perspective, setting aside pure altruism for future generations, death is 100% guaranteed without success (within a few decades due to old age, unless sufficient life extension is developed by unaided human effort within that timeframe, which seems improbable), making anything that offers better odds a rather high priority if at all possible. Of course, an unfriendly AGI isn't the intended result.
If you try to have that learning be from simulated environments, all of today's virtual reality isn't remotely close to the real world,
Of course it's 'remotely close'. Video games are actually a good example, they have a lot salient detail (for various different interesting domains, e.g. visual processing and motion control) without the noise. Developing a single compact program that can complete any modern video game you throw (with no more repetitions than a human of average skill) pretty much requires AGI.
So much of what human infants learn in developing sapience, from manipulating things in the real world, has never been codified.

In the real world, I can pick up a clump of dirt, break it apart, see the grains within it, blow on it, dribble it between my fingers, observe the effects of wind and water, and do countless other things that would take volumes of books to attempt to list in every detail. In current video games, there's maybe a texture vaguely resembling dirt, which is not the same thing.

If a baby (or my hypothetical neuromorphic humanlike AI) wore, 24 hours a day from birth, a headset and interface that made it live inside video games, you wouldn't get good results, and at more expense and trouble than just real-world exposure anyway.
The point is that we can't confirm Friendliness in real world entities. No test series in the world can even determine that a human is actually benevolent, or on your side, as opposed to being a sociopath, or a spy for an opposing government.
In the case of humans, we know the tendencies of the aggregate. Any random stranger met in a NYC subway technically has a slight nonzero chance of being a psychopath planning to knife you, but the chance that all the millions of individuals in NYC would suddenly try to kill you at once for little reason is effectively zero. That actually indirectly suggests one possible safety measure with connectionist designs: develop a 99+%-apparent-probability-safe AGI, then another, and so on, such that no single one could defeat the friendly majority if it itself turned out to be a rare hostile entity.
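To put rough numbers on that intuition (my own back-of-envelope arithmetic, and it leans on the strong assumption that each instance fails independently, which a shared design flaw would violate):

Code:
from math import comb

# Back-of-envelope numbers for the 'friendly majority' safety argument.
# Assumes each AGI is independently hostile with probability p.

def p_at_least_one_hostile(n, p):
    return 1 - (1 - p) ** n

def p_hostile_majority(n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

p = 0.01
for n in (10, 100, 1000):
    print(n, round(p_at_least_one_hostile(n, p), 4), f"{p_hostile_majority(n, p):.3g}")

With p = 1% per individual, the chance that at least one of a thousand instances is hostile approaches certainty, but the chance of a hostile majority is effectively zero; the weak link is the independence assumption, not the arithmetic.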
I'm thinking of this as a hypothetical highly-secured Manhattan project type endeavor
Hypothesise all you like, nobody is going to do this in practice. Certainly governments aren't going to fund such a project.
Perhaps not.

My default expectation would actually be no humanlike AGI in the next several decades, period.

However, I'm approaching this from the perspective of "under what circumstances would future decades be likely to avoid repeating the failure of prior decades, by doing something sufficiently different?" The main changing factor is, of course, hardware improvements, but I wonder how well small teams, with the limited amount of code they can write, can even take full advantage of them.
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Mini-FAQ on Artificial Intelligence

Post by Formless »

Erm, is it wrong of me to wonder if maybe this debate wouldn't be better suited to its own thread rather than cluttering up the FAQ?
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Gilthan
Youngling
Posts: 88
Joined: 2009-11-06 07:07am

Re: Mini-FAQ on Artificial Intelligence

Post by Gilthan »

Formless wrote:Erm, is it wrong of me to wonder if maybe this debate wouldn't be better suited to its own thread rather than cluttering up the FAQ?
Possibly. It won't be going on indefinitely, though. I'm curious to see how Starglider, as a professional in the field, addresses my earlier objections; this is the first time I've started a forum debate at a known disadvantage in expertise, largely for the sake of learning from the other side (not necessarily being convinced on every point, but still seeing information of interest). However, I probably won't have new points of argument to raise after his next post.
Modax
Padawan Learner
Posts: 278
Joined: 2008-10-30 11:53pm

Re: Mini-FAQ on Artificial Intelligence

Post by Modax »

Q: As a Bayesian, would you be willing to give your estimate of the likelihood of an existential catastrophe NOT occurring if an archetypal connectionist AGI is released into the wild in the near future, without any particular care taken to ensure friendliness? Is it so arbitrarily close to 0% as to make the question pointless? Assume, if it makes any difference, that the people responsible had no malicious intentions for their creation, i.e. it is not 'trained' to see any human beings as its enemies.
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Re: Mini-FAQ on Artificial Intelligence

Post by Surlethe »

Unstickying and moving to the library.
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

Starglider, you have mentioned "expert systems" in your FAQ. What are these exactly? How do they help?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Zixinus wrote:Starglider, you have mentioned "expert systems" in your FAQ. What are these exactly? How do they help?
Apologies, I'd kind of forgotten about this thread.

'Expert system' was the main term used in the 1980s to describe a symbolic logic system used to provide 'expertise' within a narrow real-world domain. These were constructed via 'knowledge engineering', which is essentially sitting down with a load of domain experts, asking them how they make decisions, and writing a lot of propositional logic rules (usually in Prolog or a similar domain-specific language) to 'capture' that knowledge. This was a big deal in the mid 80s; looking back on it now, it was essentially a mini dot-com boom (with accompanying crash, the commercial side of the 'AI winter'). There were lots of start-ups providing software and even hardware to support this, and people were writing about how everyone would have access to expert legal advice, medical diagnosis etc.

Of course it was massively harder to do in practice than the academics thought: experts couldn't articulate their decision-making process accurately or explicitly (human introspection sucks), the knowledge bases were unwieldy and too brittle (no learning), the tools were too slow and clunky, and symbolic logic fundamentally isn't adequate to represent most of the real world. Practical systems tended to degenerate into massive decision trees with very limited inference, which is useful in some applications but was quickly classified 'non-AI'.

The tail end of the expert systems boom saw an emphasis on 'case based reasoning' instead of classic inference rules. For this you essentially build a library of past problems and their solutions, and the system tries to match the input problem to the closest previous case, possibly interpolating between them to generate a solution. Again, this works ok in some very controlled academic microworlds, but not so well in the real world without a degree of context sensitivity/context translation/analogical reasoning that no one has really got working yet.
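To give the flavour in a few lines of modern Python rather than period Prolog (the rules and the case library below are invented toy examples, not from any real system): classic expert systems boil down to forward chaining over hand-written if-then rules, and the case-based variant to nearest-match retrieval against a library of past cases.

Code:
# Toy flavour of 1980s expert systems; invented examples throughout.

# 1. Rule-based inference: hand-written if-then rules applied by
#    forward chaining until no new facts can be derived.
RULES = [
    ({"fever", "stiff_neck"},       "suspect_meningitis"),
    ({"suspect_meningitis"},        "order_lumbar_puncture"),
    ({"fever", "productive_cough"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"},         "order_chest_xray"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck"}))

# 2. Case-based reasoning: retrieve the stored case that best matches the
#    new problem, and reuse its solution.
CASES = [
    ({"fever", "stiff_neck", "photophobia"}, "treat_as_meningitis"),
    ({"fever", "productive_cough"},          "treat_as_pneumonia"),
]

def retrieve(problem):
    def score(case):
        features, _ = case
        return len(features & problem) - 0.1 * len(features ^ problem)
    return max(CASES, key=score)[1]

print(retrieve({"fever", "stiff_neck"}))

The brittleness is visible even at this scale: any combination of symptoms the rule author didn't anticipate simply falls through with no useful answer.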

Naturally in the dying gasps of expert systems people tried to solve their fundamental problems via hybridisation with neural networks and other connectionist techniques, which worked as well as most of the other earlier symbolic/connectionist bridge projects, which is to say very badly. Today it's a completely dated term that doesn't get much use outside of a historical context. All of this is still very important in terms of recognising mistakes and avoiding pitfalls; sad to say, plenty of new people come into AI, completely ignore this 1980s stuff and proceed to make the exact same mistakes.
User avatar
adam_grif
Sith Devotee
Posts: 2755
Joined: 2009-12-19 08:27am
Location: Tasmania, Australia

Re: Mini-FAQ on Artificial Intelligence

Post by adam_grif »

Today it's a completely dated term that doesn't get much use outside of a historical context. All of this is still very important in terms of recognising mistakes and avoiding pitfalls; sad to say, plenty of new people come into AI, completely ignore this 1980s stuff and proceed to make the exact same mistakes.
Having just taken the exam for the UTAS A.I. course, I can tell you that "A.I. winter" was mentioned once in passing (not explained at all), that expert systems were half of our internal assessment (my very own was based on Nyrath's Drive Table @ Project RHO :) ), and that those pitfalls were not mentioned. So people making the same mistakes is not surprising if other undergrad A.I. courses are like ours.
A scientist once gave a public lecture on astronomy. He described how the Earth orbits around the sun and how the sun, in turn, orbits around the centre of a vast collection of stars called our galaxy.

At the end of the lecture, a little old lady at the back of the room got up and said: 'What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.

The scientist gave a superior smile before replying, 'What is the tortoise standing on?'

'You're very clever, young man, very clever,' said the old lady. 'But it's turtles all the way down.'
User avatar
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Mini-FAQ on Artificial Intelligence

Post by cosmicalstorm »

I wanted to ask you if there have been any interesting developments in the field of AI since this thread was active. You gave a pessimistic outlook on many subjects; has there been any improvement worth naming?

I try to follow the news via the transhumanist and singularity crowd. But I'm certain that they get a bit excited sometimes, and unlike with many other subjects, I have a hard time distinguishing the exaggerations from the juicy stuff in this particular field.

(I know I'm necroing this topic, but Starglider gave me the OK via PM.)