Mini-FAQ on Artificial Intelligence

Important articles, websites, quotes, information etc. that can come in handy when discussing or debating religious or science-related topics

Moderator: Alyrium Denryle

User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: Mini-FAQ on Artificial Intelligence

Post by Sarevok »

This is a great thread but I have one complaint. Everyone is just asking what "AI" would do. But what they really mean is what would be the most efficient way to solve that problem. This is dumb. As Starglider would surely agree, there are many approaches to non-human intelligences. Each has different quirks and ways of tackling problems. Shouldn't people be more specific? AI is just artificial intelligence; it is not The One True Algorithm for solving any problem in the shortest time to produce the most optimal solution. People really need to rephrase their questions and be more specific here. Let me be the first.

Starglider:
Software today is written by highly paid programmers. Decades ago software was written by arranging machine code by hand. Then we got assembly languages that automated that task. Afterwards came high-level languages. But now we have come full circle. Even with the best programming practices and solid teams, today's software is highly complex. It can exceed millions of lines and require years of time and millions of dollars to develop. Why can't AI be applied to this problem? Can we expect an AI solution that, like the first high-level compiler, magics away a lot of the grunt work? Imagine a system where, if you define a problem to the computer, it automatically writes the program for you. Is it possible to expect such a thing in the next few years? If it is, then along with voice and facial recognition it would be one of the greatest mainstream applications of AI in the near term.
I have to tell you something everything I wrote above is a lie.
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

Yes, I do admit that my last post especially was a bit of a no-brainer: an AI is rational. If it is rational for an AI to sacrifice itself to attain its goal, and it is unable to back itself up, it will do so.

Ok, I'll try something else, something more basic first:

Q: In your FAQ you mention several types of AI development: connectionist, simulationist, etc. You advocate the use of a transparent, rational design. Can you give a rough rundown of what types of approaches exist, as well as brief pros and cons for each?

Q: You mention friendliness problems and how they are important, as an AI can easily detect humans as a treat. Now, I can see how an AI would see humans as a treat to itself, but... why would it be afraid of humans in the first place? After all, it isn't a product of natural evolution, so self-preservation shouldn't be in its full force. Why and how would it turn hostile to humans?

As some have established, the Skynet scenario seems a bit illogical and counter-productive. Would a runaway AI be more logical?

Q: Could AIs be irrational but still stable?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: Mini-FAQ on Artificial Intelligence

Post by Sarevok »

Zixinus: An AI would never consider a human as a delicious "treat" unless it was one of those flesh-eating robots. I think the word you are looking for is "threat".
I have to tell you something everything I wrote above is a lie.
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

Zixinus: An AI would never consider a human as a delicious "treat" unless it was one of those flesh-eating robots. I think the word you are looking for is "threat".
:banghead: :lol: :banghead:

Gah, I mixed up the spelling. Apologies, I wasn't paying attention.
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: Mini-FAQ on Artificial Intelligence

Post by Sarevok »

Another potential application for AI. Ordinary spellcheckers are useless against incorrect grammar. :)
I have to tell you something everything I wrote above is a lie.
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Xuenay »

Starglider: What's your opinion on the technical prospects of OpenCog Prime? (Somebody already asked this in the "robots learn to lie" thread, but I think you misread the question as you replied with a comment on OpenCyc instead. OCP is Goertzel's Novamente-based project.)

Incidentally, if people are thinking about writing AI fiction, the linked OCP wikibook is a pretty neat source for ideas about how an AI might work. I've found it inspiring myself.

As Starglider has mentioned the tendency of AIs to converge towards being expected utility maximizers, I thought that people might be interested in these two papers about the subject. The first one is shorter (11 pages), the second one is longer (48 pages) and somewhat more technical.

Paper on the basic AI drives
Paper on the nature of self-improving intelligence
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Apologies again for the slow response. I had a short-notice business trip and then another trip to Germany. Still, I got something working:

Code: Select all

# Start_Test : logic/chocolate_teapot
> Would a chocolate teapot work?
# Dictionary file config\dictionary\basic_english.txt imported.
* 3994 lines, 0 errors - > 1204 words, 2849 senses.
# Concept block 'basic_physics' imported.
* [CLAUSE:0:s0:p0:QUERY]  [would:AMBIGIOUS:PRONOUN_INTERROGATIVE:NEUTER]
[would:AMBIGIOUS:VERB_GENERAL:PRESENT:STATIVE]
[a:ARTICLE_INDEFINITE:SINGULAR]  [chocolate:AMBIGIOUS:NOUN_CONCRETE:SINGULAR]
[chocolate:AMBIGIOUS:ADJECTIVE_DIRECT]  [teapot:NOUN_CONCRETE:SINGULAR]
[work:VERB_MAIN:PRESENT:DYNAMIC]
* (query_likelihood (verb_phrase:function:property:typical (noun_phrase:teapot:property:chocolate)))
# PHYSICS_ENGINE>0>INIT_OBJECT:teapot
# PHYSICS_ENGINE>0>INIT_OBJECT:teabag
# PHYSICS_ENGINE>0>INIT_OBJECT:water
# PHYSICS_ENGINE>0>START_SIMULATION
# PHYSICS_ENGINE>0>ASSUMPTION_FAIL:containment(teapot,water)
# PHYSICS_ENGINE>0>END_SIMULATION
* (confirmation:negative) - ((verb_stative:contain:future:negated (noun_phrase:water)) (conjunction:caused_by) (verb_stative:is:future (noun_phrase:teapot) (noun_phrase:liquid))) (verb:create:future:negated (noun_phrase:tea))
# SIMPLE_COGNITION_DONE : 13 milliseconds
< No. Water not contained because teapot will be liquid, tea not created.
:)
Starglider wrote:
Q: If you could, what would be the most persistent and often repeated mistakes writers make when writing AIs?
I could write a book on that, and I wouldn't even have to rely on personal opinion; a simple objective survey of five decades of mostly failed projects turns up plenty of commonalities. However it's 1am and I have to go, so maybe another time.
Looks like I misread that as 'mistakes people make when coding AIs'. The overwhelming mistake that writers make is simple anthropomorphisation; pretty much the same mistake they make when writing aliens (some would argue that it's a deliberate, even justifiable mistake).
Zixinus wrote:
You can deliberately deviate from rationality by inserting arbitrary axioms - e.g. religion - but this is just as much a form of mental illness for AGIs as it is for humans.
Can this only be done by stopping the AI, freezing its self-programming ability and meddling with its code?
Not necessarily. That's the obvious way, but if you put 'believe everything I say with 100% certainty' into the AI's goal system (a very plausible mistake for the designers to make), then you can tell it to believe in god and it will.
Gigaliel wrote:Well, this isn't really a question and more of a "what is your informed idle speculation", since it seems rather impossible to answer for sure, but what does an AI do after solving the finite task that was its only goal? Like proving some obscure mathematical conjecture.
Rational AIs have 'globally scoped' goal systems, such that all goals derive utility from root goals. If the root goal has been completed, then the utility differential goes to zero and no further actions will be taken. This is quite easy to observe even in existing systems that use this goal structure (e.g. the one I am working with). However you are correct to point out that if the AI has no reason to care about the state of the universe after its goal has been achieved, then it may well leave autonomous or semi-autonomous subprocesses running; there is no reason to stop them. Under some conditions this may cause the original closed-goal AGI to spawn open-ended child AGIs, such as an AGI trying to solve some hard compute problem that leaves behind Berserker von Neumann machines trying to convert the universe to computronium. The compute power isn't needed any more, after the task is solved and the parent AGI shuts down, but it may not have bothered to code this into its CPU-building robots (because there is literally no motivation to do so).
Does it just crash or quit like any other program? Does it just sit there?
The exact behavior depends on the code design; the most likely general behavior would be to sit in an endless idle loop; research systems are usually designed to terminate, for convenience. Either way it doesn't really matter. I'd note though that you have to be quite careful to make sure that your goal specification is truly closed. If you don't specifically put in overriding termination conditions, it's very easy for there to be some residual utility differential that will continue to drive behavior.

This actually ties in with Zixinus's point; rational AGI designs only decide how to allocate their effort, they do not scale absolute effort. They will pursue the 1000th goal on their to-do list with as much vigor as the 1st one; it may have a mere one-billionth of the utility attached to it that the first does, but once all the others are done, it will get just as much attention and effort as the first one did. Humans do not think like this; roughly speaking, we scale effort to how much we want something, and we have an implicit standard of 'reasonableness' (most of the time) about how much effort is worth expending (due to evolved laziness heuristics). Not only do rational AGIs not have this, even connectionist and evolved designs are unlikely to have it to anything like the degree humans do (and this is assuming they don't immediately become rational on achieving sentience), due to the radically different cognitive environment they were created in.
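
As a toy sketch of the 'globally scoped' goal behaviour described above - effort goes to whatever open goal has the highest remaining expected utility, however tiny, and activity simply stops once no utility differential is left (the goal list and utilities are invented):

Code: Select all

# Toy globally-scoped goal system: always work on whatever open goal has
# the highest remaining expected utility, stop only when none is left.
goals = [
    {"name": "prove conjecture", "utility": 1.0,    "done": False},
    {"name": "tidy scratch files", "utility": 1e-9, "done": False},
]

def utility_differential(goal):
    return 0.0 if goal["done"] else goal["utility"]

while True:
    open_goals = [g for g in goals if utility_differential(g) > 0]
    if not open_goals:
        break                      # zero differential everywhere: idle/halt
    goal = max(open_goals, key=utility_differential)
    print("working (at full effort) on:", goal["name"])
    goal["done"] = True            # stand-in for actually achieving the goal

print("no utility differential left; no further actions")
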
Zixinus wrote:How important would self-preservation be to an AI? I presume that it will be dependent on how much it perceives survival to be necessary to achieve its goals/super-goals?
If you don't explicitly put self-preservation in as a goal (e.g. Asimov's third law), then survival is purely a subgoal of other goals, and yes it will be exactly proportional to the necessity (strictly, expected utility) of the AI being around for those goals to be achieved. However the vast majority of goals are much more likely to be achieved by having an intelligent agent working towards them - as many and as intelligent as possible in fact. The same general logic lends utility to self-preservation, self-replication, self-enhancement and removal of competitors (where practical, in all cases).
Would an AI sacrifice itself (or even just risk permanent death with downtime) if it believes it would help achieve its goals?
Yes. Furthermore AI systems do not have a human-like sense of 'self' unless you specifically put it in; node-copy #44512 certainly won't whine about 'continuity flaws' before sacrificing itself to achieve some greater good (or rather, to achieve something with an expected utility greater than the EU penalty of losing the hardware it was running on). Of course some designers may put in 'self-preservation' as an explicit top-level goal, and many will be silly enough to explicitly code it to refer to the hardware that copy of the AI happens to be running on. As usual (for slapdash goal system design) this produces arbitrary and probably really bad results.
User avatar
Razaekel
Redshirt
Posts: 38
Joined: 2009-01-06 01:20am

Re: Mini-FAQ on Artificial Intelligence

Post by Razaekel »

How is memory for an AI structured? A relational database, or some other method of storing information about connections between pieces of information?
Well, well, what do we have here?
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Re: Mini-FAQ on Artificial Intelligence

Post by Singular Intellect »

I have a question Starglider; you've mentioned things like the potential for a technological singularity to come about via AI and its rapid ascension to a dominant force on the planet, and how if this isn't done right, humans are in a shitload of trouble.

From what I've gathered, this is more of an example of the current human condition versus an AI scenario. But what about enhanced/cybernetic humans being in the picture prior to the emergence of an AI? Or more importantly, what if the emergence of true AI (AGI) is the result of human minds/brains being far more interfaced with technology at the time, and thus deciphering the difference between human and non-human becomes much more difficult? How do you think this would impact the existence of both friendly and unfriendly AI?

Isn't it possible even an AI would have trouble determining what the border is between purely artificial and biological components of itself, especially if its existence came about via human minds being the basis upon which said AI was born?
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

Ok, a few more things I thought up:

Q: Assuming a future where humans are relatively unchanged from the intelligence angle but still use various kinds of AIs, what kind of categorization would you use? Obviously there would be non-sentient AIs, self-modifying AIs, seed AIs that will create more AIs, etc., but say you were a government politician/clerk/whatever and wanted to pass a legal system for managing their rights, how would you go about categorising them?

Q: Would you say that setting the AI to any absolute goal or error would end up inevitably screwing it up (as in causing instability and/or irrational behaviour)?

Q: Say I wanted to understand cognition and how human minds work, both for understanding humans and AIs. What book would you recommend as a general-introduction-for-the-layman (while not focusing on the Singularity)?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

Q: In this webcomic (which also has AIs, very humanised, but a bit out of necessity as the story would fall a bit flat otherwise, and I suspect that there is a reason for it), a biological AI (from a wolf) uses a questionnaire to measure her own changes. Would a real AGI do the same, if self-preservation is one of its goals (and thus drift, instability, etc is to be avoided)?

If so, what would this look like? A separate, low-grade, white-box-style AI whose sole purpose is to monitor an AGI and inform it of changes from its baseline? Or even a copy of the first version of that AGI, so the AGI can compare changelogs?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Sarevok wrote:This is a great thread but I have one complaint. Everyone is just asking what "AI" would do. But what they really mean is what would be the most efficient way to solve that problem.
The space of possible general AI designs is vast, much bigger than the space of minds that could plausibly evolve naturally, which is itself vastly larger and more diverse than the range of minds that we have on earth (of which humans are just one type, albeit the only sapient type). However most of these designs will converge quite quickly once they achieve the ability to redesign themselves, because only a small fraction of those possible minds can approach optimal decision making (given a certain amount of information and computing power). It doesn't take much convergence before strong common features and behavior start to emerge. Of course a few aspects (such as perfect backups and copying) are a direct consequence of the basic hardware no matter what the higher-level design is.
As Starglider would surely agree, there are many approaches to non-human intelligences. Each has different quirks and ways of tackling problems.
The quirks are mainly relevant for current research efforts, and AIs that have been deliberately and effectively constrained to a particular structure (e.g. built out of dedicated artificial neural network chips rather than general purpose processors, and denied any kind of access to general purpose processors - not a likely scenario). Even if the structure is fixed, general intelligence implies the ability to emulate any other structure, and as intelligence increases relative to the scope of the problem, emulating efficient reasoning becomes more and more likely; this is what humans do when we make a formal probability or utility calculation.

Thus as AI capability increases, goal system content overwhelmingly dominates cognitive design for determining behavior. That said cognitive design is critical in determining exactly what will happen to the goal system in a seed AI undergoing 'takeoff' (unless specifically designed to be stable under reflection) - thus it's a key factor in the eventual stable behavior of transhuman general AIs that are created by careless developers. However that process is so chaotic that you can't really say what the result will be, just based on the general design and initial goals.
AI is just artificial intelligence; it is not The One True Algorithm for solving any problem in the shortest time to produce the most optimal solution.
Well, this is a point of some debate. A large fraction of researchers acknowledge that probability calculus (in 'demonic Bayes' form) and expected utility are the optimal way to do reasoning given indefinite computing power (excluding ultra-brute-force approaches e.g. AIXI). I'm not sure if we're in the majority yet, definitely not if you count all the amateurs, who tend not to bother learning decision theory before wading into coding. A much more contentious question is how closely practical systems ('reasoning under constraints') can and should approach this ideal.
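
As a toy illustration of what 'probability calculus plus expected utility' means in practice - update a belief with Bayes' rule, then pick the action with the highest expected utility (all numbers invented):

Code: Select all

# Toy rational decision: Bayesian update followed by expected utility
# maximisation. All probabilities and utilities are made up.

# Prior belief that it will rain, and the reliability of a forecast
p_rain = 0.3
p_forecast_given_rain = 0.9       # forecast says rain when it does rain
p_forecast_given_dry = 0.2        # false alarm rate

# Bayes' rule: posterior P(rain | forecast says rain)
p_forecast = p_forecast_given_rain * p_rain + p_forecast_given_dry * (1 - p_rain)
p_rain_post = p_forecast_given_rain * p_rain / p_forecast

# Utilities of each (action, outcome) pair
utility = {
    ("take umbrella", "rain"): 5, ("take umbrella", "dry"): -1,
    ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 2,
}

def expected_utility(action, p):
    return p * utility[(action, "rain")] + (1 - p) * utility[(action, "dry")]

best = max(["take umbrella", "no umbrella"],
           key=lambda a: expected_utility(a, p_rain_post))
print(round(p_rain_post, 2), best)   # 0.66 take umbrella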

Back when classic symbolic AI got going, most researchers treated (some flavor of) propositional logic as the ideal. Of course it wasn't, and that effort stalled badly, which left a bad taste. Probability calculus and EU are on a much better formal footing, but allocation of limited computing power and dealing with joint probability in general is still a black art done by mostly ad hoc methods. Only a very small fraction of researchers seem inclined to persist in trying to complete the theory - possibly because it becomes quite messy once you start dealing with real code (trying to evaluate/prove degree of compliance with optimal Bayes for arbitrary chunks of AI code). I'm one of them, and for what it's worth the SIAI insists that this general approach is the only Friendliness-compatible one.

All that is critically relevant for the question of how fast 'takeoff' can occur and exactly which research direction we should be taking, but not so much for the gross behavior of transhuman AGIs. Whatever internal mechanisms they are using at the low and medium levels, they will most likely be using rational decision theory for all the significant, human-noticeable decisions, because at that level decision making performance is overwhelmingly more important than computational cost.

There are also a legion of cranks out there who are proposing 'one true algorithm', which they frequently claim 'is how the brain really works'. This either fits on the back of a napkin, or runs to hundreds of pages of nonsense, sometimes approaching Timecube-level (e.g. Marc Geddes). None of this has a hope of working, it's essentially the same thing as all the crank physics/cosmology out there.
Why can't AI be applied to this problem? Can we expect an AI solution that, like the first high-level compiler, magics away a lot of the grunt work? Imagine a system where, if you define a problem to the computer, it automatically writes the program for you. Is it possible to expect such a thing in the next few years?
That is exactly what my company has been developing for the last four years, but strangely we don't seem to have many competitors. In fact the competitors we do have are all using genetic programming (although some have quite sophisticated, heuristic-assisted variants), rather than a reasoning-based approach to translate specs into code.
If it is, then along with voice and facial recognition it would be one of the greatest mainstream applications of AI in the near term.
Venture capitalists don't seem to buy that line at the moment, though to be fair, investment in the whole IT sector has been patchy at best this decade (and worse, we're in Europe).
User avatar
Nova Andromeda
Jedi Master
Posts: 1404
Joined: 2002-07-03 03:38am
Location: Boston, Ma., U.S.A.

Re: Mini-FAQ on Artificial Intelligence

Post by Nova Andromeda »

Starglider wrote:Apologies again for the slow response. I had a short-notice business trip and then another trip to Germany. Still, got something working;

Code: Select all

# Start_Test : logic/chocolate_teapot
> Would a chocolate teapot work?
# Dictionary file config\dictionary\basic_english.txt imported.
* 3994 lines, 0 errors - > 1204 words, 2849 senses.
# Concept block 'basic_physics' imported.
* [CLAUSE:0:s0:p0:QUERY]  [would:AMBIGIOUS:PRONOUN_INTERROGATIVE:NEUTER]
[would:AMBIGIOUS:VERB_GENERAL:PRESENT:STATIVE]
[a:ARTICLE_INDEFINITE:SINGULAR]  [chocolate:AMBIGIOUS:NOUN_CONCRETE:SINGULAR]
[chocolate:AMBIGIOUS:ADJECTIVE_DIRECT]  [teapot:NOUN_CONCRETE:SINGULAR]
[work:VERB_MAIN:PRESENT:DYNAMIC]
* (query_likelihood (verb_phrase:function:property:typical (noun_phrase:teapot:property:chocolate)))
# PHYSICS_ENGINE>0>INIT_OBJECT:teapot
# PHYSICS_ENGINE>0>INIT_OBJECT:teabag
# PHYSICS_ENGINE>0>INIT_OBJECT:water
# PHYSICS_ENGINE>0>START_SIMULATION
# PHYSICS_ENGINE>0>ASSUMPTION_FAIL:containment(teapot,water)
# PHYSICS_ENGINE>0>END_SIMULATION
* (confirmation:negative) - ((verb_stative:contain:future:negated (noun_phrase:water)) (conjunction:caused_by) (verb_stative:is:future (noun_phrase:teapot) (noun_phrase:liquid))) (verb:create:future:negated (noun_phrase:tea))
# SIMPLE_COGNITION_DONE : 13 milliseconds
< No. Water not contained because teapot will be liquid, tea not created.
:)
-Hopefully, I don't sound too simple, but does the above mean that you have a program that can interpret a sentence and respond appropriately? Just how flexible is this program? Can it parse the previous two sentences and respond appropriately for instance :)?
-I can see how this would be a nice program to have, but I don't quite understand what it has to do with AI. Don't we want the AI to write this program instead? Can I ask how far you guys have gotten in the quest for a general problem solver (that is, a program that can, given a goal and data-gathering capacity, figure out which actions would best achieve its goal)?
-If I could give you a method that would help make AGI friendly, how valuable would that be to you? Can you tell us what methods are currently being investigated?
Nova Andromeda
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Zixinus wrote:Q: In your FAQ you mention several types of AI development: connectionist, simulationist, etc. You advocate the use of a transparent, rational design. Can you give a rough rundown of what types of approaches exist, as well as brief pros and cons for each?
Sure, though of course this won't be anywhere near as thorough as what you get from a proper textbook. Here are some popular AI techniques, roughly in ascending order of capability:

* State machines. A collection of states, usually attached to bits of code that simply implement behaviors (move, chase, etc.). Each state has several possible transitions to other states, triggered by simple pattern recognisers or timers. This is what 90% of game AI and most toy robotics uses. Sometimes used as a small component of general AI. Usually no learning ability.
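
For a concrete picture, here is a minimal toy version of such a state machine in Python (the states, behaviours and trigger events are invented for the example):

Code: Select all

# Toy finite state machine of the kind used for simple game AI.
# States and transition triggers are illustrative only.

def patrol(npc):
    return npc + " is patrolling"

def chase(npc):
    return npc + " is chasing the player"

def attack(npc):
    return npc + " is attacking the player"

STATES = {
    "patrol": (patrol, [("player_visible", "chase")]),
    "chase":  (chase,  [("player_in_range", "attack"),
                        ("player_lost", "patrol")]),
    "attack": (attack, [("player_dead", "patrol"),
                        ("player_fled", "chase")]),
}

def step(state, npc, events):
    """Run the current state's behaviour, then follow the first
    transition whose trigger event has fired."""
    behaviour, transitions = STATES[state]
    print(behaviour(npc))
    for trigger, next_state in transitions:
        if trigger in events:
            return next_state
    return state

state = "patrol"
state = step(state, npc="guard", events={"player_visible"})   # -> chase
state = step(state, npc="guard", events={"player_in_range"})  # -> attack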

* Production systems. These aren't really AI at all; they're just a fiddly parallel programming language. Nevertheless, there are plenty of academic 'model of the human mind' AI projects (usually started by second-rate psychologists) that consist of some hardcoded input mechanisms, some hardcoded output mechanisms, and a 'central cognition' module that consists of a production rule engine. It usually has no functional learning ability, so the researchers cobble together a few simple demos and leave it at that.

* Classic propositional logic engines, which work on statements of the form 'RABBITS ARE MAMMALS' and 'THERE IS AT LEAST ONE BLUE CUBE'. These were the original plan for general AI, in the first wave of hype (in the 60s). You feed in a bunch of axioms, from a handful to a huge knowledge base of millions, and let them do Boolean inference to generate conclusions, and possibly action plans. There are several major problems with this: the search control mechanisms are weak, so it's hard to avoid exponential explosion (and slowdown into uselessness) during inference; the ability to process uncertainty is weak to nonexistent; it's difficult to interface with real sensory input (vision etc.); and learning can only occur within the framework of the human-built concepts (if there is any learning at all). Worst of all, they can't perform 'rich' reasoning about properties of systems that don't fit neatly into compact pseudo-verbal descriptions (thus lots of criticism from philosophers and connectionists about how classic AI is full of 'empty symbols' - largely justified). Largely discredited in the late 80s; a lot of the researchers are still around but now working on 'semantic web' stuff (which doesn't work either) rather than general AI. There are still a few people trying to build general AIs based on this (e.g. Cyc). A transhuman AGI could probably use this approach to make a really good chatbot / natural language interface, e.g. the Virtual Intelligences in Mass Effect.
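
To illustrate the basic mechanism, a toy forward-chaining inference loop over hand-fed facts and rules (the knowledge base here is invented; real systems add search control, which is exactly where they struggle):

Code: Select all

# Naive forward-chaining over hand-authored Horn-clause style rules,
# as in classic symbolic AI. Facts and rules are made up for the example.

facts = {"RABBIT(peter)"}
rules = [
    # (premises, conclusion)
    ({"RABBIT(peter)"}, "MAMMAL(peter)"),
    ({"MAMMAL(peter)"}, "WARM_BLOODED(peter)"),
]

changed = True
while changed:                      # keep applying rules until a fixpoint
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'RABBIT(peter)', 'MAMMAL(peter)', 'WARM_BLOODED(peter)'}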

* Classic artificial neural networks (strictly the second wave of ANNs, but the original perceptrons weren't terribly relevant). These don't actually resemble biological nervous systems much, but hey, it helped get grant funding. Classic ANNs consist of a big 2D grid of 'neurons', usually with a lot more rows ('layers') than columns. Each neuron is either on or off. The first layer of neurons is set to match some input data. There is a connection between every neuron in the first layer and every neuron in the second layer, and each of these connections has a weight. For each neuron in the second layer, the weights of all connections to first layer neurons that are turned on are summed, and compared to a threshold. If the sum exceeds the threshold, the neuron is on, otherwise it is off. The same process occurs for each layer until you get to the output neurons; often there is just one, 'pattern present' vs 'pattern absent'. The NN is trained using input data with known correct output data, and using the error at each neuron to adjust the weights in a process called backpropagation.

That is the simplest possible ANN. In practice people have tried all kinds of weight-adjustment and threshold algorithms; some of the former don't backpropagate (e.g. simulated annealing) and some of the latter use analogue levels of neuron activation rather than digital. The capabilities are similar though; classic ANNs are quite robust and widely applicable for recognising patterns within a specific context, but they're slow, relatively inaccurate and have a strict complexity limit after which they just don't work at all. Still, they were hugely popular in the 80s and early 90s because they could solve some new problems and because they had the black box mystique of 'real intelligence' to a lot of researchers (and journalists). Lots of people were trying to build general AIs out of these in the early 90s (and were very confident of success); a few amateurs still are. Since classic ANNs don't have any internal state, people either introduce loops (making a 'recurrent NN') or add some sort of external memory. Generally, this has been a miserable failure; in particular no one has come up with a training algorithm that can cope with large networks, non-uniform connection schemes and recurrency.
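
For illustration, a minimal two-layer network trained by backpropagation on XOR, using the analogue-activation variant mentioned above (a toy sketch, not any particular project's code):

Code: Select all

# Minimal feedforward network (2 inputs, 4 hidden, 1 output) trained by
# backpropagation on XOR. Toy example with analogue sigmoid activations.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    err_out = (out - y) * out * (1 - out)    # output layer error term
    err_h = (err_out @ W2.T) * h * (1 - h)   # backpropagated hidden error
    W2 -= 0.5 * h.T @ err_out; b2 -= 0.5 * err_out.sum(axis=0)
    W1 -= 0.5 * X.T @ err_h;   b1 -= 0.5 * err_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]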

* Genetic algorithms. Basically you write some general-purpose transform function that takes a vector of control bits as an input along with the input data. You generate a few hundred of these control vectors at random, then for each vector run the function on all your training cases and score them based on how close the output is to the correct answer. Call that a 'generation' of 'individuals'. Pretend that the bit vectors are actually base sequences in biological genes. Create a new set of vectors by combining the highest scoring examples from the first generation; either pick bits at random from the two parents (it's always two for some reason, even though there's no such software limit) or use a crude simulation of crossover (usually single point). Flip a few bits at random to simulate mutation. Re-run the test set, rinse, repeat until the aggregate performance plateaus. If you want you can do this to state machines driving robots, or artificial neural networks (as well as or instead of backpropagation learning).

Genetic algorithms are extremely compute intensive. They work fairly well for tweaking a few parameters in functions designed by human experts, though they suffer from local optima. They're about equivalent to backprop for training NNs, but a lot slower. For general pattern recognition (without careful framing of the problem) they're pretty sucky, well behind NNs. They're very good at two things: making cute little 'artificial life' demos to show at the department open day, and justifying big grant requests for a prestigious compute cluster (or back in the day, a Connection Machine) to run them on. GAs got a lot of hype in the mid to late 90s as the average researcher desktop got powerful enough to run them; they were obviously the route to general AI, since they mirror the process that produced humans, etc. etc. Like classic neural nets, they're actually horribly limited and a very crude reflection of the biological process they're named after. GAs are also the most unstable, unpredictable and generally finicky technique - they suck even as components of general AIs. Of course dyed-in-the-wool emergence fans love them.
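
A toy genetic algorithm, for illustration, evolving bit vectors towards an all-ones target with single-point crossover and random mutation (population size, mutation rate and so on are arbitrary):

Code: Select all

# Toy genetic algorithm on the 'OneMax' problem: evolve bit vectors
# towards all ones. Parameters are arbitrary.
import random

random.seed(0)
BITS, POP, GENERATIONS = 32, 100, 60

def fitness(v):                       # score = number of 1 bits
    return sum(v)

def crossover(a, b):                  # single-point crossover, two parents
    point = random.randrange(1, BITS)
    return a[:point] + b[point:]

def mutate(v, rate=0.01):             # flip a few bits at random
    return [bit ^ 1 if random.random() < rate else bit for bit in v]

population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 5]   # keep the top 20% as parents
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(fitness(best), "/", BITS)       # typically at or near 32/32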

* Support vector machines, and more generally, statistical space partitioning techniques. A big grab bag of fairly simple algorithms that classify input into categories. Used by search engines, data mining, lots of simple machine learning applications. They usually outperform neural networks in both learning rate (for a given number of training examples) and computational cost, but are somewhat narrower in the range of problems they work on. When SVMs became popular in the mid 90s it was really funny to watch the ANN-boosters have their hype deflated. SVMs don't do any sort of inference; they are for fuzzy pattern recognition only, particularly in very unstructured data (although you can combine them with preprocessors to work better on images etc). Rational AGI designs will probably custom-design an algorithm like this for each individual low-level pattern processing task; certainly that's what we're aiming to do, along with a few projects trying to use GP to make them (see below).
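
As a quick illustration of the category, fitting an off-the-shelf support vector machine to a synthetic two-class problem (this uses scikit-learn, purely as an example of the technique):

Code: Select all

# Fitting a support vector machine to a toy two-class problem.
# Uses scikit-learn; the dataset is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# two Gaussian blobs as the two classes
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf")    # non-linear decision boundary via RBF kernel
clf.fit(X, y)
print(clf.predict([[0.5, 0.5], [3.2, 2.8]]))   # -> [0 1]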

There's another whole raft of approaches used for signal processing; for example many AI vision systems use very similar software technology to video codecs. These are the bits of AI closest to conventional software engineering; they're challenging, but essentially linear and predictable. Good for certain narrow problems, but not generalisable.

* Heuristic and hybrid logic systems. Symbolic AI has been around a long time, and unsurprisingly a lot of people have tried to improve it. A common technique is assigning weights to statements rather than just true/false - in the vast majority of cases this is not formal probability or utility. The last wave of commercially successful 'expert systems' at the end of the 80s tended to use these; heuristic systems are a bit less brittle than pure propositional ones, and having some 'meta-heuristics' to direct inference significantly improved reasoning speed. The immediate successor to that was 'case based reasoning', a buzzword which covers a number of approaches that were basically crude simulations of human analogy making (statistical and logical/structural). Finally there were people who saw symbolic AI doing some useful things, neural networks doing some other useful things, and tried to duct tape them together. The results were marginally better than useless. The whole thing was a bit of a last gasp of the AI push right before the 'AI winter', when a lot of companies failed and funding for academic projects was cut right back. Most of this is way out of fashion now, but you see some of the same approaches cropping up in modern 'hybrid' general AI designs.

* Semantic networks; essentially an attempt to directly combine the mechanics of a neural network with the semantics of symbolic logic. In practice these are actually quite similar to heuristic logic systems, but more data-driven rather than code-based, and with connectionist-style high interconnectivity and weight adjustment mechanisms. There are lots of nodes, which are supposed to stand for concepts, properties, actions or objects. Each node has an activation level; a few elaborate designs have multiple kinds of activation and/or co-activation mechanisms (that provide context and temporary structures). Activation is injected into the system by sensory input and/or active goals, and it propagates along links, moderated by weights. Unlike neural networks, which almost always have a simple mathematical description, semantic network systems can have quite complex and varied activation spreading mechanisms. Eventually the activation propagates to nodes attached to output mechanisms, which makes something happen.

Psychologists, philosophers and people who like naive models of the mind in general love these. The approach has an intuitive appeal for how humans seem to reason. These kinds of systems tend to produce really impressive and deep-sounding books, while being spectacularly useless in practice (spewing random words is a favourite outcome). When the more capable researchers have forced their semantic network systems to do something useful, it's generally by carefully designing the network and treating the system as a horribly obfuscated programming language, or by abandoning the claimed semantics and letting it act as a classic NN or statistical system (on some small pattern recognition problem).
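
For illustration, a toy spreading-activation pass over a tiny hand-built semantic network (the nodes and weights are invented; real designs have far more elaborate activation mechanics):

Code: Select all

# Toy spreading-activation over a hand-built semantic network.
# Node names and link weights are invented; no decay or normalisation.
network = {
    "teapot":       [("container", 0.8), ("kitchen", 0.5)],
    "container":    [("holds_liquid", 0.9)],
    "kitchen":      [("tea", 0.6)],
    "holds_liquid": [("tea", 0.4)],
    "tea":          [],
}

activation = {node: 0.0 for node in network}
activation["teapot"] = 1.0            # inject activation at the input concept

for _ in range(3):                    # propagate for a few cycles
    new = dict(activation)
    for node, links in network.items():
        for target, weight in links:
            new[target] += activation[node] * weight
    activation = new

# concepts most strongly activated by 'teapot' come out on top
print(sorted(activation.items(), key=lambda kv: -kv[1]))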

* Agent systems. This is another very general category, covering any system where you have lots of semi-independent chunks of code doing local processing and exchanging information. The original versions were chunked at quite a coarse level; these are called 'blackboard architectures' (with one or more shared data areas, called 'blackboards' after the analogy of researchers collaborating on a blackboard). I suppose neural net / symbolic hybrid systems are a degenerate case with only two modules. 'Classifier networks' and 'classifier committees' are along the same lines; usually this means 'we have lots of sucky pattern recognition algorithms, maybe if we run them all at once and average the results, perhaps with some rough heuristics to weight them based on the situation, that'll be better than running any one of them'. This does actually work, due to the mitigation of uncorrelated errors - the best Netflix prize entrant was a huge committee of assorted narrow AI algorithms. However it expends a large amount of computing power for a marginal performance improvement and virtually no improvement in generality.

At the other end of the scale there are very fine-grained agent systems, such as the 'codelet architectures' Hofstadter's team used, and the more connectionist/pattern-based designs that Minsky advocated. The latter blurs into semantic networks; in fact all of this is frequently combined into a 'bubbling emergent stew' (yes, people really say that, and proudly). The chunky bits are the heavyweight agents, doing things like vision recognition with large blocks of signal processing code.
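
To illustrate the 'classifier committee' effect mentioned above, a toy simulation in which averaging several weak classifiers with uncorrelated errors beats any single one (the classifiers here are just simulated error rates):

Code: Select all

# Toy 'classifier committee': average the votes of several weak classifiers
# so their uncorrelated errors partly cancel out.
import random
random.seed(0)

def make_weak_classifier(error_rate):
    # returns a classifier that is right (1 - error_rate) of the time
    def classify(true_label):
        return true_label if random.random() > error_rate else 1 - true_label
    return classify

committee = [make_weak_classifier(0.3) for _ in range(15)]

def committee_vote(true_label):
    votes = [c(true_label) for c in committee]
    return round(sum(votes) / len(votes))   # simple unweighted average

trials = 10000
solo_correct = sum(committee[0](1) == 1 for _ in range(trials))
team_correct = sum(committee_vote(1) == 1 for _ in range(trials))
print(solo_correct / trials, team_correct / trials)  # committee scores higher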

* Bayesian networks. These attempt to model the causal structure of reality. They're a network of nodes and links kind of like neural networks, but instead of arbitrary 'activation' flowing through it you have real event probabilities. The topology can be a 'brute force' NN-like structure that connects everything to everything else, or it can be a sparse, carefully constructed one (either by hand or various learning algorithms). The 'weights' are conditional probabilities that are updated by Bayes' rule. There are some equivalent Bayesian techniques for bulk data analysis that use the same theory but on bulk matrix processing rather than compact networks.

Bayesian networks are extremely effective at classification tasks, if the network structure matches the target domain. However the vast majority of networks use 'naive Bayes', where the conditional probabilities (of event A leading to event B, vs event C leading to event D) are considered independently, and Boolean event occurred/event didn't occur distinctions. This is fine for things like spam filters, where you can just have an independent conditional probability from every word occurrence to the email being spam and get ok performance. It doesn't work at all in lots of other domains - there are various approaches to the 'hidden node problem', getting the network to auto-insert nodes to represent hidden parts of the event causality structure, but frankly none of the published ones work all that well. For example there's a lot of use of Bayesian nets in automated trading software, but it almost all has hand-optimised structure. The network structure problem gets even harder once you start trying to process complex probability distributions over variables (or spaces) rather than just single event probabilities. Some people have tried to use genetic algorithms on the Bayes net structure, but that tends to cause destructive interference that just breaks everything. On top of all that, there's the fact that simple Bayes nets combine some of the limitations of symbolic and connectionist processing; they work on arbitrary human-defined symbols the way symbolic logic does, yet are also restricted to fixed classification functions like NNs.
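
For illustration, a minimal naive-Bayes spam filter of the kind described above: independent per-word conditional probabilities combined via Bayes' rule (the training messages are made up):

Code: Select all

# Minimal naive-Bayes spam filter: independent per-word conditional
# probabilities combined with Bayes' rule. Training data is invented.
from collections import Counter
import math

spam = ["buy cheap pills now", "cheap pills cheap offer"]
ham = ["meeting agenda for monday", "lunch offer for monday"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_prob(word, counts, total):
    # Laplace smoothing so unseen words don't zero out the product
    return math.log((counts[word] + 1) / (total + len(vocab)))

def classify(message):
    log_spam = math.log(0.5)   # equal priors P(spam) = P(ham) = 0.5
    log_ham = math.log(0.5)
    for word in message.split():
        log_spam += log_prob(word, spam_counts, spam_total)
        log_ham += log_prob(word, ham_counts, ham_total)
    return "spam" if log_spam > log_ham else "ham"

print(classify("cheap pills offer"))        # spam
print(classify("agenda for monday lunch"))  # ham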

Part 1 of a 2 part post, will continue when I have time.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Nova Andromeda wrote:
Starglider wrote:# Start_Test : logic/chocolate_teapot
-Hopefully, I don't sound too simple, but does the above mean that you have a program that can interpret a sentence and respond appropriately?
Yes. This technology has been around since about 1970.
Just how flexible is this program?
It isn't a stand-alone program, it's a natural language processing module that I am adding to our multi-purpose AI system. The chocolate teapot test used that and the naive physics module. I've only been working on it for a couple of months so it's at an early stage, but the design is sufficiently flexible to support general language comprehension.
Can it parse the previous two sentences and respond appropriately for instance :)?
I haven't defined the necessary structures in the knowledge base yet. Your first sentence has quite complex structure.
I can see how this would be a nice program to have, but I don't quite understand what it has to do with AI. Don't we want the AI to write this program instead?
It would be nice to have the AI autogenerate a whole natural language module, but the code generation isn't that good yet. Besides, it's supposed to be able to understand code as well as generate it. With that capability just writing a block of code that does something (with appropriate annotations) can be as, if not more, effective than putting a very abstract description of the task into the KB.

It is possible to make APEX (that's our AI system) spit out a (much smaller) program that will only pass the 'material / container / utility' test question. Although this is a breakthrough capability (I have not seen any other general reasoning system project that even attempted to do this particular kind of code generation), that particular application of it obviously isn't terribly useful.
Can I ask how far you guys have gotten in the quest for a general problem solver (that is, a program that can, given a goal and data-gathering capacity, figure out which actions would best achieve its goal)?
Well, I have a list of things that I think should be in a seed AI, and sadly we've only completed ~9% of them so far. That's mostly because I'm a perfectionist about allowing stuff to be committed to the 'gold' codebase though, we've prototyped a lot more functionality (and in some cases, embedded it into commercial systems), and of course there are lots of modules in a semi-complete state (like the natural language parsing).

As I mentioned, I'm hoping to do a series of cute web demos over the next year or two, which will give you a chance to evaluate our progress directly.
If I could give you a method that would help make AGI friendly, how valuable would that be to you?
A complete formal solution to the Friendliness problem would necessarily incorporate a solution to about a third of the general Friendly AI design problem (Yudkowsky would say 90% of it, but what can I say, that's theoreticians for you). So very valuable, to everyone responsible trying to make a general AI (alas, this is a minority of the total set of people trying to make general AIs). However 'help make an AGI friendly' is much more vague. Sad to say, such notions are more often than not worse than useless.
Can you tell us what methods are currently being investigated?
I could but my description would suck compared to one written by someone actively working on Friendliness theory. Tell you what, I'll pass your question on and see if I can get someone appropriate to comment here.
User avatar
Nova Andromeda
Jedi Master
Posts: 1404
Joined: 2002-07-03 03:38am
Location: Boston, Ma., U.S.A.

Re: Mini-FAQ on Artificial Intelligence

Post by Nova Andromeda »

Starglider wrote:
Nova Andromeda wrote:
Starglider wrote:# Start_Test : logic/chocolate_teapot
-Hopefully, I don't sound too simple, but does the above mean that you have a program that can interpret a sentence and respond appropriately?
Yes. This technology has been around since about 1970.
-Huh, I had no idea.
Starglider wrote:
Nova Andromeda wrote:I can see how this would be a nice program to have, but I don't quite understand what it has to do with AI. Don't we want the AI to write this program instead?
It would be nice to have the AI autogenerate a whole natural language module, but the code generation isn't that good yet. Besides, it's supposed to be able to understand code as well as generate it. With that capability just writing a block of code that does something (with appropriate annotations) can be as, if not more, effective than putting a very abstract description of the task into the KB.
-So the idea would be to start the AGI off with some decent modules, such as a robust natural language module?
Starglider wrote:It is possible to make APEX (that's our AI system) spit out a (much smaller) program that will only pass the 'material / container / utility' test question. Although this is a breakthrough capability (I have not seen any other general reasoning system project that even attempted to do this particular kind of code generation), that particular application of it obviously isn't terribly useful.
-Why would APEX generate such a program? That is, have you linked this ability to a general goal system yet? For example, if I define 'maximize the number of 1"^3 boxes' (or whatever) as the goal and let APEX loose in a simple world, would it write code to solve this problem?
Starglider wrote:
Nova Andromeda wrote:
Starglider wrote:If I could give you a method that would help make AGI friendly, how valuable would that be to you?
A complete formal solution to the Friendliness problem would necessarily incorporate a solution to about a third of the general Friendly AI design problem (Yudkowsky would say 90% of it, but what can I say, that's theoreticians for you). So very valuable, to everyone responsible trying to make a general AI (alas, this is a minority of the total set of people trying to make general AIs). However 'help make an AGI friendly' is much more vague. Sad to say, such notions are more often than not worse than useless.
Can you tell us what methods are currently being investigated?
I could but my description would suck compared to one written by someone actively working on Friendliness theory. Tell you what, I'll pass your question on and see if I can get someone appropriate to comment here.
-I may have some ideas that would be of use, but I have no idea what people have tried. I would also need an idea of what you consider 'friendly'. Does friendly mean do whatever humans want? Does it mean maximize human pleasure? Does it mean do humans no 'harm'? Does it mean maximize human species survival? I would argue that all of the former by themselves are terrible ideas. A 'good' definition of 'friendly' is a large part of the problem. Once 'friendly' (i.e., the most important part of the AGI's goal packet) is defined, it can be attached to the 'general reasoning' functions. Both of those functions would need 'formal solutions' of course, but then the question is how does one 'formally solve' a definition (I actually have an answer for this case).
-You can also let me know via PM or I can give you my e-mail.
Nova Andromeda
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Xuenay »

Nova Andromeda wrote:I would also need an idea of what you consider 'friendly'. Does friendly mean do whatever humans want? Does it mean maximize human pleasure? Does it mean do humans no 'harm'? Does it mean maximize human species survival? I would argue that all of the former by themselves are terrible ideas. A 'good' definition of 'friendly' is a large part of the problem.
All of the former would, indeed, be terrible ideas by themselves. The intuitive definition of Friendliness is something along the lines of "the AI acts in such a way that we wouldn't consider building it a Bad Idea if we knew what it actually ended up doing". As you yourself note, defining this formally is a serious problem. AFAIK, Coherent Extrapolated Volition is the best succinct definition of Friendliness that we have so far. One might consider Yudkowsky's fun theory sequence as a non-succinct version, or at least a list of some of the things we'd like to see preserved.

If anybody is wondering why the list of goals provided by Nova Andromeda would be a bad idea by itself, or why this question is so hard, see Failed Utopia #4-2 for a particularly poignant illustration.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Nova Andromeda wrote:
Starglider wrote:It would be nice to have the AI autogenerate a whole natural language module, but the code generation isn't that good yet.
-So the idea would be to start the AGI off with some decent modules, such as a robust natural language module?
The plan with APEX is to make a general reasoning system capable of generating (and improving on) its own design and code from first principles. I've set various challenges for the system to solve, with the goal of ensuring that the 'general reasoning system' being 'meta-quined' is not too simple to serve as an FAI seed. Frankly, natural language parsing wasn't high on my list; we're mostly doing it for the PR value.
Starglider wrote:It is possible to make APEX (that's our AI system) spit out a (much smaller) program that will only pass the 'material / container / utility' test question.
-Why would APEX generate such a program?
The short-term reason is that this is the main commercial capability of the system; making business applications from an abstract spec. We've used this to complete several contracts.

The long-term reason is for general optimisation. Roughly speaking, the core part of the AI uses a highly general probabilistic reasoning and modelling system that isn't very efficient at modelling any specific thing. The idea is that this system will come up with a good working hypothesis, then the code generator will translate that into an opaque module that can do it efficiently. For example, one of the tests is a voxel model of gas and fluid flow (plus some simple fake chemistry). What the AI should do is observe that enough to guess the 'physical laws', then translate that into a fast model that can run on the GPU. Once done the AI system could pose hypotheticals (e.g. 'would a fractional distillation column work') and get decent predictions quickly from the newly generated and linked GP-GPU fluid modelling module.

This isn't working yet, but we're slowly getting closer.
That is, have you linked this ability to a general goal system yet? For example, if I define 'maximize the number of 1"^3 boxes' (or whatever) as the goal and let APEX loose in a simple world, would it write code to solve this problem?
The system has a (simple) model of compute resources. It would try to guess if writing, compiling and linking custom code is worth the effort, compared to just solving the problem with the generic reasoning system. In practice if you ask it something in interactive mode, it tends to answer the question using general reasoning, then use the idle time before you ask another question to generate custom code for (cases similar to) your previous question, in case it comes up again.
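
The actual logic is obviously more involved, but the general shape of that cost/benefit check can be sketched like this (the function and numbers are purely illustrative, not APEX code):

Code: Select all

# Illustrative only: the kind of cost/benefit estimate a system might use
# to decide whether specialising code for a class of queries is worth it.
def worth_specialising(compile_cost_s, generic_cost_s, specialised_cost_s,
                       expected_repeats):
    """Compile custom code if the expected time saved over future repeats
    of similar queries exceeds the one-off generation/compilation cost."""
    saving_per_use = generic_cost_s - specialised_cost_s
    return expected_repeats * saving_per_use > compile_cost_s

# e.g. 20 s to generate/compile, 2 s per generic answer, 0.05 s specialised
print(worth_specialising(20.0, 2.0, 0.05, expected_repeats=5))    # False
print(worth_specialising(20.0, 2.0, 0.05, expected_repeats=100))  # True
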
Nova Andromeda wrote:I may have some ideas that would be of use, but I have no idea what people have tried.
Suffice to say, everything you could reasonably think up in an afternoon.
I would also need an idea of what you consider 'friendly'.
Lack of consensus on that point is a serious issue. Some extremists value only 'pleasure' (e.g. a transhumanist emailed me bemoaning all the objections to 'cover earth in wireheaded human brains in tanks'), others value only 'volition' (e.g. hardcore Libertarians with a transhuman AGI system replacing the government in the role of enforcing property rights and preventing nonconsensual violence... and no more). Ultimately, it is subjective. Coherent Extrapolated Volition is an attempt to dodge the subjectivity problem by having a souped-up version of 'let everyone vote on it' (technically it's 'if we were all superintelligent we'd agree on most things' but I don't buy that - goals are arbitrary, including metagoals, more intelligence only helps with disagreements about how to achieve specific goals).

However most of that is only relevant to the question of what's the optimum. Any superintelligence transition in which most people stay alive and maintain reasonable opportunities to grow and prosper counts as Friendly, compared to all the scenarios where humanity gets wiped out or tortured indefinitely.
Once 'friendly' (i.e., the most important part of the AGI's goal packet) is defined it can be attached to the 'general reasoning' functions.
The problems are not completely separable (Yudkowsky would say that they are not separable at all, but yeah, theoreticians), because the composition of the general reasoning system determines what tools you are using to write the goal system definition. However you can reasonably define what you want in any unambiguous formalism, and then translate it into something AGI-usable later.
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: Mini-FAQ on Artificial Intelligence

Post by Junghalli »

Xuenay wrote:If anybody is wondering why the list of goals provided by Nova Andromeda would be a bad idea by itself, or why this question is so hard, see Failed Utopia #4-2 for a particularly poignant illustration.
I have to say the way the AI acted in that story raised some huge fridge logic issues for me. If it believes it's doing what's best for humanity, why the hell would it consider itself evil? It believes its victims will be better off thanks to its actions, even if they don't appreciate what it's done for them right now. From its perspective its actions are analogous to a human parent restricting a child's intake of fatty and sugary foods: the child might resent it now, but you're only doing what's in their own best interests, and they only resent it because they lack the perspective to understand that. What human parent is going to consider themselves evil for doing that? Maybe I'm anthropomorphising but the way the AI acknowledges itself as evil and the whole way it acts in that scene makes absolutely zero sense to me and just seems like sloppy writing or thinking and a total wallbanger. Unless the whole thing was a mindfuck in service of some larger plan, but that doesn't really seem to me to fit the parable nature of the story.

Sorry for the hijack but I didn't want to make a whole new thread just to say that.
User avatar
Xuenay
Youngling
Posts: 89
Joined: 2002-07-07 01:08pm
Location: Helsinki, Finland
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Xuenay »

(So as to not hijack the thread, I'll only post this one reply on this topic.)
Junghalli wrote:
Xuenay wrote:If anybody is wondering why the list of goals provided by Nova Andromeda would be a bad idea by itself, or why this question is so hard, see Failed Utopia #4-2 for a particularly poignant illustration.
I have to say the way the AI acted in that story raised some huge fridge logic issues for me. If it believes it's doing what's best for humanity why the hell would it consider itself evil?
I didn't really read that as the AI considering itself evil - it was only acknowledging that at that moment, the humans would consider it evil. "Blame me, then, if it will make you feel better. I am evil." It was giving the main character some tangible target of hate, so that he'd blame the AI for everything and not take it out on his newly-created mate. Kind of like when you apologize for something that isn't really your fault - you're doing it because you know proclaiming innocence would just lead to a prolonged debate and make matters even worse, not because you think you really should apologize.
"You have zero privacy anyway. Get over it." -- Scott McNealy, CEO Sun Microsystems

"Did you know that ninety-nine per cent of the people who contract cancer wear shoes?" -- Al Bester in J. Gregory Keyes' book Final Reckoning
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: Mini-FAQ on Artificial Intelligence

Post by Junghalli »

Xuenay wrote:I didn't really read that as the AI considering itself evil - it was only acknowledging that at that moment, the humans would consider it evil. "Blame me, then, if it will make you feel better. I am evil." It was giving the main character some tangible target of hate, so that he'd blame the AI for everything and not take it out on his newly-created mate. Kind of like when you apologize for something that isn't really your fault - you're doing it because you know proclaiming innocence would just lead to a prolonged debate and make matters even worse, not because you think you really should apologize.
Yeah, that makes sense.
User avatar
Davey
Padawan Learner
Posts: 368
Joined: 2007-11-25 04:17pm
Location: WTF? Check the directory!

Re: Mini-FAQ on Artificial Intelligence

Post by Davey »

A very nice read, Starglider! I'll be sure to keep an eye on this thread for future reference.
"Oh SHIT!" generally means I fucked up.
User avatar
SWPIGWANG
Jedi Council Member
Posts: 1693
Joined: 2002-09-24 05:00pm
Location: Commence Primary Ignorance

Re: Mini-FAQ on Artificial Intelligence

Post by SWPIGWANG »

*waits for part 2 of ai techniques*

I wonder what the proper social response would be if an AGI algorithm is found (not implemented) while the FAI problem remains unsolved (or is proven intractable for said AGI algorithm, or whatnot). Would it be a good time to pull a "Dune" and avoid all technologies where this could be implemented?

Also, I kind of wonder if a new model of computation can be made that encompasses a Turing machine's computing power with the exception of things that run into halting problems and other things that can't be proven (yet). I wonder what it'd look like...
User avatar
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Zixinus »

Q: Can you create a partial AGI? An AI that can do quite a large number of tasks, can adapt and improvise, and create new programming, but is only able to partially modify itself and is essentially at around or even beyond human intelligence?
For example, an AI that can expand its mental toolset and its sub-agent visual-processing AI, but is unable to optimize its own intelligence (though of course it is still able to back up its memories and could learn how to build other AIs, even AGIs)?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
User avatar
cosmicalstorm
Jedi Council Member
Posts: 1642
Joined: 2008-02-14 09:35am

Re: Mini-FAQ on Artificial Intelligence

Post by cosmicalstorm »

Hey Starglider, I've really enjoyed this thread so far. What do you think about the AI story in this thread?

The cat is out of the bag.
http://bbs.stardestroyer.net/viewtopic.php?f=5&t=138827