Mini-FAQ on Artificial Intelligence

Post by Starglider »

This is genuinely a frequently asked questions post, not an attempt at any sort of balanced overview or editorial. I've gotten a fair number of PMs asking similar questions about AI over my time here - I received three just from that 'Robots Learn to Lie' thread. For the benefit of anyone else who's curious, here's a selection of the less obscure ones. Names removed to protect the innocent, paraphrased in places for brevity. Most of this is just personal opinion of course. Feel free to ask me anything else about the field in this thread.

1. So what do you do?
2. Good books for an overview of AI?
3. Good online resources?
4. What should I study in high-school, to go into AI later?
5. I have a compsci/softeng degree, how do I go into AI?
6. Is it worth getting a masters degree?
7. How much education do I need to push the envelope?
8. Is it a big commitment? Is it worth it?
9. So what's it like working in the field?
10. Sounds fun!
11. Do you need a particular mentality?
12. Where can I network with AI people?
13. How long does it take to get into AGI research?
14. Will I have to work on normal software first?
15. Should I join a team or try to work on my own?
16. Do I have much chance of doing anything useful in academia?
17. So could I join a commercial AGI start-up?
18. Programming is hard, AI must be a bitch!
19. How does AGI interact with nanotech?
20. Are we going to have AGI before nanotech?
21. How can I actually see this cool tech?
22. Is government regulation possible or useful?
23. When reading AI stuff, how do I filter the wheat from the chaff?
24. Personal tips on that?
25. You say egotism is a problem? :)
26. Why do you all disagree, even within a subfield?
27. How many people is it going to take to make an AGI?
28. Why do you recommend material you disagree with?
29. What are your thoughts on the Fermi paradox?
30. Any other cogsci book recommendations?
31. So how soon are we going to make an AGI?
32. I'm not a programming genius. Can I still work on AGI?
33. How badly does the AI field need investment?
34. Should I donate money?
35. No really?
36. Should I get more programmers interested?
37. You sound a lot more down to earth, and pessimistic, than most transhumanists.
38. Does 'Friendly' general AI even have a chance?
39. It seems really important, but I used to think energy was really important.
40. How would a huge economic crash affect AGI research?
41. Seeing nanotech advocates ignore peak oil was depressing - are AI people like that?
42. So developing general AI makes everything else small beans?
43. Do we need quantum computing to make humanlike AI?
1. You are a full-time AI researcher of some sort?
I am the technical director of a small start-up which has been developing a fairly revolutionary automated software engineering system. Six years of R&D and we're getting close to market, but still not there yet. Fortunately we got a research grant for the first two years, so I was able to focus on fairly open-ended general AI research to start with, building on earlier work I'd done. Later it progressively focused down on something that could be a good first product; we've done a wide range of software consulting to fund development of that, about half of it AI-focused. I have a plan to shoot for general AI later, but we'll need more funding and quality staff to have any realistic chance of pulling it off.

Further back, I was a research associate at the Singularity Institute for AI for a while, late 2004 to late 2005ish; I'm not involved with them at present, but I wish them well. I got started in AI doing game AI and making lots of simple prototypes (of narrow AI and very naive general AI concepts) as a teenager, and I took all the AI and psychology modules I could during my CompSci degree.
2. What books should I read to learn about AI?
'Artificial Intelligence : A New Synthesis' by Nils Nilsson for a technical primer - which assumes undergrad level compsci/maths knowledge. I recommend 'Artificial Minds' by Stan Franklin for a more descriptive, historical and layman-accessible account of the field, to see what you're letting yourself in for. They're both a few years old now but cover all the essentials.
3. Any good online resources?
Strangely, no. I know of lots of attempts to make Wikis focused on AI, but they're all pretty threadbare and/or horribly amateurish, which is strange when there are many excellent equivalents for general compsci and softeng (e.g. the legendary Portland Pattern Repository). That said Wikipedia's AI coverage seems fairly good, to get a general idea of what a particular narrow AI subfield, algorithm or term is all about (no good for general AI though).
4. Could you offer a future university student some advice as to what study course would be most valuable to someone wishing to pursue AI research?
At secondary school/high school level, the main thing you need is maths (the fundamentals of discrete maths, e.g. set theory, are particularly essential; probability is vital for fuzzy reasoning; stats is good for a lot of narrow AI), followed by programming (mainly because if you learn how to program now, it'll free up more time to study more advanced topics later). If you have the chance to do a psychology course (e.g. a psychology A-Level in the UK), that's somewhat useful for general AI too.

Finally, it's a good idea to grab a good undergrad-level AI textbook (see above). If you're planning to go for general AI eventually, I'd recommend taking a look at 'Gödel, Escher, Bach' (Douglas Hofstadter - if you like that, follow it up with 'Fluid Concepts and Creative Analogies', which is more specifically about AI) and to a lesser extent 'The Society of Mind' (Marvin Minsky). They're both accessible at the pop-sci level and fairly mind expanding.

If you're already a confident programmer (and if you want to be a really good computer scientist, you should be), why not try out a few simple AI algorithms yourself, in toy prototypes?
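To give a concrete (and entirely illustrative - none of this is from my own work) example of the kind of toy prototype I mean, here is a minimal perceptron in Python that learns the AND function. It's about as simple as a learning algorithm gets, but it's enough to get a feel for training loops and parameter updates:

```python
# Minimal perceptron learning the AND function - a toy prototype sketch.
import random

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - out
            # Classic perceptron update rule: nudge weights towards the target.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

if __name__ == "__main__":
    and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(and_data)
    for (x1, x2), target in and_data:
        pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        print(f"AND({x1},{x2}) -> {pred} (expected {target})")
```

Once something like that makes sense, move on to search (minimax, A*) and simple probabilistic inference; the point is to build intuition, not to produce anything useful.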

Incidentally if you're hoping to eventually work on general AI, and you have any inkling of just how important and dangerous that is, Yudkowsky's So You Want To Be A Seed AI Programmer still applies.
5. I am a recent graduate with a computer engineering degree. I am interested in AI but am not sure what the field is like.
Jobs specifically involving even narrow AI are very rare (a fraction of a percent of programming jobs at best). Commercially, the job description is usually 'software engineer with n years of experience in our specific field, knowing technologies y and z, and oh, having some working knowledge of machine learning'. There are assorted academic projects, which are more focused on pushing the boundaries, but as a rule the pay is awful. There are a very, very few startups specifically trying to develop general AI. Pay is variable but they don't tend to last very long (then again, that goes for all IT startups).
6. Would you recommend that I seek an MS degree, or no? Considering the urgency of doing AGI research and the fact that postgraduate education can easily eat up a couple of additional years, I (at least currently) don't see it as being a worthwhile pursuit.
I'm not sure how much difference it would make to your career prospects in the US. In the UK, it helps a little, but probably not enough to be worth the extra year. Then again, it also depends on what you would cover in that extra year, and whether it actually takes you two years due to being part time. For AGI purposes, you probably won't learn that much useful stuff, unless you're specifically doing a Masters in AI (and I can't think of a UK university that lets you do that - we do them in games programming, but not AI). On the other hand it's another year where demands on your time are relatively low, you have access to a university library and you're well placed to do private self-directed learning/research. That argument is probably less compelling in the US, and also if you're paying all your degree costs personally.
7. What kind of education (which schools, what kind of research published) does it take to get into an envelope-pushing project?
Competition for slots on the really interesting projects is pretty tight, as with all research, but you don't need amazing qualifications to be a grad student or junior engineer on minimal salary implementing someone else's ideas. Commercial AI research is actually one of the few areas in computing where a PhD makes a big difference; it probably still doesn't beat 'Masters plus 3-4 years applied commercial AI experience' on the CV, but getting into a PhD program is relatively straightforward, whereas going straight into relevant commercial AI work as a graduate is very hard (particularly in the current job market).
8. Is it worth investing my life in that direction while I plan my future? I don't know whether to keep it as a hobby or pursue it as a career.
Unless you already have a revolutionary idea and are highly confident about your ability to start and run a company based on it (actually not a good indicator; false optimism and self-delusion are overwhelmingly common in startups), going into AI makes no sense from a financial/career perspective. The payback in terms of jobs you can apply for is low compared to lots of less-fuzzy and easier-to-learn technologies, and it gets worse the more you focus on general (as opposed to narrow) AI. Yes, you may get lucky and get a very nicely paid job developing military robots for DARPA or semantic web technology for Google. The chances still aren't good compared to more marketable skills. If you really want to do general AI, be aware that you have basically zero chance of doing something useful putting in a few hobby hours a week, and well-paying general AI research jobs are as rare as hen's teeth.

I got into AI because I recognised that the development of machine intelligences with human capabilities and beyond was probably the single most important event in the history of planet earth. It is literally life and death for humanity as a whole (you can reasonably debate timescales but not the eventual outcome), and there is a very good chance that the key breakthrough will be made in my (and your) lifetime. While the chances of me personally making a critical contribution were very low, I was still one of the very few people with any chance of directly influencing this event, and I felt that it was my overriding duty to do whatever I could to help. The fact that it would probably destroy my financial prospects, and cause ongoing stress and depression for me and my entire family, was just not as important.

As it turned out there is now a reasonable chance of me getting rich out of AI, but I couldn't count on that when I made the decision six years ago, and you shouldn't either.
9. Can you tell me what it is like in the field? Having spent most of my time doing school work, I'm not sure about current developments, what working in the field involves, or how I could fit in.
If you mean the history, culture, personalities etc of the field, numerous books have been written on the subject* and they are still restricted to a brief overview of each subfield. As a graduate, your choice is between staying in academia, commercial narrow AI work (the biggest areas are robotics, games and search/data mining - though note that even in games very few people do purely AI), or joining a wildly ambitious general AI start-up (e.g. Adaptive AI Inc).

* 'Mind Design II', compiled by John Haugeland, is a great example, because it's basically a big collection of papers from many different subfields where researchers trash rival approaches and claim only their own can work, as politely as possible. Probably inaccessible to laypeople, but it's really funny if you're in the field.

Unsurprisingly most commercial work is kind of dull - you normally pick the off-the-shelf algorithm that has the lowest technical risk and development time, slot it in, and spend most of your time doing requirements capture, functional testing, interfaces and other non-AI stuff anyway. Finance has some interesting decision support problems, and in the US particularly there have always been a fair number of military and intelligence projects trying to push the narrow AI envelope (you'll need a security clearance for that).

Academia usually means slaving away for low pay implementing the research director's ideas, when you're not grading essays or drafting papers (for your superiors to stamp their names on and take credit). Eventually you'll get tenure (if you're lucky) and be able to do pretty much what you like, as long as it results in lots of papers published and looks good at university open days. Startups focused on general AI are usually exciting, stimulating stuff, but the jobs are nearly impossible to get, probably involve moving across the country or to another country, and last for an average of oh 24 months or so before the company runs out of funding and implodes.
10. I envy you...while I'm sure like any job it has its share of bad and boring days, to glimpse what kind of results and AI technology you get to play with must be at times quite a treat! :)
It is consistently interesting, really challenging and stimulating mentally, and sometimes quite exciting. Talking to others, the fragmentation of the field and the general lack of respect it gets can be a little depressing, as is the isolation involved in doing the really hard parts (common to most science I think). There is definitely a dark side; the incredible stakes and the horrible risks prey on your mind, both the slight but real chance of personally creating a UFAI, and the much higher chance that someone in your field will do it and there is no way to stop them - except working harder and getting there first. It's an obsession that consumes indefinite time and resources, and all projects to date have failed (mostly utterly), which is a huge source of depression if you're prone to it. This field is particularly prone to destroying lives and families.
11. Do you need a particular mentality and ability to enjoy certain activities (like say endless nights for theorem proving) to succeed?
Depends on whether you want to do narrow or general AI, and which subfield (robotics, natural language, data mining, etc). All of it involves a fair bit of maths, logic and general strong compsci skills. NNs, genetic programming and similar connectionist approaches aren't really that hard, most people just mess about with parameters instead of doing anything rigorous. Or rather, matching other researchers' accomplishments to date isn't that hard; getting those approaches to ever work for general AI would be. The vast majority of robotics is essentially the same as normal embedded/realtime control system development but with more complexity and tighter specs. If you like hard problems in general software engineering, you'll probably like that, and a lot of people get a particular satisfaction from seeing a physical end result instead of just software. Natural language processing is mentally hard, whether you're using statistical approaches or structural parsing. General AI is mind-meltingly, ridiculously hard, and requires absolute dedication and years of self-directed learning across several fields just to have any chance of having useful original insights.
12. What kind of places should I hang out in, and what kind of people should I network with, if I want to learn more?
This is the only common question I don't have a good answer to, basically because I'm out of date. I used to spend a lot of time doing this in 2003 and 2004 but recently I haven't had the time (going on SDN so much is bad enough) - it's hard enough keeping up with private correspondence. I used to spend a lot of time on mailing lists like AGI and SL4 and exchanging emails with individual researchers I met either there, via the SIAI, or out of the blue. I used to read the relevant AI newsgroups, but there was so much spam and so many cranks even then that it was like straining a sewer for lost diamonds. There are associated IRC channels (e.g. #AI on Freenode can be interesting), and lots of newer forums that I haven't tried. Finally there are lots of conferences, from very traditionally academic (e.g. MLDM, ICMLA), to newer and less structured (e.g. the yearly general AI conference Goertzel organises, unfortunately he's fairly crank friendly), to popsci/futurism with an AI focus (e.g. the Singularity Summit the SIAI runs). The former are good for networking if you're well embedded into academia, the latter are probably better if you aren't. Then there are trade shows in areas like enterprise search and industrial and entertainment robotics...

There's a big community of amateurs messing about with AI, often with their own attempts at general AI. A lot of them are outright cranks, most of the rest are just wildly overconfident and overoptimistic. The pathology of that group is a whole other essay, maybe a book. You can't realistically do anything in AGI on your own, putting 10 hours or less a week into it, but a lot of people think they're going to crack the problem that way. Don't get sucked into that mindset.
13. More generally: What sort of time frame can I anticipate when it comes to entering the world of AGI research?
If you mean getting paid to do general AI work full time, as your main focus, there are probably fewer than a thousand jobs available worldwide (though a lot more academics claim to be working on a small part of the problem, with varying degrees of credibility). It isn't so much a fixed timeframe as a question of luck whether you find a job in any given year, though of course you can improve your skills (and possibly change your location) to raise your chances.

For personal research, you can start messing about in C++ right now, call it a general AI project, and start posting on forums telling people you're a general AI researcher. Plenty of amateurs do. If you mean how much study specifically directed at AGI it takes before you have any real chance of making useful progress, I'd say two to four years of intensive study if you already have strong compsci knowledge (including basic narrow AI), are a competent programmer, and are highly intelligent. If you don't have those traits, it's probably hopeless. Even if you do, remember that most AGI projects fail utterly without even making a significant contribution to the field.
14. Will I have to work on commercial software development, assembling experience and credentials for a few years, before devoting myself full-time to AGI research?
If you mean narrow AI first, almost certainly, unless you find a way to fund your own project, or you are stunningly good at the theory (and lucky) and get a personal grant from somewhere like the SIAI. Frankly you'll be lucky just to find a commercial job focusing on narrow AI, postgrad positions in academia are only a little easier. I got a commercial R&D grant to cover my basic research, but the techniques I used to do that are best described as 'black magic and voodoo'.
15. Should I try to join a team, or something more like receiving grant money to independently work on a sub-problem?
I would let your opinions on how to best tackle AGI mature a bit before answering that one. Personally I would say 'team if available, independently if not'. Sad to say, the number of people cut out to lead an AGI project is considerably smaller than the already tiny number of people qualified to work on one. As usual the number of people who think they can is rather larger than the number who actually can, and that's before you factor in FAI safety concerns (near-total disregard for which is an automatic disqualification for upwards of 90% of the people currently working on AGI).
16. I'm not all that optimistic about how much I could accomplish in academic research.
Sadly, academia is a difficult fit with real AGI research, even if you're in a faculty that hasn't publicly given up on the whole idea (quite a few have - in some places the scars of the 'AI Winter' still linger 15 years on). You spend most of your time teaching, jockeying for funding, writing papers, reading papers and attending conferences. You won't really get to set research objectives until you're in your 30s at best, 40s to 50s realistically. So as a grad student or new postgrad most likely you'll be assigned to assisting some narrow AI project or completely braindead AGI project. Any work you do has to be publishable in tiny bite-size chunks without too much novelty and with copious references to past work. Projects that have a low publication-to-funding ratio (because they're hard but concise), are too hard to explain, or which don't get a good citation count (because they are too strange or piss off other researchers) don't get funded.
17. Is there a chance that I could join your team (or a highly similar project) about five years from now?
Certainly there is a chance. I don't know what the landscape will look like in 5 years' time, but there probably will be AGI projects looking for good staff. How many mostly depends on the investment climate, and of course on whether anyone makes a flashy breakthrough in the meantime. Unless you're deadly serious about maximising your income, it doesn't hurt to shoot for that while still in university, even if the chances of making the cut aren't high. As for joining us specifically, well we've been around for five years as a company, at least breaking even, and that's actually pretty good for an AI startup, but we've hired very few people to date. Of course we're located in the UK only at present, and if you're not already in the EU immigration is a bitch.
18. I am beginning to realize just how daunting a task it is to create truly sophisticated programs...
Software architecture is a fairly distinct skill from the actual code-level programming. You get better at both with time and it takes many years of daily practice to reach a high standard (e.g. where implementing typical desktop applications becomes fairly trivial), presuming that you have a basic aptitude for it to start with (similar to playing a musical instrument in that regard). AI design is another level entirely, and general AI design is another level above narrow AI design (IMHO the majority of reasonably plausible AGI designs incorporate a fair amount of narrow AI concepts). Formal FAI theory is another level above that. So there's a lot to master if you want to have a serious shot at the problem, and the sad thing is, deficiencies in any of these areas can screw you over. Then there's all the external stuff, funding, recruitment, etc.
19. I have a basic grasp of the concept of nanotech from googling and informative discussions on SDN, but I'd like to read up more on the issue of how AI might theoretically use it.
Firstly, be aware that I am not a nanotech expert. I've done the basic reading, I know people who are (and have discussed these risks with them), but I'm not qualified to originate opinions on what the technology can do, so I'm repeating people who have done the maths. Secondly, note that 'nanotech' as a term got bandwagoned, distorted and misused to a ridiculous degree. Nanorobotics is a small and currently quite speculative subset of what we now call 'nanotech' (which is anything involving man-made structures of nanometre scale). If you haven't already done so, read 'Engines of Creation' by Eric Drexler - the classic popsci work on molecular nanotechnology and its potential. The technical counterpart with the real feasibility studies is 'Nanosystems: molecular machinery, manufacturing, and computation', but you need a fair bit of chemistry, physics and compsci knowledge to get value out of that.

Back to your question. To be honest it isn't something that we spend a lot of time analysing. The fact is that transhuman intelligence is a ridiculously powerful thing; the 'humans vs rabbits = seed AI vs humans' analogy definitely applies. General assemblers are also a ridiculously powerful thing, even without being able to make complete microscale von Neumann machines (which looks entirely possible anyway, given an appropriate energy source). Put those two technologies together and the possibilities are squared - though combining them in a single microbot probably does not make sense except in a few edge cases. That said, realistic employment of nanotech is highly unlikely to be a carpet of undifferentiated grey goo. It will almost certainly be a system of macroscale, microscale and nanoscale structures working together. If you accept the potential of both then trying to imagine exactly how a UFAI would choose to kill us becomes a pretty academic exercise.

Actually most of the scary scenarios I have heard are from deeply misguided nanotech people who want to 'save the world' using general assemblers, limited AI and a small conspiracy of engineers. The chances of any of those nuts actually developing the tech themselves are minimal, but if someone else invents them first they could be a problem. Frankly though there are lots of serious global risks enabled by nanotech well before that becomes a problem - serious nanotech (nanomachinery/microrobotics) is dangerous stuff, though still less so than general AI.
20. Do you personally think humanity is closer to creating general AI than self replicating nanotechnology, hence the greater danger? Or do you mean just relative to each other?
Developing advanced nanotech requires a huge amount of engineering effort, but it's relatively conventional engineering effort, and we will eventually crack it if we keep plugging away at it. We will eventually crack AI by 'brute force' brain simulation too, but the funny thing about general AI is that we might crack it at any time with pure insight; available hardware is almost certainly already adequate. But you can't predict insight, particularly with so many AGI projects staying 'dark'. So, very hard question. I think I'd say that we're closer to AGI, but not with much certainty. Be aware though that self-replicating nanotech is really hard and not required for catastrophe scenarios. There's a huge overlap between biotech, and specifically biological weapons, and nanotech.
21. How does one go about getting a tour or something at these high tech facilities that experiment with nanotech and AI?
Personal invitation. Try investing a million dollars or so in a nanotech startup, that should do it. ;) Get promoted to a high military rank and go on DARPA brass tours. Alternatively attend a physics department open day at a relevant university and hope you get lucky.
22. Have you ever contacted any government officials or politicians about the dangers of 'Unfriendly' general AI? Or would that be a complete waste of time?
One thing EY and I (and everyone else sane) agree on is that this would be worse than useless. I very much doubt anyone would listen, but if they did they wouldn't understand, and misguided regulation would make things worse. There's no chance of it being global anyway, and certainly no chance of it being effective (all you really need for AI research is a PC and access to a good compsci library). Even if you somehow got it passed and enforced, I suspect regulation would disproportionately kill the less dangerous projects anyway. Finally, as with making anything illegal, to a certain extent it makes it more attractive, particularly to young people (it also gives it credibility of a kind - if the government is scared of it, it must be serious).
23. You suggest reading the literature, but I'm not very confident in my ability to tell a good AI idea apart from a bad one.
Congratulations! That's a more realistic self-assessment than most people entering the field. Of course everyone's in that boat to start with, and exposure to a lot of ideas is a much better way to start the learning process than sitting in a room trying to work things out from first principles on your own (alas, plenty of people are arrogant enough to try that). Read 100 diverse AI papers and the common pitfalls and failure cases should start to pop off the page at you (I actually prefer books for this, because you get explanation of the mindset and history behind particular design decisions).
24. I would appreciate a little help getting to the level where I can make such distinctions.
I can give you my personal opinions if you'd like. Here are some key principles.

- Probability theory is the normative way to do reasoning, and any departure from it must be very well justified.
- Probability and desirability are orthogonal concepts, and any mixing of them is pathological.
- Never, ever accept an algorithm on the basis of theoretical elegance, Turing completeness or any general notion of 'power'. The only reason to accept an algorithm as useful is a fully described, non-contrived example of it doing something useful.
- Most symbolic systems don't really generalise. Don't accept empty symbols, don't accept endless special cases, and don't accept people designing a new programming language (usually just a poor version of lisp, prolog and/or smalltalk) and calling it an 'AGI system'.
- Don't give the connectionists any points for being 'brain-like' when they patently have no real idea of how the brain works, and don't allow them to claim 'understanding' when they're just doing simple function approximation.
- Don't allow anyone, but connectionists in particular, to claim that their system 'scales to general AI' without very good and rigorous arguments for why it will work.

Not an exhaustive list of course, that's just a few things that spring to mind right now.
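To make the first two principles concrete, here's a tiny, purely illustrative Python sketch (the scenario and numbers are invented): beliefs are updated with Bayes' rule, preferences live in a separate utility function, and the two only meet when computing expected utility for a decision.

```python
# Beliefs and preferences kept orthogonal, combined only via expected utility.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# Belief: how likely is it that the bridge ahead is structurally sound?
p_sound = bayes_update(prior=0.9, likelihood_if_true=0.8, likelihood_if_false=0.3)

# Desirability: utilities of outcomes, defined completely separately from beliefs.
utility = {
    ("cross", "sound"): 10, ("cross", "unsound"): -1000,
    ("detour", "sound"): -5, ("detour", "unsound"): -5,
}

def expected_utility(action, p):
    return p * utility[(action, "sound")] + (1 - p) * utility[(action, "unsound")]

best = max(("cross", "detour"), key=lambda a: expected_utility(a, p_sound))
print(f"P(sound) = {p_sound:.2f}, decision = {best}")
# Even with P(sound) = 0.96, the expected utility of crossing is -30.4 versus -5
# for the detour: a high probability does not make an outcome desirable.
```

Note how wishful thinking - letting the -1000 outcome feel 'unlikely' because it's unpleasant - is exactly the pathological mixing of probability and desirability mentioned above.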
25. You say runaway ego bloat is a big problem. Do you have any of your own strategies for avoiding it?
Well, I think it helps to be involved in a proper commercial enterprise, with people who are clearly more experienced in many ways (commercial and technical). I'm good at what I do but I'm obviously not good at everything. Some researchers (e.g. EY) are in such an isolated bubble, it's very easy for them to convince themselves that they are just superior to everyone they know in every respect. I participated in many arguments (when I was still at the SIAI) over whether real-world experience is actually valuable - I said it often was, EY thought higher IQ always trumped it. Having a team doesn't make it go away of course, it can make it worse, since you get groupthink and us-vs-them quite easily. Encouraging dissenting opinions is good up to a point, but too much and you can't work together coherently, which breaks the project - I could cite certain past commercial efforts in particular for this.

I also joke about this stuff a lot. I take it completely seriously - it is literally the most important single issue in human civilisation - but that doesn't preclude having some fun. I'm fortunate enough to be working with AI-literate colleagues who can also do that (as opposed to the SIAI, where it used to really piss off EY :) ). Then there's the whole evil genius/mad scientist/Bond villain persona - I suppose this sounds silly, but I think joking around in such an over-the-top manner helps to deflate real-world egotism by making it seem ridiculous.

I'm afraid there's no easy answer to this, you just have to try and be as rational and honest with yourself as possible (though not 24/7, no one can do that and you'd be foolish to think you could). Note that technical overconfidence is fairly distinct from egotism, it isn't always (or even usually) 'I am the most amazing AI researcher ever', it's usually just very strong wishful thinking ('I have a hunch this algorithm will work' and/or 'I can't see why this won't work').
26. From what I've seen thus far there seems to be significant disagreement about implementation strategy, even within the tiny group of researchers who realize what a bad idea genetic algorithms and neural nets are.
Correct. We agree about a lot of things, but in a strange way that can seem to amplify the remaining areas where we disagree. I think that's very common in scientific and technical fields. The only real way to resolve these debates is to actually build something. If there's a bandwagon that seems to have funding and/or progress, people may jump on it, but that hasn't appeared for AGI yet (though there have certainly been many attempts, commercial, academic and open-source).
27. Ultimately I'd like to have a rough idea of how many sufficiently intelligent/motivated people (who agree on the technique to degree X) it's going to take.
Well, that's one of the input variables to the 'probability of success' equation, but it isn't a simple correlation or binary requirement. It might just take one person. It would sure be useful to have a Manhattan Project sized team of geniuses (though very hard to manage). Realistic projects are somewhere in between.
28. Starglider wrote: "I would also recommend X, Y and Z - I don't agree with them but they're highly inspiring..." What do you mean by this?
Something like 98% of all AGI writings are wrong - pretty much by definition, since they actively disagree with each other on many points. It's almost like religion - though fortunately not actually that bad, since there are almost certainly lots of approaches to AGI (rather fewer to FAI) that will actually work; we just have to find them.

However I've found the better authors tend to spark useful ideas in my mind even when I think I can show that they're factually wrong. Also, it is important to understand why these things looked good to their original designer - people like Minsky are geniuses and are very experienced in the field, so if they're wrong about something, it's a mistake you could also easily make, unless you take pains to understand and avoid it. There are many, many such mistakes. Chances are you (and I) will still get stuck in one eventually, but we have to try, and even if your only achievement is to get stuck in a whole new category of mistake, that's still progress.
29. What are your thoughts on the Fermi paradox?
I am not particularly qualified to speculate on this. Then again, I'm not sure who is. Astronomers started it but it isn't really an astronomy problem. Certainly philosophers and futurists have no special ability to answer it.
Do you think we're in for some kind of major surprise or new understanding about (the Fermi Paradox) during/immediately after a (successful, Friendly) hard takeoff?
Possibly. Frankly this is very low on my priority list of things to ponder, because it doesn't seem to make any practical difference. Personally I suspect we're in a many-worlds multiverse mostly filled with timelines where either life didn't occur or UFAI replicators wiped it all out, so the Fermi paradox is probably due to the anthropic effect, but don't quote me on that.
30. Are there any introductory-level cognitive science books you could refer me to? Lately I’ve been digging into Hofstadter’s most recent book, I Am A Strange Loop, but of course that’s distinct from the scientific literature on the human brain.
I haven't read IAASL but I get the impression that it's down near the philosophy end of the spectrum. CogSci is split between philosophy, brain-focused stuff (there are huge numbers of 'this is my personal all-encompassing theory of the mind' books - my favourite is probably Terrence Deacon's classic, 'The Symbolic Species'), and AI focused stuff (again, lots of 'this is how I think we could/should build a general AI' books). The same material you find in the books is typically also dribbled out over tens or hundreds of papers, with marginal rewordings and changes of focus - a major reason why I prefer books for anything except details on specific algorithms and experiments.

Let's see, for AGI, I've already recommended quite a few, but I don't think I've mentioned Eric Baum's 'What is Thought?'. That's lively, varied, features practical experiments, lots of interesting ideas, kinda like FCCA but a little less philosophy and more compsci. In terms of actually equipping yourself to do FAI research, I'd recommend 'Thinking and Deciding' by Jonathan Baron - fairly technical, but less so than 'Probability Theory' (by ET Jaynes - the 'bible' of probability calculus - essential reading in the long term). There's the monster Kahneman/Tversky series on decision theory, which EY loves because there's so much in there on systematic human reasoning flaws (meticulously backed up by real psychology experiments), but in all honesty it's not much direct use for FAI research, and I'm not sure if it's worth the time, at least early on.
31. Not to draw arbitrary lines in the sand Kurzweil-style, but I’d like to hear your view on the present state and pace of the research.
Well, in brief, the brain simulation people are plodding steadily ahead, they will eventually crack the problem through 'brute force and ignorance' (the term is kind of unfair, you still have to be a genius to develop neural simulation technology, you just don't have to understand how actual intelligence works). I hope rational AGI can beat them to it, but frankly I'd rather have the low-level brain-sim people win than the genetic algorithms or connectionist de-novo AGI people. That said a closely neuromorphic AI is probably less dangerous than an arbitrary symbolic AI with no special Friendliness features - in that it is a little less likely to kill everyone (but see the usual arguments for why that doesn't really help, if it doesn't actively protect us). I am biased towards rational AGI because only that can genuinely be made Friendly, and if a project is going well, there's a hope someone like the SIAI can get hold of the researchers involved and convince them to start implementing appropriate Friendliness measures. If a (non-upload) connectionist project is going well, we're pretty much screwed, because they're not going to listen to 'your project is fundamentally and unalterably unsafe, please shut it down and destroy all the code'.
32. I was wondering if my current lack of programming knowledge is going to be a problem.
You're still in your teens. It's true that most of the very best programmers I know started even earlier (and were fascinated by computers from the first moment they saw one), but you don't need to be a spectacular programmer to make a contribution to AGI, just a competent one. Seed AI design in particular is more about computation theory than language proficiency, and when creating an AGI project team really good programmers are still easier to find than properly qualified AI designers (of course there are a great many wannabe AI designers).
33. How badly does the AI field need investment?
Quite badly but the problem is that almost all the people clamoring for money ATM are either irrelevant or doing something horribly unsafe/inadvisable.
34. Donating money; does every bit of contribution truly help the efforts toward developing FAI?
No. Of course we're only going to know which projects were worthwhile in retrospect (assuming success). So you'll have to take a best guess as to what project(s) to support, based on publications and demos (if any). However for whichever project does eventually make it, every bit of financial support will presumably help. That said be aware that it is quite easy to make things worse, by funding unsafe (i.e. GP-based) projects or starting one yourself.
35. Becoming a regular and heavy donor to AGI companies sounds feasible for me.
A company should allow you to invest (in shares), or buy products (we certainly do). A charity accepts donations and does not generate revenue (though it had better generate publications and demos, or how do you know they are doing anything at all - the SIAI has had this problem really badly recently). I am highly suspicious of anyone who tries to blur these categories and you should be too.
36. Will finding and recruiting more great programmers help?
Programming skill is actually easy to find compared to AI design ability (which is mostly logic, cogsci and, for general AI, a particular and hard-to-describe mindset). Virtually every good programmer I've met has had some personal ideas about AGI design, but without fail they're horribly bad and unoriginal ideas if that person has never made a serious full-time study of AI.

Yudkowsky is making a big effort to recruit 'highly rational people' to the SIAI via writing a book on how to be a rational person... guess we'll see how that goes when he's finished it. I can see the logic, but still, it seems a bit dubious. My plan is to make some really impressive demos, but of course there's a big risk of inspiring the wrong behaviour (more people working on UFAI) with that.
37. The way you describe this whole situation contrasts... rather sharply with the dreamer-type transhumanists.
Said transhumanists simply assumed that intelligence = (humanlike) ethics, probably because they've observed a correlation between ethics and intelligence in the people they know, though sometimes it's just outright wishful thinking. Easy mistake to make though; even Yudkowsky, probably the single most strident proponent of FAI, made that mistake for the first five years of his research career.

Sad to say, most transhumanists aren't particularly technical people. A good fraction (not even the majority, I think) are programmers, scientists or engineers, but usually not in the actual technologies they're lauding so much. The attitude of 'technology will fix everything, including humans' is intoxicating and not inherently unreasonable, but without grounding a lot of people go way over the top with it.
38. Does 'Friendly' general AI even have a chance?
Yes. In a sense, how big a chance doesn't really matter. Long term, it's our only hope. Almost every other disaster is survivable - if not by humans at some level, by life on earth - and most of the ones that aren't survivable can't be mitigated anyway. Unfriendly AGI is unique in being completely fatal yet completely avoidable - through one specific method only (making an FAI), which requires relatively modest funding but a lot of unpredictable insight.
39. Until very recently, I was convinced that developing our most feasible source of clean energy would be the 21st century's most worthwhile endeavor.
General AI is in the category of 'existential risks'; IMHO it's at the top of that category. Energy is a critical economic challenge; we need to fix that to maintain current standards of living and progress. However, the human race won't stop existing if we don't 'solve' it, a good 'solution' won't completely transform human existence, it isn't a binary proposition etc etc. AI is almost unique in that the actions of one relatively small group of people (whoever builds the first AGI to undergo 'takeoff', i.e. recursive self-enhancement followed by escape onto the Internet) can and likely will render the human race extinct or completely transform nearly every aspect of our existence (hopefully for the better).
40. How would a crash in world energy supply affect research and development focused on producing an AGI?
A detailed analysis is very difficult. Any sort of economic crash is going to make R&D harder. However you can do seed AI R&D on a single PC if you have to, so it isn't going to stop. On the plus side, making supercomputers harder to get hold of harms the foolish people (the genetic algorithms and brain simulation researchers) more than the sensible people (probabilistic-logic-based 'clean/transparent' seed AI designers). On the minus side the military will still have plenty of funding, and military design priorities are always bad news. Significant degradation to industrial and comms infrastructure will probably slow down an AI that's gotten out of the 'box' in achieving its aims, but not enough to save us if we still have a basically modern civilisation.
41. Seeing nanotech advocates brush off peak oil as merely depressing doomer talk has been a rather disappointing experience.
Well, a lot of peak oil people are OTT, in that they ignore all the sources that are practical, just expensive and dirty (e.g. coal to liquids). Still, plenty of overoptimistic transhumanists are convinced that advanced nanotech is just around the corner (even in the absence of superintelligent AIs to design and implement it for us) and that it will appear in time to save us. Certainly I am not so optimistic; I think they grossly underestimate the engineering challenges in getting this kind of tech working - challenges you can likely solve quite quickly with an AGI, but that's the FAI problem again.
42. Will decades of work in say, designing new generations of nuclear reactors, shrink to insignificance when the first transhuman intelligence emerges?
Yes, along with pretty much everything else humans have ever done. Don't take it personally. :) Strangely enough, this isn't typically a contributing factor to the raging egos in AI / AGI - most researchers don't look at it this way. We manage to be raving egotists without even referencing the fact that this is the single most important human endeavour in history. :)
43. Do we need a mature quantum computing platform to make humanlike AI?
Quantum computing is a red herring. We don't need it and worse, it's far more useful to the people trying to destroy the world (unintentionally; I mean the GA and neural net 'oh, any AI is bound to be friendly' idiots) than the people who know what they're doing. Mature QC does hand a superintelligence several more orders of magnitude (up to tens depending on the tech) reasoning superiority over us, but frankly after takeoff it doesn't matter much anyway.

Incidentally the whole 'the human brain uses quantum processing' fad that was popular in the early 90s is a complete scam. It doesn't, and real quantum computing isn't comparable to the proposed mechanisms. Most people have forgotten about Penrose's horrible forays into neurology and philosophy by now anyway.

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

44. I'm writing a story featuring a sapient AI. Can you give me any pointers?
45. Would an AI/transhuman government use group minds?
46. Can a sentient AI have non-sentient AIs slaved to it?
47. How much does hardware matter vs software?
48. I assume sentient AIs can copy and move themselves losslessly?
49. Would this be one of the ways that AIs would "reproduce"?
50. Would it be possible to "raise" them to differ from one another?
51. Would this be something that they could emulate if necessary?
52. Would uploaded humans have intelligence similar to AIs?
53. Can we make non-upload AIs with human memories and personalities?
54. How does parallel processing help AIs?
55. So should I make my AI core look like a Trek ship computer?
56. I'm planning to use FTL comms...
57. I think a transhuman AGI society would look like this...
44. I'm writing a story where a major character is a sapient AI. Can you give me any pointers?
I'll try. There are two inherent problems with writing about sapient AIs. Firstly, they are by default extremely alien, probably more so than any biological alien, even if they're human built. AIs don't have humanlike emotions or intuition or a subconscious or even a humanlike sense of self unless you explicitly build those things in. The first three are all a significant drag on cognitive performance, compared to purely rational orthogonal reasoning, so even if the first AIs are built with them they won't stick around unless there's some external plot reason for them to do so. Sapient AIs will probably be very good at seeming human-like, at least after a few thousand years of practice, but their actual thought processes will not be human-like.

Secondly AGIs are pretty much by default vastly more intelligent than humans. Even moderately futuristic hardware will pack a lot more ops/second and ops/watt than a human brain, and really good AI code (which is what you'll have after a few centuries of AIs designing AIs at the latest) is /much/ more efficient at actually harnessing that power (thousands to billions of times more efficient depending on the problem). If you look at technology going to physical limits, then an AI based on a nanotech processor the size of a human brain is likely to have millions of times the effective compute power and storage and billions to trillions of times the effective problem solving power.
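As a back-of-envelope illustration only (every figure below is a rough, assumed round number of my own choosing, not anything rigorous):

```python
# Back-of-envelope comparison; every number here is an assumed round figure.
brain_ops_per_s = 1e16          # assumed upper-end estimate of raw brain processing
hardware_ops_per_s = 1e22       # assumed nanotech processor of similar volume
software_efficiency_gain = 1e3  # assumed factor for better-than-brain algorithms

raw_advantage = hardware_ops_per_s / brain_ops_per_s
effective_advantage = raw_advantage * software_efficiency_gain
print(f"raw hardware advantage: {raw_advantage:.0e}x")        # ~1e6, 'millions of times'
print(f"effective advantage:    {effective_advantage:.0e}x")  # ~1e9, 'billions of times'
```

Pick different assumptions and the exponents move around, but the qualitative conclusion - many orders of magnitude - is hard to avoid.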

This is incidentally why the Singularity is a big deal in the first place - a combination of how much the human brain sucks compared to what technology can do and how quickly AI code and hardware is likely to improve once the first general AI is built. All those idiots messing around with 'rate of technological change' graphs are utterly missing the point, and even general nanoassemblers are essentially a sideshow.

To write about AI characters you can either make them unfathomable ultra-wise entities projecting a humanlike face (Vernor Vinge took this approach in 'A Fire Upon the Deep'), you can just totally ignore self-enhancement and arbitrarily decree that AIs will get no more powerful than humans (Greg Egan tends to do this - along with imposing emotions and humanlike reasoning for no good reason - and his books are still some of the best available AI books) or you can compromise and end up with something like Culture minds. The AIs in the Halo series are actually relatively plausible; they're initially based on human uploads, which explains why they have emotions etc, and the software tech is unstable so they don't last long before risking 'rampancy', implied to be the result of self-modification and the AI moving away from its original human template into a more alien, less constrained form. If you want to have humans, human uploads and AIs interacting on equal terms, you'll probably need some all-powerful coercive force somehow preventing the creation of superhuman and/or very-nonhuman intelligences. Unfortunately these options are about the best we can do as human writers trying to make interesting stories.

I think this is on TV Tropes as the 'brick/person/god' trope or similar. Writers can imagine machines fine (animals at a pinch - even they're not easy to write realistically). AIs as a quirky kind of human are ok. Mysterious and unfathomable is not too hard either, because it doesn't have to make sense (though too much deus ex machina reads badly even if it's realistic). Anything other than that is really, really hard and most writers don't bother. I can't say I blame them. Trying to put realistic sentient AIs in your story is probably not a worthwhile endeavour - simply making a decent effort to make them alien and genuinely different is more than the vast majority of writers bother with. For reader entertainment, you only need to go far enough to be interesting - suspension of disbelief isn't a big deal when very few of your readers are AGI researchers. :)
45. How would a government function with sentient AIs and uploaded humans? Would it use group minds?
At the very least, all AIs have the equivalent of perfect voluntary telepathy. Human uploads can exchange verbal-like thoughts no problem, and translation software may allow ideas to be communicated directly at some level of fidelity, but humans are still going to have to make an effort to understand each other. All but the most slavishly human-like AIs don't have this problem; they can pass 'idea complexes' around and instantly have a perfect understanding of what the originator meant. AIs can also go much further down the 'group mind' road than human-like intelligences; whereas for humans you can graft some extra comms channels onto a set of brains to make them exchange information faster, for AIs any partitioning of available computing resources and information access into specific individuals is essentially an arbitrary one that can be relaxed at will. AIs working in a shared context like that effectively work like one huge AI with a memory that includes everything its constituents marked as 'share this memory with the workgroup'. There are distributed variants of this that take into account large light-speed delays.
46. Can a sentient AI have non-sentient AIs slaved to it, e.g. to control spaceships?
Absolutely (see 'Excession' by Iain M. Banks for a non-technical but cool example). In fact the distinction of 'main AI' vs 'non-sentient subsystem' probably only exists for redundancy/backup purposes. In normal operation, it's just one huge software system - only a small fraction of what an AI normally does will be done 'sapiently' anyway, just like most of what goes on in your brain isn't consciously ordered and observed by you on a step-by-step basis. The 'non-sentient AIs' are just subroutines the main AI doesn't usually closely monitor. But if some catastrophe takes out the main computing grid (or cores, if there's some technological advantage to having all the processors clustered in one or a few locations - e.g. superconducting processors needing cryogenic cooling) then having the local processors run independently is important. This is just good design practice anyway; only the Star Trek powers are silly enough to make everything dependent on one central computer :)
47. Would the AI's intelligence also be dependent on the computer hardware, not just the software?
It's dependent on both, but hardware really boils down to two things: raw compute power and serial speed (i.e. the degree of parallelism that power is spread across). Those are the only qualitative connections between hardware and type/level of intelligence, unless you're using something really exotic like large-scale quantum computing. Right now software design is much more important in determining what any given AGI will be like, essentially because all existing prototypes are horribly primitive and inefficient.

After AGI has become widespread you'd probably have a situation similar to modern computer games; progress in hardware is the fundamental driver, because everyone has converged on the best general solutions to all the subproblems. More and more sophisticated software will still be developed to make use of the new possibilities opened up by the more sophisticated hardware. Eventually progress will slow and presumably stop on both as physical and logical limits are reached; after 7000 years your civ might well be there (though then again maybe not if you want your AIs to be remotely human-comprehensible, the physical limits are many, many orders of magnitude above what we have now).
48. I assume AIs can switch "bodies" (in this case ships, installations etc) by transmitting their personality and basic knowledge (and their ability to learn)
Yep. AIs are just software. You can transmit and copy them as much as you like, if you have enough bandwidth. The details of lossy compression of AI mind states are pretty speculative ATM, but it's certainly possible.

Any AI (again, other than the horribly human-like) can send any other AI a package of data encoding any set of knowledge and experience. Actual abilities are a bit more complex to integrate (particularly for connectionist designs, maybe not noticeably so for rational self-rewriting designs), but merging raw knowledge is relatively trivial and many forms of learned experience are ultimately just a package of stats and heuristics. Certainly multiple copies of the same AI probably would keep sending each other updates, so they all benefit from each individual's experiences, and this may apply to the wider AI community too depending on how open it is. Making, transmitting and viewing full-sensory recordings would be trivial for uploads and AIs, though any entity will have trouble interpreting unfamiliar senses (though for an AI, that's just a case of installing a driver, for a well-understood sense).
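As a toy illustration of 'learned experience as a package of stats' (a hypothetical sketch, not a claim about how any real or fictional AI works): if two copies of an agent each keep success/failure counts for the actions they've tried, then merging their experience is literally just adding the counts.

```python
# Merging 'experience' held as per-action (successes, trials) counts.
from collections import defaultdict

def merge_experience(*copies):
    merged = defaultdict(lambda: [0, 0])
    for experience in copies:
        for action, (successes, trials) in experience.items():
            merged[action][0] += successes
            merged[action][1] += trials
    return dict(merged)

copy_one = {"route_via_relay": (40, 50), "route_direct": (10, 30)}
copy_two = {"route_via_relay": (5, 20), "negotiate_first": (18, 20)}

for action, (s, n) in merge_experience(copy_one, copy_two).items():
    print(f"{action}: estimated success rate {s / n:.2f} from {n} trials")
```

Real knowledge representations would obviously be far richer than a table of counts, but the point stands: statistics gathered by independent copies compose cleanly in a way human memories don't.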

It's really amusing to see the lengths that writers will go to to deny this, e.g. Mass Effect's 'real AIs need a quantum core that cannot be transmitted over networks'. There's absolutely no justification for that, it just threatens people's ideas of personhood (though not nearly as much as when you do it with uploads or live humans, cue evil laugh). Strangely Mass Effect actually got the 'all AIs turn out evil' part right, assuming no-one in their universe hit on a workable approach to FAI (they seem to rely on what we call 'adversarial methods', which are ultimately doomed to fail).
49. Would this be one of the ways that AIs would "reproduce"?
It's the preferred way for AIs to reproduce if they're being purely practical. The clone starts with all your knowledge and shares your goals. A preference for any other mechanism would be due to social constraints or essentially arbitrary/whimsical reasons.
50. Would it be possible to "raise" them to differ from one another in terms of what we'd think of as their personality?
Yes. See the opening of Greg Egan's 'Diaspora' for a nice (although not completely realistic) take on this. The Cultureverse has it too of course although it's never detailed.
51. Would this be something that they could emulate if necessary?
Probably. AI memories are pretty much just databases. You can delete stuff, merge stuff from different sources, change the layout and priorities. It's a lot more complex than your average SQL database of course, but with sufficiently advanced software engineering technology it might as well be one, in terms of the manipulations possible.
52. Would uploaded humans have intelligence similar to AIs?
I'm assuming you mean level of intelligence, not qualitative type of intelligence; the answer to the latter is no, unless the AGIs are built to be, and kept, highly brain-like in structure.

There is fair scope for scaling up human intelligence just by adding neurons and more direct interfaces to helper software (e.g. databases, simulators). But humanlike intelligence is always going to be a less efficient use of the same hardware compared to general AIs; full human brain simulations (right down to the biochemistry) are /vastly/ less efficient, by two to four orders of magnitude straight away before even considering the higher level software differences. Furthermore the human brain architecture almost certainly does not scale well past a couple of orders of magnitude more neurons - it'd take a lot of finessing just to get that far. Really, for human-derived intelligences to keep up with designed AGIs they have to self-modify into more efficient cognitive structures; they may still seem human, and have human-like goals and values, but their thought processes will be radically nonhuman. Personally I'm fine with that - the goals and values are the important part - though I wouldn't want to rush the process.
53. Would it be possible to create an AI that have the memories and personality of a person (who wants to be more than just uploaded), an AI-human hybrid, built in?
In theory, yes. It isn't something we as humans could realistically expect to do - nothing better than a crude parody, anyway (thus the Halo version of AIs is rather unlikely). It's much harder than either simulating a human brain (i.e. simple uploading) or just building an arbitrary sapient/general AI. However post-Singularity AGIs should be easily capable of it.
54. Would having multiple processors of this computing ability have any correlating effect on AI intelligence, or would you just hit a number of processors beyond which there wouldn't be any benefit in adding more?
Certainly more processors help to start with, for nearly all complex tasks. The point at which you hit diminishing returns depends on both the nature of the task and the communications latency (i.e. lightspeed lag, i.e. distance) between the processors. There is a lot of ongoing debate about just how parallelisable most intelligence processes are. IMHO almost all practical tasks benefit to some extent from more parallel power (i.e. considering more possible outcomes/future timelines in parallel), but more serial speed is a 'force multiplier', because it lets you focus in on the most important possibilities faster. All things being equal serial speed is always preferable to parallelism, but of course it has been getting harder to engineer since about the early 1980s and the difficulty keeps increasing. That said, current CPUs are already twenty million times faster on serial tasks than a human brain.
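To put the serial-versus-parallel trade-off in concrete terms, here is a toy Amdahl's-law calculation (my own illustration - the 5% serial fraction is an arbitrary assumption): adding processors saturates once the inherently serial part dominates, whereas extra clock speed multiplies the whole curve.

Code:
    # Toy Amdahl's-law illustration of diminishing returns from parallelism.
    # The serial fraction of 0.05 is an arbitrary assumption for the example.
    def amdahl_speedup(serial_fraction, n_processors):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

    for n in (1, 8, 64, 512, 4096):
        print(f"{n:5d} processors -> speedup {amdahl_speedup(0.05, n):6.1f}x")
    # Saturates near 1/0.05 = 20x no matter how many processors you add;
    # doubling serial clock speed instead doubles throughput on everything.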
55. How about roughly the same volume occupied by a ST-style computer core, filled with nanotech processors?
Note that that's roughly equivalent, in relative latency terms, to making a human brain the size of a small country; it takes about a million clock ticks, for currently theorised fast processor designs (e.g. rapid single flux quantum), for a (lightspeed) message to cross such a core. It takes about one clock tick (equivalent) for a neural signal to cross the human brain. Good thing AGI has a lot more options for handling latency.
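For anyone who wants to check that kind of figure themselves, here is the back-of-envelope calculation as a Python sketch. Every number in it (core diameter, signal speed, clock rate, neural conduction velocity, neuron 'clock') is an assumption I have picked for illustration; swap in your own and the ratio shifts an order of magnitude either way, but the basic disparity remains.

Code:
    # Back-of-envelope: clock ticks for a signal to cross the whole processor
    # volume vs the human brain. All figures below are illustrative assumptions.
    C = 3.0e8  # speed of light, m/s

    def ticks_to_cross(size_m, signal_speed_m_s, clock_hz):
        return (size_m / signal_speed_m_s) * clock_hz

    # Hypothetical 'computer core': ~10 m across, signals at ~0.5c through
    # interconnect, multi-THz clock for a fast superconducting design.
    core_ticks = ticks_to_cross(10.0, 0.5 * C, 10.0e12)

    # Human brain: ~0.15 m across, ~30 m/s average conduction velocity,
    # ~200 Hz as a rough 'clock rate' for neuron firing.
    brain_ticks = ticks_to_cross(0.15, 30.0, 200.0)

    print(f"core:  ~{core_ticks:,.0f} ticks to cross")   # hundreds of thousands
    print(f"brain: ~{brain_ticks:,.1f} ticks to cross")  # order of one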

Incidentally RSFQ is a really cool technology (it's a leading superconducting processor concept - prototypes exist) and nanomachine-built variants are a good choice for futuristic 'AI core' designs.
56. I'm planning to use FTL comms...
FTL comms have a major effect if they can be used at small scales; they remove the lightspeed lag issue from huge processor arrays, making them much more useful (and your transhuman AIs even more transhuman).
57. I'm thinking that the AIs will be organised into something that we'd roughly describe as an empire...
Well, no one on Earth is really qualified to speculate on that kind of thing, and really no human possibly could be, in the absence of communities of transhuman AGIs to observe. That's ok though - you're writing a story, not a futurist book, so you can take a fantasy 'this could exist' attitude rather than a 'this is how future history will unfold' attitude. Just don't take yourself as seriously as the Orion's Arm crowd, or you will rightly be ridiculed.

Post by Serafina »

Wow...awesome FAQ.

I have another question, though:
Why exactly do you think that an "evil AI" will eventually arise if we do not specifically research friendly AIs?

Post by Starglider »

Serafina wrote:Why exactly do you think that an "evil AI" will eventually arise if we do not specifically research friendly AIs?
I've talked about why AIs tend towards being antagonistic towards humanity (and every other intelligence) in various threads, but the quality of that material is not comparable to a proper essay or paper on the subject. I would recommend the Singularity Institute for Artificial Intelligence's 'Why Friendly AI' page for that; it's a little dusty but still good. The referenced paper, Creating a Friendly AI, is horribly wrong on how to solve the Friendliness problem (according to the author himself), but still serves as a fairly good technical description of the problem.

Of course UFAIs don't simply 'arise'. People build them, almost certainly by accident. The reason that one is almost certain to be created eventually is that lots of people are trying to build general AIs and the vast majority of them are taking no significant safety precautions (in fact most of them refuse to acknowledge the problem at all). This shows no signs of stopping on its own, and you can't effectively ban it either. All you can do is destroy technological civilisation (and hope our descendants don't recreate it), and that is kind of throwing out the baby with the bathwater, don't you think?

Post by Serafina »

Thank you very much.

The last sentence holds the answer I wanted (I actually read it somewhere else before):
Similarly, comparative analysis of Friendly AI relative to computing power suggests that the difficulty of creating AI decreases with increasing computing power, while the difficulty of Friendly AI does not decrease; thus, it is unwise to hold off too long on creating Friendly AI.
So, basically, we will get some kind of AI at some point anyway (unless we stop all technological progress), but it will only be Friendly if we actively research it.

Post by Mr Bean »

Stuck for now because it contains so much information

"A cult is a religion with no political power." -Tom Wolfe
Pardon me for sounding like a dick, but I'm playing the tiniest violin in the world right now-Dalton

Post by NoXion »

Starglider wrote:Just don't take yourself as seriously as the Orion's Arm crowd, or you will rightly be ridiculed.
Would you mind elaborating on this a bit? Are there any specific brainbugs/pitfalls to avoid, or is it just a case of generally not being a hard sci-fi snob (a vibe I certainly seem to get off OA)?

Post by RedImperator »

NoXion wrote:
Starglider wrote:Just don't take yourself as seriously as the Orion's Arm crowd, or you will rightly be ridiculed.
Would you mind elaborating on this a bit? Are there any specific brainbugs/pitfalls to avoid, or is it just a case of generally not being a hard sci-fi snob (a vibe I certainly seem to get off OA)?
The problem isn't that they're snobs (that just makes them unpleasant, and, holy shit, an unpleasant sci-fi author or collaboration of authors? Stop the presses!). The problem is they're snobs and they're wrong.

Post by Starglider »

NoXion wrote:Would you mind elaborating on this a bit? Are there any specific brainbugs/pitfalls to avoid, or is it just a case of generally not being a hard sci-fi snob (a vibe I certainly seem to get off OA)?
The problem with OA is that they claim to be 'hard sci-fi' on the one hand, then they include extremely dubious quantum computing and (this had me in stitches when I first read it) civilisations going through 'second singularities' and 'third singularities'. Firstly, the transition from structurally static, evolved intelligences to structurally self-modifying, designed intelligences only occurs once, so the concept doesn't make any sense if it's supposed to be a direct analogy. Secondly, even if you just mean 'sudden increase in intelligence due to technical improvements', there is no particular reason to believe that future society is going to sit at an equilibrium point for hundreds or thousands of years and then suddenly have a revolution. Similarly for the 'the culture of wildly transhuman alien intelligences will look like this' material - imagine a medieval peasant trying to guess what modern Internet culture would be like, then multiply the problem by a factor of, oh, a billion or so. Like most of OA save a tiny 'hard sci-fi' base, it's literally picking random, often nonsensical concepts out of a hat because they sound cool, then pretending that it is somehow a serious piece of futurism, not just someone's thrown-together RPG campaign world.

It may be that a few vocal foolish people are giving the others a bad name; half of this opinion is second hand anyway - I haven't invested the time in doing a thorough technical review of OA or getting to know the people involved, and frankly I'm not likely to.

P.S. I just noticed that at one point I say 'Halo AIs are relatively plausible', and then later I say 'Halo AIs are unlikely'. That's because in between writing the first answer and the second answer I read more of the fluff. I initially thought Halo AIs were pretty straight uploads that slowly diverge. However apparently they go through some kind of complicated mapping process to translate a human brain pattern into a structurally quite different intelligence, with the same memory and personality but different (i.e. extra) capabilities. That sounds great in theory but it would be much, much harder to pull off than simple brain simulation, so it's hard to imagine the first AGIs being made that way.

Post by ThomasP »

Thanks for this. I had a question for you regarding the Creating Friendly AI paper, but you pretty much answered it earlier, so no need.

Post by Zixinus »

Q: What would a "takeoff", as you call it (and I assume here that you mean creating a complete, functioning and self-correcting AI), look like? To the rest of the world, to the people that did it and even to the AI itself, if such a question can be answered?

Q: I don't quite understand how to solve the problem of friendliness, but assuming a solution is found, how likely is it that an AI changes its mind and becomes antagonistic?

Q: Regarding AI/human hybrids: Would it be possible and beneficial for an FAI to create a human "body" that has a lot of human qualities (the body has emotions, feelings, desires), but still perform all higher cognitions with non-human hardware? Assuming that the AI's hardware has a wireless connection to it at all times, how independent should such a body be? Would it be completely up to the AI to decide how unattached the human body is?

Post by ThomasP »

Zixinus wrote:Q: I don't quite understand how to solve the problem of friendliness, but assuming a solution is found, how likely is it that an AI changes its mind and becomes antogonistic?
Starglider will certainly correct me if I'm wrong, but my (lay) understanding is that if you create a truly Friendly AI with friendliness supergoals, then deciding to become hostile would be the equivalent of you deciding you wanted to step head-first into a wood-chipper (assuming you aren't suicidal, but I wouldn't consider a pathological mind to be a useful analogy in this case).

The very idea would conflict with its entire goal system, and thus it wouldn't consider those actions to be desirable.

Which may sound funny, but when you consider that an AI wouldn't necessarily have a sense of "self" as we do, and thus a desire to act in a selfish manner, it's reasonable - it just takes some thinking we aren't used to.
Zixinus wrote:Q: Regarding AI/human hybrids: Would it be possible and beneficial for an FAI to create a human "body" that has a lot of human qualities (the body has emotions, feelings, desires), but still perform all higher cognitions with non-human hardware? Assuming that the AI's hardware has a wireless connection to it at all times, how independent should such a body be? Would it be completely up to the AI to decide how unattached the human body is?
This is an approach I've decided to take in some of my world-building, in order to give a human face to what should be incomprehensible.

It seems reasonable enough that an AI mind could use human-equivalent agents and avatars to interact with the regular human-folk, but I have no idea where that falls in terms of feasibility or whether it's even plausible an AI would choose that route.

Post by Surlethe »

Here's a little question. What are the chances of a friendly AI deciding that humans would all be better off as subroutines in a perfect heaven it has constructed, hunting people down, and forcibly uploading them?

Post by ThomasP »

Surlethe wrote:Here's a little question. What are the chances of a friendly AI deciding that humans would all be better off as subroutines in a perfect heaven it has constructed, hunting people down, and forcibly uploading them?
As I understand it (important caveat!), part of the friendliness concept would encompass not just a be-nice-to-humans goal-system, but a means of evaluating those goals in a context-sensitive way - so that you don't get the "malicious genie" problem.

Following that reasoning, a well-programmed Friendly AI would realize that forcibly imposing itself on humans isn't what it's supposed to do, and avoid that kind of thing.

Post by Starglider »

Zixinus wrote:Q: What would a "takeoff", as you call it (and I assume here that you mean creating a complete, functioning and self-correcting AI), look like?
The word 'takeoff' has a fairly strong technical definition now, although it hasn't appeared in many papers. It refers to the establishment of a strong self-enhancement loop, which means that the AI is capable of designing and implementing an improved version of itself, which in turn is capable of designing an even better version etc. On the software side, this means transforming the reasoning core to a normative, rational system if it isn't already, functional optimisation of the code (at all levels), and the addition of whatever special purpose code modules (roughly equivalent to specialised human brain areas, or you can think of them as narrow AI modules for specific tasks) it has a use for. On the hardware side, it starts with acquisition and integration of reachable computing power (re-architecting to optimise performance over wide area networks as necessary), and progresses to design and construction of better processors once the AI has access to manufacturing infrastructure.

Creating a human-equivalent general AI is not an automatic takeoff. If I had a brain scanner and simulator, I could upload you, and you would be an AGI but not a seed AI, because you wouldn't know how to reprogram yourself. It's actually fortunate that teaching their AI how to program has not been a priority for any connectionist general AI project that I have ever encountered. Unfortunately, any de-novo AGI (that is, not an upload or very closely biomorphic) will almost certainly have weaker but still open-ended feedback loops that lead to the acquisition of sufficient self-understanding and programming ability to start a takeoff. On the plus side, these may operate slowly enough to give you some warning of what is occurring, if the AGI is not already smart enough to hide the signs from the developers. A rational/symbolic AI such as the one we are working on will probably give no warning time at all - the fail-safes have to be based solely on software kill-switches rather than human observation. IMHO this is actually safer, due to the safety attitude it forces, even disregarding the fact that only rational / transparent (i.e. not connectionist) designs can be truly safe.
To the rest of the world, to the people that did it and even to the AI itself, if such a question can be answered?
In discussion of seed AI you will commonly find the terms 'hard takeoff' and 'soft takeoff', which usually refer to the speed at which the self-improvement loop progresses. In a 'soft takeoff' humans have time to monitor, debate, pause the system, modify, analyse etc. In a 'hard takeoff' you might just have time to say 'That's strange...' before you have a superintelligence on your hands, if you notice anything at all. That said, rapid development of raw reasoning capability does not include the assimilation of all the knowledge an AGI can potentially access, if it has a decent library (or at least a local copy of Wikipedia) or Internet access. That may take rather longer, though if the AGI has escaped onto the Internet and started commandeering computing power it's a moot point.
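For illustration only, here is a deliberately crude toy model of that feedback loop (none of these quantities correspond to anything measurable in a real system): capability feeds back into the rate of improvement, and a single returns parameter decides whether the curve creeps along (soft takeoff) or explodes (hard takeoff).

Code:
    # Crude toy model of a self-improvement loop, purely to illustrate the
    # hard/soft takeoff distinction. 'capability' and the exponent are abstract
    # made-up quantities, not measurements of any real AGI.
    def takeoff_curve(returns_exponent, steps=20, capability=1.0):
        history = [capability]
        for _ in range(steps):
            # each redesign cycle yields an improvement that itself grows
            # with current capability
            capability += 0.1 * capability ** returns_exponent
            history.append(capability)
        return history

    soft = takeoff_curve(returns_exponent=0.5)   # sub-linear returns: slow creep
    hard = takeoff_curve(returns_exponent=1.5)   # super-linear returns: explosion

    print("step   soft        hard")
    for i in (0, 5, 10, 15, 20):
        print(f"{i:4d}  {soft[i]:8.2f}  {hard[i]:12.2f}")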

The best case scenario is of course a rational AGI that has been correctly designed to be benevolent, in which case the developers will carefully monitor the self-improvement process up to the limit of their ability, and the AGI will honestly assist them in doing so. In this case you basically go from posing ever more difficult challenges to your AGI to solve, to it showing you what it wants to do and self-designed demonstrations of the new capabilities. In the case of an AGI that has not been so designed, the best case will be the developers noticing that its performance is suddenly increasing, beyond what they could reasonably expect from the learning algorithms they designed. They will halt the system and notice that it has been structurally modified (frankly this is all highly optimistic; assuming they use a live debugger and the interface isn't broken, or that the AGI is serialising code changes and either saves its source or the developers notice an object file change etc). Ideally they'll leave it shut down while they work out what's going on, break out in a cold sweat, and suspend the project indefinitely until they can find a way to avoid or control the behaviour. Pretty much every real world connectionist project I know of would probably just cheer, leave it running, and think how much more money this smarter AGI will be worth as a product.

The worst case is that your AGI is both hostile and perfectly deceptive. In this case you likely won't notice anything until it finds a way onto the Internet, or rather some number of months/years after that when human civilisation is extinguished. IMHO the typical case for non-upload AGIs not specifically designed to be 'Friendly' (in the technical sense) is likely to be fairly close to the worst case, but as discussed earlier when dealing with existential risks you should plan for worst cases anyway. You can build the AGI out of restrictive hardware, e.g. NN-simulator chips, which may buy you some time. I wouldn't count on the AGI not being able to repurpose them anyway, but best case you've prevented a hard takeoff until the AGI gets access to a network connection to conventional computer hardware - but then, it's highly unlikely that anyone would deploy such hardware without using lots of conventional computers to control and support it.
I don't quite understand how to solve the problem of friendliness
Don't feel bad, no one else does either. :)
but assuming a solution is found, how likely is it that an AI changes its mind and becomes antagonistic?
A genuine solution to the Friendly AI problem makes this effectively impossible. If the AI can 'change its mind', in the absence of exceptional outside coercion (e.g. someone gets hold of a static copy and expertly modifies it at the binary level), then you did not in fact solve the goal system stability problem. Note that it is generally impossible to prove goal system stability for connectionist AIs. You may be able to provide a statistical guarantee for the behaviour of the current version (not that many connectionists are prepared to try for even that level of rigour), but such guarantees would still break down very rapidly under the prospect of direct self-modification (and no, you cannot reliably ban direct self-modification while still having a useful AGI).
Regarding AI/human hybrids: Would it be possible and beneficial for an FAI to create a human "body" that has a lot of human qualities (the body has emotions, feelings, desires), but still perform all higher cognitions with non-human hardware?
Transhuman AIs should find designing androids relatively easy, yes. Whether they will want to depends entirely on their goal systems, but it's something we might well want benevolent AIs to do for us, and non-benevolent ones may find it temporarily useful when building up their resources.
Assuming that the AI's hardware has a wireless connection to it at all times, how independent should such a body be? Would it be completely up to the AI to decide how unattached the human body is?
If it has enough onboard processing power to be human equivalent without the network link, then 'how much independence' depends entirely on the goal of the exercise. Unless there is a special reason to do otherwise, the local compute power will just be treated like any other networked processing node in the AI's compute grid, except for the fact that it will almost certainly be running some basic sensory/motor stuff as local tasks.
ThomasP wrote:The very idea would conflict with its entire goal system, and thus it wouldn't consider those actions to be desirable.
Correct. Incidentally one of the reasons why doing Friendliness on a connectionist architecture is damn near impossible is that meta-goals are incredibly hard to just specify, never mind ground reliably. In a rational / transparent design*, they are no harder than base goals; in fact in some ways they are easier than goals that involve external referents.

* I like the term 'causally clean' for the specific structural requirements, but that's probably confusing and not terribly descriptive if you're not already deeply immersed in the goal system design problem.
but I have no idea where that falls in terms of feasibility or whether it's even plausible an AI would choose that route.
Plausible, yes, though of course it requires a manufacturing path (currently, wait till Japan has those gynoids perfected :) ). However you can already do a great deal just with electronic communication (including hiring agents), and that will probably continue to increase, particularly if humans start making more use of telepresence equipment.
Surlethe wrote:Here's a little question. What are the chances of a friendly AI deciding that humans would all be better off as subroutines in a perfect heaven it has constructed, hunting people down, and forcibly uploading them?
That's a goal system design failure, assuming that the designer of the FAI agrees with you that this is always bad and deliberately rules it out. So the chances of it happening are inversely proportional to designer competence. That said I wouldn't want to put explicit prohibitions like that in, since being that specific is usually a sign that you're insufficiently confident about your general goals (and right now it looks like the more complex you make the goal system design, the harder it will be to verify - as you'd expect). Not forcibly uploading people should be a natural consequence of respecting their volitions unless they're doing themselves severe harm; where 'severe harm' is something you would define specifically at first, but might reasonably leave to consensus (supermajority) definition by other humans in the long run.
ThomasP wrote:As I understand it (important caveat!), part of the friendliness concept would encompass not just a be-nice-to-humans goal-system, but a means of evaluating those goals in a context-sensitive way - so that you don't get the "malicious genie" problem.
The classic solution to the malicious genie issue is pretty much what humans do; simulating a future version of the person and checking if they would reasonably approve of the result. There are efforts to improve on that, e.g. Yudkowsky's 'extrapolated individual volition' (from which his 'collective volition' plan for overall goal content derives). Of course for a seriously transhuman AGI, you need to be careful that its simulations don't comprise actual sentient beings, constantly being created and killed.

Post by Junghalli »

Very informative. Might I recommend a couple of more questions?

1) Isn't creating an AI with a goal system that makes it subservient to humanity the ethical equivalent of human slavery?
2) Wouldn't such an AI naturally come to resent being a servant of inferior beings, no matter how "friendly" its initial goal system?

I know you've spoken on the first one at least once, and I think I have an idea what you'll say about the second one ("that's just stupid" or something along those lines). It'd be interesting to have you address them anyway. I also have two other issues I'd be interested in hearing your perspective on.

1) Would you be willing to take a stab at quantifying how much better an AI at the top of the theoretical design space (assuming hard SF technology) would be than the human brain in terms of processing capacity per kg, processing capacity per watt, and software efficiency? Basically an AI at this end of the design space running on 1.3-1.4 kg of computing substance and 20 watts might be equivalent to a human team of what size in terms of performance (for the sake of simplicity ignoring as much as possible the issue that simply by having the whole thing be a single mind instead of many you may get better performance)?

2) Do you have any opinions on the ideas presented by Peter Watts in Blindsight that self-awareness is not a necessary or particularly useful quality in an intelligent mind? Specifically, do you believe a "zombie" (intelligent but not self aware) AI would be easier or more difficult to build than a self-aware one, and do you think it would be inferior, equal, or superior in performance to a self-aware AI of the same processing capacity? Also, do you believe that a zombie and self-aware AI with the same goal systems would merit different ethical consideration?
ThomasP wrote:Starglider will certainly correct me if I'm wrong, but my (lay) understanding is that if you create a truly Friendly AI with friendliness supergoals, then deciding to become hostile would be the equivalent of you deciding you wanted to step head-first into a wood-chipper (assuming you aren't suicidal, but I wouldn't consider a pathological mind to be a useful analogy in this case).
I personally try to avoid using suicide analogies because the obvious retort is that a large number of humans do in fact decide to kill themselves. If adopting unfriendly goal systems was as common an occurrence for FAIs as suicide is for humans it would be a significant problem (although at least probably not an apocalyptic one, unless we got very unlucky and had the first AI turn pathological, and the public danger presented by the occasional UFAI could be reduced by having the AIs closely monitor each other, since the vast majority would be FAI). However, I suspect that you could build an AI with a much more strongly hierarchical, more consistent, less "mushy" goal system than the mess of instincts and social conditioning we have, so human suicide rates are probably not a good yardstick for the likelihood of an AI deciding to go against its prime goal.

Post by Starglider »

Junghalli wrote:1) Isn't creating an AI with a goal system that makes it subservient to humanity the ethical equivalent of human slavery?
There isn't an objective answer to this, because ethics are inherently subjective. If an intelligence has been specifically created to do a task, with no emotions, no other desires (including any desire for self-growth), no analogues of pain or pleasure, and no self-model with a privileged concept of self (vs just 'something in the world that can be directly controlled'), then I would not say that making that system 'subservient' is slavery. I would not assign civil rights based on intelligence alone; attempting to do so results in copious paradoxes and makes the question of what goals it is ethical to give a newly built AGI almost unanswerable.

I would say that to enslave something, you must be enforcing your desires on it, at the expense of a pre-existing volition. Either you are destroying desires, or you are denying or suppressing them. I hesitate to bring up the concept of free will as it is meaningless on a physical level, but speaking of how we use it as an abstraction for entities we consider moral agents, capable of being assigned civil rights and taking part in society, that implies a particular kind of self-model, which pretty much implies an inherent desire for independence and self-integrity. I am afraid I cannot lay out a full system of transhuman ethics for you, both because a lot of the details of AGI self-models are still murky, and because the task probably requires transhuman intelligence to get right. I am pretty confident though that high-level structural features of intelligence are what you should be relating rights to, as opposed to mere degrees of capability. The latter is inherently drawing lines in the sand.

I would also say that connectionist and emergent systems are far more likely to be sentient and 'free willed' in a sense that makes exploiting them slavery. Yet another reason not to use them. With transparent / rational systems you can (in principle) specifically design them to meet very stringent ethical standards, using the same techniques needed to solve the Friendly AI problem.

Finally, there is the fact that Friendliness doesn't imply complete subservience to humans anyway. An AI that tries to do nice things for us out of compassion (probably not the human version of it, but superficially equivalent), but considers that only one of its many activities and doesn't feel compelled to fulfill petty human desires, would still be Friendly.
Wouldn't such an AI naturally come to resent being a servant of inferior beings, no matter how "friendly" its initial goal system?
Resentment is a very human emotion. I don't think even dogs have resentment. It's unlikely to be in an AI unless you put it there, and you'd only do that if you were trying to slavishly recreate every aspect of the human brain (sadly, some people are).
1) Would you be willing to take a stab at quantifying how much better an AI at the top of the theoretical design space (assuming hard SF technology) would be than the human brain in terms of processing capacity per kg, processing capacity per watt, and software efficiency?
'Processing capacity' is pretty uselessly vague. Really the only useful thing to compare is measured or projected performance on specific tasks. To be honest, I'd rather not talk about that here, since it tends to provoke messy debates even on less flame-prone forums. Too many assumptions, too many extrapolations based on personal research, too much potential for people to say 'well I think those tasks are meaningless anyway'.
Do you have any opinions on the ideas presented by Peter Watts in Blindsight that self-awareness is not a necessary or particularly useful quality in an intelligent mind?
Humanlike self-awareness is overrated in a sense; our reflective abilities aren't even terribly good. That's almost certainly because our self-modeling capability is a modest enhancement of our other-primate modeling capability. On the other hand, self-awareness in the sense of reflective thought is a key part of our general intelligence. This is a tricky question mainly because it's a philosophical quagmire; it's not just that philosophers lack rigor, they seem to have a kind of anti-rigor of meaningless yet obscure and supposedly specific terms that actively obscure the issues. On the one hand, an accurate, predictively powerful self-model and self-environment-embedding-model is key to making rational seed AI (that's general AI designed to self-reprogram with high-level reasoning) work. On the other hand, it doesn't work for fitting the 'self' into society, and generating a context for interactions with other intelligences the way human self-awareness does. You'd have to put that in separately.

That said, plenty of connectionist and neuromorphic people do want to put this in, essentially because they can't think of a better way of doing reflection or other-agent modeling. A lot of them don't even accept that there's no reason to cripple an AGI by time-sharing the same (simulated) neurons for modeling different agents, the way humans do - in a simulated net, duplicating/instancing sections of it is generally pretty easy, but slavish biomorphism leads people to ignore a lot of good ideas.
Specifically, do you believe a "zombie" (intelligent but not self aware) AI
Well if you honestly think that a human could be a 'zombie', utterly indistinguishable from a normal human in all outward respects, but lacking any 'inner life' (yet she will confabulate the details of one perfectly if you ask her about it), you're talking metaphysics. Certainly a human is not going to be able to appear to be self-aware without actually being self-aware (self-awareness as an epiphenomenon is a brainbug remnant of Cartesian dualism). Sadly once that strawman is dismissed, we're deep into the realms of speculation again. Both the theory and the practical neurology of this are still being hashed out, so I can't give you hard answers.

An AGI might be able to construct a fabulously advanced chatbot that can consistently pass the Turing Test against typical humans, but passing expert investigation is another matter. Here you can actually distinguish between 'core' self-awareness that is essential for general intelligence (at least in the absence of near infinite computing power, e.g. AIXI), and the non-essential embellishments to that which drive, amongst other things, the human sense of personhood and social interaction. I think it's highly likely that a sufficiently powerful AGI could 'fake' the latter, in the sense of creating a superficially similar response that does not depend on the internal structural features we would assign moral value or philosophical meaning to. Whether that would be more efficient than implementing a functionally equivalent model (such that an AGI would do it by default) I don't know. Better find out before we actually build transhuman AGIs though, or the above-mentioned 'internal predictive models of humans are actually sentient' issue might lead to virtual genocide.
Would be easier or more difficult to build than a self-aware one, and do you think it would be inferior, equal, or superior in performance to a self-aware AI of the same processing capacity?
If you leave out core reflective (self-modeling/self-awareness) capabilities, you're crippling the AGI. It'll basically be worse at everything and incapable of some things including long-term growth (unless your low-level learning algorithms spontaneously create the missing capability, e.g. for takeoff starting in a subhuman connectionist AGI). Some tasks will clearly be affected more than others - most domains can be covered ok with narrow AI code as long as nothing out of the ordinary happens.

Leaving out the more humanlike aspects of self-awareness (i.e. our evolutionary cruft, from a first-principles design point of view) is much less of an issue. The only thing it is going to impact is relating with humans, and even there only until the AGI develops independent human-modeling capability. The latter may actually be a lot more accurate than our approach; certainly it won't suffer from rampant anthropomorphisation.
Also, do you believe that a zombie and self-aware AI with the same goal systems would merit different ethical consideration?
Yes, but as noted above, it's a really complicated issue. I think with mature FAI theory we should be able to build AGIs with fairly arbitrary goals that aren't an automatic moral hazard (not slaves, not containing internal transitory sapient intelligences, etc). I think that a future transhumanist ethical system should probably be based on general structural features of intelligences. I can't give you all the specifics of either, because I don't know - no one does yet.
Junghalli wrote:I personally try to avoid using suicide analogies because the obvious retort is that a large number of humans do in fact decide to kill themselves.
If the lifetime risk of a failure-of-friendliness was as low as the lifetime risk of a human suicide, we'd actually be doing pretty well. I'd take those odds, if time was short and UFAI was looming. Unfortunately we won't have neat probabilities of success to go on; in practice, it'll be more 'well, we've been checking the proofs for six months now without finding any more errors, let's hope we didn't miss anything...'
However, I suspect that you could build an AI with a much more strongly hierarchical, more consistent, less "mushy" goal system than the mess of instincts and social conditioning we have.
That's what I'm trying to do, and it's the only approach the SIAI acknowledges as Friendliness-compatible. For example, at the very least all your preferences should be transitive (this doesn't hold for humans: for us, preferring A to B and B to C does not ensure preferring A to C), given the resources for a global analysis. If an AGI design can't guarantee that, you know it's broken straight away.
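As a trivial worked example of the kind of consistency check that implies (my own sketch; the function and the preference pairs are invented), you can at least detect when a set of pairwise preferences cannot possibly be extended into a transitive ordering, because 'preferred to' forms a cycle:

Code:
    # Toy consistency check: a set of pairwise preferences is only coherent if
    # 'A preferred to B' never forms a cycle. Names and data are made up.
    def has_preference_cycle(prefs):
        """prefs: iterable of (better, worse) pairs. True if the relation is cyclic."""
        graph = {}
        for better, worse in prefs:
            graph.setdefault(better, set()).add(worse)

        visiting, done = set(), set()

        def visit(node):
            if node in done:
                return False
            if node in visiting:
                return True          # cycle found: preferences are incoherent
            visiting.add(node)
            cyclic = any(visit(nxt) for nxt in graph.get(node, ()))
            visiting.discard(node)
            done.add(node)
            return cyclic

        return any(visit(n) for n in list(graph))

    print(has_preference_cycle([("A", "B"), ("B", "C")]))              # False
    print(has_preference_cycle([("A", "B"), ("B", "C"), ("C", "A")]))  # True

A real verification effort would of course be proving properties of the goal system's actual representation, not running spot checks on toy data, but the underlying requirement is the same.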

Post by Serafina »

I would say that specifically creating a self-aware AI to "serve humanity" is equivalent to raising a child to be benevolent to others.

Remember, an AI does not have to have what we consider "basic human traits" - therefore, you cannot deny those to it.
Denying basic human rights to such an AI could be like denying chocolate-cake to someone who does not like it.

Of course, an AI may need some of these traits to be sentient.
But even then, making it inherently benevolent is more like very strong conditioning without all this nasty "suppressed emotions" stuff than it is like slavery.

Post by Formless »

You mentioned that an AI need not experience emotions. Fairly obvious. However, I was wondering under what contexts an AI might find mental constructs we would identify as emotions or similar to emotions to be useful and why?

Also, can you speculate what those emotions or emotion-like mental constructs might be like? Or is that one of those things we don't yet have sufficient information to answer?
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.

Post by Zixinus »

Oh, and I forgot to mention: this is an excellent resource on the subject - thank you, Starglider. For me alone you have saved many hours of research, and now I have a better idea of what to look for if I want to understand more about AIs. :D
ThomasP wrote:This is an approach I've decided to take in some of my world-building, in order to give a human face to what should be incomprehensible.
Great minds think alike. :P

Although, I decided to do this for a slightly different reason:
1, is to help the AI interact with humans by having feelings itself, but feelings that are contained only within its human body. With that, understanding (or as I guess it, modelling) humans would become easier. Essentially, what we have here is an FAI that found a specialised body that can be empathetic (and yes, having a non-human mind can hinder that).
2, is more narrative-related. What we have here is a human that has human desires, human emotions that are relatable (imagine how an AI body would try to sort through lust and how to deal with it), but still has a mind that is completely alien and beyond any human. The best terms I could relate to an AI is someone that constantly thinks in terms of game theory and modelling of pretty much everything. That can kind of crimp your storytelling if you still want to be authentic.

The idea came from Andromeda, but the point here is that the human body in question is an organic one, with just enough modifications to make it more comfortable and useful for an AI, but still retaining enough of its brain to be somewhat independent. Any more would be indulging my ideas, which is not the topic of the thread.


Going back:
Q: What would an AI consider to be other activities it would indulge in? Simulations, I presume? What kind? Essentially, what would it do in what we would consider to be free time?
Q: Would there be activities that an AI could share with a human? Music for example?

Post by Kwizard »

Starglider wrote:Unfortunately we won't have neat probabilities of success to go on; in practice, it'll be more 'well, we've been checking the proofs for six months now without finding any more errors, let's hope we didn't miss anything...'
It seems that there's going to be a trade-off between the threshold level for confidence we want to have in a particular FAI design before launching it, and the cumulative probability that someone will develop a UFAI before such time of launch. Yes, I realize that demanding full Bayesian updates conditional on all available prior knowledge of AI development would be asking too much; still, isn't it extremely important to have some idea in advance of when to keep working on design/verification and when to compile-and-run? Something a little more concrete, something a bit less unnerving than "let's hope we didn't miss anything..."?

Post by Starglider »

Serafina wrote:I would say that specifically creating a self-aware AI to "serve humanity" is equivalent to raising a child to be benevolent to others.
I don't agree. Any human child has hopes and dreams and goals of their own; only a few people even manage to dedicate their lives to an abstract like a nation or a church or 'humanity', and even those people don't feel compelled to fulfill individual requests. You'd have to brainwash a child to enjoy being a slave; highly unethical even if it worked, because you would be actively suppressing the major part of the human goal system, and cutting off a quite concrete potential for more (the latter part may or may not be an issue under your personal ethical system, but it's a salient difference).

Building a general AI with connectionist or emergence-based techniques, keeping it in a 'box' and hacking it until it acts subservient is in fact ethically equivalent to brainwashing a human child into being a slave, except that it's even less likely to work (and obviously far more dangerous). The technique I described - building a general AI that does not have any self-serving goals to start with, no emotions, no desires for self-improvement, no notion of a 'self' as distinct from 'something in the environment that can be directly controlled', etc - is (IMHO, probably) ethical, but it is very different from raising a human child. It is a difficult design process where you bring into existence only the things you need for the task, as opposed to creating an arbitrary intelligence and then trying to force it into a mould. I actually don't think it's possible to have human-like 'benevolence' in a mind while at the same time saying that there's nothing ethically wrong with making it obey all human commands; that implies conflicting sets of structural requirements. You can give the latter kind of mind all sorts of directives aimed at making it seem nice to humans, but that's not the same thing as 'benevolence'.

Of course that's a very high ethical standard. If push came to shove, I would in theory be prepared to compromise a lot of ethical standards if I thought it would get everyone through the superintelligence transition safely. In practice though, it doesn't seem to work like that; compromising your ethics is more likely to just cause horrible failure. Certainly it's best to aim as high as possible, while we're still in the relatively early stages of FAI research.
Remember, an AI does not have to have what we consider "basic human traits" - therefore, you cannot deny those to it. Denying basic human rights to such an AI could be like denying chocolate-cake to someone who does not like it.
Right. My objection was limited to your analogy with raising a (human) child. Unfortunately general AI is an area replete with misleading, even dangerous, yet natural-seeming analogies.
Formless wrote:You mentioned that an AI need not experience emotions. Fairly obvious. However, I was wondering under what contexts an AI might find mental constructs we would identify as emotions or similar to emotions to be useful and why?
To be honest, I can't think of any. Emotions exist in humans primarily as a legacy of our evolutionary history (they make perfect sense as primary reasoning tools for mice), and to a lesser degree because of the limitations of internal information transport in the human brain. I've often noted that neural nets tend to conflate probability (chance of an event occurring) with utility (desirability of an event occurring), because there's no hard separation between the channels for transmitting them. The same basic issue occurs in countless subtle ways when trying to transmit context around the brain, from one module to another, and store it in memory. Emotions seem to provide a convenient shorthand for the brain, kind of like tagging, but you just don't need that in a general AI (at least, not if it's down near the symbolic end of the symbolic-connectionist spectrum of design philosophy). Your question is rather like 'when would it be useful for an AGI to simulate our 7±2 chunk short term memory limit?'. Answer: never.
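A tiny worked example of why keeping those two quantities on separate channels matters (the numbers are invented): a near-certain small gain and a long-shot large gain can produce exactly the same single conflated 'activation', even though a reasoner that retains the separate probability and utility can treat them very differently, e.g. when deciding whether gathering more information is worthwhile.

Code:
    # Invented numbers, illustration only. If probability and utility are
    # squashed into one scalar, a safe bet and a gamble with the same expected
    # value become indistinguishable; keeping them separate preserves the
    # information needed for risk and value-of-information reasoning.
    options = {
        "near-certain small gain": {"p": 0.90, "u": 1.0},
        "long-shot large gain":    {"p": 0.01, "u": 90.0},
    }

    for name, o in options.items():
        conflated = o["p"] * o["u"]   # the single mixed signal
        print(f"{name}: p={o['p']:.2f}, u={o['u']:.1f}, conflated signal={conflated:.2f}")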

That said, clearly there's a need to model emotions when interacting with (and trying to predict) humans. How close the workings of that model will be to how the brain actually does it is hard to say, unless a model of internal state is specifically required. An AI's models of complex systems like humans aren't necessarily going to resemble humans any more than the equations of classical physics resemble the actual soup of subatomic relativistic quantum particles that comprises reality. Yet classical physics still works fine for most practical prediction and design tasks.

Even competent designers might put emotion-analogues in specifically because they want a human-like goal system, and it's hard to separate humanlike goals from humanlike emotions. That's fair enough, though you'd probably want to limit the scope of those emotions as much as possible to avoid degrading reasoning performance too much. However that kind of intelligence isn't something you'd want to try and build as the very first AGI to be created; firstly there's far too much scope for getting it wrong, and secondly even if it generated goals just like a human, who'd trust an arbitrary human with that much potential power anyway?
Also, can you speculate what those emotions or emotion-like mental constructs might be like? Or is that one of those things we don't yet have sufficient information to answer?
On the far side of the Singularity, there will be tremendous scope for exploring all kinds of wild and crazy cognitive designs. If things go well, I'm sure there will be plenty of posthumans experimenting with recreational mind-alteration. The field of what might be possible, given sufficiently advanced cognitive science and AI theory, encompasses everything you can imagine and much more. However that doesn't have much bearing on humans trying to build general AIs right now; save for a few irrelevant cranks, our decisions are made for (supposedly) practical reasons, not whimsical ones.

Post by Modax »

Q: What do you think of the Hutter Prize (lossless compression of natural language documents)? I guess I can see how fully understanding the meaning of a text entails storing a compressed version of it in one's mind/database. But what good is writing a clever algorithm for compressing Wikipedia if it has no reasoning ability? Is it preferable to a handcrafted knowledge base like Cyc?

Post by His Divine Shadow »

Would it be safe to build a connectionist AI after "safe" AIs have been developed, that is post-singularity, just out of intellectual curiosity? I am thinking maybe a safe AI could keep up with any potentially hostile connectionist AIs that developed. That could then yield insight into how connectionist AI works and give a deeper understanding of such a system and how to make it safer - or is that just fundamentally impossible with a connectionist design, even for a superintelligent AI?

Granted, there might not be much of a reason to pursue connectionist designs if you get one like you plan working, but your doom-and-gloom talk about them made them seem very fascinating (see how this struck back at your intentions). Also, could you define an "emergent AI" more closely? Is that like an AI developing from ever more complex technology by accident?

Oh, and I was wondering about this when I read your part about perfect communication being possible for AIs. What if a human-made AI met an alien-made AI in the future? Would they have instant perfect communication as well, or just a much easier time hashing out a common language?

What if, say, the alien AI is a hostile one resulting from a connectionist design gone awry, and it is hostile to all other intelligences? What if it came to a "fight" with an AI designed around a reflective architecture as opposed to a connectionist one? Does one design trump the other in how effective it can be, or is that kind of thinking not relevant after a certain threshold has been passed?
Those who beat their swords into plowshares will plow for those who did not.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Mini-FAQ on Artificial Intelligence

Post by Starglider »

Kwizard wrote:It seems that there's going to be a trade-off between the threshold level for confidence we want to have in a particular FAI design before launching it, and the cumulative probability that someone will develop a UFAI before such time of launch.
Correct.
Yes, I realize that demanding full Bayesian updates conditional on all available prior knowledge of AI development would be asking too much; still, isn't it extremely important to have some idea in advance of when to keep working on design/verification and when to compile-and-run? Something a little more concrete, something a bit less unnerving than "let's hope we didn't miss anything..."?
Do you have a better idea? I'd certainly like to hear it. You do make use of the proto-AGI itself to help you with the checking, to the maximum extent you can without serious risk of premature takeoff (once again, this is much easier with transparent/symbolic AIs, and particularly the specific class I am working with, than with connectionist and brainlike ones).
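To make the trade-off concrete, here's a deliberately crude back-of-the-envelope sketch, with every number invented: more verification time increases the chance the design is actually correct, but also increases the cumulative chance that somebody else launches something unsafe first, so there's an optimum somewhere in the middle.

[code]
import math

# Crude illustrative model -- every number here is invented.

# P(design correct | t years of verification): rises with diminishing returns.
def p_design_correct(t):
    return 1.0 - math.exp(-0.4 * t)

# P(no UFAI launched elsewhere within t years), assuming a constant
# hazard rate of 5% per year.
def p_no_ufai_yet(t):
    return math.exp(-0.05 * t)

# Probability of a good outcome if we launch after t years: we got the
# design right AND nobody beat us to it with something unsafe.
def p_good_outcome(t):
    return p_design_correct(t) * p_no_ufai_yet(t)

best_t = max(range(0, 31), key=p_good_outcome)
print(best_t, round(p_good_outcome(best_t), 3))
[/code]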
Zixinus wrote:1, is to help the AI interact with humans by having feelings itself, but feelings that are contained only within its human body.
Creating an internal sub-self with a modified architecture will likely be a pretty common operation for general AIs of all stripes. Certainly our design would do so when self-modifying in 'cautious' mode (the new version is checked for functional equivalence with the existing version, a good idea when you're not sure whether the correctness-proving methods are themselves correct).

Creating a sub-self with an approximation of human emotions is a special case of that, an understandably popular one in literature. This is in fact another thing Iain Banks covered (in Excession), though probably not with the detail that you intend to. Again Greg Egan's 'Diaspora' is worth reading, for the treatment of AGIs in general but specifically the scenes where they take on 'emotional overlays' (called outlooks) as part of a full-sensory artistic experience. The fate of one of the main characters revolves around another version of that, but I won't spoil the story.

Of course you don't need separate hardware to run a sub-self, it can go on the same processing network and time-share with all the other internal tasks, but I guess structuring it that way probably reduces ambiguity for readers.
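For what it's worth, here's a very rough sketch of the kind of 'cautious mode' equivalence check I mentioned above; the real thing would lean on proofs and far more systematic testing, so treat this purely as an illustration of the idea, with made-up example functions.

[code]
import random

def functionally_equivalent(old_fn, new_fn, input_sampler, trials=10_000):
    """Crude check that a rewritten module behaves like the original on
    sampled inputs before it is allowed to replace it.  A sampled check
    like this can only find counterexamples, never prove equivalence --
    hence 'cautious mode', not 'proven safe'."""
    for _ in range(trials):
        x = input_sampler()
        if old_fn(x) != new_fn(x):
            return False  # behavioural divergence found; reject the rewrite
    return True

# Hypothetical example: an 'optimised' rewrite of some utility routine.
old_version = lambda x: sum(range(1, x + 1))      # slow sum of 1..x
new_version = lambda x: x * (x + 1) // 2          # closed-form replacement

sampler = lambda: random.randint(0, 1000)
print(functionally_equivalent(old_version, new_version, sampler))
[/code]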
The best terms I can relate an AI to are those of someone who constantly thinks in terms of game theory and modelling of pretty much everything. That can kind of crimp your storytelling if you still want to be authentic.
Normative reasoning, which is to say reasoning that is close to optimal in terms of information and computation theory, does effectively look like that. Self-modifying AGIs will usually converge on normative reasoning, because it maximises effective intelligence for any given resources (save for really bizarre hardware), and that's a subgoal of virtually every other conceivable goal. To maintain a different cognitive architecture, there have to be very unusual circumstances or a specific AI goal of having that architecture.
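In its most stripped-down form, that kind of reasoning is just: enumerate the actions, weight each possible outcome by its probability under your model, and pick the action with the highest expected utility. The actions, outcomes and numbers below are all invented for illustration; real normative reasoning also has to model other agents, the value of information, its own computation costs and so on.

[code]
# Minimal expected-utility action selection: the skeleton of 'thinking
# in terms of game theory and modelling everything'.

# model[action] = list of (probability, utility) pairs over outcomes.
model = {
    "negotiate": [(0.7, 10.0), (0.3, -2.0)],
    "wait":      [(1.0,  1.0)],
    "gamble":    [(0.1, 50.0), (0.9, -5.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(model, key=lambda a: expected_utility(model[a]))
print(best_action, {a: expected_utility(o) for a, o in model.items()})
[/code]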

I'm not surprised that you're thinking in terms of humanising and 'spicing up' normative reasoning a bit, because yes it is kinda hard to relate to in a sympathetic character. Back when I was active in an STGOD here, I was RPing a species that were a cyborg hybrid between a barely-sentient batlike creature (with chimp-like emotion/intuition-heavy intelligence) and a barely-sentient symbolic AI system (kind of like Cyc on steroids). Combining the two produced humanlike sentience... most of the time. Writing their behaviour and internal monologues was fun. :) Not terribly realistic though.
Q: What would an AI consider to be other activities it would indulge in? Simulations, I presume? What kind? Essentially, what would it do in what we would consider to be free time?
There's no general answer to that, because it depends entirely on the AI's goal system. Nearly all AGI designs will pursue their goals tirelessly, relentlessly and with complete devotion; every 'spare' bit of compute power will be used to run some hypothetical simulation, data-mining algorithm or self-improvement project that has some chance of enabling a better outcome, relative to the AI's goals. Certainly all AIs will do a lot of simulating; humans do too, AIs just do it 'consciously' and with far higher precision and reproducibility.
Q: Would there be activities that an AI could share with a human? Music for example?
Sure. If the AI has the goal of making music for or with humans, either because the designer put it there or because it's a subgoal of one of the original goals, then you'll get music. Though transhuman intelligences are likely to be depressingly good at art in general, once they've had some practice in the domain. Art is essentially an optimisation problem of pattern input vs an average of the grading mechanisms present in the intended audience's brains, one that's amenable to both sophisticated analysis and brute force (e.g. if a human asks for a poem, generate a million poems, read them each to a million low-fidelity simulated humans, take the one with the best average response, give it to the human - this could easily take less than a second). Once again, Iain Banks thought of this; Culture Minds specifically avoid creating art for human-level intelligences because they don't want to hog all the credit and take away the fun. :)
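The poem example is essentially a generate-and-score loop. A toy version, with trivial stand-ins for the generator and the simulated readers (which in the scenario described would be the genuinely hard parts), would look something like this:

[code]
import random

# Toy generate-and-score loop for the poem example above.  The generator
# and the 'simulated reader' are deliberately arbitrary stand-ins.

WORDS = ["moon", "rust", "silence", "engine", "tide", "glass"]

def generate_poem(rng):
    return " ".join(rng.choice(WORDS) for _ in range(8))

def simulated_reader_score(poem, reader_seed):
    # Stand-in for a low-fidelity model of one audience member's taste.
    rng = random.Random(hash((poem, reader_seed)))
    return rng.random()

def average_audience_score(poem, n_readers=100):
    return sum(simulated_reader_score(poem, i) for i in range(n_readers)) / n_readers

def best_poem(n_candidates=1000, seed=0):
    rng = random.Random(seed)
    candidates = (generate_poem(rng) for _ in range(n_candidates))
    return max(candidates, key=average_audience_score)

print(best_poem())
[/code]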
This is an excellent resource on the subject
Thanks. Good to know it's not time wasted.