Xeriar wrote:Alyrium Denryle wrote:
The problem is not with our programming of a friendly AI. It is with the building of subsequent machines. First off, the machines will not be smarter than we are. They will have faster CPUs, but the two are not the same thing. We have to program this thing to be able to do things like perform higher math and mimic having an actual biological intelligence. It will be limited in its ability to do these things by the human programmer who initially creates it, and as a result it will be limited in its understanding of higher math to the degree that the programmer understood higher math. In turn, it cannot subsequently program a machine that is smarter than it unless it uses some sort of genetic algorithm that creates random mutations in code and then selects for those machines which, once built, display higher cognitive capacity. These same mutations, however, can create a non-friendly AI. As a result, malevolent AIs could well (and eventually will, simply due to mutation and drift) evolve out of your recursively more intelligent AIs through loss-of-function mutations in the Friendly code.
This is easily demonstrated as false just by looking at the way humans learn higher math; we don't even need to bring computers into the equation. When discussing higher math with teachers of the subject, my mind does not run through countless attempts at trial and error and return with a solution that hopefully satisfies them. Invariably, if I am wrong, either I or they can actually pick out the logical error that was made.
You can learn higher maths this way because you have a teacher who knows his stuff guiding you. In other words, the problem has already been solved, and your only task is to learn the material.
But the "trial and error" that is described by Alyrium is in regards to
improvements on a design that the humans have already busted their balls programming to the best of their understanding. That's a horse of a different color. Apart from a minimal amount of guidance the humans may provide, the computer is all on its own and completely blind, and experience has shown the best way to solve these kinds of problems is trial and error.
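To make that concrete, here is a minimal sketch, in Python, of the sort of blind mutate-and-select loop being discussed. Everything in it (the bit-string "design", the toy fitness function, the mutation rate) is an invented stand-in, not anyone's actual proposal; the point is only the shape of the process.

[code]
import random

def fitness(design):
    # Toy objective: number of set bits.  A real system would have to score
    # something far harder to measure, like "cognitive capacity".
    return sum(design)

def mutate(design, rate=0.05):
    # Flip each bit with small probability.  Note the operator has no idea
    # which bits encode competence and which encode friendliness.
    return [bit ^ (random.random() < rate) for bit in design]

def hill_climb(generations=1000, size=64):
    current = [random.randint(0, 1) for _ in range(size)]
    for _ in range(generations):
        candidate = mutate(current)
        if fitness(candidate) >= fitness(current):
            current = candidate  # keep improvements, and allow neutral drift
    return current

if __name__ == "__main__":
    best = hill_climb()
    print(f"{fitness(best)} of {len(best)} bits set")
[/code]

The selection step only ever sees what the fitness function measures. Any property it does not measure, friendliness included, is free to drift with every mutation, which is exactly the failure mode Alyrium is worried about.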
Xeriar wrote:Your claim amounts to 'rational AGI is impossible'. For rational AGI to be impossible, a complete algorithmic model of thought must also be impossible. Since every human being does it, claiming that something we do is impossible to fully understand is an extremely spurious statement.
Alyrium's statement is that 'rational AGI is probably impossible for us to realize'. The emphasised words are important. We may simply not be smart enough to realize a rational AGI, even if it is possible in theory.
First off, we're not really rational ourselves. For the most part, our reasoning is post hoc, if it occurs at all. Only those of us who have trained this fundamentally flaky survival machine called a 'brain' to simulate rational thought can pretend to be rational. Even then, prejudices, ambitions, laziness, and so forth can get in the way and foul our reasoning. This is why we have the peer review process in the first place: to catch each other's mistakes.
The second point is that intelligence may not be a step-by-step list of instructions you can write down at all. There are plenty of processes that are not obviously amenable to digital computation. We won't know for sure until intelligence is nailed down and understood.
Xeriar wrote:It is certainly difficult. I think Starglider's numbers are off by a couple orders of magnitude. I think Kurzweil is rather optimistic about his timescales. But each year, programming techniques become more sophisticated, and writing self modifying and self analyzing code is becoming more the norm. If someone's off by a decade or two, well, predicting the future was never easy.
Predicting the future is almost impossible. Just ask any prophet.
Self-analysis is extremely limited, if it is applicable at all, due to Rice's theorem. There is no general algorithm that can analyze an arbitrary program and decide whether the partial function it implements has any given non-trivial property, of which friendliness is one. This lies at the core of computability theory, so it's hard to dismiss with the assertion that "programming techniques improve every year." Programming techniques will not allow you to do the impossible, no matter how sophisticated they are. We must first prove that friendliness is even amenable to that kind of analysis.
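To spell out the obstacle, here is the standard Rice's-theorem reduction sketched in Python. The is_friendly analyzer is purely hypothetical (it is the very thing being argued about), KNOWN_FRIENDLY_PROGRAM is a placeholder, and run() exists only inside a string; the point is that if such an analyzer existed, it would decide the halting problem, which we know is undecidable.

[code]
def is_friendly(program_source: str) -> bool:
    """Hypothetical perfect analyzer: True iff the program's behaviour has
    the non-trivial semantic property 'friendly'."""
    raise NotImplementedError  # cannot exist in general, by Rice's theorem

# Placeholder for any program already known to have the property (one must
# exist, since the property is non-trivial).
KNOWN_FRIENDLY_PROGRAM = "..."

def halts(program_source: str, input_data: str) -> bool:
    # Build a program that first runs the suspect program on its input and
    # throws the result away, then behaves exactly like the known-friendly
    # one.  Assuming the loop-forever-and-do-nothing program does not count
    # as friendly, the combined program is friendly precisely when the
    # suspect program halts, so a perfect is_friendly() would decide halting.
    combined = (
        f"run({program_source!r}, {input_data!r})\n"       # may loop forever
        f"run({KNOWN_FRIENDLY_PROGRAM!r}, own_input())\n"  # reached only if it halts
    )
    return is_friendly(combined)
[/code]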
And before you appeal to us as an example of a friendly intelligence, realize that we do not fit our own definition of "friendly." We do immense damage to ourselves. We don't even know whether a friendly intelligence can exist, let alone how to implement one.
The second problem with this assertion is that even if we prove the answers are computable, there is no guarantee about the complexity of the task. Many non-trivial problems explode violently in time and space requirements as the characteristic measures of their size increase; the traveling salesman problem is the classic example. Intelligence is intricate enough that emulating it is going to take a fair chunk of computation, and analyzing it for properties like friendliness is going to push that complexity to obscene values.
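For a sense of scale, a brute-force traveling-salesman solver has to examine (n-1)!/2 distinct tours; a few lines of Python show how quickly that outruns any hardware you could ever build. The numbers are just tour counts, not a claim about what analyzing an AI would actually cost.

[code]
from math import factorial

def brute_force_tours(n_cities: int) -> int:
    # Distinct tours when the starting city and travel direction don't matter.
    return factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20, 25):
    print(f"{n:2d} cities -> {brute_force_tours(n):,} tours to check")
[/code]

Twenty-five cities already works out to roughly 3 x 10^23 tours. Nobody knows where "verify that this self-modification preserves friendliness" sits on that scale, but there is no reason to assume it sits anywhere cheap.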
Even if nanotechnology pans out, there will be no picotech revolution: once you get smaller than molecules, there is hardly any structure left in matter that can be organized into computing elements. And that neglects the real challenges facing nanotech computing, such as the fact that molecular computing elements will be mechanically floppy and will not take much electrical disturbance before tearing themselves apart. The amount of computing horsepower available will be limited no matter how you slice it.
Xeriar wrote:Anguirus wrote:
That would be nice. I don't know how development on those is coming along. Can you fill me in?
What is projected to occur in this scenario when robots become cheaper than wage slaves? Do you conjecture that this development will be good for any of the workers being replaced? Do you believe that this transition will be easy, or peaceful? Do you believe that these replaced people will continue to eat? (Out of the goodness of capitalists' hearts, no doubt; that's gotten us so far.)
You seem to be assuming that greedy capitalists would be in charge of these robots.
RepRap finished Mendel the better part of a year ago:
http://reprap.org
Fab@Home is still polishing up the Model 2:
http://fabathome.org
The substances these machines can directly extrude are plastics. Do you really expect we will have no need for machined metal in the future?
Both websites read exactly like the overly optimistic tripe that futurists often fall prey to. While home replication will have its place, it cannot replace industry. The materials we must work to sustain our high-tech society are too diverse for one small machine to work them all. It cannot cure ceramics, it cannot cut steel rods, it cannot forge gears, and it cannot etch ICs. It cannot even work with all plastics. That is a very limited capability.
Xeriar wrote:The scenario parallels the early home computer industry, although it is proceeding at a slower pace (or so it feels like).
Yes, and look what happened to early home computing: building your own machine fell by the wayside the moment mass production caught up. Do not dismiss the power of economies of scale.