Calling the guidance that the sum total of humanity's accumulated knowledge would provide 'minimal' risks trivializing what we've built so far, but yes, there's nothing in particular stopping a rational agent learning from nature rather than humanity, outside of any ethics functions that may be limiting it.
I'd like to point out that, as far as finding the optimum configuration of parts and programming for a good, friendly AI, knowing the general lay of the land (as the sum total of human knowledge would give initially) is indeed 'minimal,' as in it's the minimum amount of information you would need to solve the problem. Remember, you're setting this thing to search for better algorithms precisely because we don't know what they are in the first place.
Alyrium's statement is that 'rational AGI is probably impossible for us to realize'. The emphasized phrase 'for us' is important. We may simply not be smart enough to realize a rational AGI, even if it is possible in theory.
Fortunately, it's not necessary for a single human to be smart enough. It may not even be necessary for the sum total of humanity to be smart enough.
I did not mean "us" (underlined) to mean "a single human selected out of humanity;" I meant "humanity in toto." Even collectively, we may not be smart enough to crack the problem. Knowing that something is possible, and in broad strokes how to do it, is quite a different thing from being able to pull it all together.
I'm not sure if there is any place easier to correct for our biases than in computing. Entire departments and sometimes entire businesses are dedicated to testing. A great deal of testing is automated, even.
And yet bugs slip through anyway. Computers are very complex systems, and as the interactions between their parts get more intricate, mathematical chaos can set in.
You can translate any neural network into a set of logic functions. Doing this to a human could require the mass energy of a star, but then if humans are intelligent, it is possible by demonstration.
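As a minimal sketch of what that translation looks like (a toy hand-weighted threshold network, nothing like a brain; all weights here are invented for illustration):

```python
# A tiny feed-forward network of binary threshold units computes a
# Boolean function, so enumerating its inputs rewrites it as a truth
# table -- i.e., a "set of logic functions". Weights are hand-picked.

def step(x):
    """Binary threshold activation."""
    return 1 if x >= 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)        # hidden unit firing on OR(a, b)
    h2 = step(-a - b + 1.5)       # hidden unit firing on NAND(a, b)
    return step(h1 + h2 - 1.5)    # output: AND(h1, h2) == XOR(a, b)

# Exhaustive enumeration recovers the equivalent logic function.
truth_table = {(a, b): xor_net(a, b) for a in (0, 1) for b in (0, 1)}
print(truth_table)  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

The same enumeration works in principle for any finite network with discretized outputs; the catch, as noted, is that the table grows exponentially with the number of inputs.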
A brain is not a neural network as defined in computer science textbooks. It is a physical machine of action potentials floating in a modulating broth of fluid. That's not amenable to translation into logic functions.
It hasn't been fruitless to try, either. The search alone has given useful tools.
Never said that it was fruitless.
Now you're switching to a different argument. I didn't say much about friendliness at all in my post, though for toy scenarios, friendliness as a function of resource usage is a much easier problem, and even such a 'toy' would be immensely valuable.
If you test-run your friendliness definition on 'toy' scenarios, how can you be sure that it will work for real scenarios?
In fact you seem to be assuming that attempts to prove friendliness will be at the fully general, unrestricted access to all available resources level. That's insane. Even if it was possible it would be insane. The AI psychoanalyzing a human patient is going to be playing by different rules than the AI managing a Dyson swarm.
You don't seem to get what I'm saying.
Let's assume for a minute that we can convincingly prove that a particular AI will only produce task-appropriate output. To take your example, a psychoanalytic AI will not generate output to control a Dyson swarm, and vice versa. Thus we have proven that the psychoanalytic AI will not be able to produce unfriendly Dyson swarm output. But in order to be wholly friendly, the AI cannot produce unfriendly psychoanalytic output either. That's still
a property of a partial function that the algorithm implements, and there's no way to decide whether an algorithm has this property without being wrong sometimes; nor is there any prior guarantee on which algorithms the decision procedure will be wrong about.
Now, it is possible that a proper restriction of the algorithms considered and/or the inputs fed into them can let one use a decision method that need only work for that subset, but there's no prior guarantee that the partition between algorithms your method decides correctly and those it decides wrongly will fall where you want it to fall. It certainly will not fall along the lines of psychoanalytic AIs versus Dyson swarm control AIs.
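The undecidability point above is essentially a Rice's-theorem-style argument; here is a hedged sketch of the standard reduction, with the oracle and the 'friendly output' property both hypothetical:

```python
# Sketch of why no total, always-correct decider for a semantic output
# property ("only produces friendly output") can exist: such a decider
# would also solve the halting problem. The oracle below is a stand-in.

def would_be_friendliness_decider(program, arg):
    """Pretend oracle: True iff program(arg) only ever emits 'friendly'
    output. No such total, always-correct function can exist."""
    raise NotImplementedError("no such decider exists")

def halts(program, arg):
    # Build a wrapper that is friendly iff program(arg) never halts:
    # it runs the suspect program first, then emits something
    # unambiguously unfriendly.
    def wrapper(_):
        program(arg)          # if this never returns, nothing is output
        return "unfriendly"   # reached only if program(arg) halts
    # program(arg) halts  <=>  wrapper can produce unfriendly output.
    return not would_be_friendliness_decider(wrapper, None)
```

Since `halts` cannot exist, neither can the oracle; restricting the class of programs (as the paragraph above concedes) is the only way out, and the restriction boundary is fixed by the mathematics, not by the task domain.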
One machine will never be able to do everything, but that's hardly required or even desirable. In the last discussion on this, it seemed obvious that you would instead have local industries specializing in various small-scale gear. It's insane, and a tremendous waste, for every family in a village to be able to craft an engine block when one or two would be more than enough.
The point is that these rapid prototypers will not break the greedy capitalists' hold on industry and bring it to the masses. They also do nothing to bring about the post-scarcity society you trumpet about. The only reason for a poor country to buy a factory is to generate income. If it just wants a few parts, it can get them cheaper overall from the big manufacturers.
Your rapid prototypers are just that: prototypers. They're only useful for objects that will exist in very few instances, basically unique objects. They're good if you have a relatively large number of objects that are each unique, exchanging a higher unit cost and the capital cost of the prototyper for the ability to make many different objects.
For a society to truly be called post-scarcity, however, the needs of the great majority of people must be satisfied, and most people will have the same basic needs. This means that basic necessities in such a world will be commodities, and customization can come trivially through permutation of individual commodities. These commodities can be produced at reduced cost through economies of scale, and as such, if you have a reliable infrastructure to deliver freight, it's much cheaper to get shipments of commodities than to make them yourself with a rapid prototyper.
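A back-of-envelope comparison makes the economics concrete (every number below is invented for illustration; the only real assumption is that mass production pays a one-off tooling cost per design while a prototyper pays more per unit):

```python
# Toy cost model: a factory amortizes per-design tooling over cheap
# identical units plus freight; a rapid prototyper skips the tooling
# but pays more in materials and machine time per unit.

def factory_cost(units, tooling=2000.0, unit_cost=1.0, freight=0.2):
    return tooling + units * (unit_cost + freight)

def prototyper_cost(units, unit_cost=4.0):
    return units * unit_cost

for n in (1, 100, 1000, 10_000):
    cheaper = "prototyper" if prototyper_cost(n) < factory_cost(n) else "factory"
    print(n, cheaper)
```

With these made-up figures the crossover sits around 700 units: the prototyper wins for unique or short-run parts, and commodity-scale runs go to the factory plus freight, which is exactly the division of labor argued above.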
And if you have no reliable freight infrastructure, how do you expect to get your rapid prototyper?
I'm not sure what you're referring to. That nearly every person in Western civilization owns a computer is 'wayside'?
No, that nearly everyone who owns a computer owns a mass-produced computer. Durable goods are worlds different from software.
If the capital is too expensive, poor countries will not be able to take advantage of them no matter how cheap the unit cost is. Also, what will they sell to get capital (or commodities!) if the worth of everything they could possibly create nosedives? You can't always depend on philanthropy to pull through.
It would be more that sufficiently advanced technology could in theory lower the philanthropy effort needed to solve the problem to something plausible for real humans. Our society does try to alleviate famine in the world but it does not have the will to expend enough resources to actually solve the problem, or perhaps it does not have the resources period. If we had vastly more resources then the percentage of our resources that is currently devoted to addressing the problem could be enough to solve it.
Unless your post-scarcity society is hyperbolically super-science, post-scarcity probably will not obviate the need for resource management. You can have your resources be abundant enough to satisfy your own needs at very little cost to yourself, and still not be in a position to be philanthropic. For instance, if your society achieved post-scarcity not only through automation but also
by careful population control and recycling of waste, you can be post-scarcity by any practical definition, and yet philanthropic gifts to other societies may unbalance the resource pool enough to endanger that post-scarcity. Also, resource management by recycling will mean that resources cost nothing only as long as your reserves are not stressed; if you have a protracted period of net iron need such that your iron reserves are depleted, iron is going to cost something nontrivial again.
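The iron example reduces to a simple stock-and-flow calculation (all numbers invented): a closed recycling loop keeps the marginal cost near zero only while the reserve covers net demand.

```python
# Toy reserve-depletion model: a sustained net draw (losses, exports,
# philanthropy) that recycling doesn't replace empties the buffer in
# finitely many years, at which point scarcity pricing returns.

def years_until_depleted(reserve, net_draw_per_year):
    years = 0
    while reserve > 0:
        reserve -= net_draw_per_year
        years += 1
    return years

print(years_until_depleted(100.0, 7.0))  # 15
```

Trivial as it is, the point stands: the gift budget of a recycling-based post-scarcity society is bounded by its buffer, not unlimited.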
In the case of food, the necessary resources are ultimately (1) the base elements, (2) energy and (3) machinery (in the present world mostly of the organic variety) to turn them into forms we metabolize, and (4) manpower and infrastructure to distribute that. Sufficiently advanced chemical synthesizers and cheap energy could in theory solve 1 and 2, sufficiently cheap robots could in theory solve 3, and a combination of all three could in theory solve 4 by making infrastructure much cheaper to build.
In theory. But any society that you're bequeathing to is not
going to be post-scarcity by definition, and has all the problems that come along with it, like corrupt politicians and war. Your bots will have to be guarded, and will have to be inspected and maintained along with your infrastructure.
That infrastructure is also going to be a drain on some non-renewable resources, like metals. If a robot is lost, then you're out its resource investment; if you're achieving post-scarcity by means of recycling, that is capability permanently lost to you. Your convoys will have to be guarded, etc. And since you are guarding those convoys precisely because you don't want material losses, the guarding itself puts a further burden on your resource pool.
This neglects complicating factors like convincing Kim Jong Mugabe to allow your philanthropic robot army to distribute a year's supply of synthetic nutrient syrup to everyone in his insular fortress shithole.
Can that nutrient syrup be eaten raw, or at least be prepared in an ordinary third-world kitchen? Otherwise you're going to need some sort of processing before it can be eaten, which requires a gadget. Also, are your tubes recyclable? That implies you need resource management. And if not, where are you getting the materials to make these tubes?
Of course this is all strictly conjectural (and therefore unproven) and so far from anything presently feasible it's not even worth discussing in the context of presently plausible solutions. So I don't think we're fundamentally disagreeing about anything.
Maybe not, but I think there are details that need to be addressed.