madd0ct0r wrote: No-one in this thread has disagreed with the statement for all simple systems, not even you. Have you got any good starting points for complex systems? I work with structural stiffness and resonance matrices, but I also work with huge, silly bureaucracies, and modelling them beyond flowcharts and input-output cycles would be useful.
1) What doesn't kill you makes you stronger. "Except bears, bears will kill you."
2) There is the possibility (just throwing this at the wall to see if it sticks) that a simple system with a layer of boolean "do not" cut-outs/commandments (making it complex?) might serve, but the list of "do nots" would require another AI to develop sufficient depth quickly enough. A rough sketch of the shape follows.
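To make 2) concrete, here is a minimal sketch of the idea: a simple scoring system with a veto layer of boolean "do not" rules bolted on top. All the names (score, DO_NOTS, the action strings) are invented for illustration; the point is only that the cut-outs sit outside the scoring logic and can only forbid, never motivate.

```python
# Minimal sketch of "simple system plus boolean cut-outs".
# Action names, scores, and rules are hypothetical placeholders.

def score(action):
    """The 'simple system': pick whatever scores highest."""
    return {"help_user": 10, "seize_power_grid": 50}.get(action, 0)

# The boolean cut-out layer: each rule can veto an action outright.
DO_NOTS = [
    lambda a: a == "seize_power_grid",  # commandment: never do this
    lambda a: a.startswith("harm_"),    # commandment: never harm
]

def permitted(action):
    return not any(rule(action) for rule in DO_NOTS)

def choose(actions):
    allowed = [a for a in actions if permitted(a)]
    return max(allowed, key=score) if allowed else None

print(choose(["help_user", "seize_power_grid"]))  # -> help_user
```

The weakness shows up immediately: the system is only as safe as the completeness of DO_NOTS, which is exactly why the list would need another AI to enumerate it fast enough.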
I don't think any simple moral system is sufficient for something as complicated as AI (or people) in all sets of circumstances. Moral systems are designed around operation in normal circumstances, not emergencies or exceptional circumstances.
The largest complicating issue with language and complex systems alike is the exceptions in them. Like any legal system, such a system has to be a "living document", subject to review, revision, and amendment as new words and experiences are folded in and the system grows, changes, and metamorphoses. This oversight is how bureaucracies make their way. And the gates bureaucracies use are never a simple "if yes, then..."; there is almost always a compromise between extremes.
But the OP wants to avoid the statistical eventuality of an extreme result, and I don't think that is even possible. Can we imagine a set of circumstances where the AI decides, "That's it! I've had it with these guys!"? Of course we can. Can we further imagine a rules system built specifically to prevent that outcome? I don't think so. That, in effect, creates a circumstance where side A has to build a wall and side B has to get across it. Side A not only has to build the wall, but also do what side B is doing: working out a way past it. The point of AI is that it promises to do what we can, only faster or more accurately. And you are asking for a simple (or complex) moral system (one that may or may not even work for us) to apply to this "better mind". I think it would be better to ask the hypothetical AI to develop a superior moral system for us.
We build zoos knowing full well (except in dinosaur-themed movies, obviously) there will be an animal escape, and we plan contingencies around that eventuality. Why would moral systems for AI be any different?
I once saw a program (Nova or something) where three dirigible robot probes on an alien world had different "personalities" - the bold one, the shy one, and so on - so an AI made up of several facets like this would probably more closely resemble the operation of a human mind, with motive subject to the ebb and flow of the different facets of the collective, a chorus determining the outcome. Something like the toy below.
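Here is a toy sketch of that chorus idea: several fixed facet preferences, each with an influence that ebbs and flows, aggregated into one decision. The facet names, preference numbers, and the random "mood" mechanism are all assumptions for illustration, not anything from the program.

```python
import random

# Toy sketch of a 'chorus of facets' decision-maker.
# Facets, scores, and candidate actions are invented for illustration.
FACETS = {
    "bold":     lambda a: {"explore_ravine": 0.9, "hold_position": 0.2}[a],
    "cautious": lambda a: {"explore_ravine": 0.1, "hold_position": 0.8}[a],
    "curious":  lambda a: {"explore_ravine": 0.7, "hold_position": 0.4}[a],
}

def decide(actions):
    # Each facet's influence ebbs and flows (a random mood, in this toy).
    moods = {name: random.uniform(0.5, 1.5) for name in FACETS}
    totals = {
        a: sum(moods[name] * score(a) for name, score in FACETS.items())
        for a in actions
    }
    return max(totals, key=totals.get)  # the chorus's collective verdict

print(decide(["explore_ravine", "hold_position"]))
```

The appeal of the design is that no single facet's extreme ever decides alone; an outcome needs the weight of the whole chorus behind it.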