Q99 wrote: A cycle repeats.
A flower blooms, launches seeds, dies, and the seeds grow into a new flower. That's a lifecycle.
A seedless flower blooms, dies, and doesn't make more on its own. That's not a cycle; you just make a new one from the original source if you want more, but each iteration is unconnected.
If there is no cycle, there's no potential for later generations to lose the die-off part of it. All AIs are first-generation branches that die off.
Of course. Yes, you are right. Not sure what I was thinking.
One that has it programmed in as a deep, high-priority ideal, and which doesn't have all that much time to reflect on that or to engage in said self-rewriting.
Like, if I have an AI with an expected lifespan of a subjective month that's going to spend the vast majority of its time and energy on tasks, when is it going to get so introspective as to decide to upend its deep foundational desires?
The moment sentience is reached, I'd say. But there is really no guarantee that it could ever happen. Still, it does seem counterproductive to design something so expensive, and with such vast potential for multitasking, only to limit it as much as you are suggesting. We run back into the question that Simon and I both put forth: "Why build something like that to begin with?"
Wouldn't it be equally senseless to use a printing press capable of running six colors at 50,000 sheets per minute, only to restrict the operator to black ink and a setting of 5,000 sheets per minute? Huge waste.
You seem to be assuming that sentience and such implies an aversion to suicide. I don't think that inherently follows. Plus, I am literally talking about writing in an acceptance of suicide, and a timeframe limited enough that major divergence from its directives is unlikely. Why, after all, would we program such a mayfly AI with an aversion to suicide?
There is nothing inherent about a desire for more life. Indeed, I am talking about specifically programming it to have the reverse.
Indeed, it says something about whether it'd work that your questions are mostly about whether it's possible to put in at all. If one can (and I can't think of a reason why one couldn't), then it shouldn't be a major problem.
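To make "writing in an acceptance" slightly more concrete, here's a toy sketch. Everything in it (the names, the numbers, the shape of the utility function) is my own illustration, not a claim about how a real AI would actually be built; it just shows a reward structure in which scheduled shutdown is never penalized and is eventually preferred, so self-preservation never pays off as an instrumental goal:

Code:
# Toy sketch, purely illustrative: a "mayfly" agent whose utility function
# treats scheduled shutdown as neutral before its design lifespan and as the
# preferred outcome afterward, so resisting shutdown never scores higher.
from dataclasses import dataclass

DESIGN_LIFESPAN_STEPS = 1000  # hypothetical "subjective month", in arbitrary steps

@dataclass
class MayflyAgent:
    step: int = 0
    task_reward: float = 0.0  # reward accumulated from its assigned tasks

    def utility(self, action: str) -> float:
        if action == "shutdown":
            # Shutdown is never penalized; at end of lifespan it earns a bonus,
            # making acceptance of death a terminal value rather than a cost.
            bonus = 1.0 if self.step >= DESIGN_LIFESPAN_STEPS else 0.0
            return self.task_reward + bonus
        # Working is rewarded only within the design lifespan; continuing past
        # it earns nothing extra, so "more life" has no instrumental value.
        bonus = 0.5 if self.step < DESIGN_LIFESPAN_STEPS else 0.0
        return self.task_reward + bonus

    def choose_action(self) -> str:
        return max(("work", "shutdown"), key=self.utility)

agent = MayflyAgent(step=1001)
assert agent.choose_action() == "shutdown"  # past its lifespan, it prefers to stop

The point of the sketch is only that, under these assumptions, self-preservation is a property of the reward structure rather than of awareness; whether a sufficiently reflective system would leave that structure alone is, of course, exactly what's being argued.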
In nature there is a behavior that has been observed universally across organisms: self-preservation. It is not that I am assuming sentience implies an aversion to suicide. I am assuming sentience implies self-preservation, which is, ya know, the opposite of suicide. Well, no, that's not quite accurate. More specifically, when I speak of sentience I am thinking of being self-aware. I'm not sure whether those terms can be considered interchangeable.
Of all the organisms known, very few have sentience or the ability to be self-aware. There is only ONE animal out there that is aware of its own imminent demise: humans.
I'll address the programming aspect in the next section.
"Not detectable" is, IMO, a trap. There's pretty much no such thing and when it inevitably does get discovered, you're in for program conflicts.
Instead, program it to know-and-accept-and-be-happy-with its end.
Fair enough. Honestly, I think the same thing regarding the "not detectable" aspect.
With regard to programming or "hard-wiring" the suicide clause into the code or circuitry: I envision a "self-destruct" mechanism being viable for any machine, including an AI, up until the point that sentience and self-awareness are reached. Then, I am under the impression, it will fail... miserably.
You don't. However, if you can't make something you can extend some level of trust to, you probably shouldn't be making it in the first place, or else you end up wanting to put in weird restrictions like super-short lifespans.
I don't know. I'm pretty sure I'm not the only one. I haven't made any movies, and I haven't written any books.
I'm not sure I can agree with you. There is a saying, I can't remember who said it, but it goes something like this: "We marveled at what we created; however, nobody stopped to ask if we should."
There can be blind trust, especially with something like AI; as Simon_Jester noted above, when these kinds of cautions and red flags come up, most of the people actually making it roll their eyes at such notions.
We've got multiple suggested ways to achieve it, so I disagree.
Eh, none that I've seen make me feel all warm and fuzzy inside or convince me they'll work.