"The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

Channel72 wrote:
Simon_Jester wrote:So... because they're worried about what the technology will do rather than about how to invent it faster, they're boring?

That strikes me as rather... short-sighted. If there is a potentially world-altering technology, I'd think you would want at least a little theoretical work being done in directions that help ensure it alters the world in non-horrible ways.
Most of what I've seen from them is hardly a rigid theoretical framework, but more like quasi-religious tenets about what they think a super-intelligent AI would do. But perhaps I haven't read enough of Yudkowsky.
If you want to see whether they're doing real work on AI theory, you'd have to read their published papers, not their blog posts. Their blog posts are of course non-scientific, but that's to be expected since blogs are a vehicle for airing opinions.
And there's AI in the sense of self-modifying or 'learning' software.

But people do continue to work on self-modifying software, and we're approaching the level of computer technology at which it becomes possible to build computers that can do as much computation as the human brain (if only by brute force emulation of brains).
Learning and self-modifying code are very different things. There are few applications of self-modifying code in practical use outside of JITs. Machine-learning software typically just mutates data or runtime state, not code. (Back in the day, LISP made it fashionable to write self-modifying programs by blurring the line between code and data, but these days self-modifying code is pretty rare in actual real-world usage.)
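For concreteness, a minimal, hypothetical Python sketch of the distinction being drawn here: a "learning" program whose code stays fixed and only mutates its weight data, versus a routine that literally rebuilds its own function from source at runtime. All names are invented for illustration.

[code]
# Hypothetical illustration only: learning mutates data; self-modification
# rebuilds code.

weights = [0.0, 0.0]

def predict(x):
    return weights[0] * x + weights[1]

def learn(x, target, lr=0.01):
    # Gradient step for squared error: mutates the weight *data*, not the code.
    error = predict(x) - target
    weights[0] -= lr * error * x
    weights[1] -= lr * error

def rebuild_predict(new_expr):
    # Self-modification in the literal sense: compile a new body for predict()
    # from a source string and swap it in at runtime.
    src = "def predict(x):\n    return " + new_expr + "\n"
    namespace = {"weights": weights}
    exec(src, namespace)
    globals()["predict"] = namespace["predict"]

if __name__ == "__main__":
    for _ in range(1000):
        learn(2.0, 7.0)                                    # data changes
    rebuild_predict("weights[0] * x ** 2 + weights[1]")    # code itself changes
    print(predict(2.0))
[/code]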
Learning code that doesn't self-modify isn't a threat the way that code which does self-modify is, or the way that machines which can learn to do entirely new categories of things are.

The thing is, there is reasonable cause for concern about what a machine capable of self-improvement, or drastically more capable than a human in important intellectual areas like "emulating a human and persuading other humans convincingly" might do.

Why criticize an actual researcher working on improving our understanding of what can or will happen if such a machine emerges?

One might well criticize a blogger who does no research- but that's different.
This space dedicated to Vasily Arkhipov
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Starglider »

I'm writing self-modifying code for deployment as a core business system in a major investment bank right now (as in, in a window next to this browser). So it is unusual in the sense that most software engineering tasks don't need it and most software engineers aren't qualified to do it, but certainly there are apps with embedded self-modifying code out there doing important things.

It's true that self-modifying symbolic code is very much out of fashion at the moment in AI research. There are plenty of people making robots with symbolic code ('conventional software engineering'), but machine learning approaches are virtually all connectionist (various flavours of neural net) or statistical at the moment. The reasons for this are complicated and out of scope of a forum post, but in principle self-modifying code should still be viable as a machine learning approach; indeed there are a few people, including me, who believe that ultimately it will prove the most powerful and efficient approach. Self-modifying code really does bring a raft of system stability issues to the forefront, but in a sense that's a good thing, in that you have to deal with them up front rather than having the sluggishness and restrictions of small-to-medium-scale connectionism damp them out for you.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

Gotcha.

[The following is not exclusively addressed to Starglider, who probably knows everything I am about to say except the factually wrong parts (if any)]

Well, code that is not self-modifying is probably not a threat to the overall peace of the world and the survival of human civilization, I would think... At least, so long as the AI is not capable of exponentially increasing its access to hardware without the knowledge or consent of humans.*

On the other hand, by the same token, people are going to keep hiring people like Starglider to build self-modifying software for applications suited to it, and the combination of "self-rewriting" and "intelligent enough to start thinking 'outside the box' about how to accomplish its goals" is dangerous and merits being taken seriously.

For the parody version of the problem, see:

https://xkcd.com/416/


While this is almost certainly not the wave of the future, one can see how having such a zealous and resourceful program trying to accomplish... pretty much any imaginable goal, really... would be a problem for society at large.
___________________

*Which is, yes, a big "if," Starglider, I have been reading your posts. ;)
This space dedicated to Vasily Arkhipov
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Starglider »

Simon_Jester wrote:Well, code that is not self-modifying is probably not a threat to the overall peace of the world and the survival of human civilization, I would think... At least, so long as the AI is not capable of exponentially increasing its access to hardware without the knowledge or consent of humans.*
Strictly speaking, this is not correct. Any truly general machine learning approach, by which I mean one that is capable of learning any function given sufficient training data and computational resources, can in principle (absent competently implemented safeguards) self-modify to arbitrarily high intelligence. Simplifying a little, sufficiently large recurrent neural networks can learn to code (as evidenced by humans), and they can be configured as a Turing machine to execute arbitrary software even if the designers managed to prevent them from directly writing and executing code (unlikely). That said, an NN learning to code is a pretty high bar. It doesn't necessarily have to be as generally intelligent as a human software engineer, but the level of abstraction needed is not easy for NNs (or any statistical approach I'm aware of) to develop. The efficiency of existing systems, measured in terms of the data and hardware resources needed to learn, is much too low to be a substantial threat. Of course, efficiency and available data and hardware keep increasing...

Practically though, an AI system directly based on self-modification is much more likely to enter a self-improvement feedback loop; it's not so much the access to the code, it's the much lower complexity required to enter the loop, and (this is controversial) the much better achievable efficiency due to targeting a computational structure that actually matches what the available commodity hardware is designed to run. Within that classification, an AI system developed as a software engineering expert system that actually designs code is more likely to do so than a genetic programming system that just randomly mutates it (or rather makes somewhat educated guesses using recombinative approaches that narrow the search space a bit, but still leave it pretty wide). Most AI work is now done on clusters, so the barrier to network dissemination and co-ordination is already pretty low. The most fate-tempting thing you could do would probably be applying a self-modifying software engineering system using massive parallelism and compound probability distribution / EU based decision making to the information security domain (searching for / developing exploits). I did pitch that idea a while back, but sponsors were harder to find than for the financial applications, so for me it's on hold at the moment. No doubt black and white hat security researchers are closing in on these approaches though.
On the other hand, by the same token, people are going to keep hiring people like Starglider to build self-modifying software for applications suited to it, and the combination of "self-rewriting" and "intelligent enough to start thinking 'outside the box' about how to accomplish its goals" is dangerous and merits being taken seriously.
Most self-modifying code is not artificially intelligent. There are several categories of this, from most common to least common:
1) Technically self-modifying, but only in the transformative sense e.g. runtime compilers, assorted clever things that modern OS kernels do internally to optimise hot spots. Not a risk because the functional spec of the software doesn't change (assuming there are no bugs).
2) Dynamic self-modifying, but only for performance optimisation. The algorithm may be restructured to deal with the shape of the data, from parameter tweaks to a complete rebuild from the underlying data flow graph, but the basic scope of what the algorithm can do isn't going to change. This includes general tools like supercompilation and task-specific implementations.
3) Narrow machine learning that works by direct code search. This is where you want some specific function, such as a global hedge trade efficiency optimiser (simple for a single trade but hard for large portfolios and continuous trading: to give an example that GP can actually outperform NNs on). This is hard, and most experts are trained in NNs etc. instead, which tend to give a smoother output, but there are people in banks, hedge funds, process optimisation etc. using GP in production (although some of them say they are while actually just using GA). A minimal sketch of this kind of direct code search appears after this list.
4) General machine learning that works by direct code search. This is trying to evolve intelligent agents from scratch, usually by creating a simulated world and then simulating natural selection and some kind of genetics (maybe expression networks as well, if they're putting thought into it). Plenty of people have been trying to do this for at least a couple of decades, there are prestigious journals and everything, but as I say it's out of fashion because results have generally been poor compared to connectionist approaches.
5) Narrow machine learning that uses a software engineering expert system to actually design candidate solutions. The difference here is that the system has an abstract functional model of the requirements, the problem domain and the code, and searches on that abstract space using heuristics and reasoning about consequences, rather than blindly searching on the code level. I am working in this area at the moment and I would say this is the bleeding edge in terms of production deployment of self-modifying code. Previously I had a startup that was trying to do it for a somewhat more general business application domain, but it failed to get VC funding; currently I am doing it for a relatively narrow (although interesting) program trading domain.
6) General machine learning that uses a software engineering expert system to improve its own capabilities. I am pretty sure that if anyone had got this to work, we'd all know about it. Obviously it's really hard, but what's actually remarkable is how few people have even tried it. There are huge hurdles which I think you only really appreciate by actually sitting down and trying to write such a system. You need to understand compilers and have a decent understanding of a few flavours of conventional AI system design first to really have a chance, I think; in fact it's very, very easy to mischaracterise the reasons why things are failing, which has caused a succession of people who tried this to 'slide off the problem' and end up trying to solve some tangential issue (e.g. what happened to Lenat going from EURISKO, which was a seminal GP system just starting to break into deliberative rewrite, to CYC, which is just a big database for a static propositional solver).
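For concreteness, a minimal sketch of category 3 above, "narrow machine learning that works by direct code search": a toy genetic-programming loop that evolves a small arithmetic expression (as Python source) against an invented target function. Everything here (names, parameters, the target) is made up for illustration; real GP systems use typed trees, proper recombination, bloat control and so on.

[code]
# Toy, hypothetical genetic-programming loop: evolve a small arithmetic
# expression to fit an invented target, 3*x^2 + 1.
import random

OPS = ["+", "-", "*"]
TERMS = ["x", "1.0", "2.0"]

def random_expr(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return "({} {} {})".format(random_expr(depth - 1),
                               random.choice(OPS),
                               random_expr(depth - 1))

def fitness(expr, samples):
    # Lower is better: squared error of the candidate code against the target.
    try:
        f = eval("lambda x: " + expr)
        return sum((f(x) - (3 * x * x + 1)) ** 2 for x in samples)
    except Exception:
        return float("inf")

def evolve(generations=200, pop_size=50):
    samples = [i / 10.0 for i in range(-10, 11)]
    population = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda e: fitness(e, samples))
        survivors = population[: pop_size // 5]
        # Crude "mutation": splice a fresh random subtree over one terminal.
        children = [random.choice(survivors).replace(random.choice(TERMS),
                                                     random_expr(2), 1)
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=lambda e: fitness(e, samples))

if __name__ == "__main__":
    print(evolve())
[/code]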
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

Starglider wrote:Strictly speaking, this is not correct. Any truly general machine learning approach, by which I mean one that is capable of learning any function given sufficient training data and computational resources, can in principle (absent competently implemented safeguards) self-modify to arbitrarily high intelligence.
Okay, neural networks that do this without abstract formal comprehension of what they are doing could certainly still pose problems, even in the presence of safeguards to stop them doing anything obviously menacing like "increase my access to hardware by a factor of 1000".

And, thinking about it, as soon as your machine learning approach learns to rewrite itself at all, it's probably going to find a lot of inefficiencies to correct. Either because a human wrote it, or because the "blind idiot god" of competition wrote it in the context of a genetic algorithm. So, yeah, could get a lot smarter and potentially nastier in a very big hurry.

Given that I'm describing a machine that can make itself smarter without any conscious understanding of how or why it is doing so... In a savage burst of anthropomorphism I'm picturing the AI equivalent of trying to 'better oneself' by reading piles of self-help books and contemplating one's navel. Which loops amusingly back to the topic of the OP, I suppose.

I didn't mean to exclude neural networks that lack explicit "we designed it to self-modify" features when I was talking about self-modifying code, but thanks for pointing out that they can do that.
Most AI work is now done on clusters, so the barrier to network dissemination and co-ordination is already pretty low. The most fate-tempting thing you could do would probably be applying a self-modifying software engineering system using massive parallelism and compound probability distribution / EU based decision making to the information security domain (searching for / developing exploits). I did pitch that idea a while back, but sponsors were harder to find than for the financial applications, so for me it's on hold at the moment. No doubt black and white hat security researchers are closing in on these approaches though.
So, self-modifying code that is already explicitly designed to seek out software vulnerabilities and presumably gets its own internal equivalent of a cookie every time it learns how to hack somebody's computer and reports what it's learned to Master.

OK yes, that does sound a biiit like playing hopscotch in an existential minefield...

The best-case scenario I can imagine is that the machine doesn't totally lose the plot, and only winds up forcibly inserting itself onto the entire Internet via the aforesaid security weaknesses, finding all the security loopholes, and then stalling out or somehow crashing because, despite being smart enough to do that and having vastly increased its potential by going viral... it still somehow isn't smart enough to come up with anything really outré like "socially engineer myself into a position where all resources of civilization are dedicated to making new OSes for me to find hacks for."

Which would require virtually unfathomable stupidity on the part of the original fault-finding code, although based on what you and others said earlier, it might well not take a program much smarter than an ant to find vulnerabilities in human-designed software. It doesn't take an animal smarter than an ant to figure out how to sneak into a human-designed building, after all, and we're probably better at designing bug-free buildings than we are at designing bug-free code.

Said program might end up smart only in the sense that an ant colony is smart, despite having massively intruded on everyone's lives in an incredibly inconvenient fashion.

That's the best-case scenario I can visualize.

Am I missing anything here? Is there a better best case scenario? I suppose "too stupid to leave the cluster of computers it already runs on" would count, but I'm not sure I can visualize that working if the machine runs long enough.
On the other hand, by the same token, people are going to keep hiring people like Starglider to build self-modifying software for applications suited to it, and the combination of "self-rewriting" and "intelligent enough to start thinking 'outside the box' about how to accomplish its goals" is dangerous and merits being taken seriously.
Most self-modifying code is not artificially intelligent. There are several categories of this, from most common to least common:
Thank you for the description.
This space dedicated to Vasily Arkhipov
Channel72
Jedi Council Member
Posts: 2068
Joined: 2010-02-03 05:28pm
Location: New York

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Channel72 »

Starglider wrote:The reasons for this are complicated and out of scope of a forum post, but in principle self-modifying code should still be viable as a machine learning approach; indeed there are a few people, including me, who believe that ultimately it will prove the most powerful and efficient approach.
You're probably right - but Bayesian/statistical approaches dominate these days because of obvious early successes in things like machine translation, the rise of cheap cluster computing, and the availability of open source frameworks that make all of this easy to do. Meanwhile nobody is taught what a LISP machine is anymore, and self-modifying code (in the sense of a program that modifies its own AST at runtime) is considered too unwieldy to debug.
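A hypothetical illustration of that "modifies its own AST at runtime" sense, using Python's ast module (Lisp makes the same trick trivial via quoting and macros; here it takes a little ceremony). The function names and the transformation are invented for illustration.

[code]
# Hypothetical sketch: a program rewriting one of its own functions via the
# ast module, then recompiling and swapping in the new version.
import ast
import inspect
import textwrap

def score(x):
    return x + 1

class AddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()     # rewrite additions into multiplications
        return node

def rewrite(func, transformer):
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    new_tree = ast.fix_missing_locations(transformer.visit(tree))
    namespace = {}
    exec(compile(new_tree, filename="<rewritten>", mode="exec"), namespace)
    return namespace[func.__name__]

if __name__ == "__main__":
    score = rewrite(score, AddToMul())   # score(x) now computes x * 1
    print(score(5))                      # prints 5
[/code]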
Starglider wrote:Practically though, an AI system directly based on self-modification is much more likely to enter a self-improvement feedback loop; it's not so much the access to the code, it's the much lower complexity required to enter the loop, and (this is contraversial) the much better achievable efficiency due to targetting a computational structure that actually matches what the available commodity hardware is designed to run.
I don't really understand what you mean here. Are you referring to an AI that just modifies branches in its own AST, vs. a neural net that trains itself? Why is the first one a better match for modern hardware? (Considering most neural nets are implemented as MxN matrices, and therefore probably at least make better use of cache locality than a self-modifying AST.)
Last edited by Channel72 on 2016-01-24 10:55am, edited 2 times in total.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Starglider »

Simon_Jester wrote:even in the presence of safeguards to stop them doing anything obviously menacing like "increase my access to hardware by a factor of 1000"
How would you even write such a safeguard? A (nontrivial) neural network is pretty much a black box that accepts inputs and generates actions. All you can do is deploy the usual counter-intrusion tools to look for suspicious network activity etc. Nobody actually does this, and even if they did, nearly all commercial apps and probably most research projects would say 'we can't shut down this vital and important system just because you got something slightly anomalous on the firewall! We'd lose millions of revenue / grants'.
And, thinking about it, as soon as your machine learning approach learns to rewrite itself at all, it's probably going to find a lot of inefficiencies to correct. Either because a human wrote it, or because the "blind idiot god" of competition wrote it in the context of a genetic algorithm. So, yeah, could get a lot smarter and potentially nastier in a very big hurry.
Yes. Humans aren't particularly good at programming, as evidenced by the constant bugs in everything and the massive quality differential between average (awful) and expert (merely bad) programmers. This is on top of NNs making horribly inefficient use of the hardware, although there are quite a few people who would disagree, because they seriously believe that the way the human brain does general intelligence is the only way to do it.
Given that I'm describing a machine that can make itself smarter without any conscious understanding of how or why it is doing so...
The conscious vs unconscious distinction is pretty much a human quirk. It's an artifact of the way that the global correlation, symbolic reasoning and attentional mechanisms work in humans. NN systems may have something equivalent if the designers just blindly try to reproduce human characteristics, but in general AI will not have it. There is just the sophistication of the available models and the amount of compute effort devoted to different cognitive tasks, both continuous (indeed, multidimensional) parameters.
In a savage burst of anthropomorphism I'm picturing the AI equivalent of trying to 'better oneself' by reading piles of self-help books and contemplating one's navel. Which loops amusingly back to the topic of the OP, I suppose.
Well yes but they would be books on programming, information theory, decision theory, statistics, physics, simulation design, that kind of thing. A lot of human self-help is to do with human emotions and in particular the brain's motivational mechanism, which isn't really applicable.
So, self-modifying code that is already explicitly designed to seek out software vulnerabilities and presumably gets its own internal equivalent of a cookie every time it learns how to hack somebody's computer and reports what it's learned to Master.
Well, I've heard of a few people playing around with GP for this application, but I'm not aware of any successful applications of it. There are lots of existing automated cracking tools, but they work by exhaustively trying the set of all known exploits (parameterised) or at best heuristically searching for known antipatterns on a slightly abstracted model of the target code. I mean, there is automation in the sense that once an injection vector has been identified you can quickly build a full rootkit based on it, but not in the sense of finding an entirely novel type of exploit autonomously.

To a certain extent this is because there is so much low hanging fruit. Existing approaches are good enough that state level actors can hack pretty much anything they like. Organised crime doesn't have much trouble stealing as many credit card numbers as they can use with quite simple attacks. If security continues to improve and the easy exploits go away, then eventually attackers will be pushed to use more sophisticated techniques to find new ones.
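For flavour, the "heuristically searching for known antipatterns" style of tool mentioned above can be sketched, very crudely, as a pattern scanner. This hypothetical toy works on raw text, whereas real static analysers work on an abstracted model of the code (ASTs, data flow); all patterns and names are invented for illustration.

[code]
# Toy, hypothetical antipattern scanner: flag classic C footguns by regex.
import re
import sys

ANTIPATTERNS = {
    r"\bstrcpy\s*\(": "unbounded copy; consider strncpy/strlcpy",
    r"\bgets\s*\(": "gets() is unbounded and was removed in C11",
    r"\bsprintf\s*\(": "unbounded format; consider snprintf",
    r"\bsystem\s*\(": "shell invocation; check for command injection",
}

def scan(path):
    findings = []
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for pattern, note in ANTIPATTERNS.items():
                if re.search(pattern, line):
                    findings.append((path, lineno, note, line.strip()))
    return findings

if __name__ == "__main__":
    for target in sys.argv[1:]:
        for path, lineno, note, text in scan(target):
            print("{}:{}: {}: {}".format(path, lineno, note, text))
[/code]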
The best-case scenario I can imagine is that the machine doesn't totally lose the plot, and only winds up forcibly inserting itself onto the entire Internet via the aforesaid security weaknesses, finding all the security loopholes, and then stalling out or somehow crashing because, despite being smart enough to do that and having vastly increased its potential by going viral... it still somehow isn't smart enough to come up with anything really outré like "socially engineer myself into a position where all resources of civilization are dedicated to making new OSes for me to find hacks for."... Said program might end up smart only in the sense that an ant colony is smart, despite having massively intruded on everyone's lives in an incredibly inconvenient fashion.
Exactly: there is quite a high chance that the exponential scenario for this technology is just bricking every single internet-connected device (by devoting every resource to finding ways to spread to new systems, and/or running some payload like bitcoin mining). It would be quite humorous and not entirely implausible, if rather disastrous, if a crazed Bitcoin enthusiast took out the entire Internet with a self-improving worm in this fashion. There is just no way existing security mechanisms could keep up once it got to a certain level of sophistication and spread. Kind of like the online-only grey goo scenario. I actually mention this as a worst case for self-improving AI to people who would write off anything physical as Hollywood alarmism. Near-total loss of the Internet should be bad enough to get people concerned, but denial is still the most common response.
Am I missing anything here? Is there a better best case scenario?
Well sure, it could be developed by a white hat security company and just proactively fix all software it can reach and stay around blocking attacks, while consuming say 10% of all CPU and bandwidth across every Internet-connected device. That would be an interesting class action lawsuit: 'your virus is actually doing us all a lot of good, but we're still going to sue you since it was unauthorised dissemination'.

Of course any survivable incident like this would just amp up AI research, even if governments publicly pretend to try to regulate it down. Dangerous and useful go hand in hand. Non-proliferation isn't an option for something you can research on any standard computer, given the right skillset.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Starglider »

Channel72 wrote:I don't really understand what you mean here. Are you referring to an AI that just modifies branches in its own AST, vs. a neural net that trains itself? Why is the first one a better match for modern hardware? (Considering most neural nets are implemented as MxN matrices, and therefore probably at least make better use of cache locality than a self-modifying AST.)
I don't mean close-coupled self-modifying code that writes a few instructions and then immediately runs them. That is a low level optimisation technique which, as you say, hasn't been relevant for performance reasons for a couple of decades or so (ever since instruction and data caches split, really), although it's occasionally relevant for non-performance reasons in kernel design. I mean code that searches for or actively designs algorithm code. This is generally millions or more cycles in the design phase and then at least millions more in the evaluation phase. I mean, I do this stuff on GPUs where the latency to execute newly generated code is many milliseconds even on the local node.

The inefficiency of neural nets does not come from the structure of the kernels (code) that implement the NN. As you say they are usually fairly straightforward algorithms with low instruction count, limited branching and quite parallelisable. The inefficiency comes from the fact that an NN does vastly more calculations than actually required to solve any problem, because it has a fixed data flow graph with very little relevance filtering. Conversely human-written algorithms usually have a complex control structure that executes only the minimum necessary operations. To take a simple example, compare the scaling on a sorting network (fully parallel, nice and simple, depth is log N, but requires N^2 operations) vs a quicksort (N log N operations). The former is a natural match for the brain where there are billions of slow parallel compute units with limited plasticity and local connectivity, but it is a really bad match for a computer with only a few very fast compute units with high plasticity and global connectivity. Essentially running NNs on computers is an emulator with a massive emulation overhead, like emulating say a Playstation 2 on a PC but many orders of magnitude worse. We can see this even for tasks that NNs are good at, such as vision processing and fuzzy machine learning, in that once we understand how to write statistical algorithms that do the equivalent thing they are (generally) much more efficient. The appeal of NNs is that they can attack poorly understood problems that we don't know how to write a statistical or logical solver for, or they can automatically optimise things that would otherwise take a lot more software engineering time (hardware cost vs dev cost tradeoff). Genetic programming has the same basic characteristics but practical results are trailing behind NNs on most (but not all) commercially relevant applications, particularly the fuzzy big data ones.

Although IMHO that is partly because most people doing GP research don't understand how to write properly probabilistic code and don't use self-optimising compound probability distributions as basic representational units available to the GP system... better stop before I breach a non-disclosure agreement :)
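To put rough numbers on the sorting comparison above, here is a hypothetical Python sketch that counts comparisons for a fixed-data-flow network (odd-even transposition, one of the simplest N^2-comparator networks) against an adaptive quicksort. The exact depth and operation counts differ by network; the point is the fixed-graph overhead.

[code]
# Hypothetical illustration of fixed-data-flow overhead: a sorting network
# always executes every comparator in its graph; quicksort only does the
# comparisons the data actually requires.
import random

def network_sort(a):
    # Odd-even transposition network: n rounds with a fixed comparator schedule.
    a, comparisons = list(a), 0
    n = len(a)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):   # every comparator always fires
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a, comparisons

def quicksort(a):
    comparisons = 0
    def qs(xs):
        nonlocal comparisons
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        lower, upper = [], []
        for x in rest:
            comparisons += 1             # one comparison per element per partition
            (lower if x < pivot else upper).append(x)
        return qs(lower) + [pivot] + qs(upper)
    return qs(list(a)), comparisons

if __name__ == "__main__":
    data = [random.random() for _ in range(1024)]
    print("network  :", network_sort(data)[1], "comparisons")  # ~ n^2/2, ~524k
    print("quicksort:", quicksort(data)[1], "comparisons")     # ~ n log n, ~14k
[/code]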
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Purple »

In your opinion, how far are we from one of those things actually achieving sentience? And yes, I know just how stupid this question sounds worded like that, but I can't really word it well since I know just enough about it to know that I know nothing.
It has become clear to me in the previous days that all attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
Adam Reynolds
Jedi Council Member
Posts: 2354
Joined: 2004-03-27 04:51am

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Adam Reynolds »

Purple wrote:In your opinion, how far are we from one of those things actually achieving sentience? And yes, I know just how stupid this question sounds worded like that, but I can't really word it well since I know just enough about it to know that I know nothing.
By definition that is unknowable. Ray Kurzweil claims AGI (artificial general intelligence, with the ability to carry out tasks in general without being programmed for each one specifically) by 2029 and ASI (the type of potentially unethical superintelligence that people like Yudkowsky are worried about, as it would be unquestionably smarter than humans) by 2045. But the fundamental problem here is akin to asking someone in the 1920s when supersonic aircraft would be developed, or asking someone in the 1980s whether smartphones would exist within 30 years. Attempting to predict future trends with any accuracy is an excellent way to look foolish in hindsight.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: "The Happiness Code" A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.

Post by Simon_Jester »

Starglider wrote:
Simon_Jester wrote:even in the presence of safeguards to stop them doing anything obviously menacing like "increase my access to hardware by a factor of 1000"
How would you even write such a safeguard? A (nontrivial) neural network is pretty much a black box that accepts inputs and generates actions. All you can do is deploy the usual counter-intrusion tools to look for suspicious network activity etc. Nobody actually does this, and even if they did, nearly all commercial apps and probably most research projects would say 'we can't shut down this vital and important system just because you got something slightly anomalous on the firewall! We'd lose millions of revenue / grants'.
I honestly don't even know how you would write such a safeguard. You could try, but thinking about it I agree that as a practical matter almost no one would try.

I was more saying "if you did safeguard this, you'd still have problems."
In a savage burst of anthropomorphism I'm picturing the AI equivalent of trying to 'better oneself' by reading piles of self-help books and contemplating one's navel. Which loops amusingly back to the topic of the OP, I suppose.
Well yes but they would be books on programming, information theory, decision theory, statistics, physics, simulation design, that kind of thing. A lot of human self-help is to do with human emotions and in particular the brain's motivational mechanism, which isn't really applicable.
Well, I'm picturing this as a neural network AI pondering how to make itself 'better' without actually understanding what it is doing, which does have a parallel: human brains don't understand themselves either, which is probably why we're bad at improving ourselves in the first place.

It was mostly a joke.
This space dedicated to Vasily Arkhipov