Does having immoral thoughts make someone immoral?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

User avatar
Ahriman238
Sith Marauder
Posts: 4854
Joined: 2011-04-22 11:04pm
Location: Ocularis Terribus.

Re: Does having immoral thoughts make someone immoral?

Post by Ahriman238 »

Jub wrote:This entire concept is invalidated by the fact that free will is an illusion: our unconscious thought has already chosen our course, and the conscious brain is left to justify our actions. We're no more free to choose our actions than a computer is; the only difference is that we're programmed by evolution and experience instead of by a development team. This will be true so long as our brains are ruled by physics.
I rather disagree. If I quickly and easily choose what I want to do and then justify it, why would I ever wrestle with a dilemma? I don't believe it's because I'm having trouble justifying my chosen path to myself, that's comparatively simple. For that matter, I routinely make snap judgements and change my mind later based on reflection. How does this square with your concept of free will?
"Any plan which requires the direct intervention of any deity to work can be assumed to be a very poor one."- Newbiespud
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Does having immoral thoughts make someone immoral?

Post by Simon_Jester »

Starglider wrote:[snip]
I will simply say that even with all this you have said, we can say:

If you perform good actions because of a motive that is easily removed, that cannot be said to be 'good.' It's too much of a coincidence that what you happen to do is good; it doesn't speak to any quality of your person that can be depended on.

Whereas if you perform them because of a motive that is hard to remove, which will reliably cause you to act rightly in difficult circumstances or without expectation of conventional reward, or which will lead you to do that which is right even when it is counterintuitive by the standards of most humans... that is 'good.'

So I'd argue that we're still justified in judging whether a person's internal thought processes contribute to their goodness or undermine it. But we're evaluating this in terms of the robustness of their decision-making framework, whether it consistently yields right results even when explicit tit-for-tat rewards are removed, or even when they have a reason to think they could get away with an immoral action.

Obviously we can never be 100% sure whether a given person's motives are "really" 'good' in this sense or not without putting them to an empirical test, much as we might test any other system under unusual conditions to see if it is robust.
This space dedicated to Vasily Arkhipov
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Does having immoral thoughts make someone immoral?

Post by Starglider »

Simon_Jester wrote:If you perform good actions because of a motive that is easily removed, that cannot be said to be 'good.' It's too much of a coincidence that what you happen to do is good; it doesn't speak to any quality of your person that can be depended on.
I noted this in the earlier thread; it's not an inherent flaw of black-box analysis, it's a sampling issue. You don't even have to get into the specifics: if someone is a mass murderer six days of the week and a saint on Sundays, and you happen to observe on Sundays, obviously your assessment will be wrong (but if you are reasoning correctly, it will only be a probability anyway). An ethical system that judges an entire individual based on behaviour must have the theoretical basis of judging their behaviour at the limit of infinite trials, in all possible circumstances. This kind of thought experiment is pretty common in computational philosophy. Clearly in reality we can only approximate this perfect judgement given the limited evidence available to us, but that's true of all perception and indeed life in general.
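The sampling point above can be made concrete with a toy simulation. Everything here is invented for illustration: a "saint on Sundays" agent, observed either only on Sundays or uniformly across the week.

```python
# Toy illustration (all behaviour invented): a "saint on Sundays" agent
# who acts immorally six days a week and morally on the seventh.
def acts_morally(day_of_week: int) -> bool:
    """day_of_week: 0 = Monday ... 6 = Sunday."""
    return day_of_week == 6

# Black-box assessment from a biased schedule: we only ever watch on Sundays.
sunday_sample = [acts_morally(6) for _ in range(100)]
biased_estimate = sum(sunday_sample) / len(sunday_sample)   # 1.0 ("a saint")

# Approximating the limit of many trials over all circumstances:
# observe every day for 100 weeks.
full_sample = [acts_morally(day % 7) for day in range(7 * 100)]
fair_estimate = sum(full_sample) / len(full_sample)         # 1/7 (mostly not)

print(biased_estimate, fair_estimate)
```

The biased observer concludes the agent always acts morally; sampling all circumstances recovers the true one-in-seven rate, which is the "limit of infinite trials" idea in miniature.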

Now you can in principle use white-box analysis to improve the quality of the assessment. I wouldn't rely on introspective testimony from the subject for anything more than a weak indication, because it's hopelessly unreliable (for honest and dishonest reasons). Sufficiently good brain scanning and modelling (decades in advance of what we have now) could give good indications, but the key point is that you must use this data to estimate the behaviour of the individual under non-observed circumstances, e.g. 'what would they do if they were a billionaire, a national ruler, etc.', i.e. to support a black-box assessment with simulation. Assigning direct moral value to arbitrary brain structures, in and of themselves, independent of functional output, is the key fallacy to avoid. It's inherently a fool's errand; the brain is essentially a massive, constantly shifting dataflow network with many levels of conventional layering, fractal nesting, recurrence, fast and slow information channels, and so on. Somewhere in there is a subsystem trying to simplify all that to a perception of 'me', but the horrible simplifications involved in that, followed by a few millennia of philosophers stumbling around in the dark, are not something you should base a comprehensive ethical system on.
So I'd argue that we're still justified in judging whether a person's internal thought processes contribute to their goodness or undermine it. But we're evaluating this in terms of the robustness of their decision-making framework, whether it consistently yields right results even when explicit tit-for-tat rewards are removed, or even when they have a reason to think they could get away with an immoral action. Obviously we can never be 100% sure whether a given person's motives are "really" 'good' in this sense or not without putting them to an empirical test, much as we might test any other system under unusual conditions to see if it is robust.
So hopefully we agree. Empiricism über alles, even if I tend towards more computational descriptions while you are more classical psychology and ethics.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Does having immoral thoughts make someone immoral?

Post by Simon_Jester »

Starglider wrote:
Simon_Jester wrote:If you perform good actions because of a motive that is easily removed, that cannot be said to be 'good.' It's too much of a coincidence that what you happen to do is good; it doesn't speak to any quality of your person that can be depended on.
I noted this in the earlier thread; it's not an inherent flaw of black-box analysis, it's a sampling issue. You don't even have to get into the specifics: if someone is a mass murderer six days of the week and a saint on Sundays, and you happen to observe on Sundays, obviously your assessment will be wrong (but if you are reasoning correctly, it will only be a probability anyway). An ethical system that judges an entire individual based on behaviour must have the theoretical basis of judging their behaviour at the limit of infinite trials, in all possible circumstances. This kind of thought experiment is pretty common in computational philosophy. Clearly in reality we can only approximate this perfect judgement given the limited evidence available to us, but that's true of all perception and indeed life in general.
Right.

And to me, well- basically, this is one of the mechanisms by which I try to approximate what people will do in the limit of an infinite number of trials under all possible conditions. I think "what motivating factors are likely to cause this person to do right or wrong under a variety of conditions." I make allowance as best I can for the possibility that I've misunderstood a person's motives. But nonetheless, my assessment of their motives plays a big role in my assessment of whether they are good or bad people.

Essentially, I'm treating people not as black boxes, but as a poorly understood yet definable mechanism, and trying to use the data available to assess the mechanism in order to generalize its probable behavior under different circumstances.

In which case, well, having immoral thoughts doesn't MAKE someone immoral the way DA implies, but a person who has persistent immoral thoughts, or doesn't understand the reasons why people should behave morally, is more likely to behave immorally and therefore is judged more harshly.
Assigning direct moral value to arbitrary brain structures, in and of themselves, independent of functional output, is the key fallacy to avoid. It's inherently a fool's errand; the brain is essentially a massive, constantly shifting dataflow network with many levels of conventional layering, fractal nesting, recurrence, fast and slow information channels, and so on. Somewhere in there is a subsystem trying to simplify all that to a perception of 'me', but the horrible simplifications involved in that, followed by a few millennia of philosophers stumbling around in the dark, are not something you should base a comprehensive ethical system on.
Fair enough- I don't think I'm actually doing that, though. At most I'm referencing the philosophers in an attempt to assess what things are right or wrong, as opposed to trying to assess whether people are right or wrong.

Because while there are definitely limits to either type of understanding, moral philosophers have a better track record of understanding things than they do of understanding people.
So I'd argue that we're still justified in judging whether a person's internal thought processes contribute to their goodness or undermine it. But we're evaluating this in terms of the robustness of their decision-making framework, whether it consistently yields right results even when explicit tit-for-tat rewards are removed, or even when they have a reason to think they could get away with an immoral action. Obviously we can never be 100% sure whether a given person's motives are "really" 'good' in this sense or not without putting them to an empirical test, much as we might test any other system under unusual conditions to see if it is robust.
So hopefully we agree. Empiricism über alles, even if I tend towards more computational descriptions while you are more classical psychology and ethics.
Well, it's not like I have 10-20 years' experience in the theory and practice of artificial intelligence; I have to use some kind of language to describe what I'm talking about and classical psychology and ethics are all I've got.

Plus, IF there are any truths whatsoever in classical psychology and ethics, THEN there ought to be something in play similar to the correspondence principle in quantum mechanics, under which quantum physics reproduces classical physics in the limit of large quantum numbers, many particles, and large separations in time and space.

If (again) there are any truths or useful ideas to be found in classical study of ethics and the mind, then a correct and fully detailed understanding of the real mechanics of intelligence and neurology would reproduce those truths.

Conversely, if such detailed understanding does not reproduce those truths, they were never really truths at all...
This space dedicated to Vasily Arkhipov
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Does having immoral thoughts make someone immoral?

Post by Starglider »

Simon_Jester wrote:Essentially, I'm treating people not as black boxes, but as a poorly understood yet definable mechanism, and trying to use the data available to assess the mechanism in order to generalize its probable behavior under different circumstances
Everyone does that, on an informal day-to-day basis. However, if we're seriously going to do it as part of an objective philosophy (not that the ethical system itself is objective, which as far as we know doesn't exist; I mean creating a non-ambiguous, non-culture-specific description), we need a degree of rigour that is sadly lacking in most ethical debate.
Simon_Jester wrote:but a person who has persistent immoral thoughts, or doesn't understand the reasons why people should behave morally, is more likely to behave immorally and therefore is judged more harshly.
* Simon stared coldly across the table at the student, who had just finished explaining the link between the certainty of young earth creation and the divinely ordained supremacy of the white race. "I am updating my P values", Simon said through thinned lips, "to a direction and degree you will find... most unfavourable."

Admittedly only people from Less Wrong actually talk like that, and 97% of the time they don't actually do the maths, but still :)
Well, it's not like I have 10-20 years' experience in the theory and practice of artificial intelligence; I have to use some kind of language to describe what I'm talking about and classical psychology and ethics are all I've got.
As of 2015, computational and even neurological descriptions aren't always better, because the field is still so immature; there are a lot of areas where, spotty as it is, classical psychology still has more predictive models (it certainly has a much larger body of research). However, for fundamental philosophical discussions a computational description (of interacting causal processes that implement information-processing systems) is usually better, because it is on much firmer ontological grounding, whereas classical psychology and philosophy are basically a bunch of wildly guessed stand-alone models that happened to pass some experimental trials (or just became politically popular for a while, particularly in sociology).
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Does having immoral thoughts make someone immoral?

Post by Simon_Jester »

Starglider wrote:
Simon_Jester wrote:Essentially, I'm treating people not as black boxes, but as a poorly understood yet definable mechanism, and trying to use the data available to assess the mechanism in order to generalize its probable behavior under different circumstances
Everyone does that, on an informal day-to-day basis. However, if we're seriously going to do it as part of an objective philosophy (not that the ethical system itself is objective, which as far as we know doesn't exist; I mean creating a non-ambiguous, non-culture-specific description), we need a degree of rigour that is sadly lacking in most ethical debate.
S'true.

I'm mainly debating here against people who think I shouldn't try- because the thesis of the original poster, which has been compounded by some others, is that it doesn't matter, that other humans should be viewed as black boxes whose inner mental life is irrelevant.

Which must be reassuring if you have problems with empathy or understanding another point of view, I suppose?
Simon_Jester wrote:but a person who has persistent immoral thoughts, or doesn't understand the reasons why people should behave morally, is more likely to behave immorally and therefore is judged more harshly.
* Simon stared coldly across the table at the student, who had just finished explaining the link between the certainty of young earth creation and the divinely ordained supremacy of the white race. "I am updating my P values", Simon said through thinned lips, "to a direction and degree you will find... most unfavourable."

Admittedly only people from Less Wrong actually talk like that, and 97% of the time they don´t actually do the maths, but still :)
Hah.

The main reason for not talking like that is that anyone sufficiently clear-headed to be versed in logic and understand what the hell you just said... you probably wouldn't need to say such things to.

The main reason for not doing the math is, well... not knowing enough information to get the math right.
Well, it's not like I have 10-20 years' experience in the theory and practice of artificial intelligence; I have to use some kind of language to describe what I'm talking about and classical psychology and ethics are all I've got.
As of 2015, computational and even neurological descriptions aren't always better, because the field is still so immature; there are a lot of areas where, spotty as it is, classical psychology still has more predictive models (it certainly has a much larger body of research). However, for fundamental philosophical discussions a computational description (of interacting causal processes that implement information-processing systems) is usually better, because it is on much firmer ontological grounding, whereas classical psychology and philosophy are basically a bunch of wildly guessed stand-alone models that happened to pass some experimental trials (or just became politically popular for a while, particularly in sociology).
Fair. Quantum mechanics and relativity were in a similar position in 1915, if you're inclined to a dash of numerology.

I remain convinced that some time around mid-century, we are going to find a lot of results from AI research and neurology replicating some, but not all, of the conclusions of classical psychology and philosophy. A lot of people will try to pretend nothing has changed, but the parts of the world worth talking to will learn to ignore them much like they ignore relativity debunkers.

And that's even without a singularity, which I don't bother planning for or predicting around because it would be farcically ignorant of the very meaning of the term.
This space dedicated to Vasily Arkhipov
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Does having immoral thoughts make someone immoral?

Post by Starglider »

Simon_Jester wrote:because the thesis of the original poster, which has been compounded by some others, is that it doesn't matter, that other humans should be viewed as black boxes whose inner mental life is irrelevant.
On the contrary, Dominus claimed to be shocked and surprised that some people don't care about the implementation details of someone's goal system (i.e. their 'internal motivations and desires'). As far as I can tell this is because he genuinely does commit the (common) fallacy of assigning thoughts and emotions inherent moral worth, although it is possible he just has a much more confused version of the predictive modelling approach you have just espoused.
The main reason for not doing the math is, well... not knowing enough information to get the math right.
It doesn't actually make sense to 'do the math' on a day-to-day basis. I think a few people (on Less Wrong etc.) do promote working with actual probability calculus on trivial problems (with priors from googled studies or wild guesses) as practice, so that you know how to apply it to big, serious, formal problems. I can kind of see the argument, given some rather horrible errors in probability calculus I made in my first year of working with it. Actually it can be fun with appropriate computer support; someone should make the equivalent of Wolfram Alpha for ethical thought experiments that does the 'googling some relevant stats' and 'do the probability calculus evaluation' steps for you. Maybe they have and I haven't heard of it. If not, maybe I'll make a proof of concept. :)
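For what "doing the maths" would even look like on a trivial case, here is a single application of Bayes' rule in the Less Wrong style; every number below is a made-up prior or likelihood, not data from anywhere:

```python
# Hypothetical numbers throughout: one Bayes-rule update of
# P(untrustworthy) after hearing one suspect remark.
prior = 0.10                # P(H): prior probability the person is untrustworthy
p_e_given_h = 0.80          # P(E|H): chance of such a remark if untrustworthy
p_e_given_not_h = 0.05      # P(E|~H): chance of such a remark otherwise

# Total probability of the evidence, then the posterior via Bayes' rule.
p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
posterior = prior * p_e_given_h / p_e

print(posterior)            # ~0.64: one remark moved the estimate a long way
```

The mechanics are the easy part; the hard part, and the reason the joke above lands, is defending the priors and likelihoods you feed in.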
I remain convinced that some time around mid-century, we are going to find a lot of results from AI research and neurology replicating some, but not all, of the conclusions of classical psychology and philosophy. A lot of people will try to pretend nothing has changed, but the parts of the world worth talking to will learn to ignore them much like they ignore relativity debunkers.
Yes, this is just getting rolling in earnest. There was a bit of a false start with the overgeneralisation and general over-optimism of early symbolic AI research, although to be fair the models weren't any more naive, and were actually more constructive, than the classical psychology models of the era.
SMJB
Padawan Learner
Posts: 186
Joined: 2013-06-16 08:56pm

Re: Does having immoral thoughts make someone immoral?

Post by SMJB »

Dominus Atheos wrote:A thoughtcrime being defined as a thought or motivation someone has that other people (in this case, you) deem immoral according to your personal moral code, but that that person does not act on or communicate.

If so, why?
Dominus Atheos wrote:This continues this thread:

http://bbs.stardestroyer.net/viewtopic.php?f=5&t=163829

"A morality experiment: Actions versus motivations"

A whole bunch of people answered contrary to the way I believe: that only the way you affect others or yourself matters.

So why do you believe that the private thoughts and motivations of a person, whose actions only affect the world around him in a good way, can take away from that?

Remember, for this thought experiment we are only using your own moral compass.
In your first post you talk about "thoughts" and in your second you talk about "motivations". These are two very different things! I have a lot of dark impulses in my head which I do not act on, on account of the fact that I like to imagine that I have these things called "self-control" and "willpower" and "a little goddamn respect for the rights of other people". The darkness in my soul motivates nothing in me except low-key self-loathing.

This thread is a mess, so let's let the fucking psychotic person school you all on morality.

There are three main theories of normative ethics: Consequentialism ("actions which have good results are good"), Utilitarianism ("actions which are intended to have the right results are good"), and Deontology ("acting according to your moral code is good"). Which is right? I think they all are, on different "philosophical levels" (as I call them; I don't know the academic term, if any).

Ultimately, objectively, good results are the only thing that matters--the starving people of the world can't eat your good intentions, after all. Hence, consequentialism is correct. However, the obvious flaw is that you can't tell whether an action's results are good or bad until after the fact, making this useless as a predictive model, which is why you don't generally get "credit" for accidentally doing the right thing. If a serial killer murders a pedophile, it doesn't follow that he will only murder pedophiles.

Therefore another model is necessary, which brings us to the next obvious option: that actions designed to have good consequences are good. This is best encapsulated by the classic moral thought experiment in which a train barrels down some tracks and you have the option to pull the lever that changes the tracks, killing the one person who's tied to it, or to not do so, allowing a dozen to die.

Obviously a moral person will pick the first--only, your options in real life often aren't so clear-cut. Say the one person is tied down tight while the group of people are slowly freeing themselves. Do you still pull the lever, or do you stay your hand and hope that they get free in time? Maybe both tracks lead into tunnels and you've been told that one has one person and another has a group, but you don't actually know. In situations where you can't be reasonably sure of the consequences of actions ahead of time, you need a general guideline to tell you what to do, a series of rules of thumb which lead to right more than they lead to wrong.
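When you can attach rough numbers, the "logic it out" step is just an expected-harm comparison. A sketch of the uncertain lever case above, with a wholly invented escape probability:

```python
# Illustrative only: expected-harm comparison for the uncertain trolley case.
# The escape probability is invented for the sake of the example.
p_group_escapes = 0.7       # chance the dozen free themselves in time
group_size = 12

# Pull the lever: the one tied-down person dies for certain.
expected_deaths_pull = 1.0

# Stay your hand: the dozen die only if they fail to get free.
expected_deaths_wait = (1 - p_group_escapes) * group_size

best = "pull" if expected_deaths_pull < expected_deaths_wait else "wait"
print(expected_deaths_pull, expected_deaths_wait, best)
```

With these numbers, pulling minimises expected deaths (1 versus 3.6), but raise the escape probability above 11/12 and waiting wins; the whole dilemma lives in how confident you can be about that number, which is exactly where the rules-of-thumb fallback earns its keep.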

Ergo, Consequentialism leads naturally to Utilitarianism, which in turn leads naturally to Deontology. Consequentialism defines the goal, Utilitarianism tells you what to do when you can logic it out, and Deontology tells you what to do when, for whatever reason, you can't logic it out. Maybe you're not given enough information (you see two people trying to kill one another and have no idea why), or there isn't any way to meaningfully assess it (if you're given the option of saving one of two people, you're not going to sit down and try to figure out which one is more useful to society).

The only real exception to these is that you are not required to put yourself in danger to do so. Let's return to the classic train exercise, only you're the one person on the alternate track. Do you still pull the lever? If not, that's certainly not good, but nobody thinks you're a monster for not laying down your life to save a bunch of total strangers. People who kill burglars generally aren't vilified, in spite of the fact that murder is an objectively worse crime than theft, because the other option is to just hope the burglar isn't going to hurt them. Hell, depending on the percentage of your wealth that you keep in your house and what the local social safety nets look like, the robbery itself could seriously threaten your survival.
Jub wrote:This entire concept is invalidated by the fact that free will is an illusion: our unconscious thought has already chosen our course, and the conscious brain is left to justify our actions. We're no more free to choose our actions than a computer is; the only difference is that we're programmed by evolution and experience instead of by a development team. This will be true so long as our brains are ruled by physics.
Free will is irrelevant. There are only actions, which can be judged as being good or bad based on the above moral systems, and the people who commit them. Whether or not we are robots who are just typing out our arguments because that's what we are programmed to do or not has nothing at all to do with whether or not the arguments themselves are right.
Starglider wrote:P.S. In light of Jub's apparent ontological crisis, I would like to clarify my statement about free will earlier. Unitary personhood and subjective free will are an illusion, but it's a complex illusion that is supported by a lot of cognitive hardware, which humans put a lot of effort into maintaining. This is because, like all human intuitive models, it has substantial predictive power (of the self, and of others via empathy) and is thus adaptive for typical human social challenges. Free will as a concept is still a useful one, despite being rather fuzzy in practice; it is the basis of moral agency, after all. I was taking issue specifically with the idea of applying free will to thoughts, because the reflective ability to control perceptions, emotions and particularly sequitur generation is much weaker (for nearly all humans) than the ability to control actions. As for goals, people's ideas about why they do things are notoriously unreliable, which makes many of the questions in this thread hopelessly messy. You accuse someone of helping orphans 'just for imagined good karma', but is that the reality or the rationalisation? Perceptions like that are almost always confabulated after the fact; almost no one actually works out expected-utility equations when selecting how to spend their disposable income. Maybe the 'good' person believes they are doing good deeds for their own sake, but if we traced their neural support paths with sufficiently advanced technology, we'd find they're actually doing it due to an emotional addiction to people expressing gratitude, or a deep-seated fear that they will be forgotten when they die. In short, teasing apart human goal systems like this is a messy subject that we're still in the early stages of exploring, but we know enough to say that trying to map this stuff to classical moral/philosophical primitives is a fool's errand. Stick with applying moral weighting to the actual output of the system (i.e. people's words and actions) or you will just descend into badly grounded, inconsistent nonsense.
Well said.
Starglider wrote:* Simon stared coldly across the table at the student, who had just finished explaining the link between the certainty of young earth creation and the divinely ordained supremacy of the white race. "I am updating my P values", Simon said through thinned lips, "to a direction and degree you will find... most unfavourable."
ROFLMFAO! I am so going to change my sig line to this, and then post it on Tumblr.
Simon_Jester wrote:"WHERE IS YOUR MISSILEGOD NOW!?"
Starglider wrote:* Simon stared coldly across the table at the student, who had just finished explaining the link between the certainty of young earth creation and the divinely ordained supremacy of the white race. "I am updating my P values", Simon said through thinned lips, "to a direction and degree you will find... most unfavourable."