TheHammer wrote:Pretty sure he is saying he has already made the decision. The getting drunk part is to help deal with the emotional fallout of enacting that decision, not the making of the decision itself.
If he's that uncertain about it, maybe he should hand off the button to someone else and let
them decide? If it's this self-evident that he's right, and he's this ambivalent about actually doing the dirty work to act on his own convictions, maybe he needs to talk to somebody before making the decision.
Maybe he fixes starving kids in Africa with that time (hey, wouldn't that be great), maybe he doesn't, but either way that's all he gets because, for better or worse, I am fucking terrified of him enough to murder his omnipotent ass.
You seem to be admitting, at least implicitly, that this is
not a logical decision, but that you would make it anyway.
Is that the case?
It is a perfectly logical decision if he believes with a great degree of certainty that at
some point John Smith will become an oppressor rather than a benevolent dictator, and that by then there will be no stopping him. To him the risk outweighs the reward. Sure, the world's problems will still be there if he is killed, but those problems can be solved, through great effort or otherwise, by regular people. But an invulnerable, all-powerful oppressor is a problem that can't be solved.

What I mean is that the terms Raw Shark uses ("terrified," "I would get drunk first") do not fill me with confidence in how much faith he has in his own logic.
Scrib wrote:
Simon_Jester wrote: Maybe this is exactly the sort of tyrant that could work, who knows? Or maybe it wouldn't actually lead to tyranny after all.
Maybe, maybe not. But you can't know until you're in his power. At which point it's too late to do anything about it one way or another. Which is why you try to never let it get that far.
True. He's a person, therefore he's guilty by default, therefore let us kill him.
This really echoes Harrison Bergeron. What would you do if you found out there were going to be
ten superpowered beings emerging next year, instead of just one? What about fifty? A thousand? A million?
At what point would quietly killing them off stop being the appropriate response?
I'm not ignoring the potential for good. I have simply weighed it against the potential for evil and picked another path.
Can you show your derivation?
I mean, it seems like you're saying "With probability X, he might do [finite amount of good]. With probability (1-X), he might do [infinite amount of evil]. Therefore he's a net bad."
Where did you derive the result that the bad outcome is infinitely evil while the good outcome is only finitely good? That premise is central to your reasoning, and you seem firmly convinced of it. I don't share that conviction. Can you explain?
Do you think it's normal for us to say "we should kill people UNLESS we are absolutely certain that their existence is a net positive?"
Your life must be terribly exciting if this is anywhere near a "normal" scenario.
OK. So your argument is that this is
not a valid philosophical principle to use in general; it's only this guy who should be killed because we're not sure we'll be better off for having him around. Because he's too powerful, so if we can't be sure he's a good thing he should be preemptively eliminated.
Is that right?
Given how real celebrities are treated, if John Smith really is super-intelligent he won't have much of a problem. Even people whose contributions to the world are limited to showing up in movies or football games are usually wildly popular, and become disliked only if they turn out to be raging assholes or drug addicts, or to be cheating on their spouses.
How much more popular would someone be who did real good things every day, and who could credibly claim godlike power if they wanted to?
Which is why, besides some bleating about the unfairness of capitalism, no one expects much from them. Depending on how effective a hero he is, he will certainly be loved. Universally? Nope. The pressure on him is not even comparable.

I don't think he'll be uniformly loved. I do think that he'll get enough affection to satisfy the desires of almost any near-human psychology.
Bullshit? Sorry, your argument requires that I buy into your notions. Without that, it's just the optimistic path dressed up.
...What notions? That human beings don't automatically default to evil?
Not that. The whole list, and the attempt at some rigorous formulation of human behavior that would have the metahuman acting exactly in a way that would validate his argument.

I don't expect that a superman would do no harm. But on balance, if all the evidence suggests he's no worse than average, and he's been given these immense abilities... I would expect net good and act on that basis.
What exactly makes it so hard for you to believe that a person with nothing to fear would do good, or at least net good? Why did you even start this thread if you are that certain you already know the answer? It's nuts.
I honestly wasn't sure when I made the thread; like I said, I've been on the other side. Or perhaps I simply was and didn't know it? And anyway, it's not like one has to be on the fence to want to see opposing viewpoints, no?
I'm sorry; perhaps I was not privy to the process by which you went from being uncertain to being profoundly certain.
Could you enlighten me? I'm sure there's more to it than what you've said so far; see above.