Does transhumanism have a carelessness problem?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

SolarpunkFan
Jedi Knight
Posts: 586
Joined: 2016-02-28 08:15am

Does transhumanism have a carelessness problem?

Post by SolarpunkFan »

There's been something troubling me lately. I consider myself a transhumanist, but looking at some other people in the movement makes me uneasy.

Quite simply, there seems to be a large number of big-L Libertarians and Randian types in the movement. And I'm not alone in thinking this. Here is a thread pertaining to this thought. The opening post:
I’ve been around and interviewed quite a lot of self-identified
transhumanists in the last couple of years, and I’ve noticed many of them
express a fairly stark ideology that is at best libertarian, and at worst
Randian. Very much “I want super bionic limbs and screw the rest of the world”.
They tend to brush aside the ethical, environmental, social and political
ramifications of human augmentation so long as they get to have their toys.
There’s also a common expression that if sections of society are harmed by transhumanist
progress, then it is unfortunate but necessary for the greater good (the greater
good often being bestowed primarily upon those endorsing the transhumanism).
And it doesn't seem to be limited to nobodies, either. Ray Kurzweil brushed off the possibility of the poor being unable to afford life extension by essentially saying that "cell phones are affordable now, so life extension will be in the future". This disregards the fact that if AI gets smart enough (or even falls an order of magnitude short) within his timeframe, there might be an unemployment crisis.

I'm not sure if this will change in the future, but right now there seems to be a big carelessness problem in transhumanism.
Seeing current events as they are is wrecking me emotionally. So I say 'farewell' to this forum. For anyone who wonders.
Khaat
Jedi Master
Posts: 1034
Joined: 2008-11-04 11:42am

Re: Does transhumanism have a carelessness problem?

Post by Khaat »

At root, these folks have personal advancement (/advantage/opportunity) at heart, not the advancement of humanity. But to be fair, most are rolling along in a consumer society that offers New Better Toys, and this is the "brand" they like. Do they care whether Tom, Dick, or Harry gets the New Better Toys? Why would they? They've convinced themselves they're looking at "the big picture", while spanking it to their fembot pin-ups.

Y'know, unless I misread "transhumanism"....
Transhumanism (abbreviated as H+ or h+) is an international and intellectual movement that aims to transform the human condition by developing and creating widely available sophisticated technologies to greatly enhance human intellectual, physical, and psychological capacities.
No, that's what I thought it was.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Grumman
Jedi Council Member
Posts: 2488
Joined: 2011-12-10 09:13am

Re: Does transhumanism have a carelessness problem?

Post by Grumman »

SolarpunkFan wrote:This disregards the fact that if AI gets smart enough (or even falls an order of magnitude short) within his timeframe, there might be an unemployment crisis.
You say "unemployment crisis," I say "post-scarcity utopia." But you do have a valid concern here - the transition to a society where labour is simply not needed in the quantities necessary to maintain near-100% employment must be recognised as such, and compensated for with a living wage financed by that same increase in efficiency.

Your other complaint - that augmentation is unfair to those who are baseline human - I think is abhorrent. It is obscene to argue that a person has an obligation to die, not even so that someone else can live, but just because you resent anyone who has it better than yourself.
SolarpunkFan
Jedi Knight
Posts: 586
Joined: 2016-02-28 08:15am

Re: Does transhumanism have a carelessness problem?

Post by SolarpunkFan »

Grumman wrote:Your other complaint - that augmentation is unfair to those who are baseline human - I think is abhorrent. It is obscene to argue that a person has an obligation to die, not even so that someone else can live, but just because you resent anyone who has it better than yourself.
Well, what I think would be good is that augmentation (at least life-saving augmentation) is available to everyone. Sorry if I seemed to be implying otherwise.
Seeing current events as they are is wrecking me emotionally. So I say 'farewell' to this forum. For anyone who wonders.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Does transhumanism have a carelessness problem?

Post by Simon_Jester »

The logistics problems of taking a technology that is hard to do, in the sense that orbital spaceflight is hard to do today, and making it universal, are... significant. Hell, we really only reached the point of being able to supply clean water and electricity to the world population some time in the last fifty years or so,* despite both technologies having been developed more like a hundred and fifty years ago.

So yes, there is almost inevitably going to be a major availability gap. The only thing acting to prevent this gap from lasting for centuries would be the rise of economic productivity to levels so massive ("post-scarcity") that it becomes trivially easy to supply things for everyone.

There are a lot of potential pitfalls along the road to reaching that point.
_________________

*This is not to say we do, only that we could, if we made it enough of a priority. Getting clean water to every human today is at least a plausible dream. Accomplishing the same goal in 1916 would have been a pipe dream; too much of the world lived in peasant villages and the mass production of water purification equipment was simply not feasible on the needed scale.
This space dedicated to Vasily Arkhipov
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Does transhumanism have a carelessness problem?

Post by K. A. Pital »

As one of the few red transhumanists out there, I'm pretty sure that "blue" and "white" transhumanists are ugly reactionaries.

BTW, this thread proves it once again.
Grumman wrote:Your other complaint - that augmentation is unfair to those who are baseline human - I think is abhorrent. It is obscene to argue that a person has an obligation to die, not even so that someone else can live, but just because you resent anyone who has it better than yourself.
Is it abhorrent, or is the splitting of humanity into rich immortal overlords (long-living, LL) and poor mortal underlings (short-living, SL), as foretold by one of the fathers of Eastern science fiction, abhorrent? I think the splitting is more abhorrent than viewing the advantage for the ultra-rich as "unfair". It's as if they don't have enough advantages already:

http://edition.cnn.com/2015/11/10/healt ... sus-needy/

Enjoy your "utopia". :lol:
Simon_Jester wrote:The only thing acting to prevent this gap from lasting for centuries would be the rise of economic productivity
Or maybe there need not be a gap at all. The gap is a direct product of the economic order and the immense gap between the all-powerful and those not.
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and biting cold
There: swamps, threats, snipers with rifles, papers, nighttime queues and illegal migrants
Here: encounters, struggles, synchronized steps, colors, unauthorized gatherings,
Migratory birds, networks, information, everyone's squares mad with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
NecronLord
Harbinger of Doom
Posts: 27380
Joined: 2002-07-07 06:30am
Location: The Lost City

Re: Does transhumanism have a carelessness problem?

Post by NecronLord »

Grumman wrote:You say "unemployment crisis," I say "post-scarcity utopia."
Because land ownership and rental price trends are going to be done away with how? Otherwise the non-asset-owners will still have to rent or buy places to live from the landowners, and they will need to find jobs to have somewhere to live.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Does transhumanism have a carelessness problem?

Post by Starglider »

Transhumanists tend to see themselves in opposition to an irrational, easily scared, unwilling-to-learn majority, who in turn enable a ruling class of worthless bureaucrats who try to suffocate all progress with pointless oppressive regulation. This is not fundamentally different from LGBTQIETC seeing themselves as in opposition to an irrational, easily scared, unwilling-to-learn 'moral majority', who in turn enable a ruling class of worthless religious conservatives who try to suffocate all alternative lifestyles with pointless oppressive legislation. There is just more technology involved.

Personally I find most cyberpunk 'dystopias' to still be an improvement (as in, more interesting to be part of) than the status quo. The problem with transhumanism is not that the future will have exciting new caste systems, it's the additional existential risks of there being no people at all, oppressed or otherwise.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Does transhumanism have a carelessness problem?

Post by Simon_Jester »

K. A. Pital wrote:As one of the few red transhumanists out there, I'm pretty sure that "blue" and "white" transhumanists are ugly reactionaries.
Out of curiosity, could you define 'blue' and 'white' in this context? I know what 'white' means in the context of the Russian Civil War, but that's about all.
BTW, this thread proves it once again.
Grumman wrote:Your other complaint - that augmentation is unfair to those who are baseline human - I think is abhorrent. It is obscene to argue that a person has an obligation to die, not even so that someone else can live, but just because you resent anyone who has it better than yourself.
Is it abhorrent, or is the splitting of humanity into rich immortal overlords (long-living, LL) and poor mortal underlings (short-living, SL), as foretold by one of the fathers of Eastern science fiction, abhorrent? I think the splitting is more abhorrent than viewing the advantage for the ultra-rich as "unfair". It's as if they don't have enough advantages already:

http://edition.cnn.com/2015/11/10/healt ... sus-needy/

Enjoy your "utopia". :lol:
This is a legitimate point.

The converse is that, empirically, there is likely to be a problem of scarcity: if we have a choice between immortality for 5% of the population and mortality for 95%, versus mortality for 100%, and those are the only options on the table, should we always choose mortality for 100%?

That is very counterintuitive for people who adopt a utilitarian system of values, so it merits some investigation.
Simon_Jester wrote:The only thing acting to prevent this gap from lasting for centuries would be the rise of economic productivity
Or maybe there need not be a gap at all. The gap is a direct product of the economic order and the immense gap between the all-powerful and those not.
Destroying the power of the elite will not prevent the gap I am referring to from existing.

If the resources of a civilization are inadequate to the task of providing everyone with a certain good, then either no one will have the good, or some will have it and some will not.

Either everyone is equally badly off, or some are doing all right while others are doing worse. Dethroning capitalists (or whoever the elite is in your society) won't actually change this; all it can do is redistribute the finite resources of a zero-sum game. In and of itself, that is all that happens.

What actually fixes the problem is development.
____________________

For example, in 1775 it was impossible for the Russian economy to supply all Russians with comfortable food, shelter, education, and medical care. This could not happen. There were only two realistic possibilities given the state of development of Russia at that time.

First, one could have had a state in which all Russians were subsistence farmers. This is not the option which existed at the time, clearly.

Or, second, one could have had a state in which almost all Russians were subsistence farmers (but worse off due to taxation) while a small minority lived in comfort and ease. You know even better than I that this is the option which actually existed.

One can reasonably argue that the first possibility would have been better than the second. Many political thinkers have done so.

However, in the context of 1775-era Russia, there was no third possibility for the foreseeable future. There was no option "have all Russians live as well as the nobles do now." It was literally not possible due to the level of economic development in the country at the time. Even if all power had been immediately transferred to a peasant republic governed along the most ideal lines imaginable, there would still have been millions of peasants living in (by modern standards) poverty and squalor, for generations.

In relative terms they would be better off, but there would be no sudden magical appearance of universal wealth. Creation of massive wealth requires development. Fixing the government would not automatically develop the country.
___________________________

By contrast, in 1975-era Russia, the third possibility had (in many senses) become real. There was enough food for everyone, there was adequate housing that was warm in winter so that people did not freeze to death. There was education. There was medical care. All these things were now universally or near-universally available.

The government had been fixed, in the sense that the problems of 1775 no longer existed. Development had occurred. But it was not fixing the government or removing the elite that was the proximate cause of the appearance of all this wealth. The proximate cause was still development.
___________________________

Likewise, if we were to develop an immortality treatment tomorrow, the odds are it would be immensely expensive and difficult, requiring the labor of many highly trained specialists and large, dedicated facilities to produce the drugs or other agents needed to make it happen. The resources of the globe would not be adequate to supply the treatment to all humans, any more than they are adequate to supply a vacation in an orbiting hotel to all humans. We could not do that, even if we had a government which genuinely desired to do so, and if we were not ruled by an elite.

So there would only be two options.

Option one is "no one is immortal; the treatment exists but is forbidden to all."

Option two is "some are immortal, and some are not."

Option three, "all are immortal" would require much time and development to become possible. Just as it took time and development to create a condition of "all have access to clean water" or "all have food."

And until that time, we are still stuck with the choice "who becomes immortal, and who does not?" We will be able to answer "these people here" or "those people there" or "nobody," but we can't answer "everybody."
___________________________

It is straightforward enough to cry out "no gods, no masters!" and burn down the clinic providing immortality to the most privileged class in your society. There are very understandable reasons to do so.

However, if you make a pattern of doing this every time someone attempts to provide immortality, it is entirely possible that the secret of immortality will never become available to anyone. Because the infrastructure to create it on the necessary scale cannot be created in toto to supply all the billions of people on Earth, without first coming into existence to supply smaller numbers of people in certain times and places.

I am genuinely curious as to whether you would find that an acceptable price or not.
This space dedicated to Vasily Arkhipov
SolarpunkFan
Jedi Knight
Posts: 586
Joined: 2016-02-28 08:15am

Re: Does transhumanism have a carelessness problem?

Post by SolarpunkFan »

Simon_Jester wrote:If the resources of a civilization are inadequate to the task of providing everyone with a certain good, then either no one will have the good, or some will have it and some will not.
True. But there might also be the possibility that manufacturing becomes efficient enough to do so (something like the microdimensional mastery Kardashev scale), then it might not be a problem.

Granted, such a thing isn't inevitable, or maybe even likely, but it's something to think about.
Seeing current events as they are is wrecking me emotionally. So I say 'farewell' to this forum. For anyone who wonders.
Grumman
Jedi Council Member
Posts: 2488
Joined: 2011-12-10 09:13am

Re: Does transhumanism have a carelessness problem?

Post by Grumman »

K. A. Pital wrote:BTW, this thread proves it once again.
Grumman wrote:Your other complaint - that augmentation is unfair to those who are baseline human - I think is abhorrent. It is obscene to argue that a person has an obligation to die, not even so that someone else can live, but just because you resent anyone who has it better than yourself.
Is it abhorrent, or is the splitting of humanity into rich immortal overlords (long-living, LL) and poor mortal underlings (short-living, SL), as foretold by one of the fathers of Eastern science fiction, abhorrent? I think the splitting is more abhorrent than viewing the advantage for the ultra-rich as "unfair". It's as if they don't have enough advantages already:

http://edition.cnn.com/2015/11/10/healt ... sus-needy/

Enjoy your "utopia". :lol:
So your argument is... that because the supply of organs is insufficient to save everyone, it's better to throw them all away? Am I getting that right?
Khaat
Jedi Master
Posts: 1034
Joined: 2008-11-04 11:42am

Re: Does transhumanism have a carelessness problem?

Post by Khaat »

I'm hoping his argument is "the supply of organs is insufficient to save everyone, so give it to those most deserving", like current transplant boards do (still a drinker? no liver for you! still a smoker? no lungs for you!)
The elite don't get jumped to the head of organ transplant waiting lists Because Money, they have organs stolen from vacationing tourists in Mexico, like everyone else.... :wink:

If, before we start making people immortal, we have a serious reorganization of how people live and die (and make more people), we might make it out the other side. Otherwise it isn't transhumanism, it's the bastard love-child of... something and something else really bad in combination.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Does transhumanism have a carelessness problem?

Post by K. A. Pital »

Grumman wrote:So your argument is... that because the supply of organs is insufficient to save everyone, it's better to throw them all away? Am I getting that right?
You are getting it wrong. The most needy should get it, on a first-come, first-served basis, regardless of income.
Starglider wrote:Transhumanists tend to see themselves in opposition to an irrational, easily scared, unwilling-to-learn majority, who in turn enable a ruling class of worthless bureaucrats who try to suffocate all progress with pointless oppressive regulation. This is not fundamentally different from LGBTQIETC seeing themselves as in opposition to an irrational, easily scared, unwilling-to-learn 'moral majority', who in turn enable a ruling class of worthless religious conservatives who try to suffocate all alternative lifestyles with pointless oppressive legislation. There is just more technology involved.
It is true that transhumanists hold other people in deep contempt. They are in "opposition" to the people and often hostile to the very idea of democracy. They are basically a refined and concentrated version of the technocratic, Bolshevik concept of a "vanguard elite" - an elite squared, as they imagine themselves an elite that is hostile to the very people it is supposed to rule with its superior knowledge and rationality. The concepts discussed in the reactionary subset of these closely-knit communities are pretty much techno-fascism and nothing else.
Starglider wrote:Personally I find most cyberpunk 'dystopias' to still be an improvement (as in, more interesting to be part of) than the status quo. The problem with transhumanism is not that the future will have exciting new caste systems, it's the additional existential risks of there being no people at all, oppressed or otherwise.
Thanks for being candid... about yourself. I find it curious - in a very morbid way, of course - that you find caste systems exciting. It is rather like free people who romanticise the criminal world, jails and prisons.
Simon_Jester wrote:Out of curiosity, could you define 'blue' and 'white' in this context? I know what 'white' means in the context of the Russian Civil War, but that's about all.
White - monarchist and (techno)fascist leanings. Blue - nationalist-liberal. Red - republican, left-wing. The same split is present in the transhumanist community (and its national versions). There are transhumanists who worship technology but despise the idea of collective rule by consensus, of democracy. These are basically techno-fascists, though sometimes they like to dress up in blue clothes to be more acceptable to the commoners (after all, the blue nationalist-liberal ideas form the mainstream centrist consensus in the majority of nations even now). The blue part is less reactionary, because some liberals are not so totally crazy about the Deus Ex machina as to start sacrificing the perceived core principles of their own movement, but it is also full of people who are completely unable to say anything against the corporate boot stomping the face. Until it stomps their face, of course - but if you look at it seriously, no corporation would be willing to alienate its very own "useful idiots" for no reason.
Simon_Jester wrote:This is a legitimate point.

The converse is that, empirically, there is likely to be a problem of scarcity: if we have a choice between immortality for 5% of the population and mortality for 95%, versus mortality for 100%, and those are the only options on the table, should we always choose mortality for 100%?

That is very counterintuitive for people who adopt a utilitarian system of values, so it merits some investigation.
"The Ones Who Walk Away From Omelas" also presents a very counterintuitive problem for people who adopt a purely utilitarian system of values or even think their vulgar utilitarianism is a fine moral system that will win them debates and solve issues of any complexity, except it is written much better than I could ever write.

It does merit some investigation, so why not investigate? What actually changes for the 95% of mortals if 5% become immortal? Well, first of all, the 5% are likely to become their overlords. Being immortal, they can accumulate knowledge, wealth, power and influence far more effectively than even a human dynasty of kings with mandatory education for their princes could. Next, the gap between mortals and immortals grows. Just as an immortal doesn't care about the problems of those dropping dead like flies, so the mortal has no benefit from having his immortal Master around.

I always look at things from the perspective of the dispossessed and oppressed. What actually changes for them? Their rulers get more time, money and power. They don't get anything (except watching their Masters go on living gleefully while they turn into old cripples) and simply die. I am pretty sure they wouldn't understand the logic of this deal, and neither do I. The maximization of the possible pleasures of the 5% immortal elite does not concern me at all.
Simon_Jester wrote:If the resources of a civilization are inadequate to the task of providing everyone with a certain good, then either no one will have the good, or some will have it and some will not.
What is to me the happiness of a "have" when I am a "have-not"? Nothing. There's no such thing as a monolithic "civilization". That aside, if the civilization has invented a very dangerous technology that has the potential to even further tip the balance of class power (like, for example, immortality or AI), a prudent civilization would try to implement some safeguards before this technology rapidly changes it for the worse.

Certain things are dangerous even as ideas, and the idea of unequal immortality is a very dangerous idea. It is the creation of God.

As an atheist, I am not so much worried about the fact that the Biblical God doesn't exist. I am much more worried by the fact that people can create gods themselves, and much like the Greek gods or the Christian god and his angels, they will be human-like, having all the human vices (perhaps even more than the average human, as power corrupts), but they will also ascend to godhood. Vengeful, sadistic, callous masters will live forever. What's not to like about this idea, right? They will have their Heaven and their Hell, and the books about wars between gods and men will no longer seem like mere fiction.
Simon_Jester wrote:However, if you make a pattern of doing this every time someone attempts to provide immortality, it is entirely possible that the secret of immortality will never become available to anyone.
Perhaps humanity should prove itself worthy of the secret first. All too easily, after one of the biggest bloodlettings in history, were humans given the power of the atom. The scientists, at least some of them, probably did not feel so good about this. Many betrayed the project and gave the data to the other superpower. Why did they do it? They let the most dangerous technology spread in the hope of balancing out the granting of absolute power to a single agent. How dangerous can immortality be? Tyrants die, and with their deaths comes change. Even nations collapse to form other nations and organizations. The potential for abuse is much greater here because the danger is much less obvious to the commoner. He understands that a nuke will vaporize him, but he can't foresee an eternity of slavery under immortal overlords. Doesn't seem like a huge danger, right? He'll have HDTV.
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and biting cold
There: swamps, threats, snipers with rifles, papers, nighttime queues and illegal migrants
Here: encounters, struggles, synchronized steps, colors, unauthorized gatherings,
Migratory birds, networks, information, everyone's squares mad with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Does transhumanism have a carelessness problem?

Post by Starglider »

In the long-term, large-scale view, the differences between local human social groupings are not very significant. They are often fascinating in the anthropological sense, but the absolute difference in welfare between rich and poor, on a scale from caveman to immortal post-scarcity post-human, is not great. Delays of mere decades in technology availability are particularly insignificant; less than a generation, when over a hundred thousand generations of sapient hominids have existed on Earth. When considering the implications of this class of technological advance, the really important problems are the ones that can cause major damage across millennia or more of future time, on a planetary scale and up.

Nuclear weapons were the first single technology with truly existential risks, although the risk was probably to civilisation rather than to humanity as a species (debates on the total extinction potential of a height-of-the-cold-war superpower exchange continue). Biological weapons had a slower start but the potential to progress to a very high probability of extinction (given a sufficiently apocalyptic philosophy behind the designers). Climate change is not a single technology but an overall effect of the local-maxima-seeking industrial development path, with moderate existential risk but a much higher chance of occurring, due to its highly incremental, deniable and non-deliberate mechanism. Nanotechnology and AI are emerging existential threats with even higher upper ceilings on their negative effects; recognition of this has been slowly climbing over the last few decades.

Some of these things can be usefully regulated (biotechnology, climate-altering industry, nanotech to a limited extent), some of them can be politically negotiated (nuclear and biowarfare) and some of them cannot (AI).

As an aside, transhumanists do not seem any more opposed to democracy than humanist liberals, on average. Indeed, they are often for voting-system reform. As I said, disliking the mass of humanity and wishing to disregard the victories of political factions you don't like is a pastime of all flavours of intellectual, including most of the ones on this forum.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Does transhumanism have a carelessness problem?

Post by Simon_Jester »

K. A. Pital wrote:"The Ones Who Walk Away From Omelas" also presents a very counterintuitive problem for people who adopt a purely utilitarian system of values or even think their vulgar utilitarianism is a fine moral system that will win them debates and solve issues of any complexity, except it is written much better than I could ever write.

It does merit some investigation, so why not investigate? What actually changes for the 95% of mortals if 5% become immortal? Well, first of all, the 5% are likely to become their overlords. Being immortal, they can accumulate knowledge, wealth, power and influence far more effectively than even a human dynasty of kings with mandatory education for their princes could. Next, the gap between mortals and immortals grows. Just as an immortal doesn't care about the problems of those dropping dead like flies, so the mortal has no benefit from having his immortal Master around.

I always look at things from the perspective of the dispossessed and oppressed. What actually changes for them? Their rulers get more time, money and power. They don't get anything (except watching their Masters go on living gleefully while they turn into old cripples) and simply die. I am pretty sure they wouldn't understand the logic of this deal, and neither do I. The maximization of the possible pleasures of the 5% immortal elite does not concern me at all.
Then the question that arises is one of timescale. For example, if immortality is introduced to 5% of the population, and then not introduced to the other 95% for centuries, you have this persistent imbalance.

Conversely, if immortality is introduced to 5% of the population, and then to the other 95% of the population ten or twenty years later, the scenario you describe does not occur, and things are not so bad.
Simon_Jester wrote:If the resources of a civilization are inadequate to the task of providing everyone with a certain good, then either no one will have the good, or some will have it and some will not.
What is to me the happiness of a "have" when I am a "have-not"? Nothing. There's no such thing as a monolithic "civilization". That aside, if the civilization has invented a very dangerous technology that has the potential to even further tip the balance of class power (like, for example, immortality or AI), a prudent civilization would try to implement some safeguards before this technology rapidly changes it for the worse.
Understandable.

So do you favor deliberately preventing anyone from having access to significant anti-agathic technology, such as might potentially confer immortality, until such time as everyone has access to it?

I intend this as a neutral question.
This space dedicated to Vasily Arkhipov
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Does transhumanism have a carelessness problem?

Post by Starglider »

We are indeed fortunate that the opinion of random bitter internet nihilists has absolutely no impact on the progress of technological development, or rather the correlation is so low as to round to zero in any practical model. The (mean) individual ability to accelerate development of transhumanist-relevant technologies, be it by direct contribution, advocacy, education or donations to research groups, is far greater than the ability to retard development. This is why selective support of the most preferable technologies and outcomes is so much better as a strategy than luddism and denial.
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Does transhumanism have a carelessness problem?

Post by K. A. Pital »

Starglider wrote:In the long term, large scale view the differences between local human social groupings are not very significant.
Said about millions of years he whose lifetime is measured in decades.
Starglider wrote:They are often fascinating in the anthropological sense, but the absolute difference in welfare between rich and poor on a scale of caveman to immortal post-scarcity post-human is not great.
When picking enemies, I'd pick the caveman and not a cyborg godlike creature. Picking allies, I'd pick an equal. Picking an "ally" who stands higher than yourself makes you the servant of that ally. I don't want to serve rich ubermenschen. But you can, of course, keep hoping the scraps falling from the table land right where you are.

The plight of the luddites was a lack of understanding that their enemy was the man controlling the machine - just as now the enemy are those who control the resources, the technologies and with it - the future. We are indeed fortunate in that we know in great detail how every nut, every bolt comes to be and how much human suffering is spent every minute producing something you mindlessly consume. Enjoy while it lasts. Even the gods die one day.

I support the development and use of technology by the general public. But I oppose its privatization and the creation of property from technology. In your future, the future itself is privatized, bought and planned by a bunch of rich assholes from enclosed compounds.
Starglider wrote:As an aside, transhumanists do not seem any more opposed to democracy than humanist liberals, on average. Indeed they are often for voting system reform.
Reform into what, though?
Starglider wrote:As I said, disliking the mass of humanity and wishing to disregard the victories of political factions you don't like is a pastime of all flavours of intellectual, including most of the ones on this forum.
Disliking the mass of humanity is something you can use either to uplift others or to keep others down, like now. Somehow I haven't been convinced that the minds of the "transhumanists" are pure here. Especially as the most vocal members are corporate hacks, richie richies and other people with whom I won't shake hands when saying hi.
Simon_Jester wrote:Then the question that arises is one of timescale. For example, if immortality is introduced to 5% of the population, and then not introduced to the other 95% for centuries, you have this persistent imbalance.

Conversely, if immortality is introduced to 5% of the population, and then to the other 95% of the population ten or twenty years later, the scenario you describe does not occur, and things are not so bad.
The immortals would certainly be glad to learn that Joe Nobody got for free something they've spent their entire fortunes on. :lol: Or not. And if the investment in immortality devalues to zero within 20 years, that raises the question of why you'd even start making them immortal instead of waiting 15-20 years and then making everyone immortal.
Simon_Jester wrote:So do you favor deliberately preventing anyone from having access to significant anti-agathic technology, such as might potentially confer immortality, until such time as everyone has access to it?
Yes. They already have "anti-agathic technology" as it is now: transplants, plastic surgery, replacement of this and that. Add to this the 3D-printing of body parts, and there are options to prolong your life which can't even be stopped. But giving full immortality to the elite? Nah.
Lì ci sono chiese, macerie, moschee e questure, lì frontiere, prezzi inaccessibile e freddure
Lì paludi, minacce, cecchini coi fucili, documenti, file notturne e clandestini
Qui incontri, lotte, passi sincronizzati, colori, capannelli non autorizzati,
Uccelli migratori, reti, informazioni, piazze di Tutti i like pazze di passioni...

...La tranquillità è importante ma la libertà è tutto!
Assalti Frontali
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Does transhumanism have a carelessness problem?

Post by Simon_Jester »

To be fair, I can think of a number of reasons why it might not be possible to immortalize everyone in a hurry. It might not be economically feasible to apply the relevant treatments to more than, oh, 5% of humanity per year. Or it might take decades to build up the infrastructure from something capable of treating a handful to something for the masses.
User avatar
biostem
Jedi Master
Posts: 1488
Joined: 2012-11-15 01:48pm

Re: Does transhumanism have a carelessness problem?

Post by biostem »

Perhaps I'm missing something here - imagine we developed performance-enhancing drugs that had no negative side effects. Would it be immoral or unethical to require that people pay for them in order to gain access in the first place? I'm assuming that there'd be some cost/scarcity associated with any of these "immortality treatments", after all...
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Does transhumanism have a carelessness problem?

Post by K. A. Pital »

biostem wrote:Perhaps I'm missing something here - imagine we developed performance enhancing drugs that had no negative side effects - would it be immoral or unethical to require people pay for them, in order to gain access to them in the first place? I'm assuming that there'd be some cost/scarcity associated with any of these "immortality treatments", after all...
There's a cost/scarcity problem now. A lot of people have barely enough to eat and humans' development is retarded, while even retards are well cared for here in the Earthly Paradise of the semi-deindustrialized, cleaned-out West. What you say is that we should make this divide even more staggering because some people in the future may be better off. Good attempt, but I'm not buying.

There are some sympathetic faces in the movement whom I find interesting (Hank Pellissier, for example - that person has my respect).

But there are also others, which actually made me come to the conclusions voiced in the OP of this thread way earlier. Let me list a few.

Ronald Bailey - American libertarian, a right-winger who denied global warming (when it turned out to be undeniable for his scientifically-minded flock, the Preacher of Deus Ex Machina of course changed his views) and said Obamacare - a shitty universal healthcare scheme, but better than nothing, I suppose - is like dictating to people what type of food to buy. By the way, he voted for Bush twice. So much for Starglider's silly jab that transhumanists are in "opposition" to the dumb conservative masses. :lol:
In his 1993 book, Ecoscam, and other works, Bailey criticized claims that CFCs contribute to ozone depletion and that human activity was contributing to global warming. ... The position taken in his 1995 book, The True State of the Planet has been described as "growth forever" or "Promethean" arguing for unrestrained exploitation based on assumptions of unending nature, value derived exclusively from man's changes to material, and exceptional human resourcefulness.
Bailey voted for George W. Bush in both 2000 and 2004, a fact which he later wrote made him "disheartened and ashamed."
On August 25, 2010, Reason TV published a video titled "Wheat, Weed and Obamacare: How the Commerce Clause Made Congress All-Powerful" as part of an effort to question the constitutionality of the Patient Protection and Affordable Care Act (PPACA), also known as Obamacare.[3] The video has been credited with popularizing the argument in conservative circles that PPACA's individual mandate to buy health insurance is constitutionally equivalent to requiring consumers to buy particular types of fruits or vegetables.[3] This argument was ultimately articulated by Justice Antonin Scalia, who suggested during oral argument of the PPACA cases that if Congress has the power to require Americans to buy health insurance, then "Therefore, you can make people buy broccoli."
Raymond Kurzweil - a person in the employ of Google who is obsessed with surviving long enough to reach immortality (probably his entire life is now devoted to gathering enough resources to that end), and careless - or desperate? - enough to support the blind development of corporately-owned technology. He also teaches corporate bosses and government officiosos how to capture the potential of emerging technologies:
In February 2009, Kurzweil, in collaboration with Google and the NASA Ames Research Center in Mountain View, California, announced the creation of the Singularity University training center for corporate executives and government officials. The University's self-described mission is to "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity's grand challenges"
Max More tells the flock to prepare the monies - Deus Ex Machina loves the rich, just as any other fake god:
...give serious and regular thought to making money, preferably doing something you love that’s pro-transhumanism. Building personal wealth makes you independent, allows you to afford more expensive early-stage technologies, and means you can afford to fund worthy research efforts. Not all of us excel at making money, but it should never be disparaged in favor of more intellectually “pure” choices.
I have long since repudiated the “libertarian” label as inadequate to describe my economic and political views. Even so, I think the best answers to economic matters almost always reside in the smart design and use of markets rather than in direct government intervention.
Hope you aren't a thalidomide baby, Maxie! :lol:

And these are just a few of the more vocal speakers and "founder figures".

As for the "flock" - I'd say in the US and many other nations it is 70% libertarian types who want their shiny new cyborg bodies but don't give two shits about the welfare of others. Generally attracts richie heirs, middle-class IT specialists, young middle managers - that type of person.

The other 30% of "unconventionals" who aren't conforming to the yuppie-libertarian narrative are probably the hope of the movement, but truth be told, its mainstream is probably beyond repair.
Simon_Jester wrote:To be fair, I can think of a number of reasons why it might not be possible to immortalize everyone in a hurry. It might not be economically feasible to apply the relevant treatments to more than, oh, 5% of humanity per year. Or it might take decades to build up the infrastructure from something capable of treating a handful to something for the masses.
Yeah. But 5% per year is turbo-speed: within 20 years you'd immortalize everyone who can be immortalized. If it stretches further into the future, the danger of a gap increases. It might also be that the immortality investment simply doesn't go down in value (homes certainly aren't more affordable to the working class :lol: ) and it remains a rich thing only.
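A quick sanity check on the rollout arithmetic in Simon_Jester's hypothetical - a minimal sketch assuming a constant rate applied to a fixed population (the 5%-per-year figure is his; everything else here is illustrative):

```python
def coverage(rate_per_year, years):
    """Cumulative share of the population treated after `years` years
    at a constant rollout rate, capped at 100%."""
    return min(1.0, rate_per_year * years)

# At 5% of humanity per year:
for y in (10, 15, 20):
    print(f"after {y} years: {coverage(0.05, y):.0%} covered")
```

At a constant 5% per year, full coverage takes twenty years; after ten, half the population is still waiting.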
Adam Reynolds
Jedi Council Member
Posts: 2354
Joined: 2004-03-27 04:51am

Re: Does transhumanism have a carelessness problem?

Post by Adam Reynolds »

K. A. Pital wrote:
I think the best answers to economic matters almost always reside in the smart design and use of markets rather than in direct government intervention.
Hope you aren't a thalidomide baby, Maxie! :lol:
While I don't disagree with your general points regarding wealth inequality, there is not necessarily anything wrong with this particular idea, as long as governments play a significant role in designing that market in the first place. A properly designed market wouldn't have problems like thalidomide poisoning, as it would have incentives in place to limit such problems.

Direct intervention is vastly less efficient, as it can only occur after the fact, as was seen with the lousy response to the 2008 recession (not that there weren't other problems there, like the fact that the person in charge of the response had direct ties to one of the companies involved).
User avatar
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: Does transhumanism have a carelessness problem?

Post by madd0ct0r »

Adam Reynolds wrote:
K. A. Pital wrote:
I think the best answers to economic matters almost always reside in the smart design and use of markets rather than in direct government intervention.
Hope you aren't a thalidomide baby, Maxie! :lol:
While I don't disagree about your general points regarding wealth inequality, there is not necessarily anything wrong with this particular idea, as long as governments play a significant role in designing that market in the first place. A properly designed market wouldn't have problems like thalidomide poisoning as it would have incentives in place to limit problems like this

Direct intervention is vastly less efficient, as it can only occur after the fact, as was seen with the lousy response to the 2008 recession(not that there weren't other problems there, like the fact that the person in charge of the response had direct ties to one of the companies involved).
This is true, but only in the sense that a perfectly balanced pencil will stay upright on its tip. There are systems that don't work particularly well with markets - healthcare or railways, for example. Even where you have markets that are simple and low-capital to enter, it's unlikely to work, due to imperfect information and unknown future developments, as well as the astonishing difficulty of designing a system that cannot be exploited by the sufficiently creative.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
User avatar
Broomstick
Emperor's Hand
Posts: 28771
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: Does transhumanism have a carelessness problem?

Post by Broomstick »

Grumman wrote:
SolarpunkFan wrote:Disregarding the fact that if AI gets smart enough (or even an order of magnitude short) in his timeframe there might be an unemployment crisis.
You say "unemployment crisis," I say "post-scarcity utopia." But you do have a valid concern here - the transition to a society where labour is simply not needed in necessary quantities to maintain a near 100% employment rate must be recognised as such, and compensated for with a living wage financed by that same increase in efficiency.
No, that is, in fact, NOT required.

It is wishful thinking that such an “unemployment crisis” must be or will be compensated for in any way. History is replete with examples of civilizations with a highly privileged elite and the majority dwelling in abject misery and deprivation. North Korea is one of the current stand-outs in that regard.

Far too many of the upper crust of current society already see the lower classes as literal parasites on society even if they're employed. Where will the necessary compassion come from to support a living wage or guaranteed basic income?

I sometimes think the only reason the US has food stamps is the memory of revolutions prompted by starvation of the poor.
Your other complaint - that augmentation is unfair to those who are baseline human - I think is abhorrent. It is obscene to argue that a person has an obligation to die, not even so that someone else can live, but just because you resent anyone who has it better than yourself.
People who believe in a “singularity” that is going to bring instant immortality in anything like the near future are engaging in wishful and/or magical thinking.

At best we're going to get a prolongation of lifespan; more likely, better quality at the end of lifespan for those who can afford it. This will be achieved by a combination of better medical tech (including transplants) and prosthetics.

Does anyone have an obligation to die? Of course not. Nor does anyone have an obligation to extend their lifespan, either – something that may not be desirable in all cases.

Are those without access to such technology going to resent those who can access it? You better fucking believe it! Damn few people are such saints that they wouldn't envy a healthy old age denied them solely by dollars.
The converse is that, empirically, there is likely to be a problem of scarcity - if we have a choice between immortality for 5% of the population and mortality for 95%, versus mortality for 100%, and those are the only options on the table, should we always choose mortality for 100%?
A lot depends on how that 5% is determined.

Solely by net worth? Oh, yeah - there's going to be a lot of jealousy and envy.

Solely by genetics? Hoo, boy, you think bigotry is bad now?

What if immortality comes with a serious downside - unpleasant side effects, something most people would find abhorrent to experience? That's not so bad for society, because a lot of people won't want the trade-off.
Adam Reynolds wrote:
K. A. Pital wrote:
I think the best answers to economic matters almost always reside in the smart design and use of markets rather than in direct government intervention.
Hope you aren't a thalidomide baby, Maxie! :lol:
While I don't disagree about your general points regarding wealth inequality, there is not necessarily anything wrong with this particular idea, as long as governments play a significant role in designing that market in the first place. A properly designed market wouldn't have problems like thalidomide poisoning as it would have incentives in place to limit problems like this
The only people who say such things are those who have no understanding of how the thalidomide problem came about.

Thalidomide wasn't an untested drug – every test known at the time to check for safety was utilized. There was no way to foresee that the effects in pregnant rodents (slightly reduced litter sizes, but healthy, intact offspring) would be so different from the effects in humans (severely stunted or absent limbs). Indeed, after the effects in humans were recognized scientists had to go back and figure out why this problem didn't show up in the rodent testing.

The thalidomide situation was a classic Black Swan. The best-designed market (however you wish to define that) is not going to be able to foresee the future, and there will be unforeseen disasters - including unexpected drug side effects, rocks falling out of space and other nasty things.

Would no-strings-attached, no-bad-side-effects immortality be something I'd like? Yeah... I want to live a lot longer than my currently allotted years. But it's unlikely. Much more likely is getting many more people to the maximum lifespan (probably around 120 years for humans) and keeping the quality of life maximized until the end. 120 years of good life would be a significant advance over what we currently have. On the flip side, though, no one actually plans to live that long; it would require a serious adjustment of expectations and have all sorts of knock-on effects.

(As an example of how merely prolonging life causes problems: when the US Social Security system was set up, average male life expectancy in the US was something like 61... but you couldn't collect until 65. It was expected that most people would die before ever collecting a benefit. Now the average lifespan is something in the mid to late 80s. MOST people are going to live to collect 15-20 years of benefits. No wonder the budget for that benefit is strained! It would be politically unpopular (to say the least!) to set the “collect benefits” age to 88, even though that would “re-set” the system to something closer to original conditions. That's just from giving most people an “extra” 20 years of life.)
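The Social Security aside above is simple arithmetic; here is a minimal sketch using the post's illustrative ages (not actuarial data):

```python
def years_of_benefits(life_expectancy, collect_age=65):
    """Years of benefits collected by someone living to `life_expectancy`
    when collection starts at `collect_age` (zero if they die first)."""
    return max(0, life_expectancy - collect_age)

print(years_of_benefits(61))                  # at the program's start: 0
print(years_of_benefits(85))                  # the quoted modern figure: 20
print(years_of_benefits(85, collect_age=88))  # raising the age "re-sets" it: 0
```

The strain on the system comes entirely from that middle number moving from zero to twenty while the collection age stayed fixed.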
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Does transhumanism have a carelessness problem?

Post by Starglider »

Broomstick wrote:People who believe in a “singularity” that is going to bring instant immortality in anything like the near future are engaging in wishful and/or magical thinking. At best we're going to get a prolongation of lifespan, more likely, better quality at the end of lifespan for those who can afford it. This will be achieved by a combination of better medical tech (including transplants) and prosthetics.
The original singularity argument essentially came in 'very rapid takeoff', 'hard takeoff' and 'soft takeoff' flavours. The first two are general-AI-specific and distinct from conventional transhumanism, and all the associated arguments involving contemporary economics and politics. The first one in particular involves either strong assumptions about nanotechnology or weak assumptions about the likely scope of technologies remaining to be discovered. Back in the late 90s, the community of people talking about the likely consequences of recursively self-enhancing artificial intelligence was rather distinct from the community of people talking about human genetic engineering, cybernetics and body modification.

Unfortunately since then the whole thing got kind of muddled together, rebranded, repackaged by assorted marketing graduates and regurgitated by people who didn't take the time to understand the basic arguments, never mind the theoretical basis.

You are correct that in the absence of radically transhuman intelligence, we can expect steady progress, but at a rate comparable to current medical research. Tools keep improving, both technical stuff such as easy cheap gene sequencing and synthesis and easy global collaboration via the Internet, but there are retarding factors as well, such as escalating regulation and the cost of medical trials. This is still a 'singularity' in the weak sense of a time in history when things changed much faster than in the millennia before, and it will be a 'singularity' in the original Vinge sense of 'something that alters human nature so much that future society cannot be usefully predicted' if we get something like mechanical telepathy or precise pharmaceutical control of motivational systems. It is not a 'singularity' in the AI hard-takeoff sense; it is what we used to call the CRNS (current rate no singularity) research timeline, before the word 'singularity' got so vague.

Generally it's ok to ignore the hard takeoff scenario though, if you're not an AI researcher or making decisions about AI research. The details are esoteric and hard to debate, there's nothing you can do about it and it's irrelevant to most practical decision-making. Certainly I'm always happy to discuss the conventional transhumanist research progression.
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Does transhumanism have a carelessness problem?

Post by K. A. Pital »

Adam Reynolds wrote:A properly designed market wouldn't have problems like thalidomide poisoning as it would have incentives in place to limit problems like this
A market is always a market, no matter how you "smartly design" the thing:
Richardson-Merrell may have been "over-eager," Kelsey admits. "They were particularly disappointed because Christmas is apparently the season for sedatives and hypnotics (sleeping pills). They kept calling me, and then just came right out and said, 'We want to get this drug on the market before Christmas, because that is when our best sales are."
Adam Reynolds wrote:Direct intervention is vastly less efficient
"Efficiency" is not the sole and only goal of human existence.
Post Reply