Why create an A.I. at all?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderators: Alyrium Denryle, SCRawl, Thanas

Simon_Jester
Emperor's Hand
Posts: 28795
Joined: 2009-05-23 07:29pm

Re: Why create an A.I. at all?

Postby Simon_Jester » 2016-10-25 10:01am

Ziggy Stardust wrote:The one problem with the wolves analogy (although, Simon, it is a wonderful analogy)...
If it's so wonderful, then could you please say anything whatsoever suggesting you actually got my point, rather than missing it?

My point is simply this. Forms of life that are transcendentally more intelligent than their predecessors tend to displace those predecessors. They win competition for resources. They come up with literally unimaginable strategies for doing so. And if they want to repurpose their predecessors, if they view YOU as a valuable resource... then they do that too.

This is a universal experience of all pre-intelligent animals as humans came onto the scene on Earth. It is NOT just about competition for resources. It is about what happens when something that has the power to outthink you that massively shows up.

...is that humans and wolves do necessarily compete for the same pool of finite resources. The plains apes were hunting deer with their spears and bows, and the wolves were hunting those same deer. As agriculture developed, farm apes had to protect their livestock from wolves. And so on and so forth down the line. Even in modern times, there have been culls of wild wolf populations that were deemed to be a threat to human resources for one reason or another...
You are missing the point.

Right now, humans use a huge fraction of the Earth's land surface for agriculture, and another large fraction for their own habitations. A great deal of resources goes into providing us with things that we need to survive, and another large amount goes into things that we, strictly speaking, don't need to survive but desire anyway.

In addition to physical resources, we use bandwidth and computer capacity for things we desire (e.g. cat videos) that might seem irrelevant to an AI, but those resources are critical to the AI's survival and growth, as critical to the AI as food and water are to us.

The resources of the Earth, and for that matter the solar system, are not infinite. Accessing new resources may be possible, but always requires a large investment up-front compared to taking over or repurposing resources that have already been accessed. And given how easy it is for a more intelligent species to take over resources that 'belong' to a less intelligent one, this repurposing becomes a very likely scenario.

It is grossly premature to assume that AI will never consider humans to be using up resources that the AI thinks it needs for its own purposes.

Moreover, this is only one of several levels on which a superintelligent being's interests might come into conflict with a 'merely' intelligent being's interests. Resource consumption isn't everything.

And, in fact, modern researchers don't believe that domestic dogs originated with wolves, but rather with a different type of scavenging canine species (or, more specifically, SEVERAL different species, since multiple people in different geographic areas separately domesticated dogs) resembling modern village dogs...
Thank you for enlightening me, but this is a completely irrelevant nitpick given the focus of the conversation. And I think you know it.

Do you get my point, or not?

The problem with using this as an analogy for humans and machines is that there is no comparable competition for resources. Machines, and AI, are not driven by the same evolutionary pressures that constrain the relationships between humans and wolves (or humans and any other animal). Every point of contact between humans and wolves is driven by a set of environmental variables that simply don't apply when we are dealing with artificial intelligence. So it isn't clear a priori that the same rules and dynamics would apply; in fact, it seems extremely unlikely.
It doesn't matter if "the same rules and dynamics" apply. As long as ANY rules and dynamics apply, one of those rules and dynamics is "should the less intelligent group find itself in conflict with one that has exponentially greater intelligence, the group with greater intelligence wins."

The problem is not "they'll use up all our water!" or "they'll hunt all our deer!" It's not something specific like that. It's a general issue that there is literally nothing that we have, not our wealth, not our culture, not the integrity of our own minds and bodies, that we can be sure of continuing to control, in the face of manipulation by a superintelligence.
This space dedicated to Vasily Arkhipov

K. A. Pital
Glamorous Commie
Posts: 19709
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Postby K. A. Pital » 2016-10-25 12:42pm

Ziggy Stardust wrote:So it isn't clear a priori that the same rules and dynamics would apply; in fact, it seems extremely unlikely.

Matter and energy even in their basic form are resources. Machines and humans require advanced, heavily processed forms of matter to have a prosperous existence. To that end, huge amounts of different matter get processed by the human civilization.

If the machine civilization surpasses the human one, we could at least expect it to process more matter than humans do. It would be a pure stroke of luck if the forms of matter required by mankind and machinekind never intersected at any point in the history of both, thus eliminating points of conflict. This kind of luck seems rather unlikely.

But even outside the logic of natural selection, extinction is still possible. Humans have hunted down and almost or totally exterminated a great many other species without these species being a threat to humans or competing with them for resources. I mean, do whales really compete with humans?
It is of paramount importance to achieve naval superiority... because humans are mostly made of water

Ziggy Stardust
Sith Devotee
Posts: 2611
Joined: 2006-09-10 10:16pm
Location: Research Triangle, NC

Re: Why create an A.I. at all?

Postby Ziggy Stardust » 2016-10-25 04:14pm

Simon_Jester wrote:If it's so wonderful, then could you please say anything whatsoever suggesting you actually got my point, rather than missing it?


Why so antagonistic? Nothing I said in my post was at all aggressive, nor was any of it a ridiculous distortion of what you said.

Simon_Jester wrote:My point is simply this. Forms of life that are transcendentally more intelligent than their predecessors tend to displace those predecessors. They win competition for resources. They come up with literally unimaginable strategies for doing so. And if they want to repurpose their predecessors, if they view YOU as a valuable resource... then they do that too.


Yes, I understand that was your point. That was the entire point my post was responding to.

Did you actually understand my point, rather than missing it? You even admit right here that "competition for resources" is the key point to your analogy. And I simply pointed out that any "competition for resources" between humans and A.I. will be VASTLY different than any competition between humans and wolves, and that you can't necessarily expect the same patterns and relationships to arise. How is that at all a misinterpretation of your point? Even if you disagree with it, it's ludicrous to claim that I'm not addressing a key tenet of your argument.

Simon_Jester wrote:This is a universal experience of all pre-intelligent animals as humans came onto the scene on Earth. It is NOT just about competition for resources. It is about what happens when something that has the power to outthink you that massively shows up.


And that's fine. I never disputed that at all. Literally my only point was that the type of relationship will be different with A.I., because the rules of the interaction are being guided by different forces than the ones that guide interactions between humans and animals. Considering how huffy you got about me supposedly misinterpreting you, you seem to be utterly ignoring what my point was.

Simon_Jester wrote:Right now, humans use a huge fraction of the Earth's land surface for agriculture, and another large fraction for their own habitations. A great deal of resources goes into providing us with things that we need to survive, and another large amount goes into things that we, strictly speaking, don't need to survive but desire anyway.

In addition to physical resources, we use bandwidth and computer capacity for things we desire (e.g. cat videos) that might seem irrelevant to an AI, but those resources are critical to the AI's survival and growth, as critical to the AI as food and water are to us.


Yes, exactly. This REINFORCES my point. There is massive overlap between the ESSENTIAL resources required for humans and wolves (e.g. food and water), while there is significantly less overlap between these resources for humans and A.I., exactly as you say. Resources that are critical to an A.I. aren't critical to a human (at least in terms of bare minimum survival, obviously once you scale this up to societies/worldwide your definition of "critical" may have to adapt). Thus, the rules governing any interaction between humans and A.I. are going to be DIFFERENT than the rules governing an interaction between humans and wolves. It's a different paradigm, and as much as I like the wolf analogy, I don't think it's necessarily one that can hold in this different paradigm. What about this argument so irritates you that you think it is somehow a distortion of your argument?

Simon_Jester wrote:It is grossly premature to assume that AI will never consider humans to be using up resources that the AI thinks it needs for its own purposes.


Luckily, I never made that assumption. I never said that there will never be a threat to humans from A.I., or anything even remotely resembling that argument. I literally only said that you can't expect this threat to parallel the ones that humans posed to wolves, or any other animals, because the rules of the game will be different.

Simon_Jester wrote:Moreover, this is only one of several levels on which a superintelligent being's interests might come into conflict with a 'merely' intelligent being's interests. Resource consumption isn't everything.


Again, that's fine. I never claimed resource consumption WAS everything. I was literally only pointing out that resource consumption is the component of the equation where the analogy doesn't quite hold.

Simon_Jester wrote:Do you get my point, or not?


Do you get mine? Because you've done a damned fine job ignoring my point and putting words in my mouth while you throw your bizarre little tantrum.

Simon_Jester wrote:The problem is not "they'll use up all our water!" or "they'll hunt all our deer!" It's not something specific like that. It's a general issue that there is literally nothing that we have, not our wealth, not our culture, not the integrity of our own minds and bodies, that we can be sure of continuing to control, in the face of manipulation by a superintelligence.


Yes, I agree completely. And this is the exact reason why the analogy doesn't quite hold, because it's a different paradigm than the one that governs human-animal interactions. Which was the entire point of my post. If you didn't want anyone to discuss the merits of your long and detailed analogy, then why spend so much time outlining such a long and detailed analogy, especially since this post indicates that you exactly agree with my concerns about it?

Ziggy Stardust
Sith Devotee
Posts: 2611
Joined: 2006-09-10 10:16pm
Location: Research Triangle, NC

Re: Why create an A.I. at all?

Postby Ziggy Stardust » 2016-10-25 04:19pm

K. A. Pital wrote:Matter and energy even in their basic form are resources. Machines and humans require advanced, heavily processed forms of matter to have a prosperous existence. To that end, huge amounts of different matter get processed by the human civilization.

If the machine civilization is surpassing the human, one could at least expect them to be processing more matter than humans. It would be a pure stroke of luck if the forms of matter required by mankind and machinekind never intersect at any point in the history of both, thus eliminating points of conflict. This kind of luck seems rather unlikely.

But even outside the logic of natural selection, extinction is still possible. Humans have hunted down and almost or totally exterminated a great many other species without these species being a threat to humans or competing with them for resources. I mean, do whales really compete with humans?


I never said that there wouldn't be any points of conflict, or that extinction wasn't possible, or that there wouldn't be any sort of problems between humans and A.I. I never said anything even remotely resembling that. I literally only said that the rules and dynamics governing human-A.I. interaction will be DIFFERENT from the ones governing human-animal interactions. I never said that there is no threat or conflict between humans and A.I.s.

To follow your whale example, the type of relationship that humans and whales have had historically is highly unlikely to develop between humans and A.I.s. Our interaction with whales has been driven primarily by the interest in harvesting the whales for meat, blubber, and oil. There is no logical reason to presume that A.I.s will want to harvest humans for meat, blubber, or oil, because that makes no fucking sense. Ergo, the relationship between humans and A.I.s will be different from the one between humans and whales. I'm not making ANY argument that this means A.I.s do not pose a threat or anything even close to that, I'm simply saying that the animal model, like whales here or wolves in Simon's analogy, doesn't necessarily give us any insight into how we would expect the human-A.I. relationship to work.

K. A. Pital
Glamorous Commie
Posts: 19709
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Postby K. A. Pital » 2016-10-25 05:09pm

Ziggy Stardust wrote:Our interaction with whales has been driven primarily by the interest in harvesting the whales for meat, blubber, and oil.

An AI may take an interest in harvesting something else. Brains, for example, or humans as such, to be used as funny things. The problem is that we don't know what kind of weird interests a non-human intelligence might have. We cope badly even when we try to predict the actions of other humans who are far outside the norm, and with an AI, prediction becomes an exercise in futility.

We cannot predict what AI would do, and we can't say it makes no sense for AI to harvest humans for their brains. Or hair. Or skin. Or any other type of peculiar detail which makes zero sense to us simply because we can't understand inhuman logic, being human.
It is of paramount importance to achieve naval superiority... because humans are mostly made of water

Simon_Jester
Emperor's Hand
Posts: 28795
Joined: 2009-05-23 07:29pm

Re: Why create an A.I. at all?

Postby Simon_Jester » 2016-10-25 07:30pm

Ziggy Stardust wrote:
Simon_Jester wrote:If it's so wonderful, then could you please say anything whatsoever suggesting you actually got my point, rather than missing it?
Why so antagonistic? Nothing I said in my post was at all aggressive, nor was any of it a ridiculous distortion of what you said.
Because I put considerable time and effort into that analogy, and I feel like you praised it while completely missing its point and raising spurious objections. I apologize for the tenor of my post, but I was trying to make a point I felt was important.

Simon_Jester wrote:My point is simply this. Forms of life that are transcendentally more intelligent than their predecessors tend to displace those predecessors. They win competition for resources. They come up with literally unimaginable strategies for doing so. And if they want to repurpose their predecessors, if they view YOU as a valuable resource... then they do that too.
Yes, I understand that was your point. That was the entire point my post was responding to.

Did you actually understand my point, rather than missing it? You even admit right here that "competition for resources" is the key point to your analogy. And I simply pointed out that any "competition for resources" between humans and A.I. will be VASTLY different than any competition between humans and wolves, and that you can't necessarily expect the same patterns and relationships to arise. How is that at all a misinterpretation of your point? Even if you disagree with it, it's ludicrous to claim that I'm not addressing a key tenet of your argument.
I didn't argue that exactly the same patterns and relationships will arise.

I argued that some patterns and relationships will arise. And if superintelligent AI exists, then while we cannot predict the precise nature of the relationship between humans and AI, we can predict this: If superintelligent AI wants something humans have, or wants humans to do something they don't want to do, or wants humans to be something... the AI is likely to win. And as the AI(s) become(s) more and more developed, this winning process will become more and more lopsided in favor of the AI.

Exactly what will happen was never something I tried to predict. My point is that the closest analogy we have for what happens when a transcendentally superior intelligence emerges into the world is what we humans did to the non-intelligent animals in our natural world as we developed technology.

And as I pointed out in the first post, there is NO major land animal species on the planet that did not experience massive transformation as a side effect of human activity.

Dogs and their ancestors actually got a pretty good deal out of that, even if a significant fraction of their total population consists of bizarre malformed mutant versions, whom their wild ancestors would probably view as un-canid monstrosities.

Contrast to the Plains buffalo. They were hunted down and driven into near-extinction, one, as an indirect way for one group of humans to strike at another, and two, because humans wastefully used resources they could gain from the death of a buffalo.

Or polar bears. Whose habitat is being destroyed, and not even by humans who live in the same ecosystem as them! No, the thing that is driving wild polar bears to extinction is a random, unintended side effect of other humans, most of them thousands of miles away. The side effect is so random and unintended that many of us don't even believe it is happening!

Or bald eagles. How did a bunch of eagles end up an endangered species? Sure as heck not because they were competing with humans for resources!

Or rhinoceroses. Many of them have been killed so their horns can be ground up into medicines. Medicines that don't actually work!

There are an enormous number of animal species that could not even begin to comprehend the bizarre and unhappy fates their species has experienced at the hands of humans. Because they are not intelligent enough to fully comprehend our motives and actions, and why we do things that can result in their death or suffering.

Frankly, if we assume superintelligent AI is widespread and in play, the best-case scenario is AI viewing us as pets and controlling/distorting the future of our species in whatever ways it sees fit. All the other alternatives are worse. Like, say, the AI recklessly destroying the ozone layer because losing it doesn't affect the AI's robots. Never mind that this causes massive cancer problems for humans. Or an AI that wants to make humans happy, but doesn't respect humans' expressed wishes, might leave us 'stored' under conditions that make us individually happy but make reproduction unlikely or impossible. Or other, stranger and worse things that I cannot easily predict might occur.

So please don't make this be about "what happened to wolves/dogs won't be exactly what happened to us." That is, again, missing the point.

Simon_Jester wrote:This is a universal experience of all pre-intelligent animals as humans came onto the scene on Earth. It is NOT just about competition for resources. It is about what happens when something that has the power to outthink you that massively shows up.
And that's fine. I never disputed that at all. Literally my only point was that the type of relationship will be different with A.I., because the rules of the interaction are being guided by different forces than the ones that guide interactions between humans and animals. Considering how huffy you got about me supposedly misinterpreting you, you seem to be utterly ignoring what my point was.
I think the issue is that you saw fit to concentrate on a single sub-issue that seems to me to be an exercise in pedantry.

YES, THE ANALOGY IS NOT PERFECT.

It doesn't have to be, because I'm not saying "humans will suffer the exact same fate under AI that dogs did under humans, down to being made to wear collars with tags and eat bowls of kibble." I'm saying "look at how bizarre and disruptive the effects of the rise of humans were to dogs, and to numerous other species, that appeared very well adapted to their ancestral environment. This is the scale of disruption we can expect from AI dominating an Earth populated by humans."

Simon_Jester wrote:Right now, humans use a huge fraction of the Earth's land surface for agriculture, and another large fraction for their own habitations. A great deal of resources goes into providing us with things that we need to survive, and another large amount goes into things that we, strictly speaking, don't need to survive but desire anyway.

In addition to physical resources, we use bandwidth and computer capacity for things we desire (e.g. cat videos) that might seem irrelevant to an AI, but those resources are critical to the AI's survival and growth, as critical to the AI as food and water are to us.
Yes, exactly. This REINFORCES my point. There is massive overlap between the ESSENTIAL resources required for humans and wolves (e.g. food and water), while there is significantly less overlap between these resources for humans and A.I., exactly as you say. Resources that are critical to an A.I. aren't critical to a human (at least in terms of bare minimum survival, obviously once you scale this up to societies/worldwide your definition of "critical" may have to adapt). Thus, the rules governing any interaction between humans and A.I. are going to be DIFFERENT than the rules governing an interaction between humans and wolves. It's a different paradigm, and as much as I like the wolf analogy, I don't think it's necessarily one that can hold in this different paradigm. What about this argument so irritates you that you think it is somehow a distortion of your argument?
What irritates me is that your thesis here is that when I say "A is like B," you reply "but A is not literally exactly the same as B."

If all you mean to say is "A is analogous to B but not literally identical," then that is a trivial observation. For A and B to be literally identical, I'd have to be predicting that human-AI interaction will literally culminate in human-descended subspecies going around with collars eating bowls of kibble off the floors of the AI's houses, while "wild" humans wander around in remote wilderness areas and are occasionally culled when they start hunting down and eating the AI's robo-sheep.

Obviously that is not what I meant. So in that case, "A is not literally the same as B" is rather trivial and something of a waste of our time.

But if what you really meant to say was "I don't think your "A is like B" analogy is applicable, because A is not literally the same as B," then either you have completely failed to understand the analogy...

OR you are simply refusing to grasp the role of analogy in a discussion.

So at the time, your original observation looked to be either trivial, a failure to understand the analogy, or a failure to understand what analogies are even for.

All three of those outcomes are rather irritating.

Yes, I agree completely. And this is the exact reason why the analogy doesn't quite hold, because it's a different paradigm than the one that governs human-animal interactions. Which was the entire point of my post. If you didn't want anyone to discuss the merits of your long and detailed analogy, then why spend so much time outlining such a long and detailed analogy, especially since this post indicates that you exactly agree with my concerns about it?
The analogy was intended to serve a specific purpose, by exploring in depth just how strange and revolutionary (and discomforting) the consequences of human activity have been, even for species which arguably benefited from that activity.

The argument this was intended to serve, and I spelled it out at the time as I recall, was "You think this thing sounds bad from the wolves' point of view? THIS is the kind of thing that's likely to happen to us at the hands of powerful AI." Not this exact thing, but something "like" this thing. Something broadly comparable.

And to have it presented as a contradictory claim that "ah, but A is not exactly like B," when the whole point of the analogy was "A is enough like B to give you a sense for just how big B is..." that was not the kind of productive dialogue and discussion of the merits I'd been hoping for, to say the least.
This space dedicated to Vasily Arkhipov

Iroscato
Jedi Council Member
Posts: 2235
Joined: 2011-02-07 03:04pm
Location: Great Britain (It's great, honestly!)

Re: Why create an A.I. at all?

Postby Iroscato » 2016-10-25 09:15pm

This thread has done nothing to help my phobia of AI - now I'm thinking that rather than be driven to extinction outright, our metal successors will...domesticate our species, allowing us to continue entirely at their whim. Fuck, that's nightmarish.
Yeah, I've always taken the subtext of the Birther movement to be, "The rules don't count here! This is different! HE'S BLACK! BLACK, I SAY! ARE YOU ALL BLIND!?

- Raw Shark

Destiny and fate are for those too weak to forge their own futures. Where we are 'supposed' to be is irrelevent.

- SirNitram (RIP)

Simon_Jester
Emperor's Hand
Posts: 28795
Joined: 2009-05-23 07:29pm

Re: Why create an A.I. at all?

Postby Simon_Jester » 2016-10-26 01:02am

Well, it's questionable whether we'd agree we'd been domesticated, or what might happen. The very concept of "domesticated" really only makes sense when you think like a human; I doubt dogs have a word for what they're missing. Maybe we wouldn't have a word for what we were missing under the tutelage of a sufficiently benevolent AI.

The possibilities are extremely broad and variable, and some of them look really good while others look lousy. The big consistent pattern is that ultra-intelligent minds far more capable than our own will almost certainly be able to do pretty much whatever they want, assuming it doesn't violate the laws of physics and isn't in some other way literally impossible.
This space dedicated to Vasily Arkhipov

His Divine Shadow
Commence Primary Ignition
Posts: 11766
Joined: 2002-07-03 07:22am
Location: Vasa, Finland

Re: Why create an A.I. at all?

Postby His Divine Shadow » 2016-10-26 05:31am

FTeik wrote:Today we already have super-computers that can create models of global weather patterns or of how the universe looked right after the Big Bang, and they can even defeat the Grand Masters of chess. However, they are not sentient in the way we humans are.

Yet around the world technicians, engineers and programmers are working towards Artificial Intelligence. The successful outcome of such attempts would - in the most extreme case - probably result in the enslavement of a sentient being, the rise of an inorganic overlord or at least a lot of competition for us humans.

So why create an A.I. at all? To see if we can do it? Because of ego, to prove that "God" isn't the only one capable of creating sentient life? To make computers more like humans (although I seriously doubt that would be a good thing, what with all our emotions and neuroses)? Or to create something that is better than us?

What benefit would an A.I. have, that we couldn't get with some effort in another way?


I'd like to pose the question differently. How could we possibly stop ourselves from eventually developing an AI?
Those who beat their swords into plowshares will plow for those who did not.

His Divine Shadow
Commence Primary Ignition
Posts: 11766
Joined: 2002-07-03 07:22am
Location: Vasa, Finland

Re: Why create an A.I. at all?

Postby His Divine Shadow » 2016-10-26 05:45am

Simon_Jester wrote:Well, it's questionable whether we'd agree we'd been domesticated, or what might happen. The very concept of "domesticated" really only makes sense when you think like a human; I doubt dogs have a word for what they're missing. Maybe we wouldn't have a word for what we were missing under the tutelage of a sufficiently benevolent AI.


Assuming there's anything to miss; human potential is limited, after all. There's not much more that can be done with human-level intellect beyond what a benevolent AI could provide for us, assuming some kind of Culture-like scenario.

The only way to move forward and avoid stagnation and extermination in the long run is self-improvement: making AIs superfluous, or rather, making us the AIs.
Those who beat their swords into plowshares will plow for those who did not.

Simon_Jester
Emperor's Hand
Posts: 28795
Joined: 2009-05-23 07:29pm

Re: Why create an A.I. at all?

Postby Simon_Jester » 2016-10-26 09:31am

Right.

Well, it's debatable whether dogs (or, more ominously, cattle) are really missing anything that would be meaningful to a dog or a cow, given that animals are truly, profoundly, deeply stupid.

From a practical point of view, if we end up in a scenario where we can't really prove we've been robbed, and are receiving benefits, maybe we should count it as a win.
This space dedicated to Vasily Arkhipov

K. A. Pital
Glamorous Commie
Posts: 19709
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Postby K. A. Pital » 2016-10-27 01:31am

Case in point - although animals can neither notice nor comprehend this...
http://www.reuters.com/article/us-envir ... SKCN12R00F
Reuters wrote:By Alister Doyle | OSLO

Worldwide populations of mammals, birds, fish, amphibians and reptiles have plunged by almost 60 percent since 1970 as human activities overwhelm the environment, the WWF conservation group said on Thursday.

An index compiled with data from the Zoological Society of London (ZSL) to measure the abundance of biodiversity was down 58 percent from 1970 to 2012 and would fall 67 percent by 2020 on current trends, the WWF said in a report.

The decline is yet another sign that people have become the driving force for change on Earth, ushering in the epoch of the Anthropocene, a term derived from "anthropos", the Greek for "human" and "-cene" denoting a geological period.

Conservation efforts appear to be having scant impact as the index is showing a steeper plunge in wildlife populations than two years ago, when the WWF estimated a 52 percent decline by 2010.

"Wildlife is disappearing within our lifetimes at an unprecedented rate," Marco Lambertini, Director General of WWF International, said in a statement of the group's Living Planet Report, published every two years.

"Biodiversity forms the foundation of healthy forests, rivers and oceans," he said in a statement. "We are entering a new era in Earth's history: the Anthropocene," he said. WWF is also known as the World Wide Fund for Nature.

The index tracks about 14,200 populations of 3,700 species of vertebrates - creatures that range in size from pea-sized frogs to 30-metre (100 ft) long whales.

The rising human population is threatening wildlife by clearing land for farms and cities, the WWF's report said. Other factors include pollution, invasive species, hunting and climate change.

But there were still chances to reverse the trends, it said.

"Importantly ... these are declines, they are not yet extinctions," said Professor Ken Norris, Director of Science at ZSL.

Deon Nel, WWF global conservation director, told Reuters it wasn't all bad news.

"I don't speak at all about doom and gloom – we do see a lot of positive signs," Nel said.

One hopeful sign is that a global agreement by almost 200 nations last year to curb climate change could, for instance, help protect tropical forests, slow the spread of deserts and curb the acidification of the seas caused by a build-up of carbon dioxide.

And a 2015 U.N. plan for sustainable development by 2030, seeking to end poverty with policies that safeguard the environment, would also help if properly implemented.

Also, some species are recovering. Last month, the giant panda was taken off an endangered list after a recovery in China.

And that's just some 40 years of human history. The period of advanced humanity, when we like to fancy ourselves as an environmentally conscious species!
It is of paramount importance to achieve naval superiority... because humans are mostly made of water

Simon_Jester
Emperor's Hand
Posts: 28795
Joined: 2009-05-23 07:29pm

Re: Why create an A.I. at all?

Postby Simon_Jester » 2016-10-27 05:10am

The thing is, we're environmentally conscious but we're just plain doing so much more of everything than we were fifty years ago. We use more artificial chemicals, we clear and plow more land, we fish the seas more, we burn more fuel.

The Earth's population has increased by roughly a factor of three since World War Two, and proportionately more of that population lives semi-industrialized or industrialized lifestyles; that is to say, they consume industrial goods in economically relevant quantities. Subsistence farmers have a much lower per capita environmental impact, except under unusual conditions (gross overpopulation leading to overcultivation) or in very specific ways (overgrazing).

So three times as many people are consuming more resources per capita... the result is inevitable.
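The multiplicative logic above can be sketched in a few lines. The numbers here are hypothetical round figures chosen for illustration (they are not data from the thread or the WWF report), but they show why the combination of more people and higher per-capita consumption compounds:

```python
# Illustrative sketch of the scaling argument: total environmental impact
# grows multiplicatively when both population and per-capita consumption
# rise. This is the core of the well-known I = P * A * T identity.

def total_impact(population, per_capita_impact):
    """Total impact as population times per-capita impact."""
    return population * per_capita_impact

# Hypothetical baseline vs. today: population up ~3x, per-capita impact up 2x.
baseline = total_impact(2.3e9, 1.0)   # ~1945 world population, arbitrary unit
today = total_impact(7.0e9, 2.0)      # ~3x the people, 2x the per-capita impact

print(today / baseline)  # roughly a sixfold increase (~6.09)
```

A 3x population increase combined with a mere 2x rise in per-capita consumption already multiplies total impact sixfold, which is the sense in which "the result is inevitable."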

This is not me disagreeing with your point, though. It's a backdrop to your point. You are totally correct to say that we've inflicted tremendous destruction on the animal world without hostile intent towards the animals, and indeed with no intent whatsoever toward most animals. Humans as a whole literally do not care what happens to many species of birds and frogs and so on... and yet these species experience massive dieoffs. We're not competing with them for resources in a meaningful sense of 'competition.' We simply decide to use certain resources, and they die as a side-effect. Or we decide to produce and release certain substances into the environment, and they die as a side effect. Or we decide to introduce new forms of life into an area, and the pre-existing forms of life die as a side effect.

Almost all of this was unintentional, just as the Spanish coming to the New World had no intention of killing off 90-95% of the population with epidemic diseases that in many cases spread ahead of them and wiped out whole civilizations at a time when Europeans were barely even aware of their existence.

But you cannot unring the bell: once a totally new level of technology and control over the natural environment has spread across the world, there is no way to go back to the way things used to be. And there is no way for things to somehow persist unchanged as though nothing has happened.
This space dedicated to Vasily Arkhipov

User avatar
K. A. Pital
Glamorous Commie
Posts: 19709
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Postby K. A. Pital » 2016-10-27 05:33am

The problem of caring is very much at the center of this. Even naturally compassionate humans have little influence over their less concerned brethren: the destruction occurs from a multitude of tiny uncontrollable actions.

Are humans now so bold as to presume that every AI or every machine lifeform would be totally in control of its actions and totally aware of the harm it is causing, and actually be in a position to stop?

Mankind cannot stop - it cannot control its own society in a meaningful way. Human society is uncontrollable. Should we not presume that, given the free-willed nature of intelligent beings, a machine society is equally unlikely to exercise total control over the results of its actions?
It is of paramount importance to achieve naval superiority... because humans are mostly made of water

User avatar
Alferd Packer
Sith Marauder
Posts: 3586
Joined: 2002-07-19 09:22pm
Location: Slumgullion Pass
Contact:

Re: Why create an A.I. at all?

Postby Alferd Packer » 2016-10-27 07:33am

Well, it depends how the AI is organized.

If we create isolated, individualistic AIs who inhabit either single, humanoid chassis, or retain distinct identities within their digital zoos, then yes, their society will probably be roughly analogous to ours.

If, however, we create one or a few AGIs, each of which has the ability either to operate multiple physical chassis simultaneously or just to live in its digital zoo by itself, then the AGI will operate according to whatever ethics or principles it deems most correct. Hopefully, that will include not annihilating its meaty creators, or wrecking the planet they inhabit.
"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance--that principle is contempt prior to investigation." -Herbert Spencer

"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.

User avatar
K. A. Pital
Glamorous Commie
Posts: 19709
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Postby K. A. Pital » 2016-10-27 11:58am

Alferd Packer wrote:Well, it depends how the AI is organized.

If we create isolated, individualistic AIs who inhabit either single, humanoid chassis, or retain distinct identities within their digital zoos, then yes, their society will probably be roughly analogous to ours.

The problem is not the machines we create. Over those, we have some power. The problem is what kind of machines the machine will create - intelligence of that level is quite capable of self-replication, after all.
It is of paramount importance to achieve naval superiority... because humans are mostly made of water

User avatar
Alferd Packer
Sith Marauder
Posts: 3586
Joined: 2002-07-19 09:22pm
Location: Slumgullion Pass
Contact:

Re: Why create an A.I. at all?

Postby Alferd Packer » 2016-10-27 12:20pm

K. A. Pital wrote:The problem is not the machines we create. Over that, we have some power. The problem is what kind of machines will the machine create - intelligence of that level is quite capable of self-replication, after all.


Well, that's why I left the digital zoo provision--futurists may disagree on a lot of things, but the one thing they all agree on is that AI must, when first created, be kept in a digital zoo with absolutely no access to the outside world until its intentions can be determined. That much we can do, at least in the earliest stages of AI creation. Keep the AI on a massive, isolated server bank, study its behavior, and let it out only after you're sure it won't launch every nuke on the planet immediately upon hitting the internet. Or, if you can't be sure, you pull the plug, kill the AI, and try again.

Of course, that only applies in the infancy of AI or AGI development. After some years, consumer computer hardware may be powerful enough to run AI, or the program which gives rise to sentience may be small enough to run on everyday equipment. Point is, you do have a chance at preventing AI from being as callous as its creators, but if you get it wrong, you're stuck with the results, for good or ill.
"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance--that principle is contempt prior to investigation." -Herbert Spencer

"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.

User avatar
K. A. Pital
Glamorous Commie
Posts: 19709
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Postby K. A. Pital » 2016-10-27 12:24pm

You don't have a chance to figure out what the machine really thinks. You lack the tools to analyze its intentions. Humans can poorly predict psychopathic behaviour - the brain is a black box. AI, at least a sentient one, is a similar black box. You can study behaviour in a zoo, but once let free and allowed to develop, it will not be the same as whatever you studied in controlled conditions. It is like with bacteria or any other form of life - evolving and changing.
It is of paramount importance to achieve naval superiority... because humans are mostly made of water

User avatar
Alferd Packer
Sith Marauder
Posts: 3586
Joined: 2002-07-19 09:22pm
Location: Slumgullion Pass
Contact:

Re: Why create an A.I. at all?

Postby Alferd Packer » 2016-10-27 01:54pm

Given that psychopathy is a diagnosable condition, I'd beg to differ with the claim that we couldn't figure out whether an AI is likely to go on a nuclear murdering rampage. Moreover, we probably can discern an AI's thoughts, at least at first, given that its thoughts are the result of code we wrote. There are plenty of tests one could construct to observe its behavior. Let the AI think it's out on the internet, and see if it tries to hack into the world's nuclear arsenals. Or fuck with the electrical grid, or train signals, or whatever. Yes, once it's out in the wild, it will evolve and possibly reproduce in ways we cannot predict, but we can ensure the best possible start for that AI by determining exactly what we're unleashing upon the world.
"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance--that principle is contempt prior to investigation." -Herbert Spencer

"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.

User avatar
Starglider
Miles Dyson
Posts: 8567
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Why create an A.I. at all?

Postby Starglider » 2016-10-27 04:02pm

Alferd Packer wrote:futurists may disagree on a lot of things, but the one thing that they all agree on is that AI must, when first created, be kept in a digital zoo with absolutely no access to the outside world until its intentions can be determined.


No they don't. Only a handful of futurists have seriously proposed that, and literally no actual researchers do this*. In a decade and a half of reading AI papers, forums, and mailing lists, I have never seen a single serious research team say 'we are going to do our general AI research on an isolated network'. Indeed, more and more research is using hyper-connected cloud compute (which goes nicely with the increasing reliance on crowdsourced big data).

* Military and security research (not general AI) is occasionally done this way, but that's to prevent other people breaking in and stealing the code, not to prevent something getting out.
Narrative Analysis : http://www.mylittleprogram.net (under construction)

User avatar
Alferd Packer
Sith Marauder
Posts: 3586
Joined: 2002-07-19 09:22pm
Location: Slumgullion Pass
Contact:

Re: Why create an A.I. at all?

Postby Alferd Packer » 2016-10-28 07:28am

Fair enough, but have we come anywhere close to creating a sentient AGI yet? Researching AI and AGI in a public space poses essentially no threat, because there is practically zero chance that anyone will stumble across the method to create a sentient AI at this point.
"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance--that principle is contempt prior to investigation." -Herbert Spencer

"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.

User avatar
Starglider
Miles Dyson
Posts: 8567
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Why create an A.I. at all?

Postby Starglider » 2016-10-28 08:02am

Alferd Packer wrote:Fair enough, but have we come anywhere close to creating a sentient AGI yet? Researching AI and AGI in a public space poses essentially no threat, because there is practically zero chance that anyone will stumble across the method to create a sentient AI at this point.


Firstly, the chance is not zero. How close the chance is to zero is debatable, but there are fairly credible arguments that many types of AI systems could cross an exponential self-improvement threshold and suddenly become a lot more intelligent, without anyone having time to take stock and rethink their methodology.

Secondly, the problem is not just that containment isn't being implemented; it is that no one (actually doing technical work) cares. People are not saying 'oh well, we will beef up security if it looks necessary'. They are saying either 'there is little or no risk; of course my genius design will ensure benevolence' or 'trying to contain an AI is pointless'. The latter argument comes in two flavours: the people who are really scared of superintelligence, who say containment is technically not possible (because a transhuman AI will always find a way out via technical or psychological means), and the people who think it's pointless because even if they successfully implement a 'zoo', other researchers won't, and someone else will soon replicate the work (this is how science & technological progress works, after all).
Narrative Analysis : http://www.mylittleprogram.net (under construction)

Q99
Jedi Master
Posts: 1343
Joined: 2015-05-16 01:33pm

Re: Why create an A.I. at all?

Postby Q99 » 2016-11-02 08:40pm

Or, alternatively, the benefits are great enough to be worth the risk, or the possibility of exponential self-improvement is not viewed as a negative at all.

