Why create an A.I. at all?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

FTeik
Jedi Council Member
Posts: 2035
Joined: 2002-07-16 04:12pm

Why create an A.I. at all?

Post by FTeik »

Today we already have supercomputers that can model global weather patterns or the state of the universe right after the Big Bang, and that can even defeat chess grandmasters. However, they are not sentient in the way we humans are.

Yet around the world, technicians, engineers and programmers are working towards artificial intelligence. In the most extreme case, a successful outcome would probably result in the enslavement of a sentient being, the rise of an inorganic overlord, or at least a lot of competition for us humans.

So why create an A.I. at all? To see if we can do it? Because of ego, to prove that "God" isn't the only one capable of creating sentient life? To make computers more like humans (although I seriously doubt that would be a good thing, what with all our emotions and neuroses)? Or to create something that is better than us?

What benefit would an A.I. offer that we couldn't get, with some effort, in another way?
The optimist thinks that we live in the best of all possible worlds; the pessimist is afraid that this is true.

"Don't ask, what your country can do for you. Ask, what you can do for your country." Mao Tse-Tung.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Why create an A.I. at all?

Post by Starglider »

Tractable human-level artificial intelligence of the type seen in 'Star Wars', 'Star Trek TNG' etc has such obviously massive commercial and military application that this is a silly question. Risks are generally not considered significant; there is simply the assumption that they can be managed with (software) engineering the same way that we manage risks in any other engineered product.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Post by K. A. Pital »

Starglider wrote:Risks are generally not considered significant; there is simply the assumption that they can be managed with (software) engineering the same way that we manage risks in any other engineered product.
Indeed.
Humans are always making simple assumptions about the manageability of risks. That is also why they will perish if a given risk turns out to be unmanageable.
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and cold quips,
There: swamps, threats, snipers with rifles, documents, night-time queues and undocumented migrants.
Here: meetings, struggles, steps in sync, colours, unauthorized gatherings,
Migratory birds, networks, information, squares mad with passion...

...Peace of mind is important, but freedom is everything!
Assalti Frontali
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Why create an A.I. at all?

Post by Starglider »

If artificial intelligence was merely as dangerous as industrial-scale chemical engineering or nuclear power, it would not be a problem. Even if it was as dangerous as motorised transport, which kills ~1.2 million people per year, that would be a problem, but clearly one we might in principle accept as worth the economic benefits. The problem with artificial intelligence is that the risk is negligible until suddenly it is existential, but that existential risk is purely theoretical, and thus most people can convince themselves to ignore it. Thus anthropogenic climate change is a more appropriate analogy than Chernobyl or Bhopal.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Why create an A.I. at all?

Post by Starglider »

To expand on this: we've already had some deaths due to autonomous vehicles and plenty of accidental deaths with industrial robotics. People understand that engineers sometimes get it wrong, and that sometimes that results in fatalities. But these are well-understood categories of risk, minor extensions of risks that were already present on highways and in factories. The problem is entirely new categories of risk, which people are very quick to dismiss as fanciful or trivial.
Solauren
Emperor's Hand
Posts: 10191
Joined: 2003-05-11 09:41pm

Re: Why create an A.I. at all?

Post by Solauren »

FTeik wrote:Today we already have supercomputers that can model global weather patterns or the state of the universe right after the Big Bang, and that can even defeat chess grandmasters.
You started off in error.

We have calculations that we have computers perform, because of how long they would take to do otherwise, and those calculations create the models.

Computers don't beat chess players. The programmers of said computers figured out a counter to the chessmaster's favorite tactic (basically really long/complicated IF-THEN statements). Once the chessmaster switches tactics, the computers lose, big time.

Please know something about the technology before you try to state an opinion on it.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Why create an A.I. at all?

Post by Starglider »

Solauren wrote:Computers don't beat chess players. The programmers of said computers figured out a counter to the chessmaster's favorite tactic (basically really long/complicated IF-THEN statements). Once the chessmaster switches tactics, the computers lose, big time.
No, computer chess programs have thoroughly surpassed human players. While (the majority of) chess programs do strictly use position-based production rules (beyond just legal moves), the primary thing that makes them powerful is the depth of search, which is many, many orders of magnitude beyond a human. This makes up for the lack of higher-level pattern recognition. Furthermore, the vast majority of heuristics are produced by a learning process based on the program playing against itself, not hand-written. Chess is a simple enough domain that humans have run out of options to surprise the software.

The exact same thing was seen with AlphaGo; it used a neural net instead of simple heuristics as the fuzzy production-rule engine, but the neural net on its own sucked and would not have beaten even a novice player. The ability of the system came from the combination of a minimally competent action selector with deep search. Go as a domain has enough additional complexity that the progress of AI players vs. human ability is about 20-25 years behind chess, and I think we're still in the phase where human experts have plenty of scope to defeat the best AI algorithms.
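Starglider's point, that the strength comes from a minimally competent evaluator multiplied by deep search, can be sketched with a toy negamax search with alpha-beta pruning. Tic-tac-toe stands in for chess here purely for brevity; the board encoding and function names are illustrative, not code from any real engine:

```python
# Toy negamax search with alpha-beta pruning over tic-tac-toe.
# The "evaluator" knows only win/draw/loss at terminal positions;
# all of the playing strength comes from searching to full depth.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player, alpha=-2, beta=2):
    """Return (score, best_move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = negamax(child, opponent, -beta, -alpha)
        score = -score  # the opponent's loss is our gain
        if score > best_score:
            best_score, best_move = score, m
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # prune: the opponent would never allow this line
    return best_score, best_move
```

Real engines add hand-tuned or self-play-learned evaluation functions and examine millions of positions per second, but the structure is the same: a shallow judgment of positions multiplied by enormous search depth.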
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: Why create an A.I. at all?

Post by madd0ct0r »

FTeik wrote:Today we already have supercomputers that can model global weather patterns or the state of the universe right after the Big Bang, and that can even defeat chess grandmasters. However, they are not sentient in the way we humans are.

Yet around the world, technicians, engineers and programmers are working towards artificial intelligence. In the most extreme case, a successful outcome would probably result in the enslavement of a sentient being, the rise of an inorganic overlord, or at least a lot of competition for us humans.

So why create an A.I. at all? To see if we can do it? Because of ego, to prove that "God" isn't the only one capable of creating sentient life? To make computers more like humans (although I seriously doubt that would be a good thing, what with all our emotions and neuroses)? Or to create something that is better than us?

What benefit would an A.I. offer that we couldn't get, with some effort, in another way?
Imagine I manage or own a large company. I spend hundreds of millions on payroll for data input, turning unstructured data into structured data that current systems can use. If my research department allows me to replace that expensive human resource with a high-powered computer, I profit. If I do not but my competitors do, I go bust and everyone loses their jobs anyway.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
Zixinus
Emperor's Hand
Posts: 6663
Joined: 2007-06-19 12:48pm
Location: In Seth the Blitzspear

Re: Why create an A.I. at all?

Post by Zixinus »

FTeik wrote:So why create an A.I. at all? To see if we can do it? Because of ego, to prove that "God" isn't the only one capable of creating sentient life? To make computers more like humans (although I seriously doubt that would be a good thing, what with all our emotions and neuroses)? Or to create something that is better than us?
The problem is that we do not create AI for any of the reasons you have listed. We create AIs as tools, as special software that does specific things. Most AI goes into things like navigation for GPS, recognising malign code in antivirus scanners and firewalls, and so on. Hell, we create AIs for video games. Conventionally written software is a finite-state machine: everything it does in any situation has to be programmed in advance. We want AIs because we want software that can self-program, that can learn, that can do things too complicated to build into regular programs.
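The contrast Zixinus draws, everything hand-programmed versus software that adjusts its own behaviour from examples, can be sketched with a perceptron, one of the oldest machine-learning algorithms. The training data (the logical AND function) and the learning rate here are made-up illustrations:

```python
# A perceptron "self-programs": its weights are adjusted by a training loop
# from labelled examples, rather than being written by hand as IF/THEN rules.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction            # 0 when the guess was right
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Learn logical AND from four examples; no rule for AND is ever written down.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
```

Nobody hand-codes the AND rule here; the weights that implement it are found by the training loop. Spam filters and the game AIs mentioned above apply the same principle at vastly larger scale.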

I recommend that you visit Computerphile; it has a good introduction to what AI is.

If sentient or near-sentient levels of AI could exist, we would again create them as tools first. I guess the ideal would be machine slaves that are not slaves: creatures who have no souls to suffer, who feel no pain and do their work to the utmost at all times; who, of course, only have to be maintained rather than paid, and require no lunch breaks, no vacation days, and no limit on hours worked per day. Better that than have human labour suffer. We already do that with assembly-line robots and such.
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Post by K. A. Pital »

Starglider wrote:If artificial intelligence was merely as dangerous as industrial-scale chemical engineering or nuclear power, it would not be a problem.
Nuclear weapons already pose an existential risk, and bioengineering is also an existential risk factor. Risk seems to be negligible, and then it is existential. The explosion of one atomic bomb meant nothing, but the explosion of hundreds of gigatons in a missile exchange over a few minutes would mean the effective collapse and destruction of human civilization. So how did we get there? Humans did not stop building bombs after 10 or 20 were built; they were not content until the number of bombs reached an existential-threat level for the entire civilization.

This leads me to believe that anthropogenic climate change is a worse analogy than a nuclear or chemical cataclysm. The danger of global warming is poorly understood in this case, and by only a few people. On the other hand, the danger of a nuclear war is well-understood and trivially easy to imagine, but mankind pressed on with the fabrication of bombs, and sophisticated doomsday devices and various advancements and countermeasures to the nuclear weapon stock are still being considered to alter - or maintain - the strategic balance. Even with full awareness that eroding this balance carries an existential risk due to miscalculated mass use of the weapons!

Humans are idiots who can't see beyond their noses, and they are pathetically poor at predicting and averting black swan events, both in terms of what happens in the markets and in terms of civilizational collapse. That's all I can say regarding the matter.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Why create an A.I. at all?

Post by Starglider »

Nuclear weapons are not the same thing as nuclear power; I said the latter, not the former. Nuclear weapons are obviously an existential risk, but that is again why they are not a good analogy to artificial intelligence: the risk is obvious, and was obvious to pretty much everyone from the late 1950s onwards. It was easy for humans to go from 'two cities were destroyed, the local environment poisoned' to 'thousands of weapons, each hundreds of times more powerful, could destroy all the cities and poison all the environment'.
K. A. Pital wrote:This leads me to believe that anthropogenic climate change is a worse analogy than a nuclear or chemical cataclysm. The danger of global warming is poorly understood in this case, and by only a few people. On the other hand, the danger of a nuclear war is well-understood and trivially easy to imagine
Exactly: the real, serious risks from artificial intelligence are poorly understood, and by only a few people at best, and realistic scenarios are actually very hard to predict, so by your own reasoning nuclear weapons are not the right analogy. There will not be the equivalent of two destroyed cities as a wake-up call, and there is no equivalent equilibrium of wide-scale development and deployment of the technology without actual use of it.

The fact that popular-culture sci-fi has spent a lot of time on the general notion of a 'robot rebellion' is not helpful here; in fact it is often actively unhelpful, in that it causes engineers to say 'oh well, obviously that Hollywood nonsense is unrealistic sensationalism, so... nothing to worry about'. I have heard this exact line from many people who should know better. The proximal harmful effects of increasing automation, on rising unemployment in particular, may be well appreciated, but that does not help and is even a distraction, because those are familiar categories of risk rather than unfamiliar ones. And finally, the current ascendancy of neural nets (and specifically biomorphic descriptions of them, even when the technology is blatantly not biomorphic) is unhelpful, because it encourages media who do try to address the subject seriously to assume that AI will be the electronic equivalent of animal brains: currently insects, soon dogs, eventually humans. So we get lots of articles about the economic consequences of automation and a bit of hand-wringing about possibly enslaving human-like beings, but negligible appreciation of the consequences of transhuman intelligence.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Why create an A.I. at all?

Post by K. A. Pital »

You are, of course, right. I should have said that since humans persist in increasing risk even in the face of obvious dangers, this will certainly happen with a badly understood one.
Q99
Jedi Council Member
Posts: 2105
Joined: 2015-05-16 01:33pm

Re: Why create an A.I. at all?

Post by Q99 »

AIs can do things we can't and make things easier for us.

They can also give us robot overlords to futilely rise up against!
Formless
Sith Marauder
Posts: 4139
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Why create an A.I. at all?

Post by Formless »

Well, FTeik, I think it has to do with the inertia of an idea that has not borne much fruit BUT has been an obsession of computer scientists since the very beginning. Indeed, you are looking at computing through a paradigm that didn't always exist, but that took hold because great effort was put into changing how we perceive and use computers.

To put things in perspective: back in the '50s and early '60s, because computers were these giant things with primitive interfaces, most people saw them as a kind of advanced calculator into which you fed data and then waited for a response, much like the computer in the original Star Trek. So it seemed natural to most researchers to think that AI was the future of computing. Enter Douglas Engelbart, who had a vision of the computer as a way of augmenting human ability and intelligence rather than replacing it: giving people an advanced interface that would revolutionize or replace the office environment as people knew it and vastly improve human productivity (modern transhumanism as you see it in cyberpunk works isn't exactly what he was thinking of, but could be seen as a radical extension of the same basic philosophy). And of course, people in the field laughed at him. Until, that is, he put on The Mother of All Demos and blew people's minds. You can still watch it on YouTube; it's downright prescient in places, right down to his predictions about networks. Add on top of that the PC revolution of the seventies and eighties, which did not initially give people the kind of advanced workstations Engelbart envisioned, but in the long term made computing an industry based on his principles.

Basically, Engelbart lived to see his vision come true (he died in 2013), while general AI research remained in the same state it was in when he began: always twenty years or more away from achieving its goals. In fact, while there is a great deal of AI research being done, most of it is actually influenced by the success of the Augmentation Project and the PC revolution, insofar as most researchers abandoned the goal of creating a general AI with sentience or consciousness and merely want to make specialized stock-trading bots and the like (and yeah, those have a proven risk of crashing markets, as I'm sure K. A. Pital and others can explain much better than I can). That, and there are always engineers doing research on automation for industrial and scientific purposes (the Martian rovers are only possible because of automation, for instance, and there is the research being done on self-driving cars as another example). But that is where the inertia I mentioned comes in: there are still those who want to solve the problem of general AI, if only for the perceived scientific accomplishment it would represent, or out of the belief that they can make a bot with greater ability than a human-machine interface. Those people didn't go away; they simply became marginalized as businesses, government agencies, the now computer-savvy public, and other organizations grew in size. The researchers, on the other hand, were mostly academics and remained about the same in size and importance as ever: easily overshadowed, but vocal about the possibilities. After all, that's just what you do to secure grant money. :P
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Why create an A.I. at all?

Post by Starglider »

AI research has been making strong and continuous progress towards general AI for the last sixty years. There turned out to be a couple of orders of magnitude more intermediate steps than the early pioneers (e.g. the 1956 Dartmouth conference) expected, which is why it has taken so much longer than they predicted, but it's hard to fault them for that; it was completely unknown territory, lacking even basic predictive models. Unlike, say, fusion power, nearly every one of those intermediate steps has had widespread and transformative practical applications (albeit after some repurposing by non-academics), and there have been a huge number of spinoff technologies originally developed for AI research (e.g. most of the principles behind all modern programming languages); enough to put space research to shame. Certainly all of the current big-data revolution analysis algorithms, NLP and otherwise, derive from pure AI research. Some specific symbolic approaches from the 60s through 80s (e.g. logic programming) waned in relative popularity, for complicated reasons, but that is like saying propellers waned in popularity for airliner design; irrelevant to overall progress.

In fact over time there has been a constant trend of things that were initially considered AI problems being considered not AI (once they worked consistently and were no longer mysterious), and even things that were unquestionably part of general AI research (e.g. machine vision, motion planning; researched mainly as modules of general AI into the 90s) no longer being considered general AI, just because they look like they're being solved faster than the overall problem. But this is comparable to researchers managing to build a turbopump in advance of being able to build a complete orbital rocket; it is still progress towards orbital rocketry, it has non-rocketry industrial applications, and the superalloys invented in the process have plenty of spin-off applications.

While Engelbart's ideas were highly influential (although, as usual for visionaries, the trends would have happened anyway, just later), there are plenty of parts of the IT industry which were and are decoupled from HCI concerns and would have progressed much the same regardless. The high visibility of Windows desktop and smartphone apps may make this non-obvious to consumers.
Formless
Sith Marauder
Posts: 4139
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Why create an A.I. at all?

Post by Formless »

My point isn't to diminish the impact of AI researchers on computing -- obviously, Engelbart and the Silicon Valley tech giants weren't working from scratch or in a vacuum, and the existence of stock-trading robots shows that some degree of progress has been made (for better or worse). Rather, the point is merely to show why a user-oriented paradigm of computing (which is about more than just GUIs -- we're talking about the whole idea of software as tools) is taken for granted by most people, programmers included: the project was a massive success.

The AI project never ended; it continued because it has a sort of cultural inertia, like I said. But while it can create new and innovative tools, it has never achieved anything resembling its long-term goal. Hell, there is some debate as to how we will even know when we've crossed that threshold! Ask enough researchers and it becomes obvious that the project fragmented into sub-groups with their own beliefs about how it will be accomplished (neural nets vs. evolutionary models, for instance), and about why the research is important -- what it can give us that other applications of computer technology can't.

I don't see any reason why most of the spinoff technologies you mention needed to come from pure AI research, which is perhaps why it's non-obvious to someone like FTeik why you should research general AI at all. It's like pointing out that our knowledge of rocketry and spaceflight was made possible by military missile research when someone asks what the point was, or is, of doing missile research. It's true, and a non sequitur. At the end of the day, the main reason people do AI research these days is either to make task-specific non-sentient robots for things like industry, gaming and science, or because of the academic challenge it represents. That's pretty much it. Computers can accomplish a lot without needing the things we attribute to AI, like volition, sentience, consciousness, etc. That much is obvious.
Tribble
Sith Devotee
Posts: 3082
Joined: 2008-11-18 11:28am
Location: stardestroyer.net

Re: Why create an A.I. at all?

Post by Tribble »

Do you think that human beings will be smart enough and capable enough to create a true AI that surpasses human intelligence in every way, or could our own limitations end up preventing us from doing so?
"I reject your reality and substitute my own!" - The official Troll motto, as stated by Adam Savage
Formless
Sith Marauder
Posts: 4139
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Why create an A.I. at all?

Post by Formless »

Speaking as a psychologist in training? The question has a fundamental flaw: we don't have a proper definition of "intelligence". Not really. Humans have such a wide range of mental abilities that simplifying them down into one construct is difficult at best, ludicrous at worst. Our measures of intelligence are all relative to our fellow human beings, and humans are not uniform in intelligence or ability. So at best, we can say that computers already have better cognitive abilities in some areas, like number crunching and playing abstract games such as Chess and, amazingly, Go. But whether we can even make a computer conscious, let alone able to, say, philosophize with us about the nature of such things, is open for debate. Even if you have a mechanical view of the mind, these things are and must be incredibly complex.

Of course, even a machine that merely has above average intelligence and can be reproduced at will would be a disruptive technology regardless of whether it is smarter than the smartest humans.
Zeropoint
Jedi Knight
Posts: 581
Joined: 2013-09-14 01:49am

Re: Why create an A.I. at all?

Post by Zeropoint »

Formless wrote:Of course, even a machine that merely has above average intelligence and can be reproduced at will would be a disruptive technology regardless of whether it is smarter than the smartest humans.
Heck, a machine that could consistently and reliably do everything that a below average human can do would be a hugely disruptive technology that would displace vast hordes of human workers. Minimum wage jobs would vanish. There would be no more unskilled labor market. Burger flippers, gone. Waitstaff in restaurants, gone. Janitors and maids, gone. Flaggers at road construction sites, gone. General laborers, gone. Gardeners and landscapers, gone. Agricultural laborers, gone.

I don't mean to imply that everyone who holds these positions is of below average intelligence; I just mean to point out that these jobs CAN be done by a "not too bright" human, and they're going to be among the first to be automated away. Fast food companies are already working on replacing their cooking staff with machines, and taking orders doesn't involve a lot of high cognitive ability either.
Formless wrote:But whether we can even make a computer conscious
. . . will depend on whether we can define "consciousness" satisfactorily. Also, the very act of defining "being conscious" in clear, unambiguous, testable terms will greatly assist in programming a computer to do it.

I have noticed, as I've kept an eye on discussions of AI over the years, that I've never seen a good argument for why computers wouldn't be able to think. The poor arguments that I have seen always either come down to magic ("computers can't be conscious because they don't have souls") or blatantly circular logic ("computers will never be able to think because thinking is something that computers can't do").
Tribble wrote:Do you think that human beings will be smart enough and capable enough to create a true AI that surpasses human intelligence in every way, or could our own limitations end up preventing us from doing so?
Can we create something greater--smarter--than ourselves? That's an important question. I'd feel confident in saying that no individual human could do it, since it seems absurd that, for example, a human brain could contain a complete model of the human brain in addition to all the other stuff a human needs to do . . . but we have a LOT of people on the job, and no one has to understand ALL of the problem. No human being could build a microchip fabrication facility in a lifetime, but we still have microchip fabs.
I'm a cis-het white male, and I oppose racism, sexism, homophobia, and transphobia. I support treating all humans equally.

When fascism came to America, it was wrapped in the flag and carrying a cross.

That which will not bend must break and that which can be destroyed by truth should never be spared its demise.
Q99
Jedi Council Member
Posts: 2105
Joined: 2015-05-16 01:33pm

Re: Why create an A.I. at all?

Post by Q99 »

Zeropoint wrote: Heck, a machine that could consistently and reliably do everything that a below average human can do would be a hugely disruptive technology that would displace vast hordes of human workers. Minimum wage jobs would vanish. There would be no more unskilled labor market. Burger flippers, gone. Waitstaff in restaurants, gone. Janitors and maids, gone. Flaggers at road construction sites, gone. General laborers, gone. Gardeners and landscapers, gone. Agricultural laborers, gone.

I don't mean to imply that everyone who holds these positions is of below average intelligence; I just mean to point out that these jobs CAN be done by a "not too bright" human, and they're going to be among the first to be automated away. Fast food companies are already working on replacing their cooking staff with machines, and taking orders doesn't involve a lot of high cognitive ability either.
Hm, waitstaff and similar, that requires social interaction. Which is more tricky. Landscapers, that requires design and judgement calls.

But anything largely physical, yea. And, notably, we will have *such* an abundance of artificial workers that things uneconomical will become economical as a result.
Bedlam
Jedi Master
Posts: 1497
Joined: 2006-09-23 11:12am
Location: Edinburgh, UK

Re: Why create an A.I. at all?

Post by Bedlam »

How are we doing on the robotics side of the equation? If we're talking about AIs taking over the general-labourer job market, then it doesn't matter how smart they are if the bodies can't do the work.

And would it be better to have one general 'work bot' or dozens or hundreds of robots each designed for a specific task?
User avatar
Zeropoint
Jedi Knight
Posts: 581
Joined: 2013-09-14 01:49am

Re: Why create an A.I. at all?

Post by Zeropoint »

Q99 wrote: Hm, waitstaff and similar, that requires social interaction. Which is more tricky. Landscapers, that requires design and judgement calls.
This is true, but a human of below average intelligence can do these things successfully. The point I'm trying to make here is a fairly weak one--that AIs don't even have to be as good as the average human to displace a lot of workers and transform the economy.
Bedlam wrote: How are we on the robotics side of the equation? If we're talking about AI's taking over the general labourer job market then it doesn't matter how smart they are if the bodies can't do the work?
Well, we're further along on robotics than we are on general AI; just look up the Atlas robot for an example. The "smarts" are going to be the limiting factor.
Bedlam wrote: And would it be better to have one general 'work bot' or dozens or hundreds of robots each designed for a specific task?
I can't begin to speculate. Obviously, a machine designed from the ground up around a specific task is going to be more efficient than a generalist platform. On the other hand, we've created a world built around the form and abilities of the human body. Exploiting that might, or might not, offset the penalties of a generalist bot. I expect we'd see both specialized and generalist robots.

I'll leave the questions about AI rights and social interaction and humanoid versus non-humanoid forms for another time.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Why create an A.I. at all?

Post by Simon_Jester »

Starglider's point about existential risk seems to be getting lost.

Here is an analogy: wolves.

Wolves are pretty damn smart animals, with a good survival strategy. For millions of years, they had it good. Sure, there were various predators and diseases, and wolf life expectancy could have been higher, but the environmental threats to wolf survival didn't prevent wolves from flourishing in all the areas they could physically reach and settle. They were slowly evolving to be even 'better' wolves, at a relatively leisurely pace. All in all, being a wolf was a pretty good ecological niche when...

WHAM.

Some time near the end of the last Ice Age, this species of ultra-smart plains apes emerges. They possess superlupine intelligence, smarter than any wolf could ever hope to be. The most banal thinking of the stupidest of these new apes makes the reasoning skills of a legendary wolf supergenius seem like a bad joke by comparison. The apes start doing things for themselves that wolves literally never imagined doing and have no aptitude for.

Such as throwing big sharp rocks- imagine, hurting other animals without having to get into claw distance of them! That is so not fair! Suddenly millions of years of painstaking evolution into being a really dangerous claw-claw-bite fighter is completely canceled out.

Such as fire- wait, that is a massively dangerous uncontrollable force of nature, not something you can just... toy with! It's too unpredictable... how do they even do that?

Such as setting all sorts of bizarre traps and things which enable them to take down prey... wait, HOW DID THEY EVEN KNOW they could dig a hole in the ground there and the elephant would just... walk into it? OMGWTF...

...

And then the apes set their eyes on the wolves.

Wolves are, physically, a threat to the apes. They aren't likely to try to eat a plains ape, especially since the apes developed these weird incomprehensible routines like "sharpening a rock and tying it to the end of a stick" that makes them far more dangerous than such a soft, squishy animal has any right to be in the normal order of things. But wolves are still a threat. On the other hand, wolves are valuable allies, right?

So the plains apes decide to "ally" with wolves. They deduce the wolves' social structure, analyze it, bribe wolves with food into living in a state of symbiosis... then a state of servitude. Wolves who present more trouble than they're worth are quietly killed off. Wolves who possess promising biological traits are bred disproportionately, so that their progeny come to dominate these 'tame' wolves. A new species is created, spun off the wolves: the dogs. Then the dogs themselves begin dividing up into a plethora of subspecies. Some of those species are more or less recognizably wolf-ish. Others, like the pit bull and the chihuahua, are grossly deformed or deranged, debasements of what wolves used to be.

Meanwhile, the wild wolves are still a threat to plains apes. No longer to the apes themselves, who have wandered off to live in bizarre 'structures' and things that wolves don't even recognize and their dog cousins certainly can't understand. But to the other animals likewise 'adopted' and altered by the apes to serve them as food sources or entertainment or for other, stranger purposes like 'clothing.'

The vast majority of surviving wolves are, of course, swiftly exterminated, within the course of a millennium or two. If not by the bow and arrow, then by the rifle.

Eventually, the plains apes agree that wolves are a necessary part of a balanced ecosystem- one they no longer really participate in, having transcended it. And perhaps they feel a sentimental attachment to wolves, as the ancestors of their pet-servants, the "dogs."

...

Now look at this from the wolves' point of view. Their species still exists (on sufferance) and all sorts of genetically tinkered 'descendants' of their species still exist (as our playthings). There are probably more living wolves and wolf-like creatures in the world today than at any time in history.

...If wolves were self-aware organisms, they might have some very serious objections to what just happened.
_______________________

The thing is, there is no obvious reason AI couldn't do this to us the same way we did it to wolves.

I can imagine wolves around 500,000 BC having conversations: "Will these plains apes ever evolve to be as skilled a long distance runner as a wolf? Maaaybe." "Plains apes may be able to do some of the things that slow, weak wolves do, and that would be massively disruptive to the wolf economy. I'm worried." "Ah, but will plains apes ever be able to compete in the all-important licking-their-own-tummy sector?"

Thing is, even though it's totally true that plains apes represent competition to wolves, the threat they pose by competing with wolves is trivial. The threat they pose by supplanting wolves (and just about every other living thing on top of the food chain of the Earth) is far more significant.

And hell, this would be a GOOD outcome for humanity compared to some of the things we did to other animal species.

Look what happened to the megafauna of North America after humans showed up. Look what's happening to tigers and rhinos, who are on the verge of extinction in the wild, and exist only because it suits us to keep them alive, for reasons they themselves could never comprehend. As long as it isn't too much trouble. Look what's happened to the gorilla and the orangutan, who were not so different from us five million years ago.

And all these creatures could have told themselves, once upon a time, that plains apes would never have the size/strength/quickness/ferocity/whatever to really compete with their own species.

But it didn't matter... because we were just that much smarter than them. And eventually, this meant that we got to make the rules.
This space dedicated to Vasily Arkhipov
User avatar
Zeropoint
Jedi Knight
Posts: 581
Joined: 2013-09-14 01:49am

Re: Why create an A.I. at all?

Post by Zeropoint »

I entirely agree. The machines will inevitably take over--not necessarily in a hostile way--so I feel that it's important for us to erase the line between humans and machines before that happens.
User avatar
Ziggy Stardust
Sith Devotee
Posts: 3114
Joined: 2006-09-10 10:16pm
Location: Research Triangle, NC

Re: Why create an A.I. at all?

Post by Ziggy Stardust »

The one problem with the wolves analogy (although, Simon, it is a wonderful analogy) is that humans and wolves do necessarily compete for the same pool of finite resources. The plains apes were hunting deer with their spears and bows, and the wolves were hunting those same deer. As agriculture developed, farm apes had to protect their livestock from wolves. And so on and so forth down the line. Even in modern times, there have been culls of wild wolf populations that were deemed to be a threat to human resources for one reason or another. And, in fact, modern researchers don't believe that domestic dogs originated with wolves, but rather with a different type of scavenger canine species (or, more specifically, SEVERAL different species, since multiple people in different geographic areas separately domesticated dogs) resembling modern village dogs.

The problem with using this as an analogy for humans and machines is that there is no comparable competition for resources. Machines and AI are not driven by the same evolutionary pressures that constrain the relationships between humans and wolves (or humans and any other animal). Every point of contact between humans and wolves is driven by a set of environmental variables that simply don't apply when we are dealing with artificial intelligence. So it isn't clear a priori that the same rules and dynamics would apply; in fact, it seems extremely unlikely.