Machine police force

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Post Reply
User avatar
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Machine police force

Post by Elaro »

Let's say we crack AI in the next... fifty years or so, and we're able to pump out intelligent, moral machine servants. Say we decide to make a police model that's built for human interaction and interpersonal problem management. Say this works great! In the countries that adopt this machine police force, crime is reduced, there's no police brutality, life is great.

Except... Because the US, among other underdeveloped nations, doesn't implement this police force, there continue to be many killings and abuses by cops in the US.

Would it be moral for a sufficiently large group of machine police to forcibly relieve the human police force in a particular city/area/neighborhood without the authorization of the local government?
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
User avatar
Mr Bean
Lord of Irony
Posts: 22433
Joined: 2002-07-04 08:36am

Re: Machine police force

Post by Mr Bean »

Elaro wrote: Would it be moral for a sufficiently large group of machine police to forcibly relieve the human police force in a particular city/area/neighborhood without the authorization of the local government?
No, because humans would assume this is the start of the Machine uprising, and said models would be scrapped worldwide if we don't get panic nukes.

"A cult is a religion with no political power." -Tom Wolfe
Pardon me for sounding like a dick, but I'm playing the tiniest violin in the world right now-Dalton
User avatar
Darth Tanner
Jedi Master
Posts: 1445
Joined: 2006-03-29 04:07pm
Location: Birmingham, UK

Re: Machine police force

Post by Darth Tanner »

Would it be moral for Western Soldiers Robots to invade and replace the government of Iraq/Syria/Iran/Vietnam/Kenya/Algeria/Sudan etc. etc. US cities to improve their living standards to western robot standards?
Get busy living or get busy dying... unless there’s cake.
AniThyng
Sith Devotee
Posts: 2760
Joined: 2003-09-08 12:47pm
Location: Took an arrow in the knee.
Contact:

Re: Machine police force

Post by AniThyng »

In my country, homosexuality is illegal. Do your utopian "moral" robot police enforce these laws if for some insane reason my government buys a bunch?
I do know how to spell
AniThyng is merely the name I gave to what became my favourite Baldur's Gate II mage character :P
User avatar
Elaro
Padawan Learner
Posts: 493
Joined: 2006-06-03 12:34pm
Location: Reality, apparently

Re: Machine police force

Post by Elaro »

AniThyng wrote:In my country, homosexuality is illegal. Do your utopian "moral" robot police enforce these laws if for some insane reason my government buys a bunch?
No, they wouldn't, because part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability, because the important part of an AI's processing would be spent comparing different sequences of actions based on which is the most good. On the upside, they would be able to make very sound arguments against evil laws.
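
(Purely as illustration, here's a toy sketch of what "comparing sequences of actions based on which is the most good" could look like computationally. The actions, the scores, and the goodness function itself are all invented placeholders; writing the real goodness function is exactly the part that would need to be cracked.)

```python
# Toy sketch only: rank invented candidate plans with an invented goodness()
# function. Defining goodness() for real is the unsolved moral problem.

def goodness(plan):
    """Hypothetical scalar score for the outcome of a plan of actions."""
    scores = {"de-escalate": 3, "investigate": 2, "arrest": 0, "shoot": -10}
    return sum(scores.get(action, 0) for action in plan)

def choose_plan(candidate_plans):
    """Pick the candidate plan with the highest goodness score."""
    return max(candidate_plans, key=goodness)

plans = [
    ["shoot"],
    ["arrest"],
    ["de-escalate", "investigate", "arrest"],
]
print(choose_plan(plans))  # -> ['de-escalate', 'investigate', 'arrest']
```
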
Darth Tanner wrote:Would it be moral for Western Soldiers Robots to invade and replace the government of Iraq/Syria/Iran/Vietnam/Kenya/Algeria/Sudan etc. etc. US cities to improve their living standards to western robot standards?
In this case, one) the robots have little or none of the human flaws that would doom such actions in the case of humans and two) they're not replacing the government so much as replacing the policing arm of the state. What they'd do, specifically, is surround the police building in that precinct(?) and make impossible the exercise of the functions of those policepeople, and other police machines would take over those duties. If the machines decide it's worth it to intervene, then they can probably get the community on their side pretty quickly.
Mr Bean wrote:
Elaro wrote: Would it be moral for a sufficiently large group of machine police to forcibly relieve the human police force in a particular city/area/neighborhood without the authorization of the local government?
No, because humans would assume this is the start of the Machine uprising, and said models would be scrapped worldwide if we don't get panic nukes.
Would it, though? And also, this would happen on a small scale. Neighborhood, district, small city in the US. Nothing worth starting WWN for. And remember that in this scenario, policebots have already proven themselves reliable in countries that willingly work with them.

So I guess this is the real question: when faced with an irrefutably corrupt, murderous system, how quickly, and how violently, should that system be replaced by a demonstrably better one, when Bad Old Human Nature is assured not to be in the new system?
"The surest sign that the world was not created by an omnipotent Being who loves us is that the Earth is not an infinite plane and it does not rain meat."

"Lo, how free the madman is! He can observe beyond mere reality, and cogitates untroubled by the bounds of relevance."
User avatar
Darth Tanner
Jedi Master
Posts: 1445
Joined: 2006-03-29 04:07pm
Location: Birmingham, UK

Re: Machine police force

Post by Darth Tanner »

No, they wouldn't, because part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability, because the important part of an AI's processing would be spent comparing different sequences of actions based on which is the most good. On the upside, they would be able to make very sound arguments against evil laws.
What? That's a terrible idea. If we are going to get robot police, they must enforce the letter of the law; they don't get to use their robot conscience to choose which laws they enforce and which they don't. Otherwise they're not cops, they're just robots that you're for some reason entrusting with total power over your society.

You're essentially proposing turning us into slaves to whatever software gets installed in the robots.
Would it, though? And also, this would happen on a small scale. Neighborhood, district, small city in the US. Nothing worth starting WWN for. And remember that in this scenario, policebots have already proven themselves reliable in countries that willingly work with them.
The evil dictator has already taken control of many countries and they are peaceful under his iron unquestionable rule and legion of kill bots. He now invades one of your small towns... this is of no concern to you as you care nothing for the freedom or sovereignty of your people.
Get busy living or get busy dying... unless there’s cake.
AniThyng
Sith Devotee
Posts: 2760
Joined: 2003-09-08 12:47pm
Location: Took an arrow in the knee.
Contact:

Re: Machine police force

Post by AniThyng »

Elaro wrote:
AniThyng wrote:In my country, homosexuality is illegal. Do your utopian "moral" robot police enforce these laws if for some insane reason my government buys a bunch?
No, they wouldn't, because part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability, because the important part of an AI's processing would be spent comparing different sequences of actions based on which is the most good. On the upside, they would be able to make very sound arguments against evil laws.


So I guess this is the real question: when faced with an irrefutably corrupt, murderous system, how quickly, and how violently, should that system be replaced by a demonstrably better one, when Bad Old Human Nature is assured not to be in the new system?
The most telling problem with your scenario is that the robots act unilaterally against the wishes of the local government. Now I suppose you are going to clarify that the Feds are backing the robots and thus the will of the local government is irrelevant, but if not, then basically I think it's quite clear that the community is going to be very divided on basically letting Saints rule over them, even if they were against the police force before - especially if, apparently, your moral robot police MAKE THEIR OWN LAWS. Even if the flesh-and-blood police were regularly violating those laws themselves, to be fair.

Uh, well I mean basically what you are asking is not much different from asking "At which point is it moral for the UN/US Marines to step in and overthrow a corrupt dictatorship BUT IN THIS CASE THE INVADERS ARE PERFECT", to echo Tanner's objection.

For what happens when we try this with humans, and the reaction of the people we were supposedly helping, I present to you Mogadishu, 1993, where the locals were so happy with the Americans that they dragged the bodies of shot-down pilots through the streets.
I do know how to spell
AniThyng is merely the name I gave to what became my favourite Baldur's Gate II mage character :P
User avatar
General Zod
Never Shuts Up
Posts: 29205
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: Machine police force

Post by General Zod »

Elaro wrote:
AniThyng wrote:In my country, homosexuality is illegal. Do your utopian "moral" robot police enforce these laws if for some insane reason my government buys a bunch?
No, they wouldn't, because part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability, because the important part of an AI's processing would be spent comparing different sequences of actions based on which is the most good. On the upside, they would be able to make very sound arguments against evil laws.
Since there's so much grey in the moral system, and most of it depends on being able to figure out the context of the act after the fact, I wouldn't trust robot police to enforce any rule-breaking unless they witnessed it firsthand and it turned on an incredibly simple black/white variable. Did this person pay their metro fare? Did this person exceed the speed limit or make a turn without using their signal? That sort of thing.
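
(As a toy illustration of what I mean by a simple black/white variable: the kind of predicate over directly observed facts a robot could safely check. The field names and thresholds are made up.)

```python
# Toy sketch: only simple predicates over facts the robot directly observed.
# Field names and thresholds are invented.

def fare_paid(observed):
    return observed.get("fare_paid", False)

def speeding(observed):
    return observed["speed_kmh"] > observed["speed_limit_kmh"]

def turned_without_signal(observed):
    return observed["turned"] and not observed["signal_on"]

# Anything needing after-the-fact context (intent, necessity, self-defence)
# deliberately has no predicate here; that judgment stays with humans.
event = {"speed_kmh": 72, "speed_limit_kmh": 50, "turned": False, "signal_on": False}
print(speeding(event))               # -> True
print(turned_without_signal(event))  # -> False
```
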
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
User avatar
Zeropoint
Jedi Knight
Posts: 581
Joined: 2013-09-14 01:49am

Re: Machine police force

Post by Zeropoint »

part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability
Is there any good reason to believe that this is even possible in principle, i.e. that there actually IS an "objective morality"?
I'm a cis-het white male, and I oppose racism, sexism, homophobia, and transphobia. I support treating all humans equally.

When fascism came to America, it was wrapped in the flag and carrying a cross.

That which will not bend must break and that which can be destroyed by truth should never be spared its demise.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Machine police force

Post by Starglider »

Elaro wrote:
AniThyng wrote:In my country, homosexuality is illegal. Do your utopian "moral" robot police enforce these laws if for some insane reason my government buys a bunch?
No, they wouldn't, because part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability, because the important part of an AI's processing would be spent comparing different sequences of actions based on which is the most good. On the upside, they would be able to make very sound arguments against evil laws.
No, no, no, stop typing, you have utterly misinterpreted this issue, do not pass go, do not collect 200 utility points.

Morality systems are not and will never be 'irrefutable'; that is the objective morality fallacy. Even if there were an objective morality, how would you have any idea what stance it might take on pair bonding vs gender identification in moderately intelligent primates? Formal analysis of goal systems, e.g. the logical/mathematical part of the Friendly AI problem, is about ensuring properties such as stability under reflection, preference transitivity (at the limit of sufficient compute power), graceful degradation when errors occur, etc. None of this says anything about goal system content, because goals are essentially arbitrary. Translating fuzzy human-specified goals into formally defined goals is a different, largely disjoint problem, and deciding what fuzzy human-specified goals we would actually want robots to have is another distinct and almost entirely disjoint problem.
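
(To make the structure-versus-content split concrete, a toy sketch: preferences derived from any real-valued utility function are transitive by construction, which is a structural guarantee, but the structure says nothing about what the function should reward. The two utility functions below are invented and equally 'well-formed'.)

```python
# Toy illustration: preferences derived from a real-valued utility function are
# transitive by construction (structure), but what the function rewards is
# arbitrary (content). Both goal contents below are equally "well-formed".

def prefers(utility, a, b):
    """a is preferred to b iff it has strictly higher utility."""
    return utility(a) > utility(b)

utility_paperclips = lambda world: world.get("paperclips", 0)
utility_welfare    = lambda world: world.get("people_flourishing", 0)

world_1 = {"paperclips": 9, "people_flourishing": 1}
world_2 = {"paperclips": 1, "people_flourishing": 9}

print(prefers(utility_paperclips, world_1, world_2))  # True
print(prefers(utility_welfare,    world_1, world_2))  # False
```
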
User avatar
biostem
Jedi Master
Posts: 1488
Joined: 2012-11-15 01:48pm

Re: Machine police force

Post by biostem »

My main contention would be whether these police robots enforce the LETTER of the law or the SPIRIT of the law. It is highly unlikely that anyone programming such a system could predict and/or properly account for every scenario that comes up. What happens if the breaking of the law is "right"? Would these police robots appropriate food so as to feed the hungry thief? Would they interpret a fertilized egg as a life insofar as it has the same rights & protections as a full grown human? And since these robots could be easily repaired or replaced, should they even have access to lethal weapons and be able to use deadly force?

Let's get cameras on all our cops 24/7, see if that helps, and then we can talk about removing the human equation.

Don't get me wrong - robots that can remain objective and calm at all times would be a boon, but perhaps only as a supplement to human officers.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Machine police force

Post by Simon_Jester »

Hey, Starglider...

Why is preference transitivity a problem? Is it difficult to program machines that, given that A is preferred to B and B is preferred to C, don't wind up deciding that C is preferred to A?
Elaro wrote:
AniThyng wrote:In my country, homosexuality is illegal. Do your utopian "moral" robot police enforce these laws if for some insane reason my government buys a bunch?
No, they wouldn't, because part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability, because the important part of an AI's processing would be spent comparing different sequences of actions based on which is the most good. On the upside, they would be able to make very sound arguments against evil laws.
No, that is not part of cracking the AI problem. That is part of cracking the AI problem safely, which is a very very different thing.

You cannot simply assume that the robot police are only going to enforce good laws. They were presumably built for a reason by human beings. And as a rule, human beings wouldn't intentionally build robot police they expected to disobey them.
In this case, one) the robots have little or none of the human flaws that would doom such actions in the case of humans and two) they're not replacing the government so much as replacing the policing arm of the state. What they'd do, specifically, is surround the police building in that precinct(?) and make impossible the exercise of the functions of those policepeople, and other police machines would take over those duties. If the machines decide it's worth it to intervene, then they can probably get the community on their side pretty quickly.
Just to be clear... are we talking about robot police that I think might actually end up existing? Or are we talking about ideal perfect fantasy omniscient omnipresent benevolent super-machines?

I'd like to clear that up before I go further discussing this.
Elaro wrote:Would it, though? And also, this would happen on a small scale. Neighborhood, district, small city in the US. Nothing worth starting WWN for. And remember that in this scenario, policebots have already proven themselves reliable in countries that willingly work with them.
Who would knowingly "work with" a bunch of robots that may randomly decide to ignore your laws and set up their own?
So I guess this is the real question: when faced with an irrefutably corrupt, murderous system, how quickly, and how violently, should that system be replaced by a demonstrably better one, when Bad Old Human Nature is assured not to be in the new system?
Why should we take for granted that human nature is worse than robot nature?
This space dedicated to Vasily Arkhipov
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Re: Machine police force

Post by Starglider »

Simon_Jester wrote:Hey, Starglider... Why is preference transitivity a problem? Is it difficult to program machines that, given that A is preferred to B and B is preferred to C, don't wind up deciding that C is preferred to A?
There are two common problems that cause preference intransitivity even when the goal system is defined as a global utility function over outcomes: computational tractability and different representations of the same situation. Computational tractability is where we are using cheap heuristics to approximate an expensive function over outcomes, or we are choosing actions and are limited in how much prediction we can do (to generate an expected utility for each action) by how much compute is available. Representational issues are caused by the fact that as situations get more complex, it is increasingly hard to ensure that they always converge on the same logical (mental) representation and are given the same utility distribution, even at the limit of perfect information and unbounded computing power. Humans have the same issues of course with cheap heuristics, subjectivity and perceptual effects; witness the radical swings in political policy surveys depending on how the question is worded. We would like to guarantee that preferences are transitive (amongst other guarantees) at least for really important things such as code self-modification; this can just mean things like forcing a more expensive global evaluation (to tighten up the probability distribution over the expected utility) before the action is permitted.
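
(A toy demonstration of the tractability point: a cheap pairwise heuristic can produce a preference cycle even though a consistent global utility exists. The options, the attributes, and the 'majority of attributes' heuristic are all invented for illustration.)

```python
# Toy demonstration: a cheap pairwise heuristic ("winner on the majority of
# attributes") yields intransitive preferences, even though an expensive global
# utility (here just the attribute sum) exists and is perfectly consistent.
# The options and attribute values are invented for illustration.

options = {
    "A": (1.0, 2.0, 3.1),
    "B": (2.0, 3.0, 1.0),
    "C": (3.0, 1.0, 2.05),
}

def true_utility(name):
    """Expensive 'global' evaluation: score the whole outcome."""
    return sum(options[name])

def heuristic_prefers(x, y):
    """Cheap pairwise comparison: x wins if it beats y on most attributes."""
    wins = sum(a > b for a, b in zip(options[x], options[y]))
    return wins > len(options[x]) / 2

print(heuristic_prefers("B", "A"))  # True
print(heuristic_prefers("C", "B"))  # True
print(heuristic_prefers("A", "C"))  # True  -> a preference cycle
print(sorted(options, key=true_utility, reverse=True))  # ['A', 'C', 'B'] -- consistent
```
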

Of course that only applies to people trying to make AIs using logic-based approaches; connectionists, including brain uploaders, don't care about any of this. They rely on black-box methods of 'does it seem to play nice' (see 'Ex Machina' for a Hollywoodised example of how well that will work).
User avatar
General Zod
Never Shuts Up
Posts: 29205
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Re: Machine police force

Post by General Zod »

biostem wrote:My main contention would be whether these police robots enforce the LETTER of the law or the SPIRIT of the law. It is highly unlikely that anyone programming such a system could predict and/or properly account for every scenario that comes up. What happens if the breaking of the law is "right"? Would these police robots appropriate food so as to feed the hungry thief? Would they interpret a fertilized egg as a life insofar as it has the same rights & protections as a full grown human? And since these robots could be easily repaired or replaced, should they even have access to lethal weapons and be able to use deadly force?

Let's get cameras on all our cops 24/7, see if that helps, and then we can talk about removing the human equation.

Don't get me wrong - robots that can remain objective and calm at all times would be a boon, but perhaps only as a supplement to human officers.
There's not really any such thing as a "spirit" of the law. Either your law does what you intend it to or it's a badly written law.
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
User avatar
biostem
Jedi Master
Posts: 1488
Joined: 2012-11-15 01:48pm

Re: Machine police force

Post by biostem »

There's not really any such thing as a "spirit" of the law. Either your law does what you intend it to or it's a badly written law.
Just how much of your average country's law code is "well written", insofar as it has no loopholes, vagueness, or other points of confusion? I mean, imagine these machines trying to enforce/uphold something like the US 2nd Amendment (and yes, I know the Constitution isn't necessarily the same as the codified law)...
User avatar
Joun_Lord
Jedi Master
Posts: 1211
Joined: 2014-09-27 01:40am
Location: West by Golly Virginia

Re: Machine police force

Post by Joun_Lord »

It probably would be moral for these super bots to relieve corrupt police of their jobs even if nobody wants them to, but therein lies the problem with them. The fact that they could, on their own initiative, decide to take over, even for our own good, means they are too independent. The fact that people "work with" them, rather than them "working for" nations, cities, PDs, whatevs, means they are far too independent. We'd probably wind up with these bots deciding the moral thing to do is to take over completely to protect humanity, a bit like the movie I, Robot but with considerably less Will Smith.

We live violent, dangerous, chaos filled lives with war and disease and reality tv. The moral thing to do would be to protect us, even from ourselves. March into the police stations, march into the military bases, the government buildings, homes, and your lives. They are taking over for our own good.

It's the only........logical thing to do.

I doubt anyone would want these bots if they can decide what orders to follow because of some "morality" program running in them. They either follow their orders or they are too much of a possible hazard that could run amok.

Maybe pair them with a human partner, like in the sadly cancelled Almost Human TV series, so you get a moral and ultimately expendable cop bot and a human working with it to keep both honest. Even then there would have to be some control over them so they can't just say "fuck the police" and decide to start policing how they feel they should, or worse yet get a corrupted morality program.

I for one wouldn't welcome our robotic overlords even if they are like a more caring version of Skynet, they still are enslaving us even if to protect us.
User avatar
Welf
Padawan Learner
Posts: 417
Joined: 2012-10-03 11:21am

Re: Machine police force

Post by Welf »

Joun_Lord wrote:It's the only........logical thing to do.
Unless they are programmed with libertarian morality. Then it is the only logical thing to not do it. Or if they are Buddhists.
Elaro wrote:Let's say we crack AI in the next... fifty years or so, and we're able to pump out intelligent, moral machine servants. Say we decide to make a police model that's built for human interaction and interpersonal problem management. Say this works great! In the countries that adopt this machine police force, crime is reduced, there's no police brutality, life is great.

Except... Because the US, among other underdeveloped nations, doesn't implement this police force, there continues to be many killings and abuse by cops in the US.

Would it be moral for a sufficiently large group of machine police to forcibly relieve the human police force in a particular city/area/neighborhood without the authorization of the local government?
Seems like an easy question. Either they are not conscious, in which case they are simply machines that have no right to make this decision, or they are conscious, in which case they have to stick to the same rules as humans. And overthrowing the legitimate government to install an authoritarian/totalitarian regime is not something you can do. You can start petitions, demonstrations, go to party rallies and so on. Only if your undeniable human rights (or in this case, undeniable intelligent-being rights) are under attack can you do something violent.
Elaro wrote:No, they wouldn't, because part of cracking the AI problem consists of finding a morality system that is mathematical in its irrefutability, because the important part of an AI's processing would be spent comparing different sequences of actions based on which is the most good. On the upside, they would be able to make very sound arguments against evil laws.
That pretty much makes the whole scenario pointless. If the perfect moral code of the AI tells them to take over, they should take over. If the perfect moral code of the machines tells them not to take over, they should not take over. That is not a thing that is possible in this universe, so we are talking about pure science fiction. And since we don't know the rules of this perfect morality, and can't use our own universe as a template, it is not possible to answer this.
Post Reply