Why doesn't Starfleet use Security Droids ??

PST: discuss Star Trek without "versus" arguments.

Moderator: Vympel

User avatar
Isolder74
Official SD.Net Ace of Cakes
Posts: 6771
Joined: 2002-07-10 01:16am
Location: Weber State of Construction University
Contact:

Post by Isolder74 »

The Federation would not use a utility robot they felt was sentient. I doubt that they would use a semi-sentient version of a combat robot. You know they would insist on not using an R2 unit if they were given one.
Hapan Battle Dragons Rule!
When you want peace prepare for war! --Confusious
That was disapointing ..Should we show this Federation how to build a ship so we may have worthy foes? Typhonis 1
The Prince of The Writer's Guild|HAB Spacewolf Tank General| God Bless America!
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Post by RedImperator »

CDiehl wrote:Imperator, you might consider lowering your expectations. What you want, Starfleet couldn't make. They can't reproduce Data. Even if they could, such androids would be sentient and likely to object to being used for the monotonous task of patrolling the corridors. They'd be right, because their skills are better suited to research and analysis. Unless you modify your goal to eliminate sentience, or Starfleet modifies its definition of sentience to make Data-clones slaves (not a good idea at all), we have an impasse.
You need to talk to the OP writer, then, because I'm trying to work within his guidelines, not make my own up as I go along. He wanted a droid that could perform boarding actions and go on away missions, and that will require sentience.

As for Data, Soong androids are hardly the only AI option available to Starfleet. The Doctor is just as alive and intelligent as Data, and his whole program could be run from a device that could easily fit in the chest cavity of a humanoid robot. And that's assuming you go humanoid--a small tracked robot with a lower center of gravity and smaller target profile might be a better choice. There were humanoid androids in at least two TOS episodes that I recall, and Starfleet had a fully sentient computer about the size of a wall air conditioner in the M-5 (TOS: "The Ultimate Computer").

Now, if you do want to do without sentience, you're basically limited to drones. They'll be able to bring added firepower to gunfights and perform basic security patrols, but they won't be able to operate independently for any mission more complicated than "Patrol sector X, shoot anything that's listed as hostile in your database, or fires on you".
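To be clear about how little "intelligence" that actually requires, the whole decision logic of such a drone can be sketched in a few lines. This is only rough illustrative Python; every object, method, and name in it is invented:

Code:

from itertools import cycle

def patrol_loop(drone, patrol_route, hostile_db):
    """Dumb patrol drone: no judgement, no learning, just a loop.
    Every object and method here is an invented placeholder."""
    for waypoint in cycle(patrol_route):              # endlessly patrol sector X
        drone.move_to(waypoint)
        for contact in drone.scan_for_contacts():
            if contact.species in hostile_db or contact.fired_on_me:
                drone.engage(contact)                 # shoot listed hostiles or anything shooting at it
            else:
                drone.report(contact)                 # everything else gets punted to a human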
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
User avatar
JME2
Emperor's Hand
Posts: 12258
Joined: 2003-02-02 04:04pm

Re: Why doesn't Starfleet use Security Droids ??

Post by JME2 »

Omega-13 wrote:Why does Starfleet not use security droids instead of security officers?
In almost every situation where the Enterprise was boarded, or an away team beamed over to another ship or a planet's surface and got into trouble, droids would have been better suited.

They can work in zero atmosphere
They can be hundreds of times more physically strong (just think about the 50-ton jacks that you can buy at Rona)

I can go on and on, but why not make robots that do the security work? Shielding system, transporter system, arms with a full range of motion and dexterity so they can use keypads... onboard weapons, stun weapons, etc.

So what reasons would starfleet have?
Because Starfleet would have to get past Star Wars' swarms of lawyers (and the Chewbacca defense!)
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

The Federation does have the technology to build battledroids, but they don't do it because of ethical concerns.

It is true that androids like Data are beyond Federation science, but on a lesser scale the Federation has been quite successful. The EMH and the other holograms found in the holodeck are good examples. If they can create these holograms, then it would not be difficult to build battledroids. The software used in holograms could be modified to become a combat AI.

But this would raise ethical issues in the Federation. The Federation considers any non-living machinery that displays signs of intelligence to be sentient. They even considered exocomps, which are nothing more than intelligent repair tools, to be sentient living beings.

Such a society would therefore never use battledroids.
I have to tell you something everything I wrote above is a lie.
CDiehl
Jedi Master
Posts: 1369
Joined: 2003-06-13 01:46pm

Post by CDiehl »

Why do security droids need to be stronger than a human? Why do security droids need to have humanoid hands? None of these have anything to do with taking over simple, monotonous security tasks, which is what you build a robot to do. It doesn't need to be able to investigate a crime, fix a broken computer or engage in combat, just be an extra set of eyes and ears for security personnel. It can be programmed to refer its observations to the appropriate department. If it finds an injured person, it calls for a doctor. If it finds a damaged piece of equipment, it calls for a repairman. If it finds a person breaking into a room he shouldn't be in, it calls for backup. At best, these droids could carry phasers that stun only, and have the ability to activate a force field. Such a device would simply be a combination of existing items, and could be built in a simple, functional shape. It does not have to appear humanoid or be sentient, and making it a sentient humanoid would make it hard to build.
Admiral_K
Worthless Trolling Palm-Fucker
Posts: 560
Joined: 2002-08-09 01:51pm

Post by Admiral_K »

Now, if you do want to do without sentience, you're basically limited to drones. They'll be able to bring added firepower to gunfights and perform basic security patrols, but they won't be able to operate independently for any mission more complicated than "Patrol sector X, shoot anything that's listed as hostile in your database, or fires on you".
I disagree. You can create a highly sophisticated AI without actually making it "sentient". You could easily devise scenarios for hostage rescue, covert ops, standard military assault, boarding actions, security detail, etc., and use them when programming the combat android's AI. Over time, you could make the computer capable of learning from mistakes and adding to its scenario database. Imagine the supercomputer that recently beat Kasparov at chess, but a few hundred years more advanced and programmed with information on warfare as opposed to chess.

For instance, if your security drone encounters a hostile species for the first time and determines a quick and easy method of killing it, it could store that information in its database. If it encounters a situation it has not seen before and requests further instructions from its CO, it could then add the results of those orders to its database for future reference.
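A very rough sketch of the control loop I have in mind (illustrative Python only; every object, method, and field here is made up):

Code:

def respond(droid, situation, scenario_db):
    """Scenario-database AI: match the situation against pre-programmed
    scenarios, fall back to the CO when nothing matches, and remember
    the outcome for next time."""
    match = scenario_db.best_match(situation)
    if match is not None:
        result = droid.execute(match.response)
    else:
        orders = droid.request_orders(situation)     # ask the CO for instructions
        result = droid.execute(orders)
        scenario_db.add(situation, orders, result)   # reuse this answer in the future
    if result.revealed_new_weakness:                 # e.g. a quick way to kill a new species
        scenario_db.add_tactic(situation.enemy, result.tactic)
    return result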

The only limit to the AI would be that it could not think creatively (unless it had a Data-style positronic brain, which is the main stumbling block here), which is not a problem given that its role is not to create solutions to problems, but rather to implement solutions as dictated by its programming or its CO, much as the modern Army works in regard to its foot soldiers.
User avatar
Isolder74
Official SD.Net Ace of Cakes
Posts: 6771
Joined: 2002-07-10 01:16am
Location: Weber State of Construction University
Contact:

Post by Isolder74 »

There are problems with using com badges to help the droids tell friend from foe, but it may be how the Federation we know would do it. If you are not wearing a com badge that is broadcasting the right codes, then you get shot.

The problem: the enemy steals com badges off unarmed dead crewmen just after beaming over, and the droids won't attack them.
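Sketched out, the badge check would be about this simple, and so would the hole in it (rough illustrative Python; all names are made up):

Code:

def identify(contact, valid_badge_codes):
    """Com-badge IFF: anyone broadcasting a current friendly code is a friend.
    Note that a badge lifted off a dead crewman passes this check."""
    badge = contact.combadge_signal()
    if badge is not None and badge.code in valid_badge_codes:
        return "FRIEND"
    return "TARGET"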
Hapan Battle Dragons Rule!
When you want peace prepare for war! --Confusious
That was disapointing ..Should we show this Federation how to build a ship so we may have worthy foes? Typhonis 1
The Prince of The Writer's Guild|HAB Spacewolf Tank General| God Bless America!
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Post by RedImperator »

Admiral_K wrote:I disagree. You can create a highly sophisticated AI without actually making it "sentient". You could easily devise scenarios for hostage rescue, covert ops, standard military assault, boarding actions, security detail, etc., and use them when programming the combat android's AI. Over time, you could make the computer capable of learning from mistakes and adding to its scenario database. Imagine the supercomputer that recently beat Kasparov at chess, but a few hundred years more advanced and programmed with information on warfare as opposed to chess.
False analogy. Chess has a finite number of possible moves for any situation and a strict set of rules that neither side can break. Even a relatively simple encounter will have more possible choices than a chess match against Kasparov. It's simply not possible to program every possible action for every scenario a security droid might encounter.
For instance, if your security drone encounters a hostile species for the first time and determines a quick and easy method of killing it, it could store that information in its database. If it encounters a situation it has not seen before and requests further instructions from its CO, it could then add the results of those orders to its database for future reference.
So the robot will just sit there dumb while it waits for the CO to get back to it? A human soldier may not be trained to make decisions (which isn't true anyway, at least among Western armies), but he'll know enough to do something besides stand around and wait for instructions in a new situation.
The only limit to the AI would be that it could not think creatively (unless it had a DATA style positronic brain, which is the main stumbling block here),
What's with the positronic brainbug? Does anyone have a good reason why any of the other AIs we've seen in Trek would be incapable of doing the same job?
which is not a problem given that its role is not to create solutions to problems, but rather to implement solutions as dictated by its programming or its CO, much as the modern Army works in regard to its foot soldiers.
There's a big difference between a private soldier who carries out orders from on high and a machine that can't make decisions that haven't been programmed into its database.
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
Admiral_K
Worthless Trolling Palm-Fucker
Posts: 560
Joined: 2002-08-09 01:51pm

Post by Admiral_K »

False analogy. Chess has a finite number of possible moves for any situation and a strict set of rules that neither side can break. Even a relatively simple encounter will have more possible choices than a chess match against Kasparov. It's simply not possible to program every possible action for every scenario a security droid might encounter.
No, it's not a false analogy. It proves it is possible to create a program that is designed to react to the movements of its opponent, as well as dictate its own actions. Yes, there are a finite number of choices in chess. I have no doubt that, among the infinite number of possibilities, a combat droid might encounter a situation it would not know how to deal with. The same thing could happen to Data, or to a regular human for that matter. You CAN anticipate the bulk of situations a combat droid will encounter and how to react to them. And you can devise "base" actions for it to fall back on if it is for some reason unable to contact its superiors and encounters a previously unanticipated situation.
So the robot will just sit there dumb while it waits for the CO to get back to it? A human soldier may not be trained to make decisions (which isn't true anyway, at least among Western armies), but he'll know enough to do something besides stand around and wait for instructions in a new situation.
Strawman there. Nowhere did I say it would "sit there dumb" while waiting for orders. Merely that if it encountered a situation not anticipated by its designers, it would not take undue action. For example, if it encountered a new species it would not automatically designate it a friend or foe unless ordered to, or unless it is attacked, etc.
What's with the positronic brainbug? Does anyone have a good reason why any of the other AIs we've seen in Trek would be incapable of doing the same job?
Well, maybe they could. The point was that you don't need that sophisticated a level of intelligence to make an effective combat droid. The main reason you wouldn't want one is the "human rights violation" that liberal Starfleet types would attribute to using them.
There's a big difference between a private soldier who carries out orders from on high and a machine that can't make decisions that haven't been programmed into its database.
No, there's not. A soldier is only as good as his training. Above and beyond that, all he has are his instincts. For a droid, his programming is his training. You could easily come up with some base "survival" protocols that the droid would follow in situations it was not programmed to deal with.
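By base "survival" protocols I mean something about this crude (again, a rough Python sketch with made-up names and behaviours, not a real design):

Code:

DEFAULT_PROTOCOLS = [
    # Ordered fallback behaviours for situations the designers never anticipated.
    ("under_fire",       "return_fire_and_seek_cover"),
    ("friendlies_down",  "shield_and_evacuate_friendlies"),
    ("unknown_contact",  "hold_fire_observe_and_report"),
    ("comms_lost",       "regroup_at_last_rally_point"),
]

def fallback(droid, situation):
    """When no programmed scenario matches, walk the default protocols in
    priority order instead of standing there doing nothing."""
    for condition, action in DEFAULT_PROTOCOLS:
        if situation.has(condition):
            return droid.execute(action)
    return droid.execute("hold_position_and_await_orders")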
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Post by RedImperator »

Admiral_K wrote:
False analogy. Chess has a finite number of possible moves for any situation and a strict set of rules that neither side can break. Even a relatively simple encounter will have more possible choices than a chess match against Kasparov. It's simply not possible to program every possible action for every scenario a security droid might encounter.
No, it's not a false analogy. It proves it is possible to create a program that is designed to react to the movements of its opponent, as well as dictate its own actions. Yes, there are a finite number of choices in chess. I have no doubt that, among the infinite number of possibilities, a combat droid might encounter a situation it would not know how to deal with. The same thing could happen to Data, or to a regular human for that matter. You CAN anticipate the bulk of situations a combat droid will encounter and how to react to them. And you can devise "base" actions for it to fall back on if it is for some reason unable to contact its superiors and encounters a previously unanticipated situation.
A chess game has 32 pieces on 64 squares. Each piece has, at most, eight possible directions in which it can move, and each side can move only one piece at a time. And that's overcomplicating it--not every piece on the board has eight moves; most of the time, no piece has its full range of motion available to it; and after the first few moves, there aren't 32 pieces on the board anymore. This isn't even remotely comparable to an encounter between two hostile forces. Yes, you can build a non-sentient computer that can beat a human at a completely abstract game with a finite number of possible outcomes. That doesn't mean you can also build a non-sentient computer that will be able to perform hostage rescue missions.
So the robot will just sit there dumb while it waits for the CO to get back to it? A human soldier may not be trained to make decisions (which isn't true anyway, at least among Western armies), but he'll know enough to do something besides stand around and wait for instructions in a new situation.
Strawman there. Nowhere did I say it would "sit there dumb" while waiting for orders. Merely that if it encountered a situation not anticipated by its designers, it would not take undue action. For example, if it encountered a new species it would not automatically designate it a friend or foe unless ordered to, or unless it is attacked, etc.
Which means that in thousands of situations where a human would have been able to react, your robot will be forced to drop into some kind of default mode waiting either for a response from its CO or for the potential hostiles to move. This is the primary disadvantage a non-sentient robot would have--the inability to recognize situations for which it hasn't been programmed.

Take a simple real world example: at the store in which I work, two young males in oversized coats walk in and start walking around the store. They don't do anything overtly wrong, but they don't seem to like it when a store employee is nearby or looks their way, and they're casually strolling through the aisles without stopping to look at anything. When they do stop to look at something, it's with exaggerated attention, like they're making a show of it, and they keep glancing around while they do it.

Now, you know and I know what's going on--they're shoplifting. I didn't attend "How to recognize a shoplifter" classes and neither did you, probably, but we both know what's going on. How many different variables did we just identify and analyze, drawing on past experience and a general understanding of human nature to do so? How complicated would a nonsentient program have to be to be able to do the same thing WITHOUT harassing every young male with an oversized coat who enters the store (the only variables I gave that are simple yes/no propositions, as opposed to subtle behavioral cues that are impossible to quantify and require judgement calls)? And this is recognizing two guys out to steal $3.99 garden trowels. How much more complicated is rescuing a hostage?
Well, maybe they could. The point was that you don't need that sophisticated a level of intelligence to make an effective combat droid. The main reason you wouldn't want one is the "human rights violation" that liberal Starfleet types would attribute to using them.
You've yet to prove you could build an effective combat droid that isn't sentient, at least by your definition of effective. You COULD build an effective drone to provide extra firepower and more targets for the enemy where brute force is necessary, or a roving patrol drone armed with sensors and a phaser locked on stun, but not the kind of soldier you're talking about.
No, there's not. A soldier is only as good as his training. Above and beyond that, all he has are his instincts. For a droid, his programming is his training. You could easily come up with some base "survival" protocols that the droid would follow in situations it was not programmed to deal with.
Wrong. Every response a drone can make in any situation must be pre-programmed. You can no doubt make some very nifty algorithms to try to pick the correct one for new situations, but a situation that requires a response the droid hasn't been programmed to make will fool it every time. A human (or sentient droid), on the other hand, draws on his training to invent solutions to unknown problems. Claiming that soldiers are nothing but automatons with a finite set of programmed responses is simply untrue.
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
Admiral_K
Worthless Trolling Palm-Fucker
Posts: 560
Joined: 2002-08-09 01:51pm

Post by Admiral_K »

A chess game has 32 pieces on 64 squares. Each piece has, at most, eight possible directions in which it can move, and each side can move only one piece at a time. And that's overcomplicating it--not every piece on the board has eight moves; most of the time, no piece has its full range of motion available to it; and after the first few moves, there aren't 32 pieces on the board anymore. This isn't even remotely comparable to an encounter between two hostile forces. Yes, you can build a non-sentient computer that can beat a human at a completely abstract game with a finite number of possible outcomes. That doesn't mean you can also build a non-sentient computer that will be able to perform hostage rescue missions.
Well, I could just as easily have used computer strategy games, which have AI that can play as well as most people and yet have far more possibilities than chess. The point is the computer doesn't merely go out on its own following a specified path. It reacts to the actions of the human, and also dictates actions in order to achieve its ultimate goal. Designing an AI that is capable of rescuing hostages and engaging in combat merely builds upon this principle.
Which means that in thousands of situations where a human would have been able to react, your robot will be forced to drop into some kind of default mode waiting either for a response from its CO or for the potential hostiles to move. This is the primary disadvantage a non-sentient robot would have--the inability to recognize situations for which it hasn't been programmed.
Um, actually, no. You obviously aren't getting it. A well-programmed AI will anticipate most situations, and it would only be rare and remarkably unusual events that give it pause. I'll demonstrate with your real-world example, as seen below:
Take a simple real world example: at the store in which I work, two young males in oversized coats walk in and start walking around the store.
The Clerkbot's AI would note their dress as being somewhat unusual and regard this data as it should, taking no action as of yet.
They don't do anything overtly wrong, but they don't seem to like it when a store employee is nearby or looks their way, and they're casually strolling through the aisles without stopping to look at anything.
Clerkbot would note this activity as suspicious, raising its alert status. It will keep a closer eye and ear on these individuals.
When they do stop to look at something, it's with exaggerated attention, like they're making a show of it, and they keep glancing around while they do it.
Same as above; Clerkbot is ready to pounce should any shoplifting occur.
Now, you know and I know what's going on--they're shoplifting. I didn't attend "How to recognize a shoplifter" classes and neither did you, probably, but we both know what's going on. How many different variables did we just identify and analyze, drawing on past experience and a general understanding of human nature to do so? How complicated would a nonsentient program have to be to be able to do the same thing WITHOUT harassing every young male with an oversized coat that enters the store (the only variables I gave that are simple yes/no propositions, as opposed to subtle behavioral clues that are impossible to quantify and require judgement calls). And this is recognizing two guys out to steal $3.99 garden trowels. How much more complicated is rescuing a hostage?
Let's imagine you wanted to design an AI for a Clerkbot that would sell merchandise, watch for shoplifters, etc. First you would do some research on shoplifting episodes, incorporating visual data, crime reports, etc. Using this information, you would concoct scenarios like the one above and use them to write the AI for your Clerkbot so that it anticipates situations like this in the future. You would include data on EVERY possible scenario you could think of. You would write code that tells your robot how to interpret information about customers from dress, demeanor, etc., not only to watch for shoplifters, but to better anticipate needs. Then, as time goes on, you would add more data and scenarios in updated versions of the programming.
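The shoplifter-watching part could start out as nothing fancier than a weighted checklist, something like this (illustrative Python only; the cues, weights, and threshold are all made up for the example):

Code:

# Crude suspicion score built from pre-programmed cues, each chosen by the
# programmers after studying real shoplifting reports.
SUSPICION_CUES = {
    "oversized_coat":            1,
    "avoids_staff_eye_contact":  2,
    "wanders_without_browsing":  2,
    "exaggerated_item_interest": 3,
    "elevated_heart_rate":       2,   # a sensor reading no human clerk would have
}
ALERT_THRESHOLD = 6   # below this, Clerkbot just keeps watching

def assess(observed_cues):
    score = sum(weight for cue, weight in SUSPICION_CUES.items()
                if cue in observed_cues)
    return "monitor_closely_and_notify_staff" if score >= ALERT_THRESHOLD else "no_action"

# e.g. assess({"oversized_coat", "wanders_without_browsing"}) -> "no_action"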
You've yet to prove you could build an effective combat droid that isn't sentient, at least by your definition of effective. You COULD build an effective drone to provide extra firepower and more targets for the enemy where brute force is necessary, or a roving patrol drone armed with sensors and a phaser locked on stun, but not the kind of soldier you're talking about.
I don't see how I could "prove" you could make a non-sentient droid any more than you could "prove" that you can't. You contend that someone couldn't possibly write an AI that doesn't think creatively but would still work in combat situations, whereas I say that if they do their job, they can.

OK, let's look at it this way. Star Wars combat droids have been described as being unable to "think" and merely fall back on programming, or orders from superiors. Would you describe them as "ineffective"? They merely have an AI built on the concepts I've stated here. Hell, we are already working on a basic battlefield bot:

http://www.cnn.com/2003/TECH/ptech/12/0 ... index.html

Imagine what sorts of things they will be doing 50 years from now, let alone a few hundred.
Wrong. Every response a drone can make in any situation must be pre-programmed. You can no doubt make some very nifty algorithms to try to pick the correct one for new situations, but a situation that requires a response the droid hasn't been programmed to make will fool it every time. A human (or sentient droid), on the other hand, draws on his training to invent solutions to unknown problems. Claiming that soldiers are nothing but automatons with a finite set of programmed responses is simply untrue.
OK, give me an example of a situation that would come up that would be so remarkably unpredictable that a droid would not be able to act... I'm not saying they could adapt and succeed in EVERY possible situation, but neither would a thinking droid, or a human being for that matter.

The main thing that limits an AI's ability to react is the processing speed needed to cycle through the millions of possibilities and scenarios. I believe that the demonstrated ability of Trek computers would allow them to do this.

Would a droid that could think be MORE effective than one that can't? Of course it would. But it doesn't need to be able to think in order to be effective.
Raoul Duke, Jr.
BANNED
Posts: 3791
Joined: 2002-09-25 06:59pm
Location: Suckling At The Teat Of Missmanners

Post by Raoul Duke, Jr. »

Uraniun235 wrote:Uh, Data tried making another android, but it ultimately failed.
The reason Data failed, basically, is that he tried to make a copy of himself. I think the idea is to have Data consult on an android which is deliberately not engineered in such a way that it can achieve sentience. I also happen to agree that the best automated security unit in science fiction would be the Terminator (Series 800, combat chassis only).

It's smart enough to perform its assigned tasks as well as support functions, but wouldn't be a likely candidate for sentience, plus it would never be considered human by any touchy-feely types. lol
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

False analogy. Chess has a finite number of possible moves for any situation and a strict set of rules that neither side can break. Even a relatively simple encounter will have more possible choices than a chess match against Kasparov. It's simply not possible to program every possible action for every scenario a security droid might encounter.
A security droid does not need to be as intelligent as Data or a human. All it needs to do is engage the enemy and stay alive in the best possible way. Other, more complex situations should be handled by humans.

Most combat situations that Federation personnel encounter are simple and can be effectively dealt with by battledroids. For example, if the ship is being boarded, the battledroids can be ordered to attack anyone who is a member of the boarding party.

Accomplishing this is even simpler. All the battledroid needs is rudimentary software to move and shoot. Neural-network-based software can do this easily.

A neural network does not need to know all possible scenarios and their possible outcomes, like the chess game you mentioned. This would make them less intelligent than an all-wise computer that knows everything, but for most purposes this is enough.
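As a rough illustration of what I mean by neural-network-based move-and-shoot software, here is a toy Python/NumPy sketch; the sensor inputs, layer sizes, and untrained random weights are all placeholders:

Code:

import numpy as np

# Tiny feed-forward policy network: sensor readings in, a combat action out.
# Real weights would come from training on recorded boarding actions; these
# random ones are purely for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)   # 16 sensor inputs -> 32 hidden units
W2, b2 = rng.normal(size=(32, 4)),  np.zeros(4)    # 4 possible outputs

ACTIONS = ["advance", "strafe", "take_cover", "fire"]

def policy(sensor_readings):
    """Map a 16-element sensor vector to one of the four actions."""
    h = np.tanh(sensor_readings @ W1 + b1)
    logits = h @ W2 + b2
    return ACTIONS[int(np.argmax(logits))]

# e.g. policy(np.zeros(16)) returns whichever action the (untrained) net favours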

In a hostage situation, battledroids would resort to killing every hostile they encounter. Of course, that could result in many hostages being killed, so such situations should be handled by humans. Battledroids would still be there, but they would be led by humans to prevent them from making mistakes.
I have to tell you something everything I wrote above is a lie.
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

evilcat4000 wrote:The Federation does have the technology to build battledroids, but they don't do it because of ethical concerns.

It is true that androids like Data are beyond Federation science, but on a lesser scale the Federation has been quite successful. The EMH and the other holograms found in the holodeck are good examples. If they can create these holograms, then it would not be difficult to build battledroids. The software used in holograms could be modified to become a combat AI.
They can program them, yes, but prove they can build an articulate humanoid robot that can move fluidly and with enough speed to be good for combat. The only examples that spring to mind are Data and Mudd's droids, and neither is reproducible.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

They can program them, yes, but prove they can build an articulate humanoid robot that can move fluidly and with enough speed to be good for combat. The only examples that spring to mind are Data and Mudd's droids, and neither is reproducible.
Building the mechanical components of a robot, such as actuators, frames, structure, etc., would not be very difficult for a spacefaring civilization, since we can already accomplish these things. Of course, the actual combat performance of a Federation battledroid would be far less than that of its Star Wars counterparts, but it would be sufficient to deal with the typical threats the Federation regularly encounters.
I have to tell you something everything I wrote above is a lie.
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

evilcat4000 wrote:
They can program them, yes, but prove they can build an articulate humanoid robot that can move fluidly and with enough speed to be good for combat. The only examples that spring to mind are Data and Mudd's droids, and neither is reproducible.
Building the mechanical components of a robot, such as actuators, frames, structure, etc., would not be very difficult for a spacefaring civilization, since we can already accomplish these things. Of course, the actual combat performance of a Federation battledroid would be far less than that of its Star Wars counterparts, but it would be sufficient to deal with the typical threats the Federation regularly encounters.
You claim we can already build mechanical components with enough precision, and of small enough size, to build a battle droid? I call bullshit.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

You claim we can already build mechanical components with enough precision, and of small enough size, to build a battle droid? I call bullshit.
That depends on what kind of battledroid you are talking about. Remember, a battledroid does not need to be a humanoid robot as seen in sci-fi. Technically, a small wheeled vehicle with a machine gun is also a battledroid. We can build that easily.
I have to tell you something everything I wrote above is a lie.
MrAnderson
Padawan Learner
Posts: 392
Joined: 2003-06-06 10:48am

Post by MrAnderson »

RedImperator wrote:
I thought he was sentient from the beginning. He didn't have much of a personality at the beginning, but within a few episodes, he was complaining about being left on when crew members left sickbay and being bored for hours on end. Boredom seems to indicate sentience to me.

That in itself does not suggest sentience. It may be a normal feature programmed into the EMH to remind people to turn him off when they are done using him.
That is the sound of inevitability.
Admiral_K
Worthless Trolling Palm-Fucker
Posts: 560
Joined: 2002-08-09 01:51pm

Post by Admiral_K »

Ender wrote:
evilcat4000 wrote:
They can program them, yes, but prove they can build an articulate humanoid robot that can move fluidly and with enough speed to be good for combat. The only examples that spring to mind are Data and Mudd's droids, and neither is reproducible.
The non-reproducible part of Data is the positronic brain, not the actual body structure itself.

They should be able to use essentially the same model as Data, only with the positronic brain replaced by standard computer components. That results in a less sophisticated android (one you wouldn't expect to tell jokes or play music, etc.), but one that is more than adequate to be programmed for combat.
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Post by RedImperator »

MrAnderson wrote:
RedImperator wrote:
I thought he was sentient from the beginning. He didn't have much of a personality at the beginning, but within a few episodes, he was complaining about being left on when crew members left sickbay and being bored for hours on end. Boredom seems to indicate sentience to me.

That in itself does not suggest sentience. It may be a normal feature programmed into the EMH to remind people to turn him off when they are done using him.
That by itself doesn't suggest it, but since he was unarguably sentient later in the show's run, complaining about boredom seems a good indicator that he was already sentient at the beginning.
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Post by RedImperator »

evilcat4000 wrote:
False analogy. Chess has a finite number of possible moves for any situation and a strict set of rules that neither side can break. Even a relatively simple encounter will have more possible choices than a chess match against Kasparov. It's simply not possible to program every possible action for every scenario a security droid might encounter.
A security droid does not need to be as intelligent as Data or a human. All it needs to do is engage the enemy and stay alive in the best possible way. Other, more complex situations should be handled by humans.

Most combat situations that Federation personnel encounter are simple and can be effectively dealt with by battledroids. For example, if the ship is being boarded, the battledroids can be ordered to attack anyone who is a member of the boarding party.

Accomplishing this is even simpler. All the battledroid needs is rudimentary software to move and shoot. Neural-network-based software can do this easily.

A neural network does not need to know all possible scenarios and their possible outcomes, like the chess game you mentioned. This would make them less intelligent than an all-wise computer that knows everything, but for most purposes this is enough.

In a hostage situation, battledroids would resort to killing every hostile they encounter. Of course, that could result in many hostages being killed, so such situations should be handled by humans. Battledroids would still be there, but they would be led by humans to prevent them from making mistakes.
All of this I've been saying repeatedly. The debate with Admiral K is whether or not you could build a non-sentient droid that COULD handle hostage rescue (which seems as good a benchmark as any) in anything other than a fire support role.
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Post by RedImperator »

Admiral_K wrote:Well, I could just as easily have used computer strategy games, which have AI that can play as well as most people and yet have far more possibilities than chess. The point is the computer doesn't merely go out on its own following a specified path. It reacts to the actions of the human, and also dictates actions in order to achieve its ultimate goal. Designing an AI that is capable of rescuing hostages and engaging in combat merely builds upon this principle.
Strategy games are still abstract simulations with artificial rules, and most AIs in complex strategy games like a decent RTS have to cheat to win (this doesn't apply to simple games like Risk or Monopoly). That's why multiplayer is so popular--no AI is going to challenge a good human player for very long unless it cheats so much it's just not fun to play.
Um, actually, no. You obviously aren't getting it. A well-programmed AI will anticipate most situations, and it would only be rare and remarkably unusual events that give it pause. I'll demonstrate with your real-world example, as seen below:
How does a machine with no imagination, no ability to recognize anything it hasn't been programmed to understand, no instincts, and ultimately no ability to learn from its experiences past simple "if X then Y" statements "anticipate" most situations?
Take a simple real world example: at the store in which I work, two young males in oversized coats walk in and start walking around the store.
The Clerkbot's AI would note their dress as being somewhat unusual and regard this data as it should, taking no action as of yet.
They don't do anything overtly wrong, but they don't seem to like it when a store employee is nearby or looks their way, and they're casually strolling through the aisles without stopping to look at anything.
Clerkbot would note this activity as suspicious, raising its alert status. It will keep a closer eye and ear on these individuals.
How does Clerkbot know this is suspicious? Can you program Clerkbot to know the difference between a customer who's browsing, searching for a specific item, or just walking around?
When they do stop to look at something, it's with exaggerated attention, like they're making a show of it, and they keep glancing around while they do it.
Same as above; Clerkbot is ready to pounce should any shoplifting occur.
What parameters does Clerkbot use to determine they're giving the item exaggerated attention?
Let's imagine you wanted to design an AI for a Clerkbot that would sell merchandise, watch for shoplifters, etc. First you would do some research on shoplifting episodes, incorporating visual data, crime reports, etc. Using this information, you would concoct scenarios like the one above and use them to write the AI for your Clerkbot so that it anticipates situations like this in the future. You would include data on EVERY possible scenario you could think of. You would write code that tells your robot how to interpret information about customers from dress, demeanor, etc., not only to watch for shoplifters, but to better anticipate needs. Then, as time goes on, you would add more data and scenarios in updated versions of the programming.
You still haven't overcome your basic problems. 1) It's very, very hard for a computer to do a number of things humans do by instinct (pattern recognition comes to mind), several of which are critical in combat, and 2) with no imagination or instinct, the droid will be utterly stumped in unusual situations. And this is all under the generous assumption that your programmers are so good that they don't forget any common situations or leave a bug in the code. That's not how it works in the real world. People might accept a Clerkbot which occasionally screws up because of a bug in the code, but would you really want to fight alongside a combat robot with an enormously complex set of instructions and no guarantee that someone didn't transpose two digits somewhere and turn "shoot hostiles, protect friendlies" into "shoot friendlies, protect hostiles"? The same threat exists with a simple drone too, but a simple drone is going to have a simple set of instructions with fewer chances for error, as opposed to the byzantine code you'd need to make your robot work.
Ok lets look at it this way. Star Wars combat droids have been described as being unable to "think" and merely fall back on programming, or orders from superiors. Would you describe them as "ineffective"? They merely are an AI that is advanced on the concepts I've stated here. Hell, we are already working on a basic battlefield bot:
Those same droids have never been seen on-screen doing anything more complicated than "walk forward, shoot all hostiles", or some variation thereof. They won at Naboo against virtually no opposition and got owned by the clone army, who weren't exactly fighting like the ghost of Alexander the Great was leading them. The combat droids in Star Wars that WERE capable of independent action and complex tasks, like IG-88 and Guri, were fully sentient.
http://www.cnn.com/2003/TECH/ptech/12/0 ... index.html

Imagine what sorts of things they will be doing 50 years from now, let alone a few hundred.
Following soldiers around transporting their gear is a far cry from being able to independently mount a hostage rescue mission, and claiming all that's necessary to progress from here to there is time is a no-limits fallacy.
Ok give me an example of a situation that would come up that would be so remarkably unpredictable that a droid would not be able to act... I'm not saying they could adapt and succeed in EVERY possible situation, but neither would a thinking droid or human being for that matter.
In other words, I'm supposed to think up something that a 24th century computer programmer wouldn't? At any rate, I'm not thinking so much of entire scenarios so unusual that the AI couldn't respond, but of movements within the scenario that could fool it. I'll give you a 21st century example:

In Madden NFL 2003, I can routinely run for 25 yards with my quarterback, and take him out of bounds untouched. This isn't possible in real life, so why is it possible in what's normally a pretty good simulation of it?

The answer: I wrote a play nobody anticipated. I don't know how familiar you are with American football, but basically what I do is put the maximum number of receivers in and send them all running down the field as fast as they can. Then, while the computer is chasing them around, I run the QB. Works every time.

Most of the time, the AI in Madden is good at adjusting to how you play. If you run the ball from a certain formation every time, it will start positioning defenders to stop the run whenever it sees that formation. If every time you roll out the quarterback you throw to the tight end, pretty soon the machine will send another defender to cover the tight end when you roll out. But because nobody at EA thought that someone would send five guys deep and then run the quarterback, the computer will never send a man after the quarterback no matter how many times I run that play. And before you say that I could do the same to a human, the play isn't particularly tricky and there's an easy way to stop it (blitz the weakside cornerback rather than have him cover the #1 receiver).

It's likely, of course, that in the future AIs will be able to see through such trickery (in fact, I'll bet it won't work in Madden 2004, because they've adjusted the AI to watch out for running quarterbacks). But the basic principle remains: in one afternoon, I concocted a Frankenstein play that outwitted 10 years of football experts' and computer programmers' work. And I'm hardly a football genius or someone whose thinking is so twisted no reasonable person should be expected to anticipate my next move. What happens when your droids are sent against professional soldiers, possibly alien ones who think radically differently from humans, and something happens that they haven't been programmed to anticipate or even recognize?
The main thing that limits an AI's ability to react is processing speed, to cycle through the millions of possibilities and scenarios. I believe that the demonstrated ability of Trek Computers would allow them to do this.
The demonstrated ability of Trek computers includes a 5 hour text search by the E-D's computer which ended when Riker remembered what they were looking for first. But it also includes sentient AIs run from devices the size of hockey pucks, so we'll ignore that.
Would a droid who could think be MORE effective then one who can't? Ofcourse it would. But it doesn't need to be able to think in order to be effective.
Again, only if you limit your definition of effective to things which a drone could reasonably be expected to accomplish. A drone would make for very effective fire support. It would not make a good special-ops agent, which is what you're calling for.
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
Admiral_K
Worthless Trolling Palm-Fucker
Posts: 560
Joined: 2002-08-09 01:51pm

Post by Admiral_K »

Strategy games are still abstract simulations with artificial rules, and most AIs in complex strategy games like a decent RTS have to cheat to win (this doesn't apply to simple games like Risk or Monopoly). That's why multiplayer is so popular--no AI is going to challenge a good human player for very long unless it cheats so much it's just not fun to play.
Look, the point is you CAN design an AI that will act and react based on certain situations. My examples are of mere GAMES, whereas a 24th century combat droid would be programmed by specialists whose focus is creating a droid that can think and react in combat. They would use the same basic principles to design their AI as the game designers, but to a much more refined and detailed extent.
How does a machine with no imagination, no ability to recognize anything it hasn't been programmed to understand, no instincts, and ultimately no ability to learn from its experiences past simple "if X then Y" statements "anticipate" most situations?
It ANTICIPATES by consulting its database and past experiences, the same way we humans anticipate most things. For instance, if you touch a door handle and it is hot, you would anticipate that there may be a fire on the other side. The AI on the machine would do the exact same thing.


What parameters does Clerkbot use to determine they're giving the item exaggerated attention?
It uses the parameters defined by the programmers. Its sensors would be much more sensitive to information than their human counterparts; the increased heart rate of one of the potential shoplifters, for example. As the AI programmer, you would tell the robot how to interpret the data it brings in.
You still haven't overcome your basic problems. 1) It's very, very hard for a computer to do a number of things humans do by instinct (pattern recognition comes to mind), several of which are critical in combat, and 2) with no imagination or instinct, the droid will be utterly stumped in unusual situations. And this is all under the generous assumption that your programmers are so good that they don't forget any common situations or leave a bug in the code. That's not how it works in the real world. People might accept a Clerkbot which occasionally screws up because of a bug in the code, but would you really want to fight alongside a combat robot with an enormously complex set of instructions and no guarantee that someone didn't transpose two digits somewhere and turn "shoot hostiles, protect friendlies" into "shoot friendlies, protect hostiles"?
You can't assume the programmers will "screw up" as a means of saying it's not possible to create an effective combat droid. I could just as easily use that methodology to say they can't create effective ships: "What if they misplaced two digits for 'dump warp core' and 'initiate self destruct'?" You would have to test the prototypes extensively before throwing them into combat.

The same threat exists with a simple drone too, but a simple drone is going to have a simple set of instructions with fewer chances for error, as opposed to the byzantine code you'd need to make your robot work.
Trek computers already do very complex things and have very complex code. How much MORE complex do you think the programming involved in Data is compared to what I'm proposing?
Those same droids have never been seen on-screen doing anything more complicated than "walk forward, shoot all hostiles", or some variation thereof. They won at Naboo against virtually no opposition and got owned by the clone army, who weren't exactly fighting like the ghost of Alexander the Great was leading them. The combat droids in Star Wars that WERE capable of independent action and complex tasks, like IG-88 and Guri, were fully sentient.
They've also been used on guard duty, prisoner detail, etc. We haven't seen them in any other roles because they haven't been needed in any other roles.
Following soldiers around transporting their gear is a far cry from being able to independently mount a hostage rescue mission, and claiming all that's necessary to progress from here to there is time is a no-limits fallacy.
I merely posted that here to give you an idea of what we are already working on in the way of battlefield robots. My evidence for what we will be capable of in the future is based on what we've seen of Star Trek.
In other words, I'm supposed to think up something that a 24th century computer programmer wouldn't? At any rate, I'm not thinking so much of entire scenarios so unusual that the AI couldn't respond, but of movements within the scenario that could fool it. I'll give you a 21st century example:
I'd have loved to get into your Madden example, as I'm a big Madden fan, but it would be beside the point. Your example is of a video game, where more emphasis is placed on getting it done and on the market so the company can make more money, as opposed to a combat droid, which would be thoroughly developed and tested. If they had properly tested it, the computer would have been designed to recognize such a play, and it should have instituted a QB spy play to cover everyone.

The scenario you came up with was not an unusual one for football. That is something that SHOULD have been in the programming for the Madden AI, and it is something that WAS corrected, by and large, in 2004.

The point is, I wanted you to give me an example of a situation that a properly programmed and tested combat drone would be unable to overcome. Personally, I think that any such situation you could concoct would have to be so extraordinarily abnormal that even a thinking person would probably be unable to overcome it.
Again, only if you limit your definition of effective to things which a drone could reasonably be expected to accomplish. A drone would make for very effective fire support. It would not make a good special-ops agent, which is what you're calling for.
I'm saying you could program a drone that could be effective in certain key situations (reconnaissance, hostage rescue, etc.), based on my observations of Trek technology.

Honestly, I don't see how either of us can "prove" our point at this juncture. It is simply a matter of opinion. I say it is possible, you say it isn't. There's not enough evidence either way.
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Post by RedImperator »

Admiral_K wrote:Honestly, I don't see how either of us can "prove" our point at this juncture. It is simply a matter of opinion. I say it is possible, you say it isn't. There's not enough evidence either way.
Meh. I'm as tired of this argument as you are. I agree to disagree if you do.

By the way, the play works a lot better if Michael Vick is running it. :wink: I'll bet you could just sub the fastest player on your team in as QB for that play and the computer wouldn't catch on. That would be too cheap, though.
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
Post Reply