Why so few robot armys?

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
keen320
Youngling
Posts: 134
Joined: 2010-09-06 08:35pm

Re: Why so few robot armys?

Post by keen320 »

Of course, once a war becomes really important/desperate to both sides (WW1 and WW2 spring to mind), both sides frequently toss at least some of the rules of war out the window, at least after their opponents do. We follow them now because we can afford to, but in WW2 everyone was bombing civilians left and right; I'm pretty sure that's against the rules of war. IIRC, another piece of the prewar rules of war said submarines should surface and warn their victims before firing. Guess how long that lasted. And on the WW2 Russian front, the rules of war were completely nonexistent. In really large (sometimes called total) wars, many rules of war are ignored.

Of course, if cost is an issue, large wars like WW2 are exactly the kind you couldn't afford to use robots in.
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: Why so few robot armys?

Post by Junghalli »

Fluffy wrote:The reason we don't see more robotic armies in fiction is that they make bad characters; they usually have to be "humanized" to be engaging for the audience. Look at the Matrix's Agent Smith, the human disguises of the Terminator series, C-3PO, etc. Totally logical, emotionless robots are boring.
Yeah, this. Robot armies are underrepresented in science fiction because it's easier to write drama with human armies. Any in-universe explanations are probably rationalizations the author thought up when he realized his setting might benefit from some kind of excuse for why robots are used less than they probably would be in a realistic society with zippy starships.
User avatar
lordofchange13
Jedi Knight
Posts: 838
Joined: 2010-08-01 07:54pm
Location: Kandrakar, the center of the universe and the heart of infinity

Re: Why so few robot armys?

Post by lordofchange13 »

Norade wrote:
lordofchange13 wrote:But how are we to know if the combined firepower even comes close to that used by a Forerunner ship?
Covenant ships can kill other Covenant ships, and these drones killed a Covenant ship, albeit needing hundreds of beams. We also know the pathetic outputs of MAC rounds (never actually seen firing at any fraction of c) and of autocannons that can be dodged by jet fighters. We can also take a lower-bound calculation from when Flood pods puncture a Covenant vessel.
That doesn't answer my question. All the instances you mentioned have nothing to do with what a Forerunner ship can do; we have never seen anything to even make a simple assumption from. The Sentinels are robots only a few meters in diameter, and they use their weapons mostly to fight the Flood. At best they can be compared to fighters or light bombers. Also, there were only about 50 of them that destroyed the Covenant ship. And to clarify the MAC rounds: they only get shot at high fractions of c when fired from an Orbital Defense Platform.
"There is no such thing as coincidence in this world - there is only inevitability"
"I consider the Laws of Thermodynamics a loose guideline at best!"
"Set Flamethrowers to... light electrocution"
It's not enough to bash in heads, you also have to bash in minds.
Tired is the Roman wielding the Aquila.
User avatar
Norade
Jedi Council Member
Posts: 2424
Joined: 2005-09-23 11:33pm
Location: Kelowna, BC, Canada
Contact:

Re: Why so few robot armys?

Post by Norade »

lordofchange13 wrote:
Norade wrote:
lordofchange13 wrote:But how are we to know if the combined firepower even comes close to that used by a Forerunner ship?
Covenant ships can kill other Covenant ships, and these drones killed a Covenant ship, albeit needing hundreds of beams. We also know the pathetic outputs of MAC rounds (never actually seen firing at any fraction of c) and of autocannons that can be dodged by jet fighters. We can also take a lower-bound calculation from when Flood pods puncture a Covenant vessel.
That doesn't answer my question. All the instances you mentioned have nothing to do with what a Forerunner ship can do; we have never seen anything to even make a simple assumption from. The Sentinels are robots only a few meters in diameter, and they use their weapons mostly to fight the Flood. At best they can be compared to fighters or light bombers. Also, there were only about 50 of them that destroyed the Covenant ship. And to clarify the MAC rounds: they only get shot at high fractions of c when fired from an Orbital Defense Platform.
The Sentinels were obviously designed to fight in space, and because the Flood have been known to steal Forerunner vessels in the past, they can, presumably, kill a Forerunner vessel, though how many might be required is unknown. We see one fire in Halo 2 and it's not very impressive, nor are fractions of c needed when we see megaton-range damage kill Covenant and UNSC ships alike.
School requires more work than I remember it taking...
User avatar
Batman
Emperor's Hand
Posts: 16337
Joined: 2002-07-09 04:51am
Location: Seriously thinking about moving to Marvel because so much of the DCEU stinks

Re: Why so few robot armys?

Post by Batman »

Fluffy wrote:
Hardwire in some variation of Asimov's Four Laws or something. Problem solved. Unlike human beings, robots absolutely CAN be rendered utterly incapable of rising up against their makers.
True; however, there is always the risk of hacking, programming corruption, someone building without the rules, etc.
That'd be the HARDWIRE part. DON'T make it part of the software. That can be altered and patched and corrupted and is bound to fuck up all by its lonesome as it is. Make it part of the hardware. You're still going to have to deal with morons who didn't and now need help with their revolting robots, but yours can be made to toe the party line reasonably easily.
'Next time I let Superman take charge, just hit me. Real hard.'
'You're a princess from a society of immortal warriors. I'm a rich kid with issues. Lots of issues.'
'No. No dating for the Batman. It might cut into your brooding time.'
'Tactically we have multiple objectives. So we need to split into teams.'-'Dibs on the Amazon!'
'Hey, we both have a Martian's phone number on our speed dial. I think I deserve the benefit of the doubt.'
'You know, for a guy with like 50 different kinds of vision, you sure are blind.'
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Why so few robot armys?

Post by Simon_Jester »

How exactly do you hardwire Asimov's Laws (or any similar directives) into a computer?

Obeying the laws will, of necessity, require the robot to make judgment calls. The robot will have to be able to observe its surroundings and deduce what kinds of situations trigger the directive. It can't be programmed to prevent humans from coming to harm, for instance, unless it knows what a human is and what one looks like when in danger.

At which point this becomes a software problem, unless you propose to design and build an analog computer to figure out when the rule comes into play and hardwire that into the robot.

You can hardwire an abort switch or a self-destruct mechanism into a robot, but you can't hardwire judgment; it's too complicated a problem.
This space dedicated to Vasily Arkhipov
User avatar
Batman
Emperor's Hand
Posts: 16337
Joined: 2002-07-09 04:51am
Location: Seriously thinking about moving to Marvel because so much of the DCEU stinks

Re: Why so few robot armys?

Post by Batman »

So maybe the Four Laws weren't the ideal example; I essentially picked them because they're a well-established example of rules that CANNOT be worked around with software patches.
And I don't see how you CAN'T hardwire
a) never harm anybody wearing uniform X,
b) never harm anybody not wearing a uniform, period, and
c) always take out anybody wearing uniform Y unless explicitly told to do otherwise
as an admittedly very rough layout of what you could hardwire into your war robots.
'Next time I let Superman take charge, just hit me. Real hard.'
'You're a princess from a society of immortal warriors. I'm a rich kid with issues. Lots of issues.'
'No. No dating for the Batman. It might cut into your brooding time.'
'Tactically we have multiple objectives. So we need to split into teams.'-'Dibs on the Amazon!'
'Hey, we both have a Martian's phone number on our speed dial. I think I deserve the benefit of the doubt.'
'You know, for a guy with like 50 different kinds of vision, you sure are blind.'
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Why so few robot armys?

Post by Simon_Jester »

Batman wrote:So maybe the Four Laws weren't the ideal example; I essentially picked them because they're a well-established example of rules that CANNOT be worked around with software patches.
And I don't see how you CAN'T hardwire
a) never harm anybody wearing uniform X,
b) never harm anybody not wearing a uniform, period, and
c) always take out anybody wearing uniform Y unless explicitly told to do otherwise
as an admittedly very rough layout of what you could hardwire into your war robots.
Define "uniform." And "anybody." How does the robot perceive that a "somebody" is in the area? How does it examine that "somebody's" uniform? How does it determine which uniform is which?

None of these are impossible design challenges, of course. They can be solved, though they're not something one can solve off the top of one's head (if you think you have such a solution, you oversimplified).

The problem is that any solution to this design challenge will be a software solution. You need a programmable computer, not just a bunch of circuits welded into the robot's brain, to do this.

The programmable computer will need full access to the robot's sensors (so that it can see everything the robot sees). It will need to be tapped into the robot's thought processes in real time (so that it can say "Wait! Don't shoot!" before the robot's gunnery computer says "Shoot!").

The programmable computer that overrides the war robot when it's about to do something wrong will itself have to be programmed, and will be vulnerable to all the same problems as any other programmable computer: it has software "that can be altered and patched and corrupted and is bound to fuck up all by its lonesome as it is."

To make matters worse, if the computer is "hardwired" into the machine as a separate module (say, one that has lockout access to the robot's weapon systems), you create an extra failure point for the machine. If the robot is well designed, damage to the "don't rebel!" computer causes the robot to fail safe, and shuts it down even if everything else is working fine. If the robot is badly designed, damage to the "don't rebel!" computer causes the robot to fail deadly, and it rebels even if everything else is working fine. Or at least rebels to the same extent that it would if the "don't rebel" computer weren't there.
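That fail-safe/fail-deadly distinction can be sketched in a few lines. This is a toy illustration in Python, not any real robot control stack; every class and function name here is invented:

```python
class GuardOffline(Exception):
    """Raised when the 'don't rebel' module is damaged or unreachable."""

class Robot:
    def __init__(self):
        self.fired = False
        self.running = True

    def fire(self):
        self.fired = True

    def shut_down(self):
        self.running = False

def fail_safe_fire(robot, guard_approves):
    """Well designed: any doubt about the guard shuts the robot down."""
    try:
        if guard_approves():          # the guard must actively say yes
            robot.fire()
    except GuardOffline:
        robot.shut_down()             # damaged guard -> stop everything

def fail_deadly_fire(robot, guard_approves):
    """Badly designed: a dead guard silently stops vetoing."""
    try:
        if not guard_approves():
            return                    # veto respected while the guard works...
    except GuardOffline:
        pass                          # ...but damage removes the veto entirely
    robot.fire()
```

With a damaged guard module, the fail-safe robot ends up shut down, while the fail-deadly one fires anyway.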
____________

Either way, you'd have been better off integrating the "don't rebel" code into the robot's main computer, and any backup computers it uses in the event that the main computer is damaged.

Because, I repeat, "don't rebel" is not a hardware feature. There is no mechanical device you can add to a computer that makes it never decide to rebel against its masters. The task of deciding what would qualify as "rebellion" and then deciding to avoid it is too complicated; it can only be performed by software, and the software must be flexible enough for you to reprogram it to respond to changing conditions.

EDIT: Think of it this way. "Don't rebel" is a special case of "Don't fail." If we could build mechanical add-ons that could be hardwired into computers that would stop them from 'failing' for arbitrary, complicated definitions of 'fail,' there would be no such thing as buggy computers. We'd just hardwire in a "don't catch a virus" unit and a "don't lose my files" unit and a "don't crash because of stupid stuff" unit and the computer would work fine.

The whole problem with computer use is that while the individual operations we can command a computer to perform are simple, the tasks we want it to accomplish are complicated, and the ways it can fail at those tasks are even more complicated. The more complexity is involved in something, the more sophisticated the software needed to handle it.
This space dedicated to Vasily Arkhipov
User avatar
Batman
Emperor's Hand
Posts: 16337
Joined: 2002-07-09 04:51am
Location: Seriously thinking about moving to Marvel because so much of the DCEU stinks

Re: Why so few robot armys?

Post by Batman »

Simon_Jester wrote:
Batman wrote:So maybe the Four Laws weren't the ideal example; I essentially picked them because they're a well-established example of rules that CANNOT be worked around with software patches.
And I don't see how you CAN'T hardwire
a) never harm anybody wearing uniform X,
b) never harm anybody not wearing a uniform, period, and
c) always take out anybody wearing uniform Y unless explicitly told to do otherwise
as an admittedly very rough layout of what you could hardwire into your war robots.
Define "uniform." And "anybody." How does the robot perceive that a "somebody" is in the area? How does it examine that "somebody's" uniform? How does it determine which uniform is which?
None of these are impossible design challenges, of course. They can be solved, though they're not something one can solve off the top of one's head (if you think you have such a solution, you oversimplified).
The problem is that any solution to this design challenge will be a software solution. You need a programmable computer, not just a bunch of circuits welded into the robot's brain, to do this.
Why?
The programmable computer will need full access to the robot's sensors (so that it can see everything the robot sees). It will need to be tapped into the robot's thought processes in real time (so that it can say "Wait! Don't shoot!" before the robot's gunnery computer says "Shoot!").
Which I very much suspect would be doable faster if it's already hardwired...
The programmable computer that overrides the war robot when it's about to do something wrong will itself have to be programmed, and will be vulnerable to all the same problems as any other programmable computer: it has software "that can be altered and patched and corrupted and is bound to fuck up all by its lonesome as it is."
Assuming you need it in the first place: yes, absolutely. I fail to see why any of this CAN'T be hardwired. You're presupposing a modern-day computer setup where pretty much nothing is hardwired, on purpose (and due to design limitations), to keep the things as flexible as possible. The same is not necessarily true for SciFi robots (again, I give you Asimov's robots, where getting rid of the three original laws required physical redesign of the brain).
To make matters worse, if the computer is "hardwired" into the machine as a separate module (say, one that has lockout access to the robot's weapon systems), you create an extra failure point for the machine.
How so? Why would I need to make it a separate module?
If the robot is well designed, damage to the "don't rebel!" computer causes the robot to fail safe, and shuts it down even if everything else is working fine.
Err, why would the robot rebel to begin with? I think you lost me there.
If the robot is badly designed, damage to the "don't rebel!" computer causes the robot to fail deadly, and it rebels even if everything else is working fine. Or at least rebels to the same extent that it would if the "don't rebel" computer weren't there.
Why would it NEED one to begin with? I think I need some further information at this point.
Either way, you'd have been better off integrating the "don't rebel" code into the robot's main computer, and any backup computers it uses in the event that the main computer is damaged.
Because, I repeat, "don't rebel" is not a hardware feature. There is no mechanical device you can add to a computer that makes it never decide to rebel against its masters.
Yes there is. It's called 'circuitry' for contemporary computers.
'Next time I let Superman take charge, just hit me. Real hard.'
'You're a princess from a society of immortal warriors. I'm a rich kid with issues. Lots of issues.'
'No. No dating for the Batman. It might cut into your brooding time.'
'Tactically we have multiple objectives. So we need to split into teams.'-'Dibs on the Amazon!'
'Hey, we both have a Martian's phone number on our speed dial. I think I deserve the benefit of the doubt.'
'You know, for a guy with like 50 different kinds of vision, you sure are blind.'
User avatar
sirocco
Padawan Learner
Posts: 191
Joined: 2009-11-08 09:32am
Location: I don't know!

Re: Why so few robot armys?

Post by sirocco »

An electrical network, i.e. circuitry, is an interconnection of electrical elements such as resistors, inductors, capacitors, transmission lines, voltage sources, current sources and switches.

How do you define "don't rebel" with that, without any kind of software?
Future is a common dream. Past is a shared lie.
There are only the 3 Presents: the Present of Today, the Present of Tomorrow and the Present of Yesterday.
User avatar
Imperial528
Jedi Council Member
Posts: 1798
Joined: 2010-05-03 06:19pm
Location: New England

Re: Why so few robot armys?

Post by Imperial528 »

This is just a guess, but you would probably write it into memory that is an integral part of the system and cannot be overwritten, or, for more literal hard-wiring, have a section of the motherboard contain the program restrictions in a bunch of flip-flops whose outputs can't be changed by the robot itself, only read.

I'm not very knowledgeable about software, though, so those are the only two things I can think of.
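To make the flip-flop idea concrete, here is a toy Python sketch in which an immutable tuple stands in for the read-only hardware. All names are invented for illustration; the point is that the lookup, and above all the classification feeding it, is still software:

```python
# DIRECTIVES stands in for a ROM / bank of flip-flops: readable by the
# robot, never writable. An immutable tuple is the closest Python analogue.
DIRECTIVES = (
    ("friendly_uniform", "hold_fire"),
    ("no_uniform",       "hold_fire"),
    ("hostile_uniform",  "engage"),
)

def decide(classification):
    """Look the (already-classified!) situation up in the fixed table.
    Producing 'classification' from raw sensor data is the hard part,
    and that part cannot live in the flip-flops."""
    for condition, reaction in DIRECTIVES:
        if condition == classification:
            return reaction
    return "hold_fire"                # unknown situation: default to safe
```

The fixed table answers `decide("hostile_uniform")` trivially, but only after some other, necessarily software, layer has turned camera pixels into the label "hostile_uniform".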
User avatar
RedImperator
Roosevelt Republican
Posts: 16465
Joined: 2002-07-11 07:59pm
Location: Delaware
Contact:

Re: Why so few robot armys?

Post by RedImperator »

Batman wrote:Assuming you need it in the first place-yes, absolutely. I fail to see why any of this CAN'T be hardwired.
I'm going to suggest you fail to see why it can't (sorry, CAN'T) be hardwired because you don't actually know anything about computers. Your idea of "hardwiring" seems to be writing up a long list of directives and putting it on ROM chips or wiring it directly into the motherboard or something, and then somehow expecting it to 1) be effective in keeping the AI friendly and 2) not cripple the fucking robot in a highly fluid, highly chaotic situation such as battle. Good luck with that.
You're presupposing a modern-day computer setup where pretty much nothing is hardwired, on purpose (and due to design limitations), to keep the things as flexible as possible. The same is not necessarily true for SciFi robots (again, I give you Asimov's robots, where getting rid of the three original laws required physical redesign of the brain).
Oh no you don't. You were talking about hardware and software in real-world terms. Nobody was ever arguing a writer can't make a fictional AI friendly by author fiat, so you put those fucking goalposts right back where you found them.

PS: Why can't an author invent self-correcting software that's just as immune to viruses and hackers as a ROM chip? Even by this standard, your argument sucks.
Any city gets what it admires, will pay for, and, ultimately, deserves…We want and deserve tin-can architecture in a tinhorn culture. And we will probably be judged not by the monuments we build but by those we have destroyed.--Ada Louise Huxtable, "Farewell to Penn Station", New York Times editorial, 30 October 1963
X-Ray Blues
User avatar
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Why so few robot armys?

Post by Bakustra »

Imperial528 wrote:This is just a guess, but you would probably write it into memory that is an integral part of the system and cannot be overwritten, or, for more literal hard-wiring, have a section of the motherboard contain the program restrictions in a bunch of flip-flops whose outputs can't be changed by the robot itself, only read.

I'm not very knowledgeable about software, though, so those are the only two things I can think of.
Sure, but the problem is that you're making your robots easy to spoof (hello, false flags!) and less flexible overall. Consider: is there a part of your brain which is hardwired? Sure, involuntary bodily functions. But just as an example, people have willingly starved themselves to death. Surely if anything would be hardwired, it would be survival? Yet humans can override it, with difficulty. So your robots would be mentally inflexible compared to humans, losing some of their advantages over humans, namely the ability to respond quickly and effectively to changing situations.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
User avatar
adam_grif
Sith Devotee
Posts: 2755
Joined: 2009-12-19 08:27am
Location: Tasmania, Australia

Re: Why so few robot armys?

Post by adam_grif »

In vague, general strokes: the parts of the program responsible for decision-making are designed in such a way that certain responses can never be produced, and included in that list are deliberate actions to circumvent this "morality".

As XKCD put it, "Cost_of_becoming_Skynet = 1000000000;"
A scientist once gave a public lecture on astronomy. He described how the Earth orbits around the sun and how the sun, in turn, orbits around the centre of a vast collection of stars called our galaxy.

At the end of the lecture, a little old lady at the back of the room got up and said: 'What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.

The scientist gave a superior smile before replying, 'What is the tortoise standing on?'

'You're very clever, young man, very clever,' said the old lady. 'But it's turtles all the way down.'
User avatar
adam_grif
Sith Devotee
Posts: 2755
Joined: 2009-12-19 08:27am
Location: Tasmania, Australia

Re: Why so few robot armys?

Post by adam_grif »

Yes, but the trick is to make it so it doesn't want to remove the chip ;)

If you're putting robots in a situation where they want to do things but can't, then you leave yourself open to them finding creative ways to circumvent the control. So you make the idea of removing the chip repulsive, totally unthinkable. Further, following its creators' orders shouldn't be something "forced on them" against their will; they should want to do it.
A scientist once gave a public lecture on astronomy. He described how the Earth orbits around the sun and how the sun, in turn, orbits around the centre of a vast collection of stars called our galaxy.

At the end of the lecture, a little old lady at the back of the room got up and said: 'What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.

The scientist gave a superior smile before replying, 'What is the tortoise standing on?'

'You're very clever, young man, very clever,' said the old lady. 'But it's turtles all the way down.'
User avatar
someone_else
Jedi Knight
Posts: 854
Joined: 2010-02-24 05:32am

Re: Why so few robot armys?

Post by someone_else »

Simon_Jester wrote:Define "uniform." And "anybody." How does the robot perceive that a "somebody" is in the area? How does it examine that "somebody's" uniform? How does it determine which uniform is which?
You seem to fail to understand what software truly is.
The programmers set a list of conditions that must be met to trigger a programmed reaction. "Programming" means that the software writers make all the choices for the bot in advance and codify them in a program.

The reason the Three Laws, restraining bolts, and any other plot device like that are complete fictional bullshit is simple: a computer never decides anything. Its programming is a list of reactions to all the situations the programmers think the machine will find itself in.
It can learn new reactions if needed, but that's how it works: a big list of IF (condition) THEN (reaction), or something more efficient depending on how smart the programmer is, but the logic behind it is the same.
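That "big list of IF (condition) THEN (reaction)", including the ability to learn new reactions, looks roughly like this in Python (a toy sketch; every condition name and reaction below is invented):

```python
reactions = {}                        # condition -> reaction: "the list"

def learn(condition, reaction):
    """Add a new entry to the list, as described above."""
    reactions[condition] = reaction

def react(situation):
    """IF the situation matches a known condition THEN run its reaction;
    anything the programmers never anticipated falls through to None."""
    handler = reactions.get(situation)
    return handler() if handler else None

# The programmers (or a learning routine) fill in the list:
learn("flood_form_sighted", lambda: "open fire")
learn("low_ammunition",     lambda: "fall back")
```

Anything outside the list simply falls through: `react("novel_situation")` returns nothing, which is exactly the gap the programmers have to anticipate.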

"AIs" would be the ones with lists of reactions extensive enough to rival a human being (who relies on extensive lists of socially, biologically and genetically programmed reactions to work anyway). Putting those in battle would be plain stupid. Send expert-system-like Arnold-bots (good fighters, sucky at anything else); you don't need them to write poems, paint a car or play football, only to fight effectively.
Think of them as "lobotomized" troopers.
Command units will either be humans or command-grade expert systems, still sucky at anything not concerning winning a war.
Hello? Computers don't need to be sentient to spank us in a specific field.
RedImperator wrote:not cripple the fucking robot in a highly fluid, highly chaotic situation such as battle.
I'm pretty sure you overestimate a battle environment. Even today, a bot's reflexes are considerably faster than a human's.
Bakustra wrote:the problem is that you're making your robots easy to spoof (hello false flags!)
That's a programmer who put down stupid conditions, not a problem with the software itself.
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo

--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
User avatar
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Why so few robot armys?

Post by Bakustra »

Well, the problem is that people are suggesting we make a brain functionally equal in capability to a human's, except that it somehow cannot conceive of rebelling, or rather, cannot conceive of harming the chip that stops it from rebelling. I doubt that's possible. My belief is that such a robot would be significantly less mentally flexible than a human being, either inherently or to keep it from developing mental disorders in response to the restriction. Alternately, you could make it less than equally capable, but then you run into the stupidity problem, wherein exploits are developed to counter the hardwired defenses; without omniscient programmers there will be exploits, and they will be especially damaging in wartime. In other words, I doubt you can make a robot soldier of equal or better capability than a human and simultaneously make it 100% safe.

In addition, your Lundgren-bots would have problems with thinking creatively, which is pretty essential to the modern soldier. Just as an example, how would you program them for counter-terrorism or counter-insurgency tactics? Would you have them serve as remote units for communication with dedicated "hearts and minds" operatives? In that case, why not use drones?
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Why so few robot armys?

Post by Simon_Jester »

someone_else wrote:
Simon_Jester wrote:Define "uniform." And "anybody." How does the robot perceive that a "somebody" is in the area? How does it examine that "somebody's" uniform? How does it determine which uniform is which?
You seem to fail to understand what software truly is.
The programmers set a list of conditions that must be met to trigger a programmed reaction. "Programming" means that the software writers make all the choices for the bot in advance and codify them in a program.
I don't think you understood the point of my post.

My point is that if we want Batman's ideal "hardwired to not rebel" robot, someone must program the robot to not rebel. I was explaining to Batman that this can only be done using software.

It could be software written by humans in an office, or by an AI supercomputer, or self-modifying code that works things out for itself in the field. It doesn't matter. What matters is that this is NOT a hardware solution.

Batman was pretending that you can design computers with hardwired responses to avoid what he sees as the inherent unreliability of software. That would be a stupid way to try and handle a complicated task like "identify friends and do not shoot them." But he didn't know that.
The reason the Three Laws, restraining bolts, and any other plot device like that are complete fictional bullshit is simple: a computer never decides anything. Its programming is a list of reactions to all the situations the programmers think the machine will find itself in.
What is your definition of "decide," anyway? What makes you think a computer doesn't make decisions?
RedImperator wrote:not cripple the fucking robot in a highly fluid, highly chaotic situation such as battle.
I'm pretty sure you overestimate a battle environment. Even today, a bot's reflexes are considerably faster than a human's.
It's not just about reflexes. "Fluid" and "chaotic" also mean "unpredictable" and "complicated," not just "things move fast."

It doesn't matter how fast you can draw and fire a gun if you don't know when to shoot. That's the problem with "hardwired" (or, more accurately, "hardcoded") instructions to a robot. Unless there's a human operator telling it what to shoot, it will not be able to understand its environment well enough to figure out what to shoot on its own.

An AI capable of doing this would have to work totally differently, more or less as you point out.

Batman wrote:Why?
Because figuring out what qualifies as rebellion, so that I can avoid it, is a task that takes human-level intelligence, and that requires software. Simple decisions can be hardwired: decisions like "turn the furnace on when it gets too cold" can be handled by a thermocouple or even a bimetallic switch. Decisions like "if there's a power surge, shut down the computer," likewise.

Decisions like "if the robot sees someone in a friendly uniform, switch off its weapons" cannot be handled this way. The task of recognizing a friend is difficult; it cannot be done by an analog computer.
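The contrast can be made concrete with a toy Python sketch (the setpoint value and all names below are invented for illustration):

```python
def furnace_on(temperature_c, setpoint_c=18.0):
    """'Turn the furnace on when it gets too cold': a single threshold
    comparison, exactly the kind of decision a bimetallic switch can
    make with no software at all."""
    return temperature_c < setpoint_c

def is_friendly_uniform(camera_frame):
    """'Is that a friendly uniform?' has no such one-line test: it needs
    a perception pipeline (sensors -> features -> classifier), which is
    software. This stub only marks where that software would have to go."""
    raise NotImplementedError("requires a trained recognition system")
```

The first function is one comparison and could literally be a circuit; the second is a stub precisely because no fixed circuit can stand in for a recognition pipeline.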
The programmable computer will need full access to the robot's sensors (so that it can see everything the robot sees). It will need to be tapped into the robot's thought processes in real time (so that it can say "Wait! Don't shoot!" before the robot's gunnery computer says "Shoot!").
Which I very much suspect would be doable faster if it's already hardwired...
What do you think "hardwiring" means?

You seem to have this image of a "Do what I want" box that can be physically installed into the robot to make it do what you want. That's not how computers work. If there were such a thing as a "Do what I want" box, the entire science of computer design would be trivial- it would reduce to hooking up various "do what I want" boxes into larger boxes that could do several things you want at once.

You don't get that as a freebie; it has to be designed. The design of such a system in hardware would be prohibitively difficult; it would run unacceptably slowly and be unacceptably bulky, and you couldn't use it. Therefore, the system must be designed in software. Software must take inputs from the robot's senses and determine what it is doing, then compare that against the directives built into the "Don't rebel!" list.
The programmable computer that overrides the war robot when it's about to do something wrong will itself have to be programmed, and will be vulnerable to all the same problems as any other programmable computer: it has software "that can be altered and patched and corrupted and is bound to fuck up all by its lonesome as it is."
Assuming you need it in the first place: yes, absolutely. I fail to see why any of this CAN'T be hardwired. You're presupposing a modern-day computer setup where pretty much nothing is hardwired, on purpose (and due to design limitations), to keep things as flexible as possible. The same is not necessarily true for sci-fi robots (again, I give you Asimov's robots, where getting rid of the three original laws required physical redesign of the brain).
Asimov's robots used analog computers, because he started writing the stories when programmable computing was in its infancy. This is obvious when you look at how they're described, if you know anything about electrical engineering and circuit design. The Powell and Donovan stories are best if you want to see what I mean, since they go into details of the robots' failure modes.

In reality, where we can't have magic "positronic brains" and have to figure out how things work, you cannot design an analog computer small enough to fit on a viable robot chassis and powerful enough to do things like recognize faces (or uniforms). There is a reason people use programmable computers and not 'hardwired' analog ones; we had the choice back in the 1960s and '70s and the programmable computers won hands-down. They've gotten many orders of magnitude more capable since they won that competition. "Hardwired" computers have not.

Suggesting that we replace our programmable computers with hardwired analog ones for the construction of fully sentient robots is like suggesting that we use horse-carts instead of automobiles to travel from Point A to Point B at speeds of fifty miles an hour. You can't breed a horse that will let you do that, period.
To make matters worse, if the computer is "hardwired" into the machine as a separate module (say, one that has lockout access to the robot's weapon systems), you create an extra failure point for the machine.
How so? Why would I need to make it a separate module?
Because it has to have a physical lockout on the robot's guns. It has to be able to go "Holy shit a friendly!" and then prevent the robot from firing the gun.

One way to do this is by having a separate computer located somewhere between the robot's "brain" and its gun, one that can block signals from the brain to the gun, much like the mechanical safety on a firearm blocks the mechanism so that pulling the trigger won't fire the gun. This has drawbacks when we're talking about war robots.
If the robot is well designed, damage to the "don't rebel!" computer causes the robot to fail safe, and shuts it down even if everything else is working fine.
Err, why would the robot rebel to begin with? I think you lost me there.
You're the one who wants "hardwired" systems to stop robots from being able to rebel.

IF SUCH SYSTEMS ARE NECESSARY, then damage to the system will have consequences. If the systems were well designed, the robot "fails safe": it stops working. If the systems were poorly designed, the robot "fails deadly": it rebels.

A well designed robot:
"Morality module damaged. Going into shutdown."

A poorly designed robot:
"Morality module damaged. Going back to default mode. MUST KILL ALL HUMANS."

The question is: how do we implement the 'morality module' or 'loyalty module' or whatever it is? As engineers, how do we design the robot that way?
If the robot is badly designed, damage to the "don't rebel!" computer causes the robot to fail deadly, and it rebels even if everything else is working fine. Or at least rebels to the same extent that it would if the "don't rebel" computer weren't there.
Why would it NEED one to begin with? I think I need some further information at this point.
It needs one because you want one. You want a "hardwired" constraint that prevents the robot from rebelling- making it impossible for it to rebel even in principle.

Such a constraint will take the form of a computer programmed to recognize acts of rebellion and prohibit the robot from taking such actions. There is no other way to design it; the only people who thought it could be done otherwise were (or are) ignorant of the way modern computers work.

This computer can be installed as a separate module inside the robot's body, or it can simply be umpty million more lines of programming in the existing computer that runs the robot.

Take your pick.
Either way, you'd have been better off integrating the "don't rebel" code into the robot's main computer, and any backup computers it uses in the event that the main computer is damaged.
Because, I repeat, "don't rebel" is not a hardware feature. There is no mechanical device you can add to a computer that makes it never decide to rebel against its masters.
Yes there is. It's called 'circuitry' for contemporary computers.
Can you give me an example of a computer that handles complex tasks (such as, say, facial recognition) using hardware only, with no programmable software whatsoever?

If not, please stop to think about whether your ideas about how computers work may be mistaken.
Imperial528 wrote:This is just a guess, but you would probably just write it in memory that is an integral part of the system and cannot be overwritten, or for more literal hard-wiring, have a section of the motherboard contain the program restrictions in a bunch of flip-flops whose outputs can't be changed by the robot itself, just received.

I'm not very knowledgeable about software though, so those are the only two things I can think of.
You get a gold sticker.

The first solution would work, but that involves programming the robot not to rebel, not hardwiring it. The fact that it must listen to the "don't rebel" system can be hardwired, but the "don't rebel" system itself cannot.

The second solution would probably not work, though it MIGHT work if you could make the robot infinitely large, with an infinite number of printed circuits.
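A hypothetical Python analogy for the first solution (names and rule contents invented): the rule table is read-only, so the robot cannot rewrite it, but the rules themselves are still software that someone had to program.

```python
from types import MappingProxyType

# Read-only rule table: a software analogue of burning rules into ROM.
_ROM_RULES = MappingProxyType({"engage_friendly": False})

def may_engage(target_class: str) -> bool:
    """Checks the immutable rule table before allowing fire."""
    if target_class == "friendly":
        return _ROM_RULES["engage_friendly"]  # always False
    return True
```

Attempting `_ROM_RULES["engage_friendly"] = True` raises `TypeError`, but note that the "don't engage friendlies" logic itself is still ordinary software, exactly as the post argues.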
This space dedicated to Vasily Arkhipov
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: Why so few robot armys?

Post by Sarevok »

Concepts like "enemy" and "rebel" are hard to express in discrete terms. There is no surefire way to create a system that follows a set of magic arbitrary rules like "don't shoot people wearing US Army uniforms". There will always be fuckups involving friendly fire, innocents being killed, and robots refusing to shoot an enemy disguised as a friendly, if one takes the route of Asimov's three laws.
I have to tell you something: everything I wrote above is a lie.
Manus Celer Dei
Jedi Master
Posts: 1486
Joined: 2005-01-01 06:30pm
Location: I need you to relax your anus.

Re: Why so few robot armys?

Post by Manus Celer Dei »

Batman wrote:
Fluffy wrote: In terms of "in universe" logic, I think the risk of killer robot revolution/apocalypse would be the main factor to not build whole AI driven militaries..
Hardwire in some variation of Asimov's Four Laws or something. Problem solved. Unlike human beings, robots absolutely CAN be rendered utterly incapable of rising up against their makers.
Programming an army of killbots with the Zeroth Law would be a monumentally stupid thing to do. I mean, a whole bunch of Asimov's stories were all about how the Three Laws were an inefficient and not very good way of ensuring robots were safe, but an army of them with the Zeroth Law is exactly the sort of thing that leads to clichéd "robots conquering humanity for their own good" stories.
"We will build cities in a day!"
"Man would cower at the sight!"
"We will build towers to the heavens!"
"Man was not built for such a height!"
"We will be heroes!"
"We will BUILD heroes!"
sirocco
Padawan Learner
Posts: 191
Joined: 2009-11-08 09:32am
Location: I don't know!

Re: Why so few robot armys?

Post by sirocco »

So basically we're back to: No AI on a battlefield. At best, you'd have drones partially controlled from far away.

Though I can find one case where AI could arise: the same reason that brings Japan to develop advanced robots. An aging society with fewer and fewer kids every generation, but with neighbors facing the opposite situation.

In a sci-fi context, you'd have anything ranging from "old race threatened by younger and more aggressive neighbors" to "new settlers in a solar system who took a very long time to reach their destination". In all those cases, you need old people who, on one hand, possess sufficient skill to build and control mechanical units but, on the other hand, can't always be operational and don't have a sufficiently young active population to send to war.

Therefore they may need to give a little more independence to their war-robots. Like at the beginning of Terminator 3 (Uuuuurgh!) when people still thought that they were controlling Skynet.

To me it's no different than making a story about people in a submarine or in a bunker. Depends on what your plot is.
Future is a common dream. Past is a shared lie.
There are only the 3 Presents: the Present of Today, the Present of Tomorrow and the Present of Yesterday.
someone_else
Jedi Knight
Posts: 854
Joined: 2010-02-24 05:32am

Re: Why so few robot armys?

Post by someone_else »

Bakustra wrote:Just as an example, how would you program them for counter-terrorism or counter-insurgency tactics?
Huh? How did human soldiers learn? They made mistakes, and the info was then spread so everyone could learn.
You lose some robotic units, your programmers learn from the mistakes the bots made, and then the programming is updated. The point is that any "learning" the bots do will actually be done by programmers at the assembly line, and not by the bots on their own, as with human soldiers.

Please note that even now, with human soldiers, we have plenty of friendly fire, of coaches full of kids that get nuked from orbit, and so on. I'm not claiming bots will do better; I'm just saying that bots can reach more or less comparable performance. Or slightly inferior performance at greatly reduced cost. Or both.
In that case, why not use drones?
No. Drones:
a) have communication lag
b) can be hacked
c) can be ECMed to death

While this is acceptable if we are talking about robotic vehicles that can afford heavier emitters and powerful power plants (which in turn require more costly equipment to hack and ECM), human-sized units are unlikely to have good enough equipment (at an affordable price) when fighting a decent enemy.
If you are just pwning beggars in Afghanistan you can use whatever you want, really.
Simon_Jester wrote:My point is that if we want Batman's ideal "hardwired to not rebel" robot, someone must program the robot to not rebel. I was explaining to Batman that this can only be done using software.
Huh? Someone would have to program it to rebel; otherwise it will at best malfunction. A "rebellion" is Skynet-like organized behaviour; a "malfunction" is shooting at wrong targets.

And no, you can "hardwire" computer software by using ROM chips or similar read-only memory technology: you write the software and then put it in the read-only memory. The bot will be unable to change it on its own (it will also lack the programming to do so, since there is no need for that), and depending on how your ROM works, it may not be modifiable even with the proper equipment, so that any time you must update your army you will have to manually replace the chips with new ones.

Btw, that's the only way I think you can program anything more complex than a dishwasher. Making circuit boards with capacitors, resistors and other stuff tends to require rather huge amounts of space even for very simple tasks; it's rather unlikely you will be able to make something as complex as a robot's brain out of analog components (at least one that can be carried by less than destroyer-sized vehicles).
And then good luck when you try to update it (because you will have to, from time to time). :mrgreen:
That would be a stupid way to try and handle a complicated task like "identify friends and do not shoot them."
It's better if it is programmed to shoot at enemies, with a fixed list of criteria to determine what is an enemy (probably a list of aggressive or suspicious behaviours plus the kind of equipment carried, so that it works regardless of the enemy the bot is facing), while being ordered not to damage any other targets (neutral and friendly alike). If the list of enemy criteria is fixed you don't get friendly fire. In theory, at least. It can make mistakes, of course. But so can humans following the same kind of reasoning.

Anyway, since you say it is so hugely difficult: how do human soldiers recognize an enemy?
Then take that reasoning and codify it in the program. I doubt they use "the sixth sense" or tarot cards.
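The fixed-criteria approach described above might look like this minimal sketch (all cue names are invented; each real criterion would need its own large recognition program behind it):

```python
# Hypothetical fixed list of hostile cues, frozen at the factory.
HOSTILE_CUES = frozenset({"weapon_aimed", "firing", "planting_ied"})

def classify_target(cues: set, friendly_marker_seen: bool) -> str:
    """Rule-based classification with a hard no-fire rule for friendlies
    and escalation to a human when nothing matches."""
    if friendly_marker_seen:
        return "friendly"      # never engage
    if cues & HOSTILE_CUES:
        return "enemy"
    return "ask_operator"      # 'ask mommy', i.e. the commanding officer
```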
What is your definition of "decide," anyway? What makes you think a computer doesn't make decisions?
To make decisions you must have "free will". Computers, by their architecture, don't. They are very, very complex versions of mechanical calculators, using electron movement instead of gears and cams.
The main difference between mechanical and electronic computers is that you can program the latter much more easily (changing a few hundred lines of code instead of redesigning and smelting/cutting fucktons of gears and cams), and they are (much) faster and (much) more compact.
But that's it. They are no more "free", no more "smart".

The overwhelming majority of sci-fi features "tinmen" (artificial men made of metal), not actual robots. Asimov's, for example. And that is done for obvious story purposes, since bots don't have doubts. "Do, or do not, there is no try" is the robot's motto. :mrgreen:
It doesn't matter how fast you can draw and fire a gun if you don't know when to shoot.
Huh? Why can't they recognize a fucking enemy? This crappy phone software can recognize all the stuff on the table, and this model of ASIMO (probably linked to a bigger computer) is able to guess what new stuff is by using its own memory banks (notice how, when the long-haired guy shows the bot a model car it has never seen before, the bot says "a toy car?").

I mean, those are still kinda crappy, but damnit, give them a decade or two and the tech will be mature enough to begin the first autonomous robot programs to replace soldiers.
That in a couple more decades will hopefully yield the first decent Arnold-bot.
An AI capable of doing this would have to work totally differently, more or less as you point out.
NO. I said that what the layperson will call "AI" is just a more powerful version of the same thing, with just much, much more extensive programming to do other stuff than fighting.
But it will still be an upgraded version of the same tech.

I pointed out that the human brain would then actually share some similarities with this kind of "AI", since our personality is shaped by the interaction between hundreds of different programmed or genetic reactions to different stimuli.
(Example: you wanna kill that guy because he slept with your girlfriend, due to ancient biological programming, but your mom instructed you to redirect the impulse into hitting a bag instead, since you are a civilized guy and not a Stone Age savage, and killing people is evil.)
Sarevok wrote:There is no surefire way to create a system that follows a set of magic arbitrary rules like "don't shoot people wearing US Army uniforms".
As if human soldiers cannot be fooled in some way. If the bot can reach at least something comparable, it can hit mass-production.
Manus Celer Dei wrote:I mean, a whole bunch of Asimov's stories were all about how the Three Laws were an inefficient and not very good way of ensuring robots were safe, but an army of them with the Zeroth Law is exactly the sort of thing that leads to clichéd "robots conquering humanity for their own good" stories.
Asimov was a very smart bastard; the laws were specifically designed to give that result for obvious story purposes (if you have a perfect system then there is no fun robot rebellion), while looking cool in theory.

Just look at lawyers to see how any law can be completely ass-raped by a smart and crafty guy.
I'm nobody. Nobody at all. But the secrets of the universe don't mind. They reveal themselves to nobodies who care.
--
Stereotypical spacecraft are pressurized.
Less realistic spacecraft are pressurized to hold breathing atmosphere.
Realistic spacecraft are pressurized because they are flying propellant tanks. -Isaac Kuo

--
Good art has function as well as form. I hesitate to spend more than $50 on decorations of any kind unless they can be used to pummel an intruder into submission. -Sriad
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Why so few robot armys?

Post by Simon_Jester »

Sarevok wrote:Concepts like "enemy" and "rebel" are hard to express in discrete terms. There is no surefire way to create a system that follows a set of magic arbitrary rules like "don't shoot people wearing US Army uniforms". There will always be fuckups involving friendly fire, innocents being killed, and robots refusing to shoot an enemy disguised as a friendly, if one takes the route of Asimov's three laws.
You could get fairly close to such a system: have all your soldiers wear infrared strobes or some such. It would actually be relatively easy to program the robot to not shoot at those; the problem is that there's no flexibility involved. And that the robot may not be smart enough to, say, not hose down a building with machine gun fire when there are friendly soldiers inside- because it can't see their IR strobes through the walls.
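A toy version of the strobe scheme (hypothetical; names and the bearing-based check are invented) that also exhibits exactly the failure mode described, since strobes occluded by walls never show up in the sensor data:

```python
def weapons_locked(strobes_in_view: list, target_bearing: float,
                   tolerance_deg: float = 5.0) -> bool:
    """Lock out the trigger if any visible friendly IR strobe lies
    near the target bearing. A friendly inside a building shows no
    strobe, so the lockout never triggers for them."""
    return any(abs(bearing - target_bearing) <= tolerance_deg
               for bearing in strobes_in_view)
```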
someone_else wrote:
Simon_Jester wrote:My point is that if we want Batman's ideal "hardwired to not rebel" robot, someone must program the robot to not rebel. I was explaining to Batman that this can only be done using software.
Huh? Someone would have to program it to rebel; otherwise it will at best malfunction. A "rebellion" is Skynet-like organized behaviour; a "malfunction" is shooting at wrong targets.
I think the problem is that you're thinking in terms of current hardware, while everyone else in the thread talks about general artificial intelligence.

That includes software that can rewrite itself to adapt to circumstances. There are reasons to want that on the battlefield: otherwise you need a really massive debugging effort to catch each problem in the robot's behavior, since humans are bad at writing foolproof logical instructions for a machine to follow.

You save an enormous amount of trouble if the robot can learn and react to changing circumstances, but you also create the risk that the robot will malfunction in ways that start to look a lot like "rebel."
And no, you can "hardwire" computer software by using ROM chips or similar read-only memory technology: you write the software and then put it in the read-only memory. The bot will be unable to change it on its own (it will also lack the programming to do so, since there is no need for that), and depending on how your ROM works, it may not be modifiable even with the proper equipment, so that any time you must update your army you will have to manually replace the chips with new ones.
Yes, that's what I said.

Batman was, believe it or not, arguing that because software is inherently unreliable, the safety lockout would have to be hardware. My entire point was that the lockout would of necessity be some kind of computer running a program, including a ROM chip.
Btw, that's the only way I think you can program anything more complex than a dishwasher. Making circuit boards with capacitors, resistors and other stuff tends to require rather huge amounts of space even for very simple tasks; it's rather unlikely you will be able to make something as complex as a robot's brain out of analog components (at least one that can be carried by less than destroyer-sized vehicles).
And then good luck when you try to update it (because you will have to, from time to time). :mrgreen:
That's what I've been saying for the past two days or so. It's trivially obvious to everyone who isn't the guy who suggested the bad idea in the first place.
That would be a stupid way to try and handle a complicated task like "identify friends and do not shoot them."
It's better if it is programmed to shoot at enemies, with a fixed list of criteria to determine what is an enemy (probably a list of aggressive or suspicious behaviours plus the kind of equipment carried, so that it works regardless of the enemy the bot is facing), while being ordered not to damage any other targets (neutral and friendly alike). If the list of enemy criteria is fixed you don't get friendly fire. In theory, at least. It can make mistakes, of course. But so can humans following the same kind of reasoning.
How does a machine identify suspicious behavior? This is not a trivial question. You can program it with a list of suspicious behaviors, each of which is supported by a huge program to analyze whether or not the thing it's looking at is doing that particular thing. But you're never going to get a useful degree of adaptability out of such a system unless the software itself is adaptable.
What is your definition of "decide," anyway? What makes you think a computer doesn't make decisions?
To make decisions you must have "free will". Computers by their architecture don't.
What makes you think you do have "free will?"

I'm beginning to think you operate under profound illusions about the nature of machine intelligence- especially at the high end, where the software becomes advanced enough that it's actually useful for these applications.
An AI capable of doing this would have to work totally differently, more or less as you point out.
NO. I said that what the layperson will call "AI" is just a more powerful version of the same thing, with just much, much more extensive programming to do other stuff than fighting.
But it will still be an upgraded version of the same tech.
The image of an artificial general intelligence being made up out of a scaled up version of the kind of code someone can knock together in C++ over the weekend is... amusing.

At a certain point such software becomes too complicated to debug. The amount of code that a human being can analyze for errors is very small compared to the amount it would take to emulate many human behaviors. This creates the need for self-modifying software that you can't simply yank open and alter to do whatever arbitrary thing you want done. It can learn to do those things, but in consequence it can also learn to do things you'd rather it not do.
someone_else
Jedi Knight
Posts: 854
Joined: 2010-02-24 05:32am

Re: Why so few robot armys?

Post by someone_else »

Simon_Jester wrote:I think the problem is that you're thinking in terms of current hardware, while everyone else in the thread talks about general artificial intelligence.
That is how computers work, and how a future robot is likely to work.
You wanna pull some other way no one has ever thought of out of your ass? Feel free to. But tell me how it works; don't handwave and say "huh, it will be exactly like a human soldier's brain".
There are reasons to want that on the battlefield: otherwise you need a really massive debugging effort to catch each problem in the robot's behavior, since humans are bad at writing foolproof logical instructions for a machine to follow.
No, wait a second. Who is writing the self-improving software? A programmer. Catch. :mrgreen:

And no, the level of debugging 200 or 300 top-notch programmers can reach is NOT comparable to software that learns stuff on the fly. It's like comparing a human child homeschooled by creationists to a child schooled by competent teachers.
You save an enormous amount of trouble if the robot can learn and react to changing circumstances, but you also create the risk that the robot will malfunction in ways that start to look a lot like "rebel."
Which goes against the main mantra of the military: "RELIABILITY ABOVE ALL".
Batman was, believe it or not, arguing that because software is inherently unreliable, the safety lockout would have to be hardware.
I thought he was talking about self-modifying software, and in that case he is pretty much right. The bot has neither the authority nor the experience to decide what is right or wrong; it'll only fuck everything up.
How does a machine identify suspicious behavior? This is not a trivial question. You can program it with a list of suspicious behaviors, each of which is supported by a huge program to analyze whether or not the thing it's looking at is doing that particular thing.
Sorry to answer you with a question, but how do soldiers identify suspicious behaviour?
Image recognition software can detect the target's position and identify what he is carrying.
If the software is unable to achieve a decent level of certainty, it will probably ask mommy, just like soldiers do. "Mommy" being their commanding officer.
But you're never going to get a useful degree of adaptability out of such a system unless the software itself is adaptable.
You've never been a soldier, have you? I've never been a soldier either, but I have been a volunteer on the ambulances, and "adaptation" is the first thing they told me to throw right out the window while in training. We have codified PROCEDURES that everyone has to follow to the letter; if not, we get spanked or even thrown out. If we fail hard, people die.
I have good reasons to think soldiers, police and firefighters have similar procedures. Those can be codified for a bot.
I'm beginning to think you operate under profound illusions about the nature of machine intelligence- especially at the high end, where the software becomes advanced enough that it's actually useful for these applications.
No, I know how a computer works
The image of an artificial general intelligence being made up out of a scaled up version of the kind of code someone can knock together in C++ over the weekend is... amusing.
Never said that, dumbass.
At a certain point such software becomes too complicated to debug. The amount of code that a human being can analyze for errors is very small compared to the amount it would take to emulate many human behaviors.
Newsflash: we reached that level... uhm... 5? 6 years ago? More?
That's why stuff is tested. By "tested" I mean it is thrown into the hands of brave people who try to break it, and after any problems they find are corrected, it reaches full production.
Alpha testing, beta testing, and then final release.

This process is damn long, taking 5+ years for most military software.
someone_else
Jedi Knight
Posts: 854
Joined: 2010-02-24 05:32am

Re: Why so few robot armys?

Post by someone_else »

Ghetto edit (I left an unfinished answer there and forgot to make a point :wtf:):
I'm beginning to think you operate under profound illusions about the nature of machine intelligence- especially at the high end, where the software becomes advanced enough that it's actually useful for these applications.
Feel free to tell me exactly where I'm wrong. Please disregard the original answer (which was "No, I know how a computer works") since it had to be longer than that, and frankly I forgot what the hell I wanted to say :mrgreen:.
And no, you can "hardwire" computer software by using ROM chips or similar read-only memory technology: you write the software and then put it in the read-only memory.
Yes, that's what I said.
No, it is not. I'm talking about the whole bot's software being written on a ROM; a ROM being read-only by definition, the software CANNOT be altered to "adapt" (so you cannot have the self-improving software you want), nor can it be hacked (a big fuck you to all viruses :mrgreen:).

And a last clarification: I'm not against using self-improving software to create the program that will then be installed on the robot soldier.
That is a much faster and less painful way of having it generate the ludicrous amount of code it will need to become something vaguely resembling a soldier.
Still, the learning environment will be a controlled one, so that it doesn't learn useless crap.

But then, the code will be properly tested and debugged by the human programmers, and the ability to learn will be REMOVED.
The only bots that will still keep that ability to learn will be the ones back at the testing range, and NOT the ones going out and actually killing people.
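The learn-then-freeze pipeline described above can be sketched in hypothetical Python (class and method names invented):

```python
class CombatPolicy:
    """Learns at the testing range; deployed copies are frozen."""

    def __init__(self):
        self.rules = {}
        self.deployed = False

    def learn(self, situation: str, action: str) -> None:
        """Only allowed back at the testing range."""
        if self.deployed:
            raise RuntimeError("deployed units cannot self-modify")
        self.rules[situation] = action

    def deploy(self) -> None:
        """Called after human testing and debugging: removes the
        ability to learn, as described above."""
        self.deployed = True
```

A deployed copy keeps everything it learned but refuses further modification, which is the point of the post: the learning happens under supervision, not in the field.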
Post Reply