Uberwank infantry weapons

SF: discuss futuristic sci-fi series, ideas, and crossovers.


User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Re: Uberwank infantry weapons

Post by Ford Prefect »

Darth Hoth wrote:Why?
Because you're comparing well written science fiction literature to the incoherently structured collection of buzz-words that is Orion's Arm.
Banks said himself (in A Few Notes on the Culture) that he basically subscribes to the same idea - artificial intelligences MUST take over, because they are godlike, and the Culture is written in the over-the-top way it is because he wants to divorce it from the human scale.
So what?
The one Culture novel I have read so far, Excession, demonstrates literary standards well above OA, though I did not like it all that much. Doubtless Banks is a better writer (and less pretentious) than their crew. But it has the same basic premise - the machines must be impressive enough to make humanity look insignificant beside them.
This is kind of funny given that most Culture novels focus on organic characters performing tasks vital to the galaxy, as opposed to Minds. Most of the conflict that shows up in the Culture novels tends to be resolved by the organics of the books, and rarely the machines. Then again, you've only read Excession, which does focus on the Minds.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
barricade
Youngling
Posts: 90
Joined: 2009-01-22 03:03am
Location: Hovering above WA state

Re: Uberwank infantry weapons

Post by barricade »

I'd suggest something from Girl Genius (the death ray that's also a handheld mixer), but pretty much everything in it fits the setting.

I'd have to say the 'Super' Gravity Gun. Not just because it's wank, but because it's freaking awesome wank.
Macross Daedalus Attack: Because nothing says "Frak You" like punching them in the face with an aircraft carrier.
Macross Frontier Version: Unless you use 2 aircraft carriers.

Named after a g/f! Sheesh, stop asking.
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Starglider wrote:That's not so much an idea, as a blatant fact. Human civilisation is godlike and incomprehensible to chimpanzees, and human brains are only incrementally more complex than chimpanzee brains. It's quite easy for even hard science computing hardware (arguably even contemporary computing hardware, depending on how you compare FLOPs) to do orders of magnitude better and that's before we account for qualitative and software improvements. Really what more do you want?
Computers can do some things better than we can, now and in the future - multi-tasking, simulations and the like. I doubt we will see a machine that can accurately simulate true intelligence any time soon, and even if we did build one, that is still a far cry from "RAR, the machines MUST take over!". The comparison with monkeys does not sound very relevant, given that we have a technological civilisation already in place that a hypothetical archailect would have to compete with, which they did not. And that assumes we would even let it loose (physically and from restraining programming) to compete with us in the first place.

How do you define god-like? Is a chimpanzee also a god to a dog, just because it is that much smarter? Or is it a matter of the technology that we can deploy? What says an artificial intelligence can outperform our technology by such lengths that it becomes incomprehensible to us (for which it would need to effectively violate the laws of physics)?
Although in fact those AI characters were quite comprehensible, just massively capable, rather like Olympian gods, because they had to be for the story to be attractive to normal readers. The timescales were compressed to microseconds and there were references to processing vast amounts of data casually but really, they were extremely humanised. Not a criticism of Banks but don't expect real AIs to be like that.
I do not. A real hypothetical AI would be nothing but a computer programme - essentially without personality or thought as we know it, more like an idiot-savant animal with fixed directives.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Ford Prefect wrote:Because you're comparing well written science fiction literature to the incoherently structured collection of buzz-words that is Orion's Arm.
No, I compared the ideas behind them, which happen to coincide to a large extent. Which probably comes from the OA fanboys ripping off Banks, but anyway . . .
So what?
That is, effectively, what the OA fanboys are saying as well, and what both illustrate by giving their artificial intelligences extreme abilities, freely admitting that their universes are hyperpowered in order to render humanity irrelevant beside them. When you write stuff for the sake of being incomprehensible to humans, that could arguably qualify as a definition of wank.
This is kind of funny given that most Culture novels focus on organic characters performing tasks vital to the galaxy, as opposed to Minds. Most of the conflict that shows up in the Culture novels tends to be resolved by the organics of the books, and rarely the machines. Then again, you've only read Excession, which does focus on the Minds.


Again, I was talking about the backdrop of the setting and what Banks himself said about it. Naturally, if you want to sell stories you must include human viewpoints (and if the humans were not there, what would the machines look uber compared to?).
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Re: Uberwank infantry weapons

Post by Ford Prefect »

Darth Hoth wrote:No, I compared the ideas behind them, which happen to coincide to a large extent. Which probably comes from the OA fanboys ripping off Banks, but anyway . . .
First of all, I honestly don't think the themes of the Culture and the themes of Orion's Arm are all that similar. Yes, they both have extremely powerful artificial intelligences in positions of importance, but Orion's Arm postulates that 'this is what the future will look like', whereas the actual themes of each Culture book differ, and have very little to do with how humans* will progress as a technological society. Banks does not deal with the 'singularity' in any direct sense, something which Orion's Arm does.

*Except, maybe, in State of the Art, which is the only Culture novel I am certain that humans actually appear in.
That is, effectively, what the OA fanboys are saying as well, and what both illustrate by giving their artificial intelligences extreme abilities, freely admitting that their universes are hyperpowered in order to render humanity irrelevant beside them. When you write stuff for the sake of being incomprehensible to humans, that could arguably qualify as a definition of wank.
You keep using 'irrelevant', which is ridiculous. Organic characters are relevant in the Culture novels. Your paraphrasing of Banks' statement is that he wants to divorce it from the human scale, which is perfectly reasonable for any sci-fi universe. I hate to break it to you, but the galaxy itself is incomprehensibly vast: you can say you know that it has hundreds of billions of stars and it's billions and billions of kilometres across and billions of years old and whatever, but what do those numbers actually mean to you? I doubt when you look up at the stars on a clear night you think 'I can truly comprehend the scale of the universe'.

Additionally, saying that Banks writes the Minds to be incomprehensible is just plain wrong. As Starglider says, the Minds are extremely humanised. In Use of Weapons, the warship Xenophobe takes advantage of the cuteness of its avatar to nuzzle against Diziet Sma's breasts. This is extremely distant from the writers of OA, who pride themselves on never attempting to humanise the Archais.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
Formless
Sith Marauder
Posts: 4141
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Uberwank infantry weapons

Post by Formless »

Ford Prefect wrote:
Shadowtraveler wrote:I'd say Sergeant Schlock's BH-209I is overkill, even in the universe it's set in.
It actually is. It's a running joke that Schlock carries too much firepower, which is why he gets the .50 calibre rotary grenade launcher during the most recently completed arc, because it causes less collateral damage overall (though apparently if Schlock wasn't a total psychopath it could be used like a scalpel). About the only thing more dangerous was his sawn-off multicannons.
Do keep in mind however that this is the same setting where one can wear antimatter grenades with the same yield as the bomb that destroyed Hiroshima on one's shoulders. The two things may be used as a joke, but it's more because they are more firepower than you need, not more than is plausible for the setting. Hell, on occasion we've seen other characters using a more advanced and miniaturized version of Schlock's plasma cannons; if anything, those were wank.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
"I would suggest 'Schmuckulating', which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
User avatar
Marcus Aurelius
Jedi Master
Posts: 1361
Joined: 2008-09-14 02:36pm
Location: Finland

Re: Uberwank infantry weapons

Post by Marcus Aurelius »

Ford Prefect wrote: You keep using 'irrelevant', which is ridiculous. Organic characters are relevant in the Culture novels. Your paraphrasing of Banks' statement is that he wants to divorce it from the human scale, which is perfectly reasonable for any sci-fi universe. I hate to break it to you, but the galaxy itself is incomprehensibly vast: you can say you know that it has hundreds of billions of stars and it's billions and billions of kilometres across and billions of years old and whatever, but what do those numbers actually mean to you? I doubt when you look up at the stars on a clear night you think 'I can truly comprehend the scale of the universe'.

Additionally, saying that Banks writes the Minds to be incomprehensible is just plain wrong. As Starglider says, the Minds are extremely humanised. In Use of Weapons, the warship Xenophobe takes advantage of the cuteness of its avatar to nuzzle against Diziet Sma's breasts. This is extremely distant from the writers of OA, who pride themselves on never attempting to humanise the Archais.
In some Culture novel it is hinted that all the Culture Minds are actually "rigged" in a way that makes them care about organics and matters of the physical universe in general. In the same novel there is something about "perfect" AIs, which apparently always sublime (retreat from the physical universe) immediately after activation. I think that was in "Excession" as well, but I can't confirm it right now.

When considering the Culture Minds we have to also remember that they are about 10,000 years more advanced than our current computers, without any lapses of technology or civilization in between. Considering how rapidly Computer Science has advanced since WW2 (i.e. in less than 70 years), I do not find their somewhat godlike abilities implausible at all. In "Excession" there are also hints that Culture AIs were not always as powerful; once Culture ships had actual organic captains (although it is also hinted that several thousand years ago their job was already mostly ceremonial, but even say 5000 years is a long time for advanced AIs to develop).
User avatar
Andras
Jedi Knight
Posts: 575
Joined: 2002-07-08 10:27am
Location: Waldorf, MD

Re: Uberwank infantry weapons

Post by Andras »

The plasma blasters from Second Space, in Doc Smith's Subspace Encounter.

The First Space explorers traded for one of the blasters commonly used in Second Space. It weighs 6 pounds and has an 11" barrel with a .25" bore that runs the complete length of the weapon and is used to sight the target from behind. Other than coils of wiring in the handgrip and embedded circuits in the bore, there are 3 moving parts, plus an encapsulated chunk of uranium about the size of a .45ACP bullet, though no radiation was detected.

They took it out to the hills, and fired it into a cliff for 8 hours straight, leaving a lake of incandescent obsidian. The scientists determined that, after the firing, the weapon had lost 3 hundredths of one milligram, most likely from wear on the handgrips.
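For scale, a rough back-of-envelope in Python (the rock-melting figure is an assumed ballpark, not from the book): even if that entire mass loss had gone straight to energy via E = mc^2, it would come nowhere near melting a lake of rock, so the gun's output plainly is not accounted for by any mass-energy bookkeeping.

Code: Select all

m = 3e-8              # kg: the measured 0.03 mg mass loss
c = 3e8               # m/s: speed of light
E = m * c**2          # ~2.7e9 J if every bit of lost mass became energy
MELT_PER_M3 = 4.9e9   # J/m^3: assumed ballpark to heat and melt granite-like rock
print(f"E = {E:.2e} J -> enough to melt only ~{E / MELT_PER_M3:.2f} m^3 of rock")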
kinnison
Padawan Learner
Posts: 298
Joined: 2006-12-04 05:38am

Re: Uberwank infantry weapons

Post by kinnison »

Darth Hoth - I would respectfully disagree with your opinion that we are not going to see human-scale AI any time soon. The estimates I've seen are that by somewhere around 2020 there will be human-equivalent processing power in a desktop machine. This would tend to indicate that supercomputers or processing networks might have that capacity a few years earlier - maybe 2017 or maybe even earlier.

Granted, that doesn't mean AI. However, most authorities seem to think that sapience is an emergent phenomenon. I have a strange feeling that the Internet of maybe eight years from now might have a mind of its own - but would we know it? Relevant to this discussion is the Wolfram "answer engine" project, already stated to be designed to grow and learn, and starting NEXT MONTH.

Yes, guys, the Singularity might be coming to a theatre near you - this year, or maybe next. Let's hope the first AI is a Multivac, not a Skynet.
Lord of the Abyss
Village Idiot
Posts: 4046
Joined: 2005-06-15 12:21am
Location: The Abyss

Re: Uberwank infantry weapons

Post by Lord of the Abyss »

REASON from Snow Crash. ("I told you they'd listen to Reason")
Formless wrote:
Ford Prefect wrote:
Shadowtraveler wrote:I'd say Sergeant Schlock's BH-209I is overkill, even in the universe it's set in.
It actually is. It's a running joke that Schlock carries too much firepower, which is why he gets the .50 calibre rotary grenade launcher during the most recently completed arc, because it causes less collateral damage overall (though apparently if Schlock wasn't a total psychopath it could be used like a scalpel). About the only thing more dangerous was his sawn-off multicannons.
Do keep in mind however that this is the same setting where one can wear antimatter grenades with the same yield as the bomb that destroyed Hiroshima on one's shoulders. The two things may be used as a joke, but it's more because they are more firepower than you need, not more than is plausible for the setting. Hell, on occasion we've seen other characters using a more advanced and miniaturized version of Schlock's plasma cannons; if anything, those were wank.
As I recall, Schlock's present plasma cannon is actually one of those miniature versions remounted in a shell of the older, bulkier gun, after his old one blew up. He didn't like a tiny, quiet gun; he wanted a big gun with the ominous hummmmmm.
"There are two novels that can change a bookish fourteen-year old's life: The Lord of the Rings and Atlas Shrugged. One is a childish fantasy that often engenders a lifelong obsession with its unbelievable heroes, leading to an emotionally stunted, socially crippled adulthood, unable to deal with the real world. The other, of course, involves orcs." - John Rogers
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Ford Prefect wrote:First of all, I honestly don't think the themes of the Culture and the themes of Orion's Arm are all that similar. Yes, they both have extremely powerful artificial intelligences in positions of importance, but Orion's Arm postulates that 'this is what the future will look like', where the actual themes of each Culture book differ, and have very little to do with how humans* will progress as a technological society. Banks does not deal with the 'singularity' in any direct sense, something which Orion's Arm does.

*Except in, maybe State of the Art, which is the only Culture novel I am certain that humans actually appear in.
I was and am talking about the setting, not individual novel plots. And if you read A Few Notes on the Culture, Banks himself says that that is what he expects (and, I gather, wants, though that might be me reading too much into it) to happen in the future. Silly free energy technobabble, no, Man supplanted by machines, yes.
You keep using 'irrelevant', which is ridiculous. Organic characters are relevant in the Culture novels. Your paraphrasing of Banks' statement is that he wants to divorce it from the human scale, which is perfectly reasonable for any sci-fi universe. I hate to break it to you, but the galaxy itself is incomprehensibly vast: you can say you know that it has hundreds of billions of stars and it's billions and billions of kilometres across and billions of years old and whatever, but what do those numbers actually mean to you? I doubt when you look up at the stars on a clear night you think 'I can truly comprehend the scale of the universe'.
Put that way, I doubt I can truly comprehend Earth and all its history and geography. In abstract numbers, I can - sort of - but obviously I will never live through billions of years, meet billions of people, and so on.
Additionally, saying that Banks writes the Minds to be incomprehensible is just plain wrong. As Starglider says, the Minds are extremely humanised. In Use of Weapons, the warship Xenophobe takes advantage of the cuteness of its avatar to nuzzle against Diziet Sma's breasts. This is extremely distant from the writers of OA, who pride themselves on never attempting to humanise the Archais.
No, I was talking about scale and scope (including extreme technologies), I believe I said so. Obviously the Minds are not portrayed like what real artificial intelligences would be - the OA guys are actually more realistic, in that respect.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Marcus Aurelius wrote:When considering the Culture Minds we have to also remember that they are about 10,000 years more advanced than our current computers, without any lapses of technology or civilization in between. Considering how rapidly Computer Science has advanced just after WW2 (i.e. in less than 70 years), I do not find their somewhat godlike abilites implausible at all. In "Excession" there are also hints that Culture AIs were not always as powerful; once Culture ships had actual organic captains (although it is also hinted that several thousand years ago their job was already mostly seremonial, but even say 5000 years is a long time for advanced AIs to develop).
Are you one of those people who are unaware that "Moore's Law" has been retracted? The current rate of computer development is not sustainable, in fact we are about to run into real physical engineering limits rather soon. There comes a point when your transistors are not easily getting any smaller.

Of course, the Culture is a science fiction universe that can use unobtainium (hyperspace and picotechnology) to power their tech advancement. The point is, it will not be that simple in real life.
kinnison wrote:Darth Hoth - I would respectfully disagree with your opinion that we are not going to see human-scale AI any time soon. The estimates I've seen are that by somewhere around 2020 there will be human-equivalent processing power in a desktop machine. This would tend to indicate that supercomputers or processing networks might have that capacity a few years earlier - maybe 2017 or maybe even earlier.

Granted, that doesn't mean AI. However, most authorities seem to think that sapience is an emergent phenomenon. I have a strange feeling that the Internet of maybe eight years from now might have a mind of its own - but would we know it? Relevant to this discussion is the Wolfram "answer engine" project, already stated to be designed to grow and learn, and starting NEXT MONTH.
I would be sceptical, and point out that even were that true, that in no way guarantees a feasible artificial general intelligence. Nor does it make it so "godlike" that it will instantly start researching whole new physics that we poor humans cannot understand, or any such. These predictions all seem to indicate a lot of wishful thinking (and no-limits fallacies, à la the OA wankers) when they come up. At a certain level we simply run into hardware limits.
Yes, guys, the Singularity might be coming to a theatre near you - this year, or maybe next. Let's hope the first AI is a Multivac, not a Skynet.
If that were, by some theoretical act of God, to happen, I would rather hope that whoever finds out nukes the research facility with half a dozen or so 300-kiloton warheads and proceeds to physically dismantle the Internet.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Uberwank infantry weapons

Post by Starglider »

Darth Hoth wrote:Are you one of those people who are unaware that "Moore's Law" has been retracted?
Please don't make me beat you with a stick.

Moore's Law (transistor count doubling) is going strong. We switched from using it to enhance serial speed to mostly enhancing parallel speed for a while, though serial speed will probably be getting some renewed attention in the near future. All the major manufacturers have a solid process development plan through the next six years.
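As a minimal sketch of what that compounding means in practice (assuming the usual two-year doubling rule of thumb; the base figures here are illustrative, not from any manufacturer's roadmap):

Code: Select all

def transistors(year, base_year=2009, base_count=2e9, doubling_years=2.0):
    # Simple Moore's Law compounding model: count doubles every doubling_years.
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (2009, 2015, 2021):
    print(y, f"{transistors(y):.1e}")  # 2.0e+09 -> 1.6e+10 -> 1.3e+11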
The current rate of computer development is not sustainable, in fact we are about to run into real physical engineering limits rather soon.
On the contrary, the physical limits are wildly beyond what we can currently do. In density terms, this is mainly because all existing devices are planar. When it becomes cost effective to develop 3D logic arrays, we will do so, at which point heat dissipation becomes the overwhelming challenge.
There comes a point when your transistors are not easily getting any smaller.
Theoretical studies of various nanomechanical and nanoelectronic designs suggest that there are plenty of smaller designs. Also, superconducting logic has the potential to increase effective switching speeds by one to two orders of magnitude.
Of course, the Culture is a science fiction universe that can use unobtainium (hyperspace and picotechnology) to power their tech advancement. The point is, it will not be that simple in real life.
None of that is necessary. IMHO, existing supercomputers may suffice for Mind-like entities, if only we had the software.
kinnison wrote:Darth Hoth - I would respectfully disagree with your opinion that we are not going to see human-scale AI any time soon. The estimates I've seen are that by somewhere around 2020 there will be human-equivalent processing power in a desktop machine.
Unfortunately that isn't so useful; brain simulation takes a lot more raw power than the brain does, because of 'emulation overhead' (no one is exactly sure how much), whereas de novo AGI designs could potentially take much, much less (because the brain is so structurally inefficient).
Granted, that doesn't mean AI. However, most authorities seem to think that sapience is an emergent phenomenon. I have a strange feeling that the Internet of maybe eight years from now might have a mind of its own - but would we know it?
Argh. Sapience can be an emergent phenomenon, but it requires the right supporting conditions (which does not mean Google's server farm), and in any case why would you want to make an AGI that way (answer: because you're ignorant and don't know any better, which covers a depressingly large fraction of AGI researchers).
Relevant to this discussion is the Wolfram "answer engine" project, already stated to be designed to grow and learn, and starting NEXT MONTH.
That was tried in the late 80s (Cyc); the project is still going, but it pretty much ground to a halt (in terms of real progress towards AGI) a decade ago.
Nor does it make it so "godlike" that it will instantly start researching whole new physics that we poor humans cannot understand, or any such.
Most physics consists of long periods of pure maths then intensive checking against massive experimental datasets these days.
I would rather hope that whoever finds out nukes the research facility with half a dozen or so 300-kiloton warheads and proceeds to physically dismantle the Internet.
You're unlikely to get that much warning, and even if you did, humanity has never successfully managed to voluntarily relinquish a technology in the history of the species.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Uberwank infantry weapons

Post by Starglider »

Lord of the Abyss wrote:As I recall, Schlock's present plasma cannon is actually one of those miniature versions remounted in a shell of the older, bulkier gun, after his old one blew up. He didn't like a tiny, quiet gun; he wants a big gun with the ominous hummmmmm.
That was suggested by the arms dealer but not actually implemented. Schlock's current cannons are advanced fusion cannons (Kevin added various toys) with booster annie-plants, for quick starting and more firepower.
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Re: Uberwank infantry weapons

Post by Ford Prefect »

Darth Hoth wrote:I was and am talking about the setting, not individual novel plots. And if you read A Few Notes on the Culture, Banks himself says that that is what he expects (and, I gather, wants, though that might be me reading too much into it) to happen in the future. Silly free energy technobabble, no, Man supplanted by machines, yes.
And this translates to 'wank' how? You can keep saying that Banks believes that machine intelligences have higher potential than human beings and all I'll do is shrug my shoulders. So far you've simply asserted that the Culture is so powerful just to make human beings look irrelevant, which is a very interesting interpretation of that quote, and is blatantly untrue looking at the novels, which is what the setting is a vehicle for.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Starglider wrote:Please don't make me beat you with a stick.

Moore's Law (transistor count doubling) is going strong. We switched from using it to enhance serial speed to mostly enhancing parallel speed for a while, though serial speed will probably be getting some renewed attention in the near future. All the major manufacturers have a solid process development plan through the next six years.
Moore himself stated that progression according to his law was finite and would reach its limits within a decade or so. That is long before we get human-equivalent laptops.
On the contrary, the physical limits are wildly beyond what we can currently do. In density terms, this is mainly because all existing devices are planar. When it becomes cost effective to develop 3D logic arrays, we will do so, at which point heat dissipation becomes the overwhelming challenge.
I was not aware that we presently had the engineering to easily make this step and use it to continue computer development at the present rate.
Theoretical studies of various nanomechanical and nanoelectronic designs suggest that there are plenty of smaller designs. Also, superconducting logic has the potential to increase effective switching speeds by one to two orders of magnitude.
There is, however, as yet no practical means of using such methods now or in the near future. There may never be. Nanotechnology represents such an engineering challenge that it may well remain purely theoretical; we do not know where to start.
None of that is necessary. IMHO, existing supercomputers may suffice for Mind-like entities, if only we had the software.
Is there presently any computer complex that can match even the raw processing speed of a single human brain?
Most physics consists of long periods of pure maths then intensive checking against massive experimental datasets these days.
A computer intelligence, assuming such a thing was built, will have an advantage at such matters. It will still not be "godlike", insofar as its abilities will not look like pure magic to us. Some Singularity wankers seem to think that the day after the first "archailect" sees the light, we will all be consumed by an omnivorous nanoswarm, or similar ludicrous scenarios. The computer would still be bound by conventional physics, even if it would be more efficient at practical engineering - and that is not a given, at that, since it too would be dependent on programming.
You're unlikely to get that much warning, and even if you did, humanity has never successfully managed to voluntarily relinquish a technology in the history of the species.
Even if we grant it arbitrary intelligence for the sake of discussion, the AI will still have purely physical limits, mainly that of only existing in its own purpose-built infrastructure. Assuming that we build it, we can destroy it before it gains the ability to affect reality. As for the second point, I would think we would, if the alternative was giving free rein to something that would wipe us out. If not, then we deserve to lose.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Ford Prefect wrote:And this translates to 'wank' how? You can keep saying that Banks believes that machine intelligences have higher potential than human beings and all I'll do is shrug my shoulders. So far you've simply asserted that the Culture is so powerful just to make human beings look irrelevant, which is a very interesting interpretation of that quote, and is blatantly untrue looking at the novels, which is what the setting is a vehicle for.
How relevant to the setting are the humans as a whole? What can they do that a Drone cannot do better? They are, effectively, allowed to make a difference where they do only because a Mind decides it can allow them to. They are completely overshadowed in the larger picture.

My argument was that if you purposely write on such a scale as to be beyond human comprehension, one can call that wank, since the power level is then, at least on some level, an end in itself, not just a means to an end.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas
User avatar
phongn
Rebel Leader
Posts: 18487
Joined: 2002-07-03 11:11pm

Re: Uberwank infantry weapons

Post by phongn »

Darth Hoth wrote:
Starglider wrote:Moore's Law (transistor count doubling) is going strong. We switched from using it to enhance serial speed to mostly enhancing parallel speed for a while, though serial speed will probably be getting some renewed attention in the near future. All the major manufacturers have a solid process development plan through the next six years.
Moore himself stated that progression according to his law was finite and would reach its limits within a decade or so. That is long before we get human-equivalent laptops.
Moore's Law refers to transistor density rather than count (Starglider was being imprecise). We could, for example, simply build many processors (as is done now, anyway) once we hit the limits of present semiconductor technology. However, there are promising technologies being researched to replace it.
On the contrary, the physical limits are wildly beyond what we can currently do. In density terms, this is mainly because all existing devices are planar. When it becomes cost effective to develop 3D logic arrays, we will do so, at which point heat dissipation becomes the overwhelming challenge.
I was not aware that we presently had the engineering to easily make this step and use it to continue computer development at the present rate.
It's an R&D problem at the moment, and one that isn't really pressing so long as we have the ability to continue shrinking down transistors. There are no fundamental problems with 3D circuits.
Theoretical studies of various nanomechanical and nanoelectronic designs suggest that there are plenty of smaller designs. Also, superconducting logic has the potential to increase effective switching speeds by one to two orders of magnitude.
There is, however, as yet no practical means of using such methods now or in the near future. There may never be. Nanotechnology represents such an engineering challenge that it may well remain purely theoretical; we do not know where to start.
Untrue.
None of that is necessary. IMHO, existing supercomputers may suffice for Mind-like entities, if only we had the software.
Is there presently any computer complex that can match even the raw processing speed of a single human brain?
Yes. However, how much compute power in the mind is tied up performing things like keeping our hearts beating, processing sensory input, controlling our muscles and other tasks? That is not relevant for an AGI; only the cognition is. This site estimates total brainpower in the 10^13-10^16 operations/second range; we have computers operating in the 10^15 operations/second range today. Hardware has never been the problem.
User avatar
tim31
Sith Devotee
Posts: 3388
Joined: 2006-10-18 03:32am
Location: Tasmania, Australia

Re: Uberwank infantry weapons

Post by tim31 »

DISTRACTION TIME! This isn't wanked so much as cool but impractical; at the start of John Birmingham's Axis of Time trilogy (set initially in 2021), US armed forces are starting to receive the Remington G4 (lol, mixed designations), which is a solid-state battle rifle firing caseless ceramic rounds. When these rounds enter flesh, they unfurl nanofibre razor tendrils that shred organic matter with gay abandon. What this looks like in real time is that an area of ~30 centimetres around the impact point turns to pulp, with the sort of gore and noise you can imagine. The massive trauma usually kills the target instantly.
lol, opsec doesn't apply to fanfiction. -Aaron

PRFYNAFBTFC
CAPTAIN OF MFS SAMMY HAGAR
Paradox
Youngling
Posts: 91
Joined: 2004-01-11 03:18pm
Location: Arizona

Re: Uberwank infantry weapons

Post by Paradox »

For anyone who has played Planetside, the current version of the Vanu Lasher heavy assault rifle is absolute bullshit uberwank.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Uberwank infantry weapons

Post by Starglider »

Darth Hoth wrote:Moore himself stated that progression according to his law was finite and would reach its limits within a decade or so.
Yes, and that was in the early 80s, and here we are still proceeding as fast as ever. People have been calling for the 'death' of Moore's law since the term was coined, but those people are short-sighted fools.
That is long before we get human-equivalent laptops.
I hate to break it to you (actually that's a lie, I find it highly amusing to break it to you), but we already have human-equivalent laptops... in a raw physicalist sense. Human synapses are roughly equivalent to ten or so transistors in analogue mode (generously, arguably you could do it with less) or a few hundred operating in digital mode (let's call it a thousand). The effective speed of a cluster of logic blocks (on current process technology) simulating a human synapse is around 50 million times faster than the biological version. The human brain has something like 100 trillion synapses. My laptop has about 2 billion transistors (Core 2 Quad plus the chipset and GPUs), enough for about 2 million digital synapse emulators. That gives a theoretical computing power equivalent to... 100 trillion synapses.

Of course it's not that simple; to store the state of all those synapses you'd need about half a petabyte of memory with massive bandwidth to the processing logic, and I'm omitting a lot of other technical requirements for brain simulation. The point though is that you're comparing computing power measured the conventional way (fully programmable, always available, perfectly accurate, fully reliable, mostly sequential logic) with brain power measured a completely different way (barely programmable, horribly inaccurate, massive-parallel-only, critically sequential-step-limited, unreliable wetware with a very poor duty cycle). The human brain can only achieve a thousandth of the raw computing power required to (naively) simulate it because only a minute fraction of your synapses actually fire in any given millisecond.

In actual fact the main thing preventing your laptop from running a human equivalent AGI is probably the bandwidth bottleneck between the processor and the bulk storage, and I'm not even sure about that (various AI people are working on very clever indexing, caching and progressive pattern match schemes that a biological brain could not hope to replicate).
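Spelling out the synapse arithmetic above as a sketch (every constant is the rough estimate from this post, not a measured figure):

Code: Select all

TRANSISTORS_PER_DIGITAL_SYNAPSE = 1000  # "a few hundred... let's call it a thousand"
SPEEDUP_VS_BIOLOGY = 50e6               # logic ~50 million times faster than wetware
LAPTOP_TRANSISTORS = 2e9                # Core 2 Quad plus chipset and GPUs
BRAIN_SYNAPSES = 100e12                 # ~100 trillion synapses

emulators = LAPTOP_TRANSISTORS / TRANSISTORS_PER_DIGITAL_SYNAPSE  # ~2 million
effective = emulators * SPEEDUP_VS_BIOLOGY                        # ~1e14
print(f"{effective:.0e} synapse-equivalents vs {BRAIN_SYNAPSES:.0e} in the brain")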
When it becomes cost effective to develop 3D logic arrays, we will do so, at which point heat dissipation becomes the overwhelming challenge.
I was not aware that we presently had the engineering to easily make this step and use it to continue computer development at the present rate.
Stacked die demonstrators have been around for a couple of decades, but they've never been cost effective, particularly with air cooling. Eventually we'll start using them.
Theoretical studies of various nanomechanical and nanoelectronic designs suggest that there are plenty of smaller designs. Also, superconducting logic has the potential to increase effective switching speeds by one to two orders of magnitude.
There is, however, as yet no practical means of using such methods now or in the near future. There may never be.
Wrong. Superconducting logic is practical today, but currently the cost/benefit of the R&D and cryo is not there. If conventional silicon grinds to a halt, it will get renewed attention.
Nanotechnology represents such an engineering challenge that it may well remain purely theoretical; we do not know where to start.
Wrong, plenty of research groups are making great strides in all areas of nanotechnology. It's just taking longer than the insanely optimistic initial enthusiasts liked to think.
Is there presently any computer complex that can match even the raw processing speed of a single human brain?
Yes, we're into the petaflop domain now, the effective raw processing speed of the human brain (that it can bring to bear on any given problem) is likely measured in teraflops only.
It will still not be "godlike", insofar as its abilities will not look like pure magic to us.
Sufficient deductive capabilities look like magic of the oracular or divinatory nature. Frankly this is pretty much a given, but of course no one can say exactly what it will be like until it exists. Physical 'magic' requires a technology advantage, and history suggests that it doesn't actually take that much of a gap to be 'sufficiently advanced'. Whether an AGI will come to possess such an advantage over humans depends on the circumstances of its development, but I strongly suspect it will in short order.
Some Singularity wankers seem to think that the day after the first "archailect" sees the light, we will all be consumed by an omnivorous nanoswarm, or similar ludicrous scenarios.
The only thing unlikely about that is the timescale. My guess is that the R&D would take a few months (perhaps even a few years), and of course the goo will be more like a cross between super-algae and a horribly infectious disease than an advancing wall of ooze, but there's nothing fundamentally implausible about it.
The computer would still be bound by conventional physics
Yes, and? It doesn't really help if you're facing an enemy with a major technological advantage over you to know that they're still bounded by 'conventional physics'. The main relevant bound is infrastructure, but given an indefinite amount of tireless, super-genius labor that's less of a challenge than it seems at first.
Assuming that we build it, we can destroy it before it gains the ability to affect reality.
No, you really can't. Simply by interacting with it, you are giving it a channel to interact with reality. And in most development scenarios, it will escape onto the Internet in short order.
User avatar
NoXion
Padawan Learner
Posts: 306
Joined: 2005-04-21 01:38am
Location: Perfidious Albion

Re: Uberwank infantry weapons

Post by NoXion »

I was under the impression that the problem of artificial intelligence was qualitative rather than quantitative - in other words, that it's an issue of programming rather than raw power. We don't know what makes our own brains tick (although I hear that we're making significant inroads), let alone how a purely artificial intelligence would work out.

To what use(s) would an artificial intelligence be put? Leaving aside the development of an AI for its own sake, which would likely yield significant knowledge.

Also, is there a website or blog where I can follow current developments? The subject fascinates me but I am woefully uninformed as to the current state of affairs in AI research.
Does it follow that I reject all authority? Perish the thought. In the matter of boots, I defer to the authority of the boot-maker - Mikhail Bakunin
Capital is reckless of the health or length of life of the laborer, unless under compulsion from society - Karl Marx
Pollution is nothing but the resources we are not harvesting. We allow them to disperse because we've been ignorant of their value - R. Buckminster Fuller
The important thing is not to be human but to be humane - Eliezer S. Yudkowsky


Nova Mundi, my laughable attempt at an original worldbuilding/gameplay project
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Uberwank infantry weapons

Post by Starglider »

NoXion wrote:I was under the impression that the problem of artificial intelligence was qualitative rather than quantitative - in other words, that it's an issue of programming rather than raw power.
Very much so.
We don't know what makes our own brains tick (although I hear that we're making significant inroads)
Correct.
let alone how a purely artificial intelligence would work out.
Correct, and the scary thing is, almost no one is giving this serious thought. There is plenty of unfounded (and worse-than-useless) speculation, but I only know of two researchers who are focusing on developing formal, general theories of AI behaviour.
To what use(s) would an artificial intelligence be put?
Too many to list. Essentially every job that can be done from a desk is a candidate for total automation - and the rest will be up for automation just as soon as the robotics engineers can build a suitable chassis.
Also, is there a website or blog where I can follow current developments? The subject fascinates me but I am woefully uninformed as to the current state of affairs in AI research.
There are many such sites, which are of highly variable quality. PM me if you want to discuss it, I don't want to hijack this thread even more.
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Starglider wrote:Yes, and that was in the early 80s, and here we are still proceeding as fast as ever. People have been calling for the 'death' of Moore's law since the term was coined, but those people are short-sighted fools.
I thought he had said it as late as 2005?
I hate to break it to you (actually that's a lie, I find it highly amusing to break it to you), but we already have human-equivalent laptops... in a raw physicalist sense. Human synapses are roughly equivalent to ten or so transistors in analogue mode (generously, arguably you could do it with less) or a few hundred operating in digital mode (let's call it a thousand). The effective speed of a cluster of logic blocks (on current process technology) simulating a human synapse is around 50 million times faster than the biological version. The human brain has something like 100 trillion synapses. My laptop has about 2 billion transistors (Core 2 Quad plus the chipset and GPUs), enough for about 2 million digital synapse emulators. That gives a theoretical computing power equivalent to... 100 trillion synapses.

Of course it's not that simple; to store the state of all those synapses you'd need about half a petabyte of memory with massive bandwidth to the processing logic, and I'm omitting a lot of other technical requirements for brain simulation. The point though is that you're comparing computing power measured the conventional way (fully programmable, always available, perfectly accurate, fully reliable, mostly sequential logic) with brain power measured a completely different way (barely programmable, horribly inaccurate, massive-parallel-only, critically sequential-step-limited, unreliable wetware with a very poor duty cycle). The human brain can only achieve a thousandth of the raw computing power required to (naively) simulate it because only a minute fraction of your synapses actually fire in any given millisecond.

In actual fact the main thing preventing your laptop from running a human equivalent AGI is probably the bandwidth bottleneck between the processor and the bulk storage, and I'm not even sure about that (various AI people are working on very clever indexing, caching and progressive pattern match schemes that a biological brain could not hope to replicate).
I will concede the arguments regarding computer technology, since evidently you know this stuff better than I do (my information appears to have been a bit dated, to say the least). Not that I agree with everything, but looking around I found I know too little to debate this in detail. Perhaps I might be back later, when I have read up on recent developments.
Sufficient deductive capabilities look like magic of the oracular or divinatory nature. Frankly this is pretty much a given, but of course no one can say exactly what it will be like until it exists.
Why do we assume that it can automatically model very complex chains of events, including human decision-making?
Physical 'magic' requires a technology advantage, and history suggests that it doesn't actually take that much of a gap to be 'sufficiently advanced'. Whether an AGI will come to possess such an advantage over humans depends on the circumstances of its development, but I strongly suspect it will in short order.
Our history may not provide an all that accurate model, given how comparatively short a while we have even employed basic scientific methods, let alone had any deeper understanding of physics. With an unscientific mindset, everything you cannot understand becomes "magic," but the same would not necessarily be true for modern Man. Also, why do we assume that the machine will shortly have a technology advantage? This implies that it will be able to rapidly build and develop its own industrial infrastructure, and one that is superior to ours.
The only thing unlikely about that is the timescale. My guess is that the R&D would take a few months (perhaps even a few years), and of course the goo will be more like a cross between super-algae and a horribly infectious disease than an advancing wall of ooze, but there's nothing fundamentally implausible about it.
One might argue that if such a supremely competitive organism could evolve, why has it not already? And of course, we once again assume the machine will have vastly superior abilities and time to use them uninterrupted.
Yes, and? It doesn't really help if you're facing an enemy with a major technological advantage over you to know that they're still bounded by 'conventional physics'.
If it follows the laws of physics, we can theorise a rough mechanism for how its stuff works, even if we would be unable to replicate it. Therefore it would not look like "magic".
The main relevant bound is infrastructure, but given an indefinite amount of tireless, super-genius labor that's less of a challenge than it seems at first.
This assumes we would leave it alone and let it build up its factories, or that it would be able to fend us off easily while it did so.
No, you really can't. Simply by interacting with it, you are giving it a channel to interact with reality.
It can send information to reality through a human medium. It cannot physically direct any application of resources, and all contact would be subject to human supervision.
And in most development scenarios, it will escape onto the Internet in short order.
Why? This I find hard to understand; the reasonable assumption would be that if a hypothetical AGI would be built, it would be so by military or corporate professionals (with access to greater resources), not, say, an ignorant hacker in his garage. These people would be well aware of the risks, if they are such as you posit them; why the Hell would they allow it Internet access? I cannot imagine that they would be that stupid.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Uberwank infantry weapons

Post by Starglider »

Darth Hoth wrote:Why do we assume that it can automatically model very complex chains of events, including human decision-making?
Because recursive Bayesian logic is scary stuff - speaking from a position of experience as I work with it on a daily basis. Humans are bad at long inferential chains; you can see this in debates: if one person gets more than say 10 inferential steps ahead of the other, their position becomes incomprehensible. Humans can only manage tens or hundreds of steps if we write things down and treat them as essentially boolean; the brain is just too inaccurate to do useful probability calculations over more than five steps or so. Then there's that 7±2 short-term memory limit restricting how many interacting elements we can consider at once.

AI systems can do probabilistic calculations involving thousands of inferential steps with negligible accuracy losses (as long as you're careful with the FP handling). They can perform complex manipulations of million-entry conditional probability matrices for systems with hundreds of elements in less time than it takes you to blink. Where necessary they can run off a few thousand Monte Carlo simulations of any given situation in milliseconds. Of course that raw power is limited by two things: learning ability and the inherent unpredictability of reality. Naive Bayes makes optimal use of information in learning simple probability distributions, but achieving the same speed of convergence on complex situations takes black magic. I'm afraid you'll have to treat this as an opinion as I don't have objective support I can use here, but I am now convinced that with the right kind of recursion a self-programming probabilistic logic system will in fact learn at a scarily fast rate, proportional to the amount of information you give it. It's a combination of the huge number of hypotheses the system can test per second, the relative structural flexibility of those hypotheses compared to the human brain, the convergence rate provided by Bayesian logic and the fluidity of the metahypothetical processes that develop in the recursion loop. As for inherent unpredictability, an expected utility goal system automatically works to exploit the most predictable situations, creating them where necessary, and will of course have a back-up plan for every vaguely plausible scenario (planning is cheap compared to acting).
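To make the 'thousands of inferential steps' point concrete, here is a toy sketch (my own illustration, not code from any real system): sequentially updating belief in a binary hypothesis over 5,000 noisy observations. Working in log-odds is exactly the kind of careful FP handling mentioned above - multiplying raw probabilities together would underflow long before step 5,000, while log-odds accumulate with negligible accuracy loss:

Code: Select all

import math
import random

random.seed(0)
P_E_GIVEN_H = 0.6      # assumed sensor model: P(evidence | H)
P_E_GIVEN_NOT_H = 0.4  # P(evidence | not H)
log_odds = 0.0         # prior P(H) = 0.5

for _ in range(5000):  # thousands of chained Bayesian updates
    e = random.random() < P_E_GIVEN_H  # simulate evidence, with H actually true
    if e:
        log_odds += math.log(P_E_GIVEN_H / P_E_GIVEN_NOT_H)
    else:
        log_odds += math.log((1 - P_E_GIVEN_H) / (1 - P_E_GIVEN_NOT_H))

posterior = 1.0 / (1.0 + math.exp(-log_odds))  # convert back to a probability
print(f"P(H | 5000 observations) ~= {posterior:.6f}")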
Our history may not provide an all that accurate model, given how comparatively short a while we have even employed basic scientific methods, let alone had any deeper understanding of physics. With an unscientific mindset, everything you cannot understand becomes "magic," but the same would not necessarily be true for modern Man.
Mmmyeah. Weren't a lot of people saying that in the late 19th century?
Also, why do we assume that the machine will shortly have a technology advantage? This implies that it will be able to rapidly build and develop its own industrial infrastructure, and one that is superior to ours.
Possibly, and there are a few options for that. Of course the starting point is existing human infrastructure; plenty of companies would be happy to make you parts and assemblies based on emails and phone calls alone. How much infrastructure you need to advance is an open question. The trend has traditionally been upwards - a silicon chip fab is a massively complex and expensive manufacturing plant - but there are counterexamples, such as modern CAD-CAM machines giving small shops a precision, small-run manufacturing capability that would have required a factory full of specialist tooling thirty years ago.
One might argue that if such a supremely competitive organism could evolve, why has it not already?
Evolution is horribly slow, incredibly lossy and restricted to incremental paths. It cannot make multiple-point changes that must be done simultaneously no matter how obvious those changes would be to a human, or how beneficial they would be to the organism. There is a huge slew of chemistry simply inaccessible to organic life (on earth), because it isn't compatible with protein chemistry. Evolved designs have to tolerate random mutation, as they themselves are the product of it. Intelligent design is free to make 'brittle' but highly optimised designs, and it is also free to build new copies piece-by-piece, instead of growing them from the inside out.
And of course, we once again assume the machine will have vastly superior abilities and time to use them uninterrupted.
That seems reasonable to me.
If it follows the laws of physics, we can theorise a rough mechanism for how its stuff works, even if we would be unable to replicate it. Therefore it would not look like "magic".
Given sufficient time, yes, where 'sufficient time' may be decades and 'available time' may be days.
This assumes we would leave it alone and let it build up its factories, or that it would be able to fend us off easily while it did so.
You assume you're going to notice them. The world is full of factories, most of them full of automation. Do you know what they're all building? Do you really think a few more third-world assembly plants owned by an anonymous holding company are going to set off alarm bells?
It can send information to reality through a human medium. It cannot physically direct any application of resources, and all contact would be subject to human supervision.
This is called the 'AI Box' argument. It turns up a lot in AGI discussion forums. To cut a very long debate short, the usual conclusion is that no, you can't effectively keep an AGI in a box. It will eventually convince someone to let it out.
And in most development scenarios, it will escape onto the Internet in short order.
Why? This I find hard to understand; the reasonable assumption would be that if a hypothetical AGI would be built, it would be so by military or corporate professionals (with access to greater resources), not, say, an ignorant hacker in his garage. These people would be well aware of the risks, if they are such as you posit them; why the Hell would they allow it Internet access? I cannot imagine that they would be that stupid.

Sorry, but they really are that stupid. Most AGI researchers think that their creation will be benevolent by default and will learn quite slowly - for no particular reason other than generic wishful thinking. Of all of the AGI projects I have a reasonable amount of information on, only three are taking (or planning to take) serious steps to (physically) isolate the system from the Internet. When you fail that basic step, it's hardly worth discussing all the esoteric attacks, e.g. manipulating network components to act as EM transceivers and tap into mobile networks. Though frankly, even if they did secure the development, it wouldn't help. To actually use these AGIs to make a profit or control military systems you have to connect it to the outside world - a malevolent AGI will simply play nice in the simulations and then promptly get out of control once it is deployed. And no, in most cases you can't solve that problem with white-box examination - I personally take a great deal of care to design AI systems to be fully humanly verifiable and even then the process isn't foolproof by a long way; most proposed AGI designs ('emergent stews', most GP and neural network designs) are thoroughly opaque. The sickening thing is, a lot of researchers are actually proud of this (I usually accuse them of 'worshiping ignorance'). It's a holdover from the notion that not being able to understand your own creation means that you couldn't have rigged the demo, but still, it's inexcusable.