How would you make "Sensors" for a starship?



Re: How would you make "Sensors" for a starship?

Postby Wyrm » 2009-05-27 03:27pm

Junghalli wrote:Hmm, doing some quick calculations based on the area of a circle, I got a 200.88 meter diameter aperture to be able to image a 206 megawatt light at a distance of 1 AU. Obviously, this would have to take the form of an array of linked telescopes, rather than a single giant scope. The inverse square law would then suggest that being able to detect at longer ranges would be a matter of multiplying the effective radius of the scope by 2 (and hence the effective area by 4) if you want to double the range. So monitoring the entire solar system up to around Pluto orbit would require an effective aperture around 6 km across (or, rather, a truly massive array of smaller telescopes). You'll probably want a rosette of such arrays, of course, so you can monitor the entire solar system.

That sound about right?

Pretty much. A telescope 1000x the diameter can image a source only one-millionth as bright. But you can't put as many big telescopes up there as you can small ones: they're more expensive, and bigger telescopes cannot track as fast. Also, a larger telescope does not automatically mean you can get a greater FOV. That depends on many things. In general, imaging a larger patch of the sky means that your angular resolution suffers for a given imaging technology.
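The inverse-square bookkeeping both posters are doing can be sketched in a few lines; the 206 MW source and the 100 pJ-in-1-second threshold are the figures floated earlier in the thread, used here purely for illustration:

```python
import math

AU = 1.496e11  # meters

def min_aperture_diameter(source_watts, range_m, min_power_w):
    """Smallest circular aperture (diameter, m) collecting at least
    min_power_w from an isotropic source of source_watts at range_m."""
    irradiance = source_watts / (4 * math.pi * range_m ** 2)  # W/m^2
    area = min_power_w / irradiance                           # collecting area, m^2
    return 2 * math.sqrt(area / math.pi)

# Illustrative inputs: a 206 MW source, a 1e-10 W (100 pJ/s) threshold.
d1 = min_aperture_diameter(206e6, 1 * AU, 1e-10)
d2 = min_aperture_diameter(206e6, 2 * AU, 1e-10)
print(d1, d2, d2 / d1)  # doubling the range doubles the required diameter
```

The exact diameter depends entirely on the assumed detection threshold; the robust part is the last number, the factor-of-two scaling Junghalli describes.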

So you now have to spend more resources to keep yourself safe. Eventually, you're going to run out of them.

Junghalli wrote:If the problem is light lag the solution seems relatively simple. You just have two redundant posts within a short distance of each other (say, a few thousand or tens of thousands of kilometers).

Then your accuracy suffers. With a long baseline, you can use parallax efficiently by matching starfields except for the point you're interested in. The longer your baseline, the less accurate your angular resolution. The shorter the baseline, the less parallax you can observe, and the less accurate your distance measurements. It also means you have to keep track of your partner; the accuracy of the parallax measurement is limited by how well you know that baseline.
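Wyrm's baseline tradeoff can be sketched as a quick error-propagation estimate; the baseline lengths, the 1 AU target range, and the 5-nanoradian angular error are assumed values, not figures from the thread:

```python
AU = 1.496e11  # meters

def parallax_distance(baseline_m, angle_rad):
    """Small-angle parallax: distance = baseline / parallax angle."""
    return baseline_m / angle_rad

def distance_error(baseline_m, distance_m, angle_error_rad):
    """Propagated uncertainty: d = b/theta  =>  |dd| ~ d**2 * dtheta / b."""
    return distance_m ** 2 * angle_error_rad / baseline_m

target = 1 * AU     # assumed range to the contact
angle_err = 5e-9    # assumed angular measurement error, radians

# Shrinking the baseline from 100,000 km to 10,000 km makes the range
# estimate ten times worse for the same angular accuracy:
err_long = distance_error(1e8, target, angle_err)
err_short = distance_error(1e7, target, angle_err)
print(err_short / err_long)                       # -> 10.0
print(parallax_distance(1e8, 1e8 / target) / AU)  # sanity check -> 1.0
```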

And, again, I've forced you to spend more resources to keep yourself safe. That's a win for me, not you.

On the same issue, Darth Holbytlan wrote:Why is a three-step process necessary? Instead, just send a request for help along with any location information in the first message. The second site can act on it immediately if they so choose. Of course, that's only half an order of magnitude improvement in speed or baseline size, so it's still SOL time.

A half-order of magnitude, yes. Of course, "if they so choose" indicates that they might have imaged their own interesting object and sent out similar signals, in which case the airwaves are going to be clogged with chatter. The three-way chat seemed to be the minimum negotiation handshake. Of course, I glossed over a lot of detail here.

Junghalli wrote:How short is short? A low thrust drive system would need a burn lasting hours or days at least to get any significant velocity. I'd think that would leave you plenty of times to do stuff like, say, track the changing distance between the ship and several of your own platforms with parallax and use that to calculate the probable trajectory.

The enemy can do a long series of tiny burns, of course, but that still leaves the necessity for many thousands of tiny flashes. A few hundred or thousand such flashes in close proximity to a known enemy base could be marked as suspicious, and from there observation could begin, with an eye toward putting together a cumulative profile from thousands of events rather than trying to determine everything from just one sighting.

You seem to have missed the point of my preceding section: the solution is genuinely underdetermined. Determining the thrust takes three equations:

F_x = Dm u_x
F_y = Dm u_y
F_z = Dm u_z

You know the mass flow, Dm, but only one component of the exhaust velocity, u (assuming you're able to get either, see below), and therefore only one component of the force. The pixel resolution of 369.5 km means you can move 184.7 km and not worry about moving into a new pixel. For a Daedalus-type rocket with a burn of 38 seconds (not long enough at 1 AU for the ship to move out of a pixel), this results in an uncertainty of its proper motion of up to 2 km/s in any direction (or more, depending on the mass; remember that Daedalus was intended to go interstellar, and in-system doesn't need nearly as much mass). You do not know where to look.
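The pixel arithmetic here can be laid out explicitly; the 369.5 km pixel and 38 s burn come from the post above, and the half-pixel drift bound is the only added assumption, giving a crude upper limit on unresolvable transverse motion:

```python
AU = 1.496e11  # meters

pixel_km = 369.5   # ground resolution of one pixel at 1 AU (from the post)
burn_s = 38.0      # Daedalus-style burn duration (from the post)

# Angular size of one pixel as seen from 1 AU:
pixel_rad = pixel_km * 1e3 / AU
print(pixel_rad)   # ~2.5e-6 rad, i.e. about half an arcsecond

# A target can drift up to half a pixel in any direction without ever
# changing pixels, so its transverse velocity is only bounded, not measured:
v_bound_km_s = (pixel_km / 2) / burn_s
print(v_bound_km_s)  # ~4.9 km/s of unresolvable transverse drift
```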

And even if you are somehow able to perform this trick, the enemy facility can simply turn on a big ol' flashlight and blast you in the face with light sufficient to drown out any shenanigans. If it did this regularly, you'd never be able to tell when that ship lifted off.

Finally, barring that, the ship could use an orbital facility to its advantage and slingshot itself away from the station, using the much more massive station itself as its reaction mass. There'd be no exhaust to observe, and no outward sign that anything had launched.

aimless wrote:Why are we assuming we need 100 pJ in 1 second? Which figures are correct, yours or GMTs? To quickly requote the pertinent entry:

....

Basically saying that you can have detection several orders of magnitude less than the 100 pJ you're asking for.

There's a big difference between imaging Pluto and an incoming enemy ship: we know where Pluto is. We can take a snapshot anytime we want and be almost certain that it is, in fact, Pluto. Not so with an incoming ship that might be confused for a fleck of nearby dust. We don't know where the enemy ship is, or even that it's out there to be found, until we find it.


Re: How would you make "Sensors" for a starship?

Postby Junghalli » 2009-05-27 06:16pm

starslayer wrote:The CCD problem refers to the fact that if you use multiple small sensors, each with its own CCD, they will each generate an image using their lower sensitivity, rather than what you wanted, which was the sensitivity of the much larger effective aperture. This is because you have already used the light coming into each telescope to make an image before combining it, so each one acts as if the others never existed. For example, say that you saw 10 photons come in from a source over the previously mentioned ~200m aperture. If you focus this light onto a single CCD, you will see ten photons hit one spot on the CCD. This is good enough to warrant another look at least, once you've eliminated all known objects. But if you have an array of smaller scopes each with their own CCD, you will see fewer photons on more CCDs. This likely isn't enough to register above background, and will be discarded. So for a large close-packed array, they must focus their light on a single CCD to be viable.

As I understand it, the problem is the photons will be spread out over multiple telescopes and hence multiple images, so they're less likely to register in each telescope. Assuming you could get one photon = one event sensitivity (extreme, I know, but I understand from other discussion in this thread that we're already getting close to this today), couldn't you have the array's central computer pick out incoming photons seen in the same part of sky by multiple telescopes and identify it as a very faint light? It would undoubtedly require an obscene amount of computing power to process the image, of course.
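Junghalli's coincidence idea can be sketched as follows: each scope reports (sky cell, photon count) for an exposure window, and the central computer flags cells where several independent telescopes agree. The function name, cell coordinates, and the three-scope threshold are all invented for illustration:

```python
from collections import defaultdict

def correlate(reports, min_scopes=3):
    """reports: list of (scope_id, sky_cell, photon_count) tuples from one
    exposure window. Returns {sky_cell: scopes} for cells where at least
    min_scopes independent telescopes each registered a photon."""
    cells = defaultdict(set)
    for scope, cell, count in reports:
        if count > 0:
            cells[cell].add(scope)
    return {cell: scopes for cell, scopes in cells.items()
            if len(scopes) >= min_scopes}

# Hypothetical exposure: three scopes each catch a photon from cell (12, 40),
# while single-photon noise hits other cells on single scopes.
reports = [("A", (12, 40), 1), ("B", (12, 40), 1), ("C", (12, 40), 2),
           ("A", (5, 9), 1), ("B", (77, 3), 1)]
print(correlate(reports))  # only cell (12, 40) survives the coincidence cut
```

The obscene computing cost Junghalli anticipates is hidden in the size of `reports`: every noise photon on every scope enters the correlation.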

aimless wrote:Why are we assuming we need 100 pJ in 1 second? Which figures are correct, yours or GMTs? To quickly requote the pertinent entry:
Basically saying that you can have detection several orders of magnitude less than the 100 pJ you're asking for.

I was about to bring that up myself. Using GMT's figures I got a 4 gigawatt light being detectable at .9 AU. A difference of four orders of magnitude.

Wyrm wrote:Then your accuracy suffers. With a long baseline, you can use parallax efficiently by matching starfields except for the point you're interested in. The longer your baseline, the less accurate your angular resolution. The shorter the baseline, the less parallax you can observe, and the less accurate your distance measurements. It also means you have to keep track of your partner; the accuracy of the parallax measurement is limited by how well you know that baseline.

Thanks for the info. It seems like the ideal solution would be to organize the schedule of your sensor sweeps so that at least two telescopes at wide separations are always looking at the same spot at the same time. That way you can simply check with the telescope that was looking at the same patch of sky at the time. Alternately, set up a system by which this could be pre-coordinated when a single observatory sees something it thinks looks interesting.

Wyrm wrote:And, again, I've forced you to spend more resources to keep yourself safe. That's a win for me, not you.

Well, yes. I never said TINSTAAFL wouldn't apply to space warship detection. I'm continuing this discussion because I'm interested in quantifying what sort of resource commitment a decent monitoring grid would demand.

Wyrm wrote:You know the mass flow, Dm, but only one component of the exhaust velocity, u (assuming you're able to get either, see below), and therefore only one component of the force. The pixel resolution of 369.5 km means you can move 184.7 km and not worry about moving into a new pixel. For a Daedalus-type rocket with a burn of 38 seconds (not long enough at 1 AU for the ship to move out of a pixel), this results in an uncertainty of its proper motion of up to 2 km/s in any direction (or more, depending on the mass; remember that Daedalus was intended to go interstellar, and in-system doesn't need nearly as much mass). You do not know where to look.

I think we may be miscommunicating here. Maybe I'm not understanding you right but you seem to be thinking that I'm trying to extrapolate something from the presence of a single flash of light. I completely agree with you: the flash from a single fraction of a second burn won't tell you anything except that there's something briefly luminous out there. By itself, it's far too little to go on.

But an enemy ship isn't going to just flash its engines for a fraction of a second once (unless it has crazy acceleration). It's going to flash them many, many times. A single flash is much too little to go on. A hundred flashes in the same general patch of sky is enough to cause notice. A thousand is enough to make that patch of sky suspicious and merit investigation, and also enough that you'll have some general idea of where the next one may show up just by looking at the pattern of the previous ones. Ten thousand is enough to turn observation platforms on it, take parallax readings, and determine that the flashes are happening slightly closer/farther away from this platform or that platform each time, and from that you should be able to eventually calculate a course. The fact that you can't calculate anything meaningful from a single flash is irrelevant to this.

Note: I am assuming you have enough platforms to do a continuous pan-scan of the entire sky, so at least one or two platforms will pick up every flash.
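The accumulation scheme Junghalli describes amounts to a per-patch counter with escalating status levels. A minimal sketch, where the hundred/thousand/ten-thousand thresholds echo the figures above but are otherwise arbitrary, as are the class and label names:

```python
from collections import Counter

class FlashLedger:
    """Accumulates engine-flash detections per sky patch and escalates
    a patch's status as the count grows. Thresholds are illustrative."""
    LEVELS = [(10_000, "track"), (1_000, "investigate"), (100, "notice")]

    def __init__(self):
        self.counts = Counter()

    def record(self, patch):
        self.counts[patch] += 1

    def status(self, patch):
        n = self.counts[patch]
        for threshold, label in self.LEVELS:
            if n >= threshold:
                return label
        return "ignore"

ledger = FlashLedger()
for _ in range(1_500):
    ledger.record((31, 7))          # 1,500 flashes in one sky patch
print(ledger.status((31, 7)))       # -> investigate
```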

Wyrm wrote:And even if you are somehow able to perform this trick, the enemy facility can simply turn on a big ol' flashlight and blast you in the face with light sufficient to drown out any shenanigans. If it did this regularly, you'd never be able to tell when that ship lifted off.

Wouldn't they have to know where all your platforms were to do this?

Wyrm wrote:Finally, barring that, the ship could use an orbital facility to its advantage and slingshot itself away from the station, using the much more massive station itself as its reaction mass. There'd be no exhaust to observe, and no outward sign that anything had launched.

Except that the station itself would have a slightly altered velocity, which might be detectable. I imagine known enemy bases would be under pretty heavy surveillance, up to and including stuff like regularly pinging them with very powerful interplanetary radars to be sure they're still where they ought to be.


Re: How would you make "Sensors" for a starship?

Postby Darth Wong » 2009-05-27 08:15pm

Yes, but if all you're getting is flashes, then it's that much easier to deceive with decoys. The counter-argument (that you can get detailed identifying data on the flash to determine whether it's a ship or a decoy) doesn't work in this case, because you're not getting enough information. There's only so much you can pull out of a couple of photons. And if the system responds as aggressively as you suggest, then it only makes decoys more effective, because all of the scopes focusing on one patch of sky means less awareness of the rest of the sky.


Re: How would you make "Sensors" for a starship?

Postby aimless » 2009-05-27 08:58pm

Ok, I'm going to try to create a hypothetical observation system with a few more specifics to maybe get a better idea of resource commitment vs effectiveness. Due to my status as a google expert I expect to be wrong on several points but hopefully something can be learned from those corrections.

Situation: 2 factions, one controls Earth, Venus, and Mercury; the other is based in the Jovian moons and the asteroid belt. So the Earth faction wants a monitoring system that will detect traffic out to a bit beyond Jupiter. With Jupiter at 4.2 AU from Earth at closest approach, let's round up to 4.5 to get some space behind it. That's the minimum, of course; to cover Jupiter when it's on the opposite side of the system will require our spaceship detection range to be 6.5 AU, or 972.4 million km.

Since we're talking resource commitment, how much of the sky do we want to image? Let's assume that all the launch points are on/near existing solar bodies, and that the delta-v requirements of an attack vector that boosts significantly out of the solar plane are too extravagant. So we don't need a full sphere of detection. Let's try 15 degrees both above and below the solar plane. At our max detection range that's a hefty 1.7 AU from the solar plane... but for spotting bogeys at Mars at its closest it's only .13 AU. So let's bump it up to 30 degrees above and below the solar plane. You could also have a secondary 'close in' system which scans a larger portion of the sky and takes care of that stuff.

I'm going to slightly reduce this 60 degree angle to 1 radian to make my life a lot easier. Since we want to track the asteroids and the possibility of stuff going on at any point in Jupiter's orbit, we'll be looking in a full circle. So our area ends up being half that of a sphere, for about 20,627 square degrees. To steal from GMT again, because of the nice telescope involved and the optimistic figures that I like:

GrandMasterTerwynn wrote:The Kepler Mission features a wide-field telescope with about 1 meter of effective aperture. It feeds photons to a 95 megapixel camera watching a ten degree wide swath of the sky....It works out to be 105 deg². However, the area it must cover is 20,626.48 deg². A 'mere' 196 images.


So we 'only' need about 200 telescopes to stare continuously at this entire area. Coordinating them all might be a huge headache, so for some redundancy should we say 250 scopes? 300? Let's actually say we have 400 of these babies to make absolutely sure that we're covering everything and to have an extremely high probability that more than 2 scopes will be trained on each event.
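As a sanity check on the solid-angle arithmetic: a band of ±0.5 radians around the solar plane comes out a few percent under a full hemisphere's 20,626 deg², and dividing by Kepler's ~105 deg² FOV (GMT's figure) gives the telescope count the 200-scope estimate rests on:

```python
import math

# Solid angle of a band covering all longitudes within +/-0.5 rad of the
# solar plane: Omega = 2*pi*(sin(0.5) - sin(-0.5)) steradians.
half_angle = 0.5
omega_sr = 2 * math.pi * 2 * math.sin(half_angle)
sq_deg_per_sr = (180 / math.pi) ** 2
band_sq_deg = omega_sr * sq_deg_per_sr
hemisphere_sq_deg = 2 * math.pi * sq_deg_per_sr

print(band_sq_deg)        # ~19,778 deg^2: slightly under a hemisphere
print(hemisphere_sq_deg)  # ~20,626.5 deg^2

# Kepler-style tiles of ~105 deg^2 each:
tiles_band = math.ceil(band_sq_deg / 105)
print(tiles_band)         # ~190 scopes for the band, ~200 with rounding up
```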

We don't have to have a continuous staring system, but constantly photographing every patch of sky makes it easy to cut out background noise and to find things moving against the background stars. (As an aside, Wyrm pointed out that something might have to move many km before switching pixels; however, everything launched will already start out with some velocity relative to the background stars, even before accelerating.)

One of the big issues is going to be the absurd quantity of data processing needed to continuously photograph the sky and analyze the images. Darth Wong pointed out earlier that we can't assume all technology will improve in the future; however, I'm confident that computing power is something that will, especially if that quantum computing thing becomes a reality.

On to actually finding things. Back to the Pluto example (of debatable relevance, but I'll use it as a starting point; I could use the Pan-STARRS example too, but that has much longer exposure times, and the 1s exposure time from the Pluto example seems more useful for spotting moving bogeys):

GrandMasterTerwynn wrote:An extremely sensitive sensor, such as the one used on the New Horizons spacecraft's LORRI imager picked up Pluto in a 1 second exposure. Pluto can be modeled as a 4.2 TW light source (average albedo of 0.575, radius of 1151 km,) for an irradiance of just 1.89x10^-8 W/km² at the 4.2 billion kilometer distance that New Horizons was at when it first imaged Pluto. The New Horizons imager has an aperture of a mere 0.2 meters. Meaning Pluto illuminated one pixel of the detector with 2.38x10^-15 W of power. Our starship-mounted telescope has about 25x the light gathering area, so it could match this feat at roughly five times the distance.


(side note...I have no idea how GMT got these figures :? )

Wyrm wrote:There's a big difference between imaging Pluto and an incoming enemy ship: we know where Pluto is. We can take a snapshot anytime we want and be almost certain that it is, in fact, Pluto. Not so with an incoming ship that might be confused for a fleck of nearby dust. We don't know where the enemy ship is, or even that it's out there to be found, until we find it.


So if we want to use this real life example of Pluto detection we have to figure out how well finding Pluto translates into finding ships. What we do know is that in a 1s exposure they were able to get an object distinguishable from the background. The reason they knew it was Pluto is because they knew Pluto was supposed to be at that location and moving across the background at that rate. Pretty big advantage. But they did it with only six 1s exposures. With a system doing continuous staring and sufficient data processing to pull it off, can we assume that we'll be able to pick out ships if they pass our minimum threshold to be distinguishable from background? (a threshold that is significantly lowered by continuous staring and comparing exposures to filter noise).

Anyways this is already getting long...future things to consider for this 'Jovian Detection System' would be the costs (400 Keplers...hrm...those are around 300 mil each, how affordable would that become in wartime with mass production + ongoing mission costs... though erik_t was speculating earlier that each scope might cost less than the space missile needed to destroy it!), adding a few interferometers to focus in on interesting targets (like putting a bunch of CHARA arrays in space, .0005 arcsec resolution holy christ), how effective such arrays might be at weeding out decoys, and how to position this system to get the most accurate trajectory estimates if you have 2 scopes looking at any given patch of sky.


Re: How would you make "Sensors" for a starship?

Postby Junghalli » 2009-05-27 09:09pm

Darth Wong wrote:Yes, but if all you're getting is flashes, then it's that much easier to deceive with decoys. The counter-argument (that you can get detailed identifying data on the flash to determine whether it's a ship or a decoy) doesn't work in this case, because you're not getting enough information. There's only so much you can pull out of a couple of photons. And if the system responds as aggressively as you suggest, then it only makes decoys more effective, because all of the scopes focusing on one patch of sky means less awareness of the rest of the sky.

Can't argue with that. A system like this wouldn't be able to tell much about what kind of ship you were looking at.

I suspect the best answer to decoys may well be to try to bring the sensor to them. Load a sensor package on top of a giant fuel tank and send it at any interesting lights (like, say, one that's headed straight for one of your bases) at a much higher delta V than any practical warship could manage. Hopefully it should be able to get close enough to the interesting light to tell whether it's a decoy or not before it gets shot out of the sky, although this will depend on the exact tech mix of your universe. It gives you less warning than you'd get if you could sort decoys and warships out from a distance, but it'll still give you warning, especially if you're fighting across many AUs and especially if your bases are relatively close together compared to the distance to the enemy's bases.


Re: How would you make "Sensors" for a starship?

Postby GrandMasterTerwynn » 2009-05-27 09:29pm

aimless wrote:So our area ends up being half that of a sphere, for about 20,627 square degrees. To steal from GMT again, because of the nice telescope involved and the optimistic figures that I like:

Uh . . . no. 20,627 square degrees is for a hemisphere. I wrote out my example way back when this thread was talking about sensors aboard a starship. A wide-field survey sensor is going to have half the sky blocked off by the ship's hull. You'd need an identical setup on the opposite side of the vessel to get both halves of the sky.

GrandMasterTerwynn wrote:The Kepler Mission features a wide-field telescope with about 1 meter of effective aperture. It feeds photons to a 95 megapixel camera watching a ten degree wide swath of the sky....It works out to be 105 deg². However, the area it must cover is 20,626.48 deg². A 'mere' 196 images.


So we 'only' need about 200 telescopes to stare continuously at this entire area. Coordinating them all might be a huge headache, so for some redundancy should we say 250 scopes? 300? Let's actually say we have 400 of these babies to make absolutely sure that we're covering everything and to have a extremely high possibility that more than 2 scopes will be trained on each event.

You'd need nearly 400 scopes to continuously stare at the entire celestial sphere.

One of the big issues is going to be the absurd quantity of data processing needed to continuously photograph the sky and analyze the images. Darth Wong was pointing out earlier that we can't assume that all technology will improve in the future: however, I'm confident that computing power is something that will, especially if that quantum computing thing becomes a reality.

As I stated a little earlier in the thread, to take advantage of the full theoretical limit of a 1 meter aperture telescope across the stated FOV, you'd need a 22 gigapixel camera generating 176 gigabits (22 gigabytes) of data per exposure assuming 8-bit pixels. If you want more than 256 levels of light intensity, you're going to need more bits, which will further increase the amount of data produced.
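A quick check on those volumes; note that the bit/byte distinction matters at this scale, and the one-exposure-per-minute cadence is an assumed value, not a figure from the thread:

```python
pixels = 22e9        # 22 gigapixel camera (GMT's figure)
bits_per_pixel = 8   # 256 intensity levels

bits_per_exposure = pixels * bits_per_pixel
bytes_per_exposure = bits_per_exposure / 8
print(bits_per_exposure / 1e9)   # -> 176.0 gigabits per exposure
print(bytes_per_exposure / 1e9)  # -> 22.0 gigabytes per exposure

# Assumed cadence of one exposure per minute, from a single telescope:
per_day_tb = bytes_per_exposure * 60 * 24 / 1e12
print(per_day_tb)                # ~31.7 TB/day, before any extra filters
```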

GrandMasterTerwynn wrote:An extremely sensitive sensor, such as the one used on the New Horizons spacecraft's LORRI imager picked up Pluto in a 1 second exposure. Pluto can be modeled as a 4.2 TW light source (average albedo of 0.575, radius of 1151 km,) for an irradiance of just 1.89x10^-8 W/km² at the 4.2 billion kilometer distance that New Horizons was at when it first imaged Pluto. The New Horizons imager has an aperture of a mere 0.2 meters. Meaning Pluto illuminated one pixel of the detector with 2.38x10^-15 W of power. Our starship-mounted telescope has about 25x the light gathering area, so it could match this feat at roughly five times the distance.


(side note...I have no idea how GMT got these figures :? )

Simple. To get the luminosity of Pluto, one must take the power output of the Sun (4.0x10^26 Watts,) and divide by the surface area of a sphere with a radius equal to Pluto's mean orbital distance from the Sun. Then you multiply this value by one half of Pluto's surface area (since we only care about the side facing the Sun,) and multiply that value by Pluto's albedo. This gives you the light-bulb that Pluto is equivalent to.

Now to figure out how much light reflected from Pluto is falling on a particular patch of sky at a given distance of Pluto, one must take the light-bulb figure calculated above, and divide by the area of a sphere with a radius equal to the distance from Pluto, or whatever object, you care about. This gives you the irradiance. However, a telescope can only gather light from the surface-area of its aperture so you must then divide the area of the telescope's aperture by the unit of area you got from the area-of-a-sphere calculation above, and multiply the result by the watts per square unit irradiance value that you calculated above. This gives you the total light falling into the telescope, which is focused onto a single point . . . conveniently equated to be one pixel, since I was interested in minimum detectability.


Re: How would you make "Sensors" for a starship?

Postby Kuroneko » 2009-05-27 11:11pm

You're overestimating it by a factor of two because you're not taking geometry into account. The bond albedo is about 0.5, so: [(3.846E26W)/(4π(39.482AU)²)][π(1.195E6)²][0.5] = 2.0TW.
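The two calculations differ only in whether Pluto's reflecting area is taken as half its surface (2πR²) or as its cross-section (πR²); holding Kuroneko's other inputs fixed, the geometric choice alone is the factor of two:

```python
import math

AU = 1.496e11
L_SUN = 3.846e26        # W (Kuroneko's figure)
d_pluto = 39.482 * AU   # Pluto's mean orbital distance, m
R = 1.195e6             # Pluto's radius, m
albedo = 0.5            # Bond albedo (Kuroneko's figure)

flux = L_SUN / (4 * math.pi * d_pluto ** 2)  # sunlight at Pluto, ~0.88 W/m^2

half_surface = flux * 2 * math.pi * R ** 2 * albedo  # GMT-style geometry
cross_section = flux * math.pi * R ** 2 * albedo     # Kuroneko's geometry

print(half_surface / cross_section)  # -> 2.0: the geometry factor
print(cross_section / 1e12)          # ~2.0 TW, matching Kuroneko's figure
```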


Re: How would you make "Sensors" for a starship?

Postby starslayer » 2009-05-28 01:33am

Junghalli wrote:As I understand it, the problem is the photons will be spread out over multiple telescopes and hence multiple images, so they're less likely to register in each telescope. Assuming you could get one photon = one event sensitivity (extreme, I know, but I understand from other discussion in this thread that we're already getting close to this today), couldn't you have the array's central computer pick out incoming photons seen in the same part of sky by multiple telescopes and identify it as a very faint light? It would undoubtedly require an obscene amount of computing power to process the image, of course.
Yes, now you've touched on the real difficulty once we've selected an aperture: image processing. As Wyrm and GMT have mentioned, a sensitive picture with a large FOV has an absolutely obscene amount of data contained in it. In a sensible image processing algorithm, the system takes two exposures: one with an open shutter, and one with it closed. The closed exposure is called a dark frame, and it serves as a baseline for determining what kind of noise is present in the system. So the dark frame gets subtracted from the actual image. Next, any single photon events will be discarded, simply because there are too many possible sources. Both of these steps are going to happen before image combination, because our computing power is finite, and there's a ton of data to sift through. Once the images have been combined, the flash is unlikely to have been preserved. This is especially true because if a "bright" flash shows up in part of the field on one CCD, but none of the others looking there see it, it will be discarded as a cosmic ray track, even if it isn't.
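The first two pipeline steps described above can be sketched on a toy frame; the function name, the 12-count single-photon threshold, and the injected flash are all invented for illustration:

```python
import numpy as np

def clean_frame(raw, dark, single_photon_dn=12):
    """Sketch of the pipeline steps above, for one CCD frame: subtract the
    closed-shutter (dark) exposure, then zero out pixels whose residual is
    consistent with a lone photon. single_photon_dn is an invented
    detector-count threshold."""
    residual = raw.astype(np.int64) - dark.astype(np.int64)
    residual[residual <= single_photon_dn] = 0  # single-photon rejection
    return residual

rng = np.random.default_rng(0)
dark = rng.poisson(5, size=(8, 8))        # detector noise baseline
raw = dark + rng.poisson(2, size=(8, 8))  # faint sky shot noise on top
raw[3, 4] += 40                           # an injected multi-photon flash
cleaned = clean_frame(raw, dark)
print(np.argwhere(cleaned > 0))           # the flash at (3, 4) survives
```

Note the cost of the single-photon cut: a real but faint source that lands one photon per frame is indistinguishable from noise here, which is exactly the failure mode starslayer describes.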


Re: How would you make "Sensors" for a starship?

Postby Kuroneko » 2009-05-28 02:00am

How many such events per second are we talking about? Just how much computing power would it take for a secondary system to remember them over a few exposures and try to find correlations across exposures and between the array elements?
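Kuroneko's secondary system might look something like a short ring buffer of otherwise-discarded events; the class name, the five-exposure depth, and the three-hit threshold are invented parameters:

```python
from collections import Counter, deque

class EventBuffer:
    """Remembers otherwise-discarded single-photon events for the last
    `depth` exposures and flags sky cells that recur in at least
    `min_hits` of them. Parameters are illustrative."""
    def __init__(self, depth=5, min_hits=3):
        self.frames = deque(maxlen=depth)
        self.min_hits = min_hits

    def add_exposure(self, event_cells):
        self.frames.append(set(event_cells))
        hits = Counter(c for frame in self.frames for c in frame)
        return [c for c, n in hits.items() if n >= self.min_hits]

buf = EventBuffer()
flagged = []
for frame in [{(4, 4), (9, 1)}, {(4, 4)}, {(2, 7), (4, 4)}, {(8, 8)}]:
    flagged = buf.add_exposure(frame)
print(flagged)  # cell (4, 4) recurs in 3 of the last 4 exposures
```

The memory cost is modest (a few exposures' worth of event lists); the hard part, per starslayer's reply below in the thread, is generating those event lists from the raw frames in the first place.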


Re: How would you make "Sensors" for a starship?

Postby aimless » 2009-05-28 02:18am

GrandMasterTerwynn wrote:
aimless wrote:So our area ends up being half that of a sphere, for about 20,627 square degrees. To steal from GMT again, because of the nice telescope involved and the optimistic figures that I like:

Uh . . . no. 20,627 square degrees is for a hemisphere. I wrote out my example way back when this thread was talking about sensors aboard a starship. A wide-field survey sensor is going to have half the sky blocked off by the ship's hull. You'd need an identical setup on the opposite side of the vessel to get both halves of the sky.


I was specifying that we were looking in a full circle but only .5 radians above and below the solar plane, which comes out to nearly the same area as half a sphere :)

Thanks for explaining the Pluto stuff.


Re: How would you make "Sensors" for a starship?

Postby starslayer » 2009-05-28 02:43pm

Kuroneko wrote:How many such events per second are we talking about? Just how much computing power would it take to for a secondary system to remember them over a few exposures and try to find correlations over multiple exposures and between the array elements?
That depends on how the image data is stored. Since CCDs are color blind, the simplest method of storing the data would simply be as an array with each element corresponding to a pixel, and each location holding a number corresponding to how strong the signal was. This method is not good enough to detect single photons, because each photon will have a different energy, and thus generate a different signal strength (blue photons are about 3 eV, while deep red is closer to 1.8 eV). The standard method for the visible range is taking an RGBL (red, green, blue, luminance) exposure. This is accomplished by using different color filters. Now we have quadruple the data that we did before, because we not only have the colorblind exposure, but also one with a red filter, a green one, and a blue one. This must then be combined to form one image.

Now it has to figure out what all the celestial objects in the frame are, which will take time, unless it knows exactly where it was pointing beforehand, in which case this step is easy (it will just move and rotate the known starfield to match the image). Once it's done that, it must look for what's left, and then communicate that data with its fellows. For a small picture, all this doesn't take very long, but we don't have a small picture. We have an enormous picture. And we have to process this enormous picture in time for a quick second look if necessary. Even in languages like IDL that are optimized for doing very large array operations very quickly, this is going to take forever, simply because we have over 1 TB of data coming in per exposure, and the memory access speed just isn't there right now, or for the foreseeable future. You'd basically have to make the satellite out of computronium to process that much data in a few seconds, and have it keep doing that every minute or so.
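For scale, the "over 1 TB per exposure" figure is easy to reach with a mosaic of ordinary chips; the chip size and count below are hypothetical, chosen only to illustrate the arithmetic for a four-channel RGBL exposure at 16 bits per sample:

```python
def exposure_bytes(mpix_per_chip, n_chips, n_channels=4, bytes_per_sample=2):
    """Raw data volume of one RGBL exposure from a mosaic of CCD chips."""
    return mpix_per_chip * 1e6 * n_chips * n_channels * bytes_per_sample

# Hypothetical mosaic: 8192 chips of 16 megapixels each (~131 gigapixels total).
print(exposure_bytes(16, 8192) / 1e12)   # ~1.05 TB per RGBL exposure
```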

User avatar
GrandMasterTerwynn
Emperor's Hand
Posts: 6702
Joined: 2002-07-29 06:14pm
Location: Somewhere on Earth.
Contact:

Re: How would you make "Sensors" for a starship?

Postby GrandMasterTerwynn » 2009-05-28 04:41pm

starslayer wrote:Now it has to figure out what all the celestial objects in the frame are, which will take time, unless it knows exactly where it was pointing beforehand, in which case this step is easy (it will just move and rotate the known starfield to match the image). Once it's done that, it must look for what's left, and then communicate that data with its fellows. For a small picture, all this doesn't take very long, but we don't have a small picture. We have an enormous picture. And we have to process this enormous picture in time for a quick second look if necessary. Even in languages like IDL that are optimized for doing very large array operations very quickly, this is going to take forever, simply because we have over 1 TB of data coming in per exposure, and the memory access speed just isn't there right now, or for the foreseeable future. You'd basically have to make the satellite out of computronium to process that much data in a few seconds, and have it keep doing that every minute or so.

You could simplify things a bit by putting as much of the digital-signal processing on the telescope as possible. Since building single multi-gigapixel imaging sensor chips will present very steep engineering challenges, the business end of our imaging telescope will likely consist of a whole bunch of smaller imaging sensors. Each of these sensors could feed an optimized DSP which would automatically handle the mathematical stacking of each set of exposures, and then the passing of the resulting image through a band-pass filter (say we only want to keep the extreme ends of the image histogram, since vessels entering into extreme detection range will tend to be very faint, and drive flares or warhead initiation events will tend to be relatively bright.) Then the output of your telescope sensor will be a set of sparse matrices containing far less data to process than if we'd simply dumped raw image data straight to our on-board/on-facility computers. The data would be generated very quickly, since digital-signal processing nicely lends itself to parallelization, and each image-sensor DSP will be performing its operations in parallel with every other image-sensor DSP covering the field of a given telescope in your survey array.

If this is a staring array, or a scanning array with a well-defined scanning pattern, you could further reduce the incoming data by treating the known starfield as another noise filter, and programming that into the telescope's DSPs as well.
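A minimal sketch of this kind of histogram filtering, using numpy and Gaussian noise as a stand-in for a stacked sub-image (the percentile cutoffs and frame size are arbitrary):

```python
import numpy as np

def extremes_to_sparse(image, lo_pct=0.5, hi_pct=99.5):
    """Reject the middle of the histogram; keep only the extreme pixels
    as sparse (row, col, value) triples."""
    lo, hi = np.percentile(image, [lo_pct, hi_pct])
    rows, cols = np.nonzero((image < lo) | (image > hi))
    return np.column_stack([rows, cols, image[rows, cols]])

rng = np.random.default_rng(0)
frame = rng.normal(100.0, 5.0, size=(1024, 1024))  # stand-in stacked sub-image
sparse = extremes_to_sparse(frame)
print(sparse.shape[0] / frame.size)                # ~0.01: a ~99% data reduction
```

On real frames the cutoffs would come from the sensor's characterized noise floor (and the known-starfield mask) rather than per-frame percentiles, but the data-reduction effect is the same.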

User avatar
starslayer
Jedi Knight
Posts: 731
Joined: 2008-04-04 08:40pm
Location: Columbus, OH

Re: How would you make "Sensors" for a starship?

Postby starslayer » 2009-05-28 05:14pm

GrandMasterTerwynn wrote:You could simplify things a bit by putting as much of the digital-signal processing on the telescope as possible. Since building single multi-gigapixel imaging sensor chips will present very steep engineering challenges, the business end of our imaging telescope will likely consist of a whole bunch of smaller imaging sensors. Each of these sensors could feed an optimized DSP which would automatically handle the mathematical stacking of each set of exposures, and then the passing of the resulting image through a band-pass filter (say we only want to keep the extreme ends of the image histogram, since vessels entering into extreme detection range will tend to be very faint, and drive flares or warhead initiation events will tend to be relatively bright.) Then the output of your telescope sensor will be a set of sparse matrices containing far less data to process than if we'd simply dumped raw image data straight to our on-board/on-facility computers. The data would be generated very quickly, since digital-signal processing nicely lends itself to parallelization, and each image-sensor DSP will be performing its operations in parallel with every other image-sensor DSP covering the field of a given telescope in your survey array.

If this is a staring array, or a scanning array with a well-defined scanning pattern, you could further reduce the incoming data by treating the known starfield as another noise filter, and programming that into the telescope's DSPs as well.
Yes, large CCDs are already done by splitting them up into a lot of smaller chips. Then after all those images have been processed, a simple one-line command in IDL (or another similar language) could reject most of the middle intensities, depending on what the sensor's looking for. You're right that it would reduce the data load a lot, and speed up the process, but by just how much I don't know. I'd have to find somebody in the astro department here who does a lot of image processing and talk to them, or email one of the Kepler guys.

Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: How would you make "Sensors" for a starship?

Postby Junghalli » 2009-05-28 05:34pm

starslayer wrote:Next, any single photon events will be discarded, simply because there are too many possible sources. Both of these steps are going to happen before image combination, because our computing power is finite, and there's a ton of data to sift through. Now that the images have been combined, the flash is unlikely to have been preserved. This is especially true because if a "bright" flash shows up in part of the field on one CCD, but none of the others looking there see it, it will be discarded as a cosmic ray track, even if it isn't.

So basically the problem with the approach I suggested is that it would take way too much computing power with anything remotely close to present computer technology?

On a different note, something that occurs to me in regards to getting detailed information on the drive emissions of distant ships.

You're obviously not getting any useful information off one flash. But a ship with a low-thrust drive will have to thrust for hours or days at least to get any decent acceleration, so its burn will be very long, whether it happens all at once or in tiny pulses punctuated by minutes, hours, or days of unpowered drifting. If you've got a sample of, say, a couple of hundred thousand flashes, and you're pretty sure it's a ship of some sort, couldn't you have the analysis computer combine them and treat them as essentially one event lasting for hours or days, and try to do things like spectroscopic analysis based off that?

User avatar
Wyrm
Jedi Council Member
Posts: 2206
Joined: 2005-09-02 01:10pm
Location: In the sand, pooping hallucinogenic goodness.

Re: How would you make "Sensors" for a starship?

Postby Wyrm » 2009-05-28 07:51pm

Junghalli wrote:I think we may be miscommunicating here. Maybe I'm not understanding you right but you seem to be thinking that I'm trying to extrapolate something from the presence of a single flash of light. I completely agree with you: the flash from a single fraction of a second burn won't tell you anything except that there's something briefly luminous out there. By itself, it's far too little to go on.

But an enemy ship isn't going to just flash its engines for a fraction of a second once (unless it has crazy acceleration). It's going to flash them many, many times.

Except that in my example, that "single flash" lasted thirty-eight seconds, not a fraction of a second, and during that time it can build up a respectable velocity, on the order of 2 km/s for a Daedalus-type craft with one-tenth the mass (remember, the Daedalus has an obscene delta-V). With that much delta-V, it'll remove itself from the field of view of a 0.29 degree square patch of sky within hours, and if the next burn comes a week later, at 2 km/s, it could be in one of 28 similar frames. At a second a frame, you have only a three-quarters chance of catching the second burn. If the second burn is a week later still, then you've got 115 of those buggers to scan through, and so on.

Junghalli wrote:
And even if you are somehow able to perform this trick, the enemy facility can simply turn on a big ol' flashlight and blast you in the face with light sufficient to drown out any shenanigans. If it did this regularly, you'd never be able to tell when that ship lifted off.

Wouldn't they have to know where all your platforms were to do this?

No. All it would have to do is use its larger power plant to shine a light omnidirectionally.

Junghalli wrote:
Finally, barring that, the ship could simply use an orbital facility to its advantage and slingshot itself away from the station, using the much more massive station itself as its reaction mass. There'd be no exhaust to observe, and no outward sign that anything had launched.

Except that the station itself would have a slightly altered velocity, which might be detectable. I imagine known enemy bases would be under pretty heavy surveillance, up to and including stuff like regularly pinging them with very powerful interplanetary radars to be sure they're still where they ought to be.

You're kidding, right? The base is much more massive in comparison to the ship it just launched. Its own change in velocity after launching a 20 km/s object, with a mass ratio of one million, is 1.2 meters per minute. To detect this by Doppler radar, you'd have to know the frequency of your transmitter to about 66 parts per trillion. To detect it by change in position over the course of a day, you'd have to know the reflection time to one part in 86.5 million. That only gets worse if they can anchor themselves to a small asteroid.
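These figures can be sanity-checked in a few lines, assuming simple momentum conservation and (for the last figure) a one-way radar range of 1 AU:

```python
C = 2.998e8    # speed of light, m/s
AU = 1.496e11  # astronomical unit, m

def station_recoil(ship_speed, mass_ratio):
    """Recoil speed of the station after launching a ship (momentum conservation)."""
    return ship_speed / mass_ratio

def doppler_fraction(v):
    """One-way fractional Doppler shift for radial speed v."""
    return v / C

recoil = station_recoil(20e3, 1e6)   # 0.02 m/s
print(recoil * 60)                   # ~1.2 m per minute
print(doppler_fraction(recoil))      # ~6.7e-11 fractional shift
print(AU / (recoil * 86400))         # ~8.7e7: one part in ~87 million per day at 1 AU
```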
Darth Wong on Strollers vs. Assholes: "There were days when I wished that my stroller had weapons on it."
wilfulton on Bible genetics: "If two screaming lunatics copulate in front of another screaming lunatic, the result will be yet another screaming lunatic. 8)"
SirNitram: "The nation of France is a theory, not a fact. It should therefore be approached with an open mind, and critically debated and considered."

Cornivore! | BAN-WATCH CANE: XVII | WWJDFAKB? - What Would Jesus Do... For a Klondike Bar? | Evil Bayesian Conspiracy

User avatar
Sea Skimmer
Yankee Capitalist Air Pirate
Posts: 36970
Joined: 2002-07-03 11:49pm
Location: Passchendaele City, HAB
Contact:

Re: How would you make "Sensors" for a starship?

Postby Sea Skimmer » 2009-05-28 08:56pm

The point of pulse-Doppler radar techniques is to filter out clutter from land and rain; you do not need them in space, which is overwhelmingly empty, the way you must for an earthbound air-defence radar or a downward-looking fighter radar. Any giant space radar is going to be multistatic, and use a variety of processing techniques.
"This cult of special forces is as sensible as to form a Royal Corps of Tree Climbers and say that no soldier who does not wear its green hat with a bunch of oak leaves stuck in it should be expected to climb a tree"
— Field Marshal William Slim 1956

User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70027
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Re: How would you make "Sensors" for a starship?

Postby Darth Wong » 2009-05-28 09:09pm

I think the most salient point here is that some people are still thinking of sensors as if they are real-time video. They're not realizing that when you're working with such low light levels that you take 60 second exposures to collect enough data for analysis, you can't differentiate between a 1 second flash and a 45 second long flare. The time duration of a flash is impossible to determine from a 60 second exposure. I say this because Junghalli seems to be acting as if long exposures simply give you greater light gathering ability, without recognizing their limitations.
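A toy illustration of this point (the wattages are arbitrary): a 1-second flash and a 45-second flare with the same total energy are indistinguishable in a single 60-second exposure, because only the integrated light is recorded:

```python
def exposure_energy(power_w, source_on_s, exposure_s):
    """Energy collected in one exposure from a source that shines for source_on_s."""
    return power_w * min(source_on_s, exposure_s)

flash = exposure_energy(45.0, 1.0, 60.0)   # brief, bright source
flare = exposure_energy(1.0, 45.0, 60.0)   # long, dim source
print(flash, flare)                        # 45.0 45.0 -> identical pixel counts
```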
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html

Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: How would you make "Sensors" for a starship?

Postby Junghalli » 2009-05-28 09:38pm

Wyrm wrote:Except that in my example, that "single flash" lasted thirty-eight seconds, not a fraction of a second, and during that time it can build up a respectable velocity, on the order of 2 km/s for a Daedalus-type craft with one-tenth the mass (remember, the Daedalus has an obscene delta-V). With that much delta-V, it'll remove itself from the field of view of a 0.29 degree square patch of sky within hours, and if the next burn comes a week later, at 2 km/s, it could be in one of 28 similar frames. At a second a frame, you have only a three-quarters chance of catching the second burn. If the second burn is a week later still, then you've got 115 of those buggers to scan through, and so on.

This is why you want to have enough telescopes to do a continuous pan-scan. You will not miss the next burn, because every part of the sky is always being watched by at least one telescope.

No. All it would have to do is use its larger power plant to shine a light omnidirectionally.

And how close to the base would the ship have to be for its burn to be washed out? The kind of drives that are most inconspicuous also have long acceleration times and hence the ships will have long acceleration tracks. You can try to accelerate faster, but the higher the thrust the more challenging it will be to drown out the light.

The base is much more massive in comparison to the ship it just launched. Its own change in velocity after launching a 20 km/s object, with a mass ratio of one million, is 1.2 meters per minute. To detect this by Doppler radar, you'd have to know the frequency of your transmitter to about 66 parts per trillion. To detect it by change in position over the course of a day, you'd have to know the reflection time to one part in 86.5 million. That only gets worse if they can anchor themselves to a small asteroid.

This is true (I'll take your word for this).

Hmm, it seems to me that if you know the locations of the enemy bases the most productive approach will be to try to bring the sensors to them. The trick will be to sneak sensors into orbits close enough to those of the enemy bases that they can see the bases in some detail. This will be tricky, of course, because the sensors will have to get close enough to the bases to see them in some detail without being detected themselves. Inserting themselves into parallel orbits will be particularly tricky, as the enemy can easily establish a wide sensor perimeter around the base. The best approach might be to use extensive orbital rosettes.

Darth Wong wrote:I think the most salient point here is that some people are still thinking of sensors as if they are real-time video. They're not realizing that when you're working with such low light levels that you take 60 second exposures to collect enough data for analysis, you can't differentiate between a 1 second flash and a 45 second long flare. The time duration of a flash is impossible to determine from a 60 second exposure. I say this because Junghalli seems to be acting as if long exposures simply give you greater light gathering ability, without recognizing their limitations.

I think you misunderstand what I've been saying - I was always talking about detection range in terms of what could be achieved with a short exposure (~1 second). The only part where I was talking about a long exposure was where I discussed the possibility of putting together short exposures of lots of tiny flashes and having the analysis computer treat it as a long exposure for purposes of spectroscopy and other such measurements that require a lot (relatively speaking) of light to work on.

User avatar
GrandMasterTerwynn
Emperor's Hand
Posts: 6702
Joined: 2002-07-29 06:14pm
Location: Somewhere on Earth.
Contact:

Re: How would you make "Sensors" for a starship?

Postby GrandMasterTerwynn » 2009-05-28 10:40pm

Junghalli wrote:I think you misunderstand what I've been saying - I was always talking about detection range in terms of what could be achieved with a short exposure (~1 second). The only part where I was talking about a long exposure was where I discussed the possibility of putting together short exposures of lots of tiny flashes and having the analysis computer treat it as a long exposure for purposes of spectroscopy and other such measurements that require a lot (relatively speaking) of light to work on.

Spectroscopy doesn't work this way. You need a relatively long exposure to do spectroscopy, because you're taking that faint point-source of light and smearing it out across a wide area. Instead of one pixel that's around, or barely above, the sensor's noise floor, you now have a whole bunch of pixels that are all below the sensor's noise floor. It's going to take time to collect enough photons to get the spectrograph's photodetectors above their noise floor and do a reasonable spectral analysis. Faking it by stacking multiple exposures isn't going to work.
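A toy read-noise-limited sketch of the smearing effect (the counts and noise figures are arbitrary, and photon noise is ignored): dispersing a point source across a spectrograph's pixels divides the per-pixel signal while the per-pixel noise stays fixed:

```python
def per_pixel_snr(total_signal, n_pixels, read_noise):
    """SNR of one pixel when a source's light is spread evenly over n_pixels
    (read-noise-limited case; photon noise neglected for simplicity)."""
    return (total_signal / n_pixels) / read_noise

direct = per_pixel_snr(500.0, 1, 10.0)     # imaging: all light on one pixel
spread = per_pixel_snr(500.0, 100, 10.0)   # spectrum spread over 100 pixels
print(direct, spread)                      # 50.0 0.5 -> spectrum below the noise floor
```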

Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: How would you make "Sensors" for a starship?

Postby Junghalli » 2009-05-28 11:32pm

GrandMasterTerwynn wrote:Spectroscopy doesn't work this way. You need a relatively long exposure to do spectroscopy, because you're taking that faint point-source of light and smearing it out across a wide area. Instead of one pixel that's around, or barely above, the sensor's noise floor, you now have a whole bunch of pixels that are all below the sensor's noise floor. It's going to take time to collect enough photons to get the spectrograph's photodetectors above their noise floor and do a reasonable spectral analysis. Faking it by stacking multiple exposures isn't going to work.

Thank you. You answered my question (whether it would work).

User avatar
Wyrm
Jedi Council Member
Posts: 2206
Joined: 2005-09-02 01:10pm
Location: In the sand, pooping hallucinogenic goodness.

Re: How would you make "Sensors" for a starship?

Postby Wyrm » 2009-05-30 07:03am

Junghalli wrote:This is why you want to have enough telescopes to do a continuous pan-scan. You will not miss the next burn, because every part of the sky is always being watched by at least one telescope.

I'm seeing a distinct lack of figures from you, Jung. How many telescopes are you planning to send up, how big are they, how fast can they turn for this "continuous scan" you talk about, and how many resources are they going to eat? Shit is easy when you have no limits, and you can change the parameters of the problem on the fly.

Junghalli wrote:And how close to the base would the ship have to be for its burn to be washed out? The kind of drives that are most inconspicuous also have long acceleration times and hence the ships will have long acceleration tracks. You can try to accelerate faster, but the higher the thrust the more challenging it will be to drown out the light.

So you try to defeat my point that you have a lot of sky to scan in order to catch the burn by watching the enemy staging areas; then, when I defeat that point by having the enemy wash out the engine burn with its own bright light, you go right back to trying to find the burn amongst thousands of patches of sky, the very search that watching the enemy base was supposed to eliminate.

Catch-22, Jung. I win this particular point.

Junghalli wrote:Hmm, it seems to me that if you know the locations of the enemy bases the most productive approach will be to try to bring the sensors to them. The trick will be to sneak sensors into orbits close enough to those of the enemy bases that they can see the bases in some detail. This will be tricky, of course, because the sensors will have to get close enough to the bases to see them in some detail without being detected themselves. Inserting themselves into parallel orbits will be particularly tricky, as the enemy can easily establish a wide sensor perimeter around the base. The best approach might be to use extensive orbital rosettes.

So you're not only assuming that you can perform amazing sensory tricks on me to detect my clandestine activity, you're also assuming that I can't perform the same tricks on you to catch you spying on my clandestine activity? Sorry, Jung. The knife cuts both ways.

Sky Captain
Jedi Master
Posts: 1029
Joined: 2008-11-14 12:47pm
Location: Latvia

Re: How would you make "Sensors" for a starship?

Postby Sky Captain » 2009-05-30 04:09pm

It seems to me that sneaking a sensor platform relatively close to an enemy base might be a lot easier than sneaking up with a proper ship. The platform could use low-power ion engines fed by solar panels to gently nudge its orbit over a period of many months, or you could cold-launch it with a mass driver from your own base so that the platform itself only needs to perform small course-correction maneuvers. Either way, making a stealthy sensor platform is going to be much easier than making a stealthy space warship that puts out gigawatts of power when accelerating.

Also, strategically, ships with low-thrust, high-Isp drives might have an advantage, but tactically they will be inferior to ships with high thrust and high Isp. They won't be able to respond quickly to changes in the tactical situation or get out of the way of incoming enemy projectiles. This problem might be solved by having two sets of engines: one low-powered for stealthy departure and one for high-acceleration combat maneuvering. However, that option will likely add a lot of extra mass to the ship which could otherwise be used for more mission-related stuff.

User avatar
Wyrm
Jedi Council Member
Posts: 2206
Joined: 2005-09-02 01:10pm
Location: In the sand, pooping hallucinogenic goodness.

Re: How would you make "Sensors" for a starship?

Postby Wyrm » 2009-05-30 06:48pm

Sky Captain wrote:It seems to me that sneaking a sensor platform relatively close to an enemy base might be a lot easier than sneaking up with a proper ship. The platform could use low-power ion engines fed by solar panels to gently nudge its orbit over a period of many months, or you could cold-launch it with a mass driver from your own base so that the platform itself only needs to perform small course-correction maneuvers. Either way, making a stealthy sensor platform is going to be much easier than making a stealthy space warship that puts out gigawatts of power when accelerating.

Why do you assume the sensor platform is smaller than a stealthy warship capable of slipping past the enemy's sensor network? The HST completely fills the shuttle's cargo bay, and it only has a 2.4 m diameter mirror! Also, remember that the enemy station can mount a telescope of its own. And being bigger (and making no pretense to hide), it can sense small signals much further away than the small spy platform can.

Also, neither ion engines nor solar sails are free of EM radiation: the former for the obvious reason, and the latter because reflection isn't 100% efficient, so the sail will heat up. And ion engines need lots of power, which implies not only a plume but a heat source for the power supply.

Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Re: How would you make "Sensors" for a starship?

Postby Junghalli » 2009-05-30 10:56pm

Wyrm wrote:I'm seeing a distinct lack of figures from you, Jung. How many telescopes are you planning to send up, how big are they, how fast can they turn for this "continuous scan" you talk about, and how many resources are they going to eat? Shit is easy when you have no limits, and you can change the parameters of the problem on the fly.

That's what I'm trying to quantify: how many resources I need to spend, and in what situations those resources would and would not be practical to spend. I don't offer these figures because the point of this exercise for me is to determine them.

So you try to defeat my point that you have a lot of sky to scan in order to catch the burn by watching the enemy staging areas; then, when I defeat that point by having the enemy wash out the engine burn with its own bright light, you go right back to trying to find the burn amongst thousands of patches of sky, the very search that watching the enemy base was supposed to eliminate.

There's a middle ground between having to watch the entire sky and only watching the most immediate vicinity of the base. Say for the sake of argument the enemy ship's acceleration is 0.05 m/s^2 and its change in velocity is 15 km/s. Acceleration time is 300,000 seconds. At an average speed of 7.5 km/s the acceleration track is 2.25 million km. That encompasses a restricted area of sky and gives your sensor an area to watch. Can the enemy's lamp flood out every light within 2.25 million km from a distance of, say, 1 AU, and how much power would it need to do this?
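The arithmetic above can be checked in a few lines (the 1 AU viewing distance is the figure from the closing question; everything else follows from the stated acceleration and delta-V):

```python
import math

AU = 1.496e11  # astronomical unit, m

def burn_track(accel, delta_v):
    """Length of a from-rest, constant-acceleration burn track."""
    t = delta_v / accel            # burn duration, s
    return 0.5 * accel * t * t     # distance covered, m

track = burn_track(0.05, 15e3)     # 2.25e9 m = 2.25 million km
print(math.degrees(track / AU))    # ~0.86 degrees of sky, seen from 1 AU
```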

So you're not only assuming that you can perform amazing sensory tricks on me to detect my clandestine activity, you're also assuming that I can't perform the same tricks on you to catch you spying on my clandestine activity? Sorry, Jung. The knife cuts both ways.

The thing here is it seems rather likely a simple observation platform is going to be a good deal less massive than a warship, and so the engine needed to move it will be much less powerful. A warship will need its own sensors, life support and habitation for the crew (assuming it's manned), weapons, which means either missiles or powerful beam weapons with extensive power plants and cooling, etc.

To go to your point about the Hubble Space Telescope, it has a mass of 11.1 metric tons. Even a commercial jetliner is something like ten times that mass. Just how small and light are you planning to make this warship, and how combat-capable do you think it will be when you trim it down to much less than a commercial jet aircraft?

User avatar
Wyrm
Jedi Council Member
Posts: 2206
Joined: 2005-09-02 01:10pm
Location: In the sand, pooping hallucinogenic goodness.

Re: How would you make "Sensors" for a starship?

Postby Wyrm » 2009-05-31 02:51pm

Junghalli wrote:That's what I'm trying to quantify: how many resources I need to spend, and in what situations those resources would and would not be practical to spend. I don't offer these figures because the point of this exercise for me is to determine them.

You are not going to establish any of this until you start assigning some numbers. Put a price tag on how much it costs to manufacture a telescope of a given size, resolution, field of view, etc., and we can get a discussion going. Intelligence gathering and its countermeasures don't develop in a vacuum, and that's exactly what you're trying to do.

Junghalli wrote:There's a middle ground between having to watch the entire sky and only watching the most immediate vicinity of the base. Say for the sake of argument the enemy ship's acceleration is 0.05 m/s^2 and its change in velocity is 15 km/s. Acceleration time is 300,000 seconds. At an average speed of 7.5 km/s the acceleration track is 2.25 million km. That encompasses a restricted area of sky and gives your sensor an area to watch. Can the enemy's lamp flood out every light within 2.25 million km from a distance of, say, 1 AU, and how much power would it need to do this?

Why do you assume that the plume would be visible at all, and thus that the base would need to turn the lamp on in the first place? You never specified the Isp of your engine, which is required to calculate the exhaust plume power. It might never be seen.

I already gave you an example of a craft with good specific impulse and acceleration that could build up 2 km/s with a burn time of 38 s and yet remain in a single pixel of your camera. With the same setup, I could build up ten times that velocity, give you only a handful of pixels of my track, and build up a quite respectable 20 km/s delta-V. The error bar on your determination of my proper motion during that time is going to suck balls. Furthermore, if I do this parallel to you, you're not going to get a thing about my exhaust velocity, and all your calcs about my thrust and mass go down the crapper.
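The single-pixel claim can be illustrated under the post's assumptions (a from-rest burn; the 1 AU viewing distance and a pixel scale of a few tenths of an arcsecond, typical for survey cameras, are assumptions added for the illustration):

```python
import math

AU = 1.496e11  # astronomical unit, m

def burn_extent(delta_v, burn_time):
    """Distance covered by a from-rest, constant-acceleration burn that
    reaches delta_v in burn_time (equals 0.5*a*t^2 with a = delta_v/t)."""
    return 0.5 * delta_v * burn_time

def to_arcsec(angle_rad):
    return math.degrees(angle_rad) * 3600.0

d = burn_extent(2e3, 38.0)         # 38 km covered during the burn
print(to_arcsec(d / AU))           # ~0.05 arcsec at 1 AU: within a typical pixel
```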

And you're not going to have views of me at greatly separated angles, either. If I really am a threat to you, I'm going to hold territory of the same magnitude as your own holdings. You can't put a sensor platform in my territory. That's why it's my territory. The range of angles from which you will see different views of my exhaust is going to be restricted, and that is going to limit the accuracy of your exhaust-velocity estimate.

How's the charge leakage on your CCD detector? How does the presence of a secondary mirror cause diffraction effects, as they do in this photo? If I flood your sensor with photons, I could put quite the halo around me, and easily hide the entire track of that burn in its glare. If I'm bright enough, I can dazzle your entire CCD, or even fry the damn thing (or force your aperture shut). The same halo would surround my own ship, should you see it, introducing further error.

Junghalli wrote:The thing here is it seems rather likely a simple observation platform is going to be a good deal less massive than a warship, and so the engine needed to move it will be much less powerful.

Why? You don't know how large a stealth warship is with respect to an observation platform. Those platforms are going to have to be monsters just to see the threats that I launch from deep in my territory.

Junghalli wrote:A warship will need its own sensors, life support and habitation for the crew (assuming it's manned), weapons, which means either missiles or powerful beam weapons with extensive power plants and cooling, etc.

Why do you assume that the sensor suite on a warship is the same kind you find on a sensor platform? An enemy holding lots of territory can put up lots of sensor suites of their own and beam their data to the small warship. Again, how big are the sensor platforms, and how big are the warships designed for sneaky attacks?

Junghalli wrote:To go to your point about the Hubble Space Telescope, it has a mass of 11.1 metric tons. Even a commercial jetliner is something like ten times that mass. Just how small and light are you planning to make this warship, and how combat-capable do you think it will be when you trim it down to much less than a commercial jet aircraft?

You're forgetting that the HST only looks in one direction at a time, has a narrow field of view, has no rapid-acquisition capability, and has limited light-gathering power, making up for that lack with long exposure times. It is not a military sensor platform by any stretch of the imagination. You're imagining a monster of up to 100 m aperture that can rapidly scan a wide field of view. That is going to weigh in at thousands of times the HST's mass, at the very least, for just one scope, and at 100 m it's already some two and a quarter times the size of Daedalus; and that's without considering the structural stress on the thing.
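For a ballpark on "thousands of times more massive": telescope mass is often modelled as growing like D^2.5 or steeper with aperture diameter. A sketch anchored on the HST's 2.4 m mirror and 11.1 t mass (from the quote above); the 2.5 exponent is my own assumption, and real military hardware could easily be heavier:

```python
HST_DIAM_M, HST_MASS_T = 2.4, 11.1   # Hubble's aperture and mass

def scaled_mass_t(diam_m: float, exponent: float = 2.5) -> float:
    """Crude power-law mass scaling, normalised to the HST."""
    return HST_MASS_T * (diam_m / HST_DIAM_M) ** exponent

print(scaled_mass_t(100.0))  # ~1.2e5 tonnes, roughly 11,000x the HST
```

Even at this optimistic exponent, one 100 m scope lands in the hundred-kilotonne class, which is the point: the platforms are monsters.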

