For a dither sequence with a reasonable range of offsets there is a good chance that every location on the sky is covered by some object-free pixels. Doing a straightforward stack of the images with some form of outlier rejection therefore leads to a good contemporaneous background map, provided the jitter offsets are significantly larger than the characteristic size of the point-spread function and no large or bright objects are in the field. Where this approach fails most often is when the jitter offsets are not sufficiently large to allow the full profiles of the brightest stars to be rejected completely, or when large bright galaxies are present in the field. Bright stellar halos are sometimes only a few percent above the ambient background and hence may not reject out during stacking.

One way of avoiding the problem of poor rejection of astronomical objects while doing a combination for background estimation is to mask the objects beforehand, using the following algorithm:

1. Combine all the object images with rejection to form a sky frame.
2. Subtract the sky frame from all the object images.
3. Shift and combine the background-subtracted object images to form a stacked frame.
4. Throw away the background-subtracted object images.
5. Do an object detection on the stacked frame and note where the object pixels are.
6. Recombine the original images with rejection, in conjunction with the object mask, to form a new sky frame.

Repeat the above until the number of object pixels detected in step 5 roughly converges. This procedure gives much better results, but with the penalty that it can be quite time consuming because of the repeated object detection and stacking steps. It will still fail in regions where there are no clean sky pixels left after object masking; here the only option is to interpolate between nearby pixels to fill in the gaps.

There is a variation on this theme in which the algorithm is provided with a mask beforehand. This can be done if the observations are part of a long-running project, so that the mask can be defined offline from increasingly deeper stacks. This method should be quite quick, since the steps above that lead to the creation of the mask (repeated stacking, sky subtraction and object detection) are the main consumers of computing time. However, this approach also implies that any earlier data processed using a shallower version of the mask would have to be re-reduced as the mask improves.

If the observations are part of a filled tile, as will be the case for the majority of the data, then stacking the separate components that make up a tile is significantly faster and generally gives better results than the pawprint methods. If the observations form a tile with M pawprints and each pawprint consists of N exposures, then we do the combination in two stages. First we form N intermediate sky frames by combining with rejection all of the input images that are in the ith position in each pawprint sequence (where i = 1, N). Because each of these images covers a vastly different region of the sky, it is very unlikely that there will be much overlap between objects. We then combine the N intermediate background images with rejection to form a final background image.

In situations where you have a large extended object that is roughly the size of a detector, it is possible to use a sky-flipping observational technique to allow a sky estimate to be made.
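As a concrete illustration, here is a minimal numpy-only sketch of the iterative object-masking loop and of the two-stage tile combination. The function names, the sigma-clipping and detection thresholds, and the use of integer `np.roll` shifts in place of proper image registration are all simplifying assumptions for this sketch, not part of any pipeline API.

```python
import numpy as np

def combine_with_rejection(stack, mask=None, nsigma=3.0):
    """Median-combine a (nimages, ny, nx) stack, rejecting outliers and
    optionally ignoring pixels flagged as objects in `mask`."""
    data = np.ma.masked_array(stack, mask=mask)
    med = np.ma.median(data, axis=0)
    sig = np.ma.std(data, axis=0)
    dev = np.abs(data - med)
    rejected = np.ma.masked_where(np.ma.filled(dev > nsigma * sig, True), data)
    return np.ma.filled(np.ma.median(rejected, axis=0), np.nan)

def detect_objects(image, nsigma=1.5):
    """Crude object detection: flag pixels well above the background."""
    bkg, rms = np.nanmedian(image), np.nanstd(image)
    return image > bkg + nsigma * rms

def iterative_sky(images, offsets, max_iter=5, tol=0.01):
    """images: (N, ny, nx) jittered exposures; offsets: integer (dy, dx)
    shifts that register each exposure onto a common grid."""
    obj_mask = np.zeros_like(images, dtype=bool)
    npix_prev = 0
    for _ in range(max_iter):
        sky = combine_with_rejection(images, mask=obj_mask)    # steps 1/6
        subbed = images - sky                                   # step 2
        # step 3: shift onto a common grid and combine
        registered = np.stack([np.roll(s, off, axis=(0, 1))
                               for s, off in zip(subbed, offsets)])
        stacked = np.nanmedian(registered, axis=0)
        detected = detect_objects(stacked)                      # step 5
        npix = int(detected.sum())
        # propagate the detection mask back onto each input frame
        obj_mask = np.stack([np.roll(detected, (-off[0], -off[1]),
                                     axis=(0, 1)) for off in offsets])
        if npix_prev and abs(npix - npix_prev) < tol * npix_prev:
            break                                               # converged
        npix_prev = npix
    return combine_with_rejection(images, mask=obj_mask)

def two_stage_tile_sky(pawprints):
    """pawprints: list of M arrays, each (N, ny, nx). First combine the
    ith exposure of every pawprint (i = 1, N), then combine the N
    intermediate frames into the final background."""
    M, N = len(pawprints), pawprints[0].shape[0]
    intermediates = np.stack([
        combine_with_rejection(np.stack([pawprints[m][i] for m in range(M)]))
        for i in range(N)])
    return combine_with_rejection(intermediates)
```

Note that a real implementation would use a robust background estimator and a proper connected-pixel object detector rather than the single-threshold detection shown here; the loop structure and the convergence test on the object-pixel count are the point of the sketch.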