
Frequently Asked Questions - Raw Images



Why are parts of the image missing?

It's hard getting data all the way from Saturn. Bad weather or antenna problems at one of the Deep Space Network stations can cause data to be lost because the station has trouble locking onto the spacecraft's signal. This results in gaps in the images. To overcome these problems, data for important observations is played back again at a different time. For all other observations there is just one chance to get the data.

Why are parts of the right side of the image missing?

It takes a lot of data to represent an image, much more than the old adage that a picture is worth a thousand words. For Cassini it is more like a million words -- data words. Because of the limited space on Cassini's recorder and the time it takes to transmit that data, the cameras on board the spacecraft were built with the ability to compress the data, that is, to have it take up less space on the recorder. There are two kinds of compression the cameras can use -- "lossless" and "lossy." With lossless compression the original data can be completely recovered once it reaches the ground -- much like what happens with zip compression for personal computer files or the compression in GIF images. Lossy compression is more efficient and produces smaller files, but does not allow all of the original data to be recovered -- like the compression in JPEG images or in MP3 audio files.
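
The distinction is easy to demonstrate on the ground; here is a minimal Python sketch in which zlib stands in for the camera's lossless scheme and coarse quantization stands in for a lossy one (neither is the actual flight algorithm):

    import zlib

    data = bytes(range(256)) * 64               # stand-in for raw image data

    # Lossless: every original byte comes back after decompression.
    packed = zlib.compress(data)
    assert zlib.decompress(packed) == data
    assert len(packed) < len(data)              # and it takes less space

    # Lossy (quantization as a stand-in): smaller still, but the discarded
    # low-order detail can never be recovered.
    quantized = bytes(b & 0xF0 for b in data)   # keep only the top 4 bits
    assert quantized != data                    # information is gone for good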

In lossless compression mode, the camera guarantees that the compressed data will be at most half the size of the original image. If the scene has a lot of detail, it will not compress as well as a less detailed image. In this case, when the image has too much detail to be compressed by half, the camera cuts data off the end of every second line until the compression is sufficient. This means that in images with a lot of detail, the right side of even lines can be cut off in some parts of the image or throughout the whole image.
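
A toy model of that truncation rule, assuming nothing about the real on-board algorithm beyond the behavior described above (the function name, chunk size and trimming strategy are all invented for illustration):

    def truncate_even_lines(lines, budget, chunk=64):
        # While the data still exceeds the budget (half the raw image
        # size), drop pixels from the right end of every second line.
        size = sum(len(line) for line in lines)
        while size > budget:
            trimmed = False
            for i in range(1, len(lines), 2):      # every second line
                if size <= budget or not lines[i]:
                    continue
                drop = min(chunk, len(lines[i]))
                lines[i] = lines[i][:-drop]        # cut the right edge
                size -= drop
                trimmed = True
            if not trimmed:                        # nothing left to trim
                break
        return lines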

Why is the bottom of an image missing?

It takes a certain amount of time to read the data from the camera's sensor, and a different amount of time for the camera to send that data over wires to the spacecraft for recording. The camera has four different-sized time windows in which it is allowed to read out an image. If the time window picked by the scientist who planned the image is too short, the image will be incomplete and cut off at the bottom. There can be two reasons for this. If the scientist has chosen to use compression, the scene might contain more detail than expected and thus have too much data to read out in the time given. Alternatively, in order to conserve the limited amount of data the scientist is allowed to collect, the scientist may have the image cut off on purpose because the interesting thing is in the top half.

An image may also be cut off at the bottom due to what is called "data policing." Each instrument is given a certain amount of space on Cassini's recorder to store the data collected, and there is much demand among instruments for this precious space. Once a science team designing an observation fills up the allotted space, the spacecraft stops recording, resulting in a cut-off image.
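
A toy sketch of that budget check (the packet sizes, allocation and function names are all invented for illustration; the real bookkeeping happens on the spacecraft):

    def record_observation(packets, allocation):
        # Keep recording packets until the instrument's allocation on the
        # recorder is filled; anything after that is simply lost.
        recorded, used = [], 0
        for packet in packets:
            if used + len(packet) > allocation:
                break                  # allocation exhausted: image cut off
            recorded.append(packet)
            used += len(packet)
        return recorded

    packets = [b"x" * 1000] * 1200     # 1.2 MB of hypothetical image data
    kept = record_observation(packets, 1_000_000)
    print(len(kept), "of", len(packets), "packets recorded")  # 1000 of 1200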

Why is the image overexposed?

Cassini's cameras have 63 different exposure settings, from 5 milliseconds to 20 minutes. Scientists planning an observation must choose the exposure for each image taken. That can be tough if you're taking a picture of something you've never seen before, so incomplete information about how bright something is can lead to an underexposed or overexposed image.

Images can be overexposed on purpose too. If the scientist is looking for something dim next to something bright, the bright thing may be overexposed. Finally, Optical Navigation personnel use images to see where Cassini is relative to Saturn and its moons. Often they overexpose images because they need to see where these moons are in relation to the stars in the background sky.

Why are there images of different sizes?

The Cassini cameras are 1-megapixel cameras; a normal image is 1024 x 1024 pixels. Using a technique called "summation," the cameras can combine pixels to produce smaller but less noisy images, which take much less time to read out and take up less data volume. Summation is very useful if a scientist needs to conserve both. In the 2 x 2 mode, the camera takes each 2 x 2 pixel square and averages those values into a single pixel, so images in this mode are 512 x 512 pixels. In the 4 x 4 mode, the camera turns each 4 x 4 pixel square into a single pixel, so images in this mode are 256 x 256 pixels.
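
A minimal numpy sketch of this kind of pixel binning (averaging is shown for both modes, matching the 2 x 2 description above; the exact on-board arithmetic is not specified here):

    import numpy as np

    def summation_mode(img, n):
        # Combine each n x n block of pixels into a single averaged pixel.
        h, w = img.shape
        return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

    full = np.random.randint(0, 4096, size=(1024, 1024)).astype(float)
    print(summation_mode(full, 2).shape)   # (512, 512)
    print(summation_mode(full, 4).shape)   # (256, 256)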

What are those streaks I see in some images?

High-energy particles called cosmic rays fly through space. When one of these particles hits the camera's sensor, it causes a bright spot; when one hits the sensor edge-on, it can leave a trail across the image. Exposures shorter than a second will not have many of these spots or trails, but long exposures, from a minute up to 20 minutes, will contain many of them.

What are those dark donut shapes?

Small donut-like dark spots in images are actually out-of-focus dust specks on the filter wheels, lenses or other parts of the cameras' optics. Because there is no way to clean the cameras in space, more of these spots may appear as the Cassini mission progresses.

What is that horizontal waviness in the picture?

There is a low-level source of noise in the camera's signal as it comes out of the sensor and gets converted to numbers. This noise cyclically adds to and subtracts from the signal by a small amount, and when the data is put into an image it shows up as bright and dark horizontal bands. The amount of noise is very small and is not noticeable in most images, but images of black sky or other very dark scenes can show it. The camera records the baseline of the signal for each line so this noise can be removed in later processing. Both cameras are affected by this noise, but the Narrow Angle Camera is affected more strongly.
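
Because the baseline is recorded per line, the later correction amounts to a per-line subtraction; a minimal numpy sketch (the array shapes and the simulated noise are assumptions):

    import numpy as np

    def remove_banding(img, line_baseline):
        # Subtract each line's recorded signal baseline to suppress the
        # horizontal banding (img is H x W, line_baseline has length H).
        return img - line_baseline[:, np.newaxis]

    scene = np.full((1024, 1024), 100.0)
    baseline = 0.5 * np.sin(np.arange(1024) / 3.0)   # simulated cyclic noise
    raw = scene + baseline[:, np.newaxis]
    clean = remove_banding(raw, baseline)            # bands removed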

What is that small dark vertical band on the left part of the image?

The sensor on the Narrow Angle Camera has a flaw where the first 12 or so pixels at the left of the image are darker than the rest. This flaw was found before Cassini was launched but it was determined that it would cost too much to fix it.

Why don't I see stars in the images?

The exposures needed to take images of Saturn and its moons are fairly short compared with the exposure times needed to see stars. If you look really closely, you can sometimes see stars in images that are overexposed.

Why is the image smeared?

The Cassini spacecraft is moving very fast through the Saturnian system. When the cameras are taking pictures of objects very far away, this doesn't matter much. However, if Cassini is taking images of a moon during a close flyby, the change in distance or position during the exposure can cause the image to be smeared -- much like taking pictures of nearby things from a fast-moving car.

Also, many instruments may be taking data at the same time but the spacecraft pointing for the observation would have been generated by a single instrument team. In this case, a mistake or miscommunication could result in an image being taken while the spacecraft was turning from one position to another.

Why is the image fuzzy?

The Narrow Angle Camera was plagued with a contamination problem. This caused images taken between May 2001 and early 2002 to look hazy. After a year of heating treatments, the haze problem was resolved. For more information see: http://saturn.jpl.nasa.gov/news/newsreleases/newsrelease20020723/

Why does the contrast look different between images?

The camera measures light from an object at each point in an image and assigns it a number from zero to 4095 depending on its brightness. Sometimes the scientist can't afford to send that much data for each pixel because of the storage it takes, so the camera can convert this range of values to zero to 255 according to a preset table designed by the scientists. This table devotes many of the 256 levels to dimmer pixels and fewer levels to brighter ones. Part of calibrating an image on the ground is to reverse this table and get back pixels in the range of zero to 4095. Because you're looking at the raw data, images sent back in this mode will show dim features brighter, relative to the bright parts of the image, than images not in this mode.
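
As an illustration, a square-root-style curve has exactly the property described above: many output levels for dim pixels, few for bright ones. The actual preset table on Cassini is not reproduced here; this is just a stand-in with the same character.

    import numpy as np

    def encode_table(dn12):
        # 12-bit (0-4095) to 8-bit (0-255): more of the 256 output
        # levels go to dim pixels than to bright ones.
        return np.round(255.0 * np.sqrt(np.asarray(dn12) / 4095.0)).astype(np.uint8)

    def decode_table(dn8):
        # Ground-side reversal back to the 0-4095 range (approximate,
        # since the encoding merged nearby bright values).
        return np.round(4095.0 * (np.asarray(dn8) / 255.0) ** 2).astype(np.uint16)

    print(encode_table([10, 20, 4000, 4095]))   # [ 13  18 252 255]
    # Dim values 10 and 20 stay distinct, while at the bright end each
    # 8-bit code covers roughly 30 of the original 12-bit levels.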

Why does the image look bizarre/psychedelic?

The other way the camera can send back less data (pixels with values from zero to 255 instead of zero to 4095, as in the previous question) is to send back only the lower binary digits of each number. This is like taking a list of amounts of money, recording only the cents of each one, and assigning the brightness in the image to the leftover cents. Pixels with brightness values just under 255, like amounts just under a dollar, will appear almost white, while pixel values just over 255, like amounts just over a dollar with few cents left over, will appear dark. The ideal use of this mode is for dark scenes in which almost all of the pixel values are less than 255. If the scene is simple, with gradual increases in brightness, the scientists can figure out what the real value was even where the original values climb past 255 and go dark again. But if the scene is very complicated, or the original values are much brighter than 255, the image can have many bright-to-dark transitions with strange contours. In that case, the image will look very bizarre and will not have much scientific value.
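
In other words, this mode keeps only each pixel value modulo 256; a quick sketch of the wraparound the money analogy describes:

    # Keeping only the low 8 bits is arithmetic modulo 256 -- like keeping
    # only the cents: 250 is still bright, but 256 "wraps" back to dark.
    for dn in (250, 255, 256, 260, 511, 512):
        print(dn, "->", dn & 0xFF)
    # 250 -> 250   255 -> 255   256 -> 0   260 -> 4   511 -> 255   512 -> 0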

What are the camera filters?

To increase their scientific value, the cameras on Cassini have two filter wheels that let them take images at specific wavelengths of light. Some filters only allow light of a certain color to reach the sensor; combining three such images can produce a color image. Other filters pass light at a specific wavelength absorbed by a substance such as hydrogen or methane, which allows scientists to measure where these substances are and in what abundances. Still other filters, called polarizers, only allow light oriented in a certain direction to reach the sensor; these can help the camera see through a hazy atmosphere.

Light passes through both filter wheels before it reaches the camera's sensor. Scientists planning an image must choose one filter from each wheel. The result is the combination of the two filters: only light that is passed by both filters reaches the sensor. Scientists can get the effect of a single filter by using the "clear" filter on the other wheel. The tables below show the filter code and a brief description of each filter.

The Narrow Angle Camera has 12 filters per wheel.


Filter Wheel 1
  CL1    Clear
  RED    Red
  BL1    Blue band 1
  UV2    Ultraviolet band 2
  UV1    Ultraviolet band 1
  IRP0   Infrared 0° polarizer
  P120   120° polarizer
  P60    60° polarizer
  P0     0° polarizer
  HAL    Hydrogen Alpha
  IR4    Infrared band 4
  IR2    Infrared band 2

Filter Wheel 2
  CL2    Clear
  GRN    Green
  UV3    Ultraviolet band 3
  BL2    Blue band 2
  MT2    Methane band 2
  CB2    Continuum band 2
  MT3    Methane band 3
  CB3    Continuum band 3
  MT1    Methane band 1
  CB1    Continuum band 1
  IR3    Infrared band 3
  IR1    Infrared band 1

The Wide Angle Camera has nine filters per wheel.


Filter Wheel 1
  CL1    Clear
  IR3    Infrared band 3
  IR4    Infrared band 4
  IR5    Infrared band 5
  CB3    Continuum band 3
  MT3    Methane band 3
  CB2    Continuum band 2
  MT2    Methane band 2
  IR2    Infrared band 2

Filter Wheel 2
  CL2    Clear
  RED    Red
  GRN    Green
  BL1    Blue band 1
  VIO    Violet
  HAL    Hydrogen Alpha
  IRP90  Infrared 90° polarizer
  IRP0   Infrared 0° polarizer
  IR1    Infrared band 1

For the continuum bands, Bands 1 and 2 are in the red part of the spectrum while Band 3 is in the infrared.
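
Since every exposure pairs one filter from each wheel, a filter setting can be thought of as a pair of codes from the tables above, with the clear filters acting as pass-throughs. A small illustrative sketch (the naming of combinations is made up):

    def effective_filter(wheel1, wheel2):
        # An exposure combines one filter from each wheel; using the
        # clear filter on one wheel leaves only the other wheel's filter.
        if wheel1 == "CL1" and wheel2 == "CL2":
            return "clear"
        if wheel1 == "CL1":
            return wheel2
        if wheel2 == "CL2":
            return wheel1
        return wheel1 + "+" + wheel2       # a true two-filter combination

    print(effective_filter("CL1", "GRN"))  # GRN: green light only
    print(effective_filter("MT2", "CL2"))  # MT2: methane band alone
    print(effective_filter("UV1", "UV3"))  # UV1+UV3: both must pass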

What are the ghostly lights?

When the cameras take an image of something like a moon with a very bright Saturn just out of view, light shining from the planet can reflect off parts of the inside of the camera and onto the sensor. The inside of each camera is coated with a black non-reflective substance to minimize this scattered light. Still, some light does get in and the result can be rays or large fuzzy circles of light.

Why does the image look sideways or upside down?

When a photographer tilts his or her camera to best fit the scene, the resulting images may appear sideways or at an angle. The same is true for Cassini -- the images reflect the orientation of the photographer, in this case the spacecraft. The images on this web page have not been processed in any way, so there is no guarantee that the images will consistently show North at the top of the frame.

Additionally, sometimes images are taken when another instrument or spacecraft subsystem controls the orientation. In this case, the scene may not be optimized for the imaging field of view, but represents a unique opportunity to capture an image.

What does it mean when a raw image caption says the "camera was pointing toward SKY"? What is SKY?

When the Mission and Science Planning Teams build the computer commands that are sent to the spacecraft, one of the things they need to do is tell the spacecraft where to point. The computer on board the spacecraft has a catalog of pointing "targets," generally identified by a single word for easy reference. Some of the available options for pointing the spacecraft include: Saturn, rings, most major moons and "sky." For example, when the target is "Titan," during the observation the spacecraft targets Titan and then tracks Titan as it moves relative to the background stars.

The "sky" position is used to point the spacecraft at a fixed location in the sky and take a picture of whatever is there. It is typically used to take images of unrecognized moons (newly discovered ones, for example) and for optical navigation.

Optical navigation images are a way for the navigation team to fine-tune their understanding of the exact orbits of the moons. While capturing an optical navigation image, the cameras do not track the moon (which is orbiting Saturn and therefore moving) but stay fixed on the background stars. The navigation team then compares the position of the moon against the stars in the background of the image and calculates the moon's orbit more accurately.

Why are there no raw images between Aug. 19 and Aug. 29, 2007?

On Aug. 19, 2007, Cassini entered "solar conjunction," which means the Sun was between Earth and the spacecraft. During solar conjunction, sending images to Earth is very difficult, if not impossible, due to interference from the Sun, so no images are taken. However, the geometry offers a great opportunity to study the Sun, and the Cassini radio science group used it to characterize the Sun's corona.

Once the spacecraft exited solar conjunction, it resumed taking images on Aug. 29, starting with the moon Tethys.
