
Pixels and Spatial Resolution in Satellite Images: What's the Real Deal?

by mcarolDecember 5th, 2024

Too Long; Didn't Read

Pixels and spatial resolution in satellite imagery aren't really the same thing. Sensor characteristics, and concepts like Ground Sample Distance and Ground Resolved Distance, determine image detail. Resampling can't improve resolution: the original signal's quality sets the limit.


Disclaimer: As a frequent user of satellite images in my work (and not an expert in the field), I've often found myself with these kinds of doubts. The more I learned about the subject, the more I realized how little I actually knew. So, in an effort to use this type of data more effectively and help my colleagues with their frequent questions, I'm writing these notes. There are many materials that cover this topic in greater depth.


When we start using satellite images in our analyses - and for many, their first contact with this type of data happens at work - it's common for someone with a bit more knowledge about the data to provide some key information to help you get started. One of these pieces of information is:


  • The larger the pixel size, the lower the spatial resolution of the image, and vice versa.


This association makes a lot of sense for understanding how "large" and "small" pixels can impact the analysis you want to perform.


But if you start reading more about the subject, you'll find there are quite a few concepts behind this rule of thumb, and knowing a bit more about them could help you in the future.


I'll try to make a brief connection between some of these concepts, clarifying why pixels and spatial resolution are not essentially synonymous, and addressing some common doubts I see in my day-to-day work.


Key Concepts:

  • Spatial resolution

  • Pixel size

  • Ground Sample Distance (GSD)

  • Ground Resolved Distance (GRD)


It all starts with the satellite sensor in orbit and its detectors. For a single detector in the sensor, the angle over which it is sensitive to incoming radiation is called the IFOV (Instantaneous Field of View). The larger the IFOV, the larger the area seen on the ground and the lower the ability to distinguish different objects, and vice versa.



Representations of IFOV and GRC concepts of an optical imaging sensor. Adapted from [2]



In the figure above, IFOV1 allows images with greater detail of objects compared to IFOV2, through their corresponding Ground Resolved Cells, GRC1 and GRC2 (obtained by combining the satellite's flight height, H, with the IFOV). The size of these cells is called the Ground Resolved Distance, or GRD. The concept of GRD is also related to the design and size of the detectors.
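For a nadir-looking detector, this association between flight height and IFOV can be sketched with the small-angle approximation GRC ≈ H × IFOV (IFOVs are typically tens of microradians, so the approximation is very good). The numbers below are purely illustrative, not taken from any real sensor's datasheet:

```python
def ground_resolved_cell(height_m: float, ifov_rad: float) -> float:
    """Approximate size on the ground of one detector's resolved cell.

    Small-angle approximation: GRC ~= H * IFOV, for a nadir view over
    flat terrain. Illustrative only - real sensors involve more geometry.
    """
    return height_m * ifov_rad

# Hypothetical numbers: a satellite at 700 km with a 42.9 microradian IFOV
h = 700_000.0      # flight height H, in meters
ifov = 42.9e-6     # IFOV, in radians
print(ground_resolved_cell(h, ifov))  # roughly 30 m on the ground
```

Doubling the IFOV (or the flight height) in this sketch doubles the cell size, which is exactly the "larger IFOV, less detail" relationship described above.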


Looking at the figure, we can already understand that IFOV1, and consequently GRC1 and GRD1, have less spectral "mixing": perhaps a person and a bit of ground. In IFOV2, everything within its GRD is "mixed" and lacks the ability to distinguish different targets: person, ground, object. Here, the relationship between these concepts and spatial resolution becomes clearer.


Ground Sample Distance, or GSD, is, in short, the distance between the centers of neighboring sampled cells on the surface.
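For optical sensors, a common back-of-the-envelope relation ties GSD to the detector pitch p, focal length f, and flight height H: GSD ≈ H · p / f (nadir view, flat terrain). This formula is a standard approximation, not something from this text's references, and the numbers below are made up for illustration:

```python
def ground_sample_distance(height_m: float, pixel_pitch_m: float,
                           focal_length_m: float) -> float:
    # GSD ~= H * p / f: the detector pitch projected onto the ground
    return height_m * pixel_pitch_m / focal_length_m

# Hypothetical sensor: 500 km altitude, 10 micrometer detector pitch,
# 1 m focal length -> a GSD of about 5 m
print(ground_sample_distance(500_000.0, 10e-6, 1.0))
```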


What's not yet clear is why the association with spatial resolution is not always correct. This is because, in practice, GSD and GRD are not always exactly the same: there's some overlap between the IFOVs of neighboring detectors (in an ideal scenario, this overlap would be minimal to non-existent, and characteristics like sensor quality and IFOV size influence it).


  • What follows from this is that GRD is generally a better proxy for spatial resolution than GSD.


Representation of GRD and GSD concepts of an optical imaging sensor. Adapted from [2]



After the radiometric signal is acquired, there's processing before the product is delivered. In geometric terms, reprojection and resampling are common steps in this processing, so that products are organized into constant grids, independent of acquisition variations such as viewing angle.


  • From the organization of data in these grids, we get the pixel size


And that's why the pixel size will be, at most, equal to the GSD. (And that's also why we often hear that a pixel corresponds to the distance between two consecutive cell centroids.)


IF, in an ideal scenario (where sensor quality is crucial), GSD is very similar to GRD (minimal overlap), then we could understand all these concepts as good proxies for spatial resolution.


Why "could"?


Because the acquired signal can go through a resampling process resulting in, say, a larger pixel size, and consequently lower spatial resolution.


The opposite does not work. If you simply take a signal acquired with a certain GSD and GRD and, in the resampling process, "build" a smaller pixel in the hope of more detailed information, that detail will not appear (pixel size <= GSD!). The signal was already acquired under certain conditions, and you can't change that (the GRD) to "detail" the image more.


In summary, as I've often had to understand in various ways, even to explain to colleagues:


  • If we resample an image to a smaller pixel size, we are NOT improving spatial resolution!
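A minimal NumPy sketch of this point (with a random toy array standing in for a real scene): if we coarsen an image - simulating acquisition at a larger GSD/GRD - and then resample it back to smaller pixels, we get more pixels, but only the coarse values survive; the original detail is not recovered.

```python
import numpy as np

rng = np.random.default_rng(0)
fine = rng.random((8, 8))  # toy stand-in for the scene at full detail

# Coarsen: average each 4x4 block -> "acquired" at 4x the pixel size
coarse = fine.reshape(2, 4, 2, 4).mean(axis=(1, 3))   # shape (2, 2)

# Resample back to the original grid with nearest-neighbour (np.repeat):
# smaller pixels again, but no new information
upsampled = np.repeat(np.repeat(coarse, 4, axis=0), 4, axis=1)

print(upsampled.shape)               # (8, 8): same pixel count as `fine`
print(len(np.unique(upsampled)))     # only 4 distinct values survive
print(np.allclose(upsampled, fine))  # False: the detail was not recovered
```

The upsampled array has the same pixel size as the original, yet it still carries only the four values of the coarse acquisition - resampling changed the grid, not the spatial resolution.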


Another fact is that, as users, we mostly end up learning about the pixel size and the stated spatial resolution of an image. It's not common for us to have access to (or to seek out) information like the GRD or GSD.


There's much more to this, but this is a start to clarify the relationship between these concepts, and why pixel size does not necessarily correspond to the spatial resolution of an image!


References:

*Reference [2] was an important source for this text, for those who want to delve deeper.


[1] Spatial Resolution, Pixel Size, and Scale


[2] The most misunderstood words in Earth Observation


[3] Remote Sensing INFORMATION SHEET