
2011 | Book

Satellite Data Compression


About this book

Satellite Data Compression covers recent progress in compression techniques for multispectral, hyperspectral and ultraspectral data, together with a survey of recent advances in satellite communications, remote sensing and geographical information systems. Contributed by leaders in the field, it is the first book available on satellite data compression. It covers onboard compression methodology and hardware developments at several space agencies, with case studies on recent advances in satellite data compression via prediction-based, lookup-table-based, transform-based, clustering-based and projection-based approaches. The book provides valuable information on state-of-the-art satellite data compression technologies for professionals and students interested in this topic. It is designed for a professional audience of computer scientists working in satellite communications, sensor system design, remote sensing, data receiving, airborne imaging and geographical information systems (GIS); advanced-level students and academic researchers will also benefit from this book.

Table of Contents

Frontmatter
Chapter 1. Development of On-Board Data Compression Technology at Canadian Space Agency
Abstract
This chapter reviews and summarizes the research and development on data compression techniques for satellite sensor data at the Canadian Space Agency, carried out in collaboration with its partners in other government departments, academia and Canadian industry. The subject matter is presented in the order of the following sections.
Shen-En Qian
Chapter 2. CNES Studies for On-Board Compression of High-Resolution Satellite Images
Abstract
Future high-resolution instruments planned by CNES for space remote sensing missions will produce higher bit rates because of the increase in resolution and dynamic range. For example, the improvement in ground resolution multiplies the data rate by 8 from SPOT4 to SPOT5, and by 28 for PLEIADES-HR. Lossy data compression with low-complexity algorithms is therefore needed, while the compression ratio must rise considerably. New image compression algorithms have been adopted to increase compression performance while complying with the image quality requirements of the community of users and experts. Thus, the DPCM algorithm used on board SPOT4 was replaced by a DCT-based compressor on board SPOT5, and recent compressors such as the one on PLEIADES-HR use wavelet transforms and bit-plane encoders. Future compressors, however, will have to be more powerful to reach higher compression ratios. New transforms have been studied by CNES to exceed the DWT, but other techniques such as selective compression are required to obtain a significant performance gap. This chapter gives an overview of CNES past, present and future studies of on-board compression algorithms for high-resolution images.
Carole Thiebaut, Roberto Camarero
Chapter 3. Low-Complexity Approaches for Lossless and Near-Lossless Hyperspectral Image Compression
Abstract
There has recently been strong interest in low-complexity approaches to hyperspectral image compression, driven in part by the standardization activities in this area and by the new hyperspectral missions that have been deployed. This chapter overviews the state of the art in lossless and near-lossless compression of hyperspectral images, with a particular focus on approaches that comply with the requirements typical of real-world missions in terms of low complexity and memory usage, error resilience and hardware friendliness. In particular, a very simple lossless compression algorithm is described, which is based on block-by-block prediction and adaptive Golomb coding, can exploit optimal band ordering, and can be extended to near-lossless compression. We also describe the results obtained with a hardware implementation of the algorithm. The compression performance of this algorithm is close to the state of the art, and its low complexity and memory usage, along with the possibility of compressing data in parallel, make it a very good candidate for onboard hyperspectral image compression.
Andrea Abrardo, Mauro Barni, Andrea Bertoli, Raoul Grimoldi, Enrico Magli, Raffaele Vitulli
Chapter 4. FPGA Design of Listless SPIHT for Onboard Image Compression
Abstract
Space missions are designed to leave Earth’s atmosphere and operate in outer space. Satellite imaging payloads operate mostly with a store-and-forward mechanism, in which captured images are stored on board and transmitted to ground later on. With the increase of spatial resolution, space missions are faced with the necessity of handling an extensive amount of imaging data. The increased volume of image data exerts great pressure on limited bandwidth and onboard storage. Image compression techniques provide a solution to the “bandwidth vs. data volume” dilemma of modern spacecraft. Therefore, compression is becoming a very important feature in the payload image processing units of many satellites [1].
Yunsong Li, Juan Song, Chengke Wu, Kai Liu, Jie Lei, Keyan Wang
Chapter 5. Outlier-Resilient Entropy Coding
Abstract
Many data compression systems rely on a final stage based on an entropy coder, generating short codes for the most probable symbols. Images, multispectroscopy and hyperspectroscopy are just some examples; the space mission concept covers many other fields. In some cases, especially when the on-board processing power available is very limited, a generic data compression system with a very simple pre-processing stage can suffice. The Consultative Committee for Space Data Systems made a recommendation on lossless data compression in the early 1990s, which has been used successfully in several missions owing to its low computational cost and acceptable compression ratios. Nevertheless, its simple entropy coder cannot perform optimally when large numbers of outliers appear in the data, which can be caused by noise, prompt particle events, or artifacts in the data or in the pre-processing stage. Here we discuss the effect of outliers on the compression ratio and present efficient solutions to this problem. These solutions are not only alternatives to the CCSDS recommendation, but can also be used as the entropy coding stage of more complex systems such as image or spectroscopy compressors.
Jordi Portell, Alberto G. Villafranca, Enrique García-Berro
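The outlier sensitivity discussed above can be seen directly in the code lengths of a Rice coder, the entropy coder family underlying the CCSDS recommendation. The sketch below is illustrative only (the function name and the bitstring representation are our own); it shows how the unary part of the code grows linearly with the symbol value, so one outlier can dominate a block's bit budget.

```python
def rice_code(value, k):
    """Rice code (Golomb code with m = 2**k) for a non-negative integer:
    unary-coded quotient, then a k-bit binary remainder. The code length
    is (value >> k) + 1 + k bits, i.e. linear in the value, which is why
    a single large outlier can blow up the bit budget of a block."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

# typical small residual vs. an outlier, both coded with k = 2
print(len(rice_code(5, 2)))     # short code for a probable symbol
print(len(rice_code(1000, 2)))  # the unary part explodes for an outlier
```

Adaptive coders mitigate this by choosing k per block, but a single spike still pays the full unary penalty, which motivates the outlier-resilient alternatives presented in the chapter.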
Chapter 6. Quality Issues for Compression of Hyperspectral Imagery Through Spectrally Adaptive DPCM
Abstract
To meet the quality requirements of hyperspectral imaging, differential pulse code modulation (DPCM) is usually employed for either lossless or near-lossless data compression, i.e., the decompressed data have a user-defined maximum absolute error, which is zero in the lossless case. Lossless compression thoroughly preserves the information of the data but allows only a moderate reduction in transmission bit rate: the lossless compression ratios attained even by the most advanced schemes are not very high, usually lower than four. If strictly lossless techniques are not employed, a certain amount of information will be lost. However, such information may be partly due to random fluctuations of the instrumental noise. The rationale that compression-induced distortion is more tolerable, i.e., less harmful, in those bands in which the noise is higher, and vice versa, constitutes the virtually lossless paradigm.
Bruno Aiazzi, Luciano Alparone, Stefano Baronti
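The near-lossless guarantee mentioned in the abstract comes from uniformly quantizing the DPCM prediction residual with step 2δ+1, which bounds the reconstruction error by δ (δ = 0 gives lossless coding). A minimal sketch follows; the previous-pixel predictor and the function name are illustrative stand-ins, not the chapter's spectrally adaptive predictor.

```python
import numpy as np

def near_lossless_dpcm(band, delta):
    """Closed-loop DPCM sketch with a user-defined maximum absolute
    error `delta`: the residual is quantized with step 2*delta + 1,
    and prediction uses the *reconstructed* previous sample so encoder
    and decoder stay in sync. delta = 0 reduces to lossless coding."""
    step = 2 * delta + 1
    x = band.astype(np.int64).ravel()
    q = np.empty_like(x)          # quantized residual indices (entropy-coded)
    rec = np.empty_like(x)        # decoder-side reconstruction
    prev = 0
    for n in range(x.size):
        e = x[n] - prev
        q[n] = np.sign(e) * ((abs(e) + delta) // step)
        rec[n] = prev + q[n] * step
        prev = rec[n]
    return q, rec.reshape(band.shape)
```

Any |residual| ≤ δ quantizes to zero, so noise-level fluctuations cost no bits, which is exactly the virtually lossless rationale: spend distortion where the instrumental noise already dominates.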
Chapter 7. Ultraspectral Sounder Data Compression by the Prediction-Based Lower Triangular Transform
Abstract
The Karhunen–Loève transform (KLT) is the optimal unitary transform that yields the maximum coding gain. The prediction-based lower triangular transform (PLT) features the same decorrelation and coding gain properties as the KLT but with lower complexity. Unlike the KLT, the PLT has the perfect reconstruction property, which allows its direct use for lossless compression. In this chapter, we apply the PLT to lossless compression of ultraspectral sounder data. Experiments on the standard ultraspectral test dataset of ten AIRS digital count granules show that the PLT compression scheme compares favorably with JPEG-LS, JPEG2000, LUT, SPIHT, and CCSDS IDC 5/3.
Shih-Chieh Wei, Bormin Huang
Chapter 8. Lookup-Table Based Hyperspectral Data Compression
Abstract
This chapter gives an overview of lookup table (LUT) based lossless compression methods for hyperspectral images. The LUT method searches the previous band for a pixel equal in value to the one co-located with the pixel to be coded; the pixel of the current band at the same position as the pixel found is used as the predictor. Lookup tables are used to speed up the search. Variants of the LUT method include the predictor-guided LUT method and multiband lookup tables.
Jarno Mielikainen
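The LUT search described above can be sketched in a few lines. This is a simplified single-table illustration under our own naming (`lut_predict_band` is hypothetical); the key observation is that the table maps each value seen in the previous band to the current-band value at the position where that previous-band value last occurred, so the "search" becomes one dictionary lookup per pixel.

```python
import numpy as np

def lut_predict_band(cur, prev):
    """Predict each pixel of band `cur` using the LUT method: the table
    maps a previous-band value to the current-band value at the position
    where that value last occurred (in scan order). On a table miss we
    fall back to the co-located previous-band pixel itself."""
    h, w = cur.shape
    lut = {}                      # previous-band value -> current-band value
    pred = np.empty_like(cur)
    for i in range(h):
        for j in range(w):
            key = prev[i, j]
            pred[i, j] = lut.get(key, key)   # causal: predict before updating
            lut[key] = cur[i, j]             # then record the true value
    return pred
```

The decoder maintains the identical table from already-decoded pixels, so only the residuals `cur - pred` need to be entropy coded.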
Chapter 9. Multiplierless Reversible Integer TDLT/KLT for Lossy-to-Lossless Hyperspectral Image Compression
Abstract
Hyperspectral images nowadays have wide applications, such as atmospheric detection, remote sensing and military affairs. However, the volume of a hyperspectral image is large: a 16-bit AVIRIS image of size 512 × 512 × 224 occupies 112 MB. Efficient compression algorithms are therefore required to reduce the cost of storage and bandwidth.
Jiaji Wu, Lei Wang, Yong Fang, L. C. Jiao
Chapter 10. Divide-and-Conquer Decorrelation for Hyperspectral Data Compression
Abstract
Recent advances in the development of modern satellite sensors have increased the need for image coding because of the huge volume of collected data. It is well known that the Karhunen-Loève transform provides the best spectral decorrelation. However, it entails drawbacks such as high computational cost, high memory requirements, lack of component scalability, and difficult practical implementation. In this contributed chapter we review some recent proposals published to mitigate these drawbacks, in particular those based on a divide-and-conquer decorrelation strategy. In addition, we compare the coding performance, computational cost, and component scalability of these strategies for lossy, progressive lossy-to-lossless, and lossless remote-sensing image coding.
Ian Blanes, Joan Serra-Sagristà, Peter Schelkens
Chapter 11. Hyperspectral Image Compression Using Segmented Principal Component Analysis
Abstract
Principal component analysis (PCA) is the most efficient spectral decorrelation approach for hyperspectral image compression. In conjunction with JPEG2000-based spatial coding, the resulting PCA+JPEG2000 can yield superior rate-distortion performance. However, the overhead bits consumed by the large operation matrix of the principal component transform may hurt compression performance at low bit rates, particularly when the spatial size of the image patch to be compressed is small relative to the spectral dimension. In our previous research, we proposed applying segmented principal component analysis (SPCA) to mitigate this effect; the resulting algorithm, denoted SPCA+JPEG2000, can improve the rate-distortion performance even when PCA+JPEG2000 is applicable. In this chapter, we investigate the quality of data reconstructed after SPCA+JPEG2000 compression in terms of spectral fidelity, classification, linear unmixing, and anomaly detection. The experimental results show that SPCA+JPEG2000 preserves more useful data information, in addition to offering excellent rate-distortion performance. Since the spectral partition in SPCA relies on the calculation of a data-dependent spectral correlation coefficient matrix, we also investigate a sensor-dependent suboptimal partition approach, which can accelerate the compression process with little additional distortion.
Wei Zhu, Qian Du, James E. Fowler
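Spectral PCA as used in PCA+JPEG2000 treats each pixel's spectrum as one sample and decorrelates along the band axis. A minimal sketch (our own naming; textbook eigendecomposition, not the chapter's implementation) also makes the overhead issue concrete: the basis `V` and mean must be transmitted, which is what SPCA's segmentation reduces.

```python
import numpy as np

def pca_spectral(cube, n_components):
    """Decorrelate a (H, W, B) hyperspectral cube along the band axis.
    Each pixel spectrum is a sample; the top `n_components` principal
    directions are kept. Returns the components, the basis V (the
    per-patch overhead PCA+JPEG2000 must transmit), and the band mean."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)       # B x B spectral covariance
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    V = vecs[:, ::-1][:, :n_components]        # leading principal directions
    return (X - mean) @ V, V, mean

# reconstruction from the leading components: X_hat = comps @ V.T + mean
```

SPCA applies this same transform within contiguous spectral segments, so each segment carries a much smaller basis matrix than one full B × B transform.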
Chapter 12. Fast Precomputed Vector Quantization with Optimal Bit Allocation for Lossless Compression of Ultraspectral Sounder Data
Abstract
The compression of three-dimensional ultraspectral sounder data is a challenging task given its unprecedented size. We develop a fast precomputed vector quantization (FPVQ) scheme with optimal bit allocation for lossless compression of ultraspectral sounder data. The scheme comprises linear prediction, bit-depth partitioning, vector quantization, and optimal bit allocation. Linear prediction serves as a whitening tool that makes the prediction residuals of each channel close to a Gaussian distribution. These residuals are then partitioned according to bit depth, and each partition is further divided into several sub-partitions of 2^k channels for vector quantization. Only the codebooks with 2^m codewords for 2^k-dimensional normalized Gaussian distributions are precomputed. A new algorithm is developed for optimal bit allocation among sub-partitions. Unlike previous algorithms [19, 20], which may yield a sub-optimal solution, the proposed algorithm is guaranteed to find the minimum of the cost function under the constraint of a given total bit rate. Numerical experiments on NASA AIRS data show that the FPVQ scheme gives high compression ratios for lossless compression of ultraspectral sounder data.
Bormin Huang
Chapter 13. Effects of Lossy Compression on Hyperspectral Classification
Abstract
Rapid advancements in sensor technology have produced remotely sensed data with hundreds of spectral bands. As a result, there is now an increasing need for efficient compression algorithms for hyperspectral images. Modern sensors generate a very large amount of data from satellite systems, and in most cases compression is required to transmit and archive these hyperspectral data. Although lossless compression is preferable in some applications, its compression ratio is only around three [1–3]. On the other hand, lossy compression can achieve much higher compression rates at the expense of some information loss. Due to its increasing importance, many researchers have studied the compression of hyperspectral data and numerous methods have been proposed, including transform-based methods (2D and 3D), vector quantization [3–5], and predictive techniques [6]. Several authors have used principal component analysis to remove redundancy [7–9], and some researchers have used standard compression algorithms such as JPEG and JPEG 2000 for the compression of hyperspectral imagery [9–14]. The discrete wavelet transform has been applied to the compression of hyperspectral images [15, 16], and several authors have applied the SPIHT algorithm to the compression of hyperspectral imagery [17–23].
Chulhee Lee, Sangwook Lee, Jonghwa Lee
Chapter 14. Projection Pursuit-Based Dimensionality Reduction for Hyperspectral Analysis
Abstract
Dimensionality reduction (DR) has found many applications in hyperspectral image processing. This chapter investigates projection pursuit (PP)-based dimensionality reduction (PP-DR), which includes both principal component analysis (PCA) and independent component analysis (ICA) as special cases. Three approaches are developed for PP-DR. The first uses a projection index (PI) to produce projection vectors that generate projection index components (PICs). Since PP generally uses random initial conditions to produce PICs, when the same PP is performed at different times or by different users at the same time, the resulting PICs generally differ in both content and order. To resolve this issue, a second approach, called PI-based prioritized PP (PI-PRPP), uses a PI as a criterion to prioritize the PICs. A third approach, proposed as an alternative to PI-PRPP, is called initialization-driven PP (ID-PIPP); it specifies an appropriate set of initial conditions that allows PP to produce the same PICs in the same order regardless of how PP is run. As shown by experimental results, the three PP-DR techniques can not only perform DR but also separate various targets into different PICs, so as to achieve unsupervised target detection.
Haleh Safavi, Chein-I Chang, Antonio J. Plaza
Metadata
Title
Satellite Data Compression
Edited by
Bormin Huang
Copyright Year
2011
Publisher
Springer New York
Electronic ISBN
978-1-4614-1183-3
Print ISBN
978-1-4614-1182-6
DOI
https://doi.org/10.1007/978-1-4614-1183-3