2009 | Book

Information Hiding

11th International Workshop, IH 2009, Darmstadt, Germany, June 8-10, 2009, Revised Selected Papers

Edited by: Stefan Katzenbeisser, Ahmad-Reza Sadeghi

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-workshop proceedings of the 11th International Workshop on Information Hiding, IH 2009, held in Darmstadt, Germany, in June 2009. The 19 revised full papers presented were carefully reviewed and selected from 55 submissions. The papers are organized in topical sections on steganography, steganalysis, watermarking, fingerprinting, hiding in unusual content, novel applications and forensics.

Table of contents

Frontmatter

Steganography

Supraliminal Audio Steganography: Audio Files Tricking Audiophiles
Abstract
A supraliminal channel is one in which the secret message is encoded in the semantic content of a cover object and which is robust against an active warden. Such channels frequently have a very low embedding rate and are therefore unsuitable for more than simply exchanging steganographic public keys. This paper introduces a proof-of-concept supraliminal channel that uses WAV files to provide a high-bitrate method of embedding information in common media.
Heather Crawford, John Aycock
An Epistemological Approach to Steganography
Abstract
Steganography has been studied extensively in the light of information, complexity, probability and signal processing theory. This paper adds epistemology to the list and argues that Simmons' seminal prisoners' problem has an empirical dimension which cannot be ignored (or defined away) without substantially simplifying the problem. An introduction to the epistemological perspective on steganography is given, along with a structured discussion of how this novel perspective fits into the existing body of literature.
Rainer Böhme
Fisher Information Determines Capacity of ε-Secure Steganography
Abstract
Most practical stegosystems for digital media work by applying a mutually independent embedding operation to each element of the cover. For such stegosystems, the Fisher information w.r.t. the change rate is a perfect security descriptor equivalent to KL divergence between cover and stego images. Under the assumption of Markov covers, we derive a closed-form expression for the Fisher information and show how it can be used for comparing stegosystems and optimizing their performance. In particular, using an analytic cover model fit to experimental data obtained from a large number of natural images, we prove that the ±1 embedding operation is asymptotically optimal among all mutually independent embedding operations that modify cover elements by at most 1.
Tomáš Filler, Jessica Fridrich
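
To make the analyzed embedding class concrete, here is a minimal Python sketch of ±1 embedding (LSB matching), the operation the paper proves asymptotically optimal. It is not the authors' code; the clipping rule at the 8-bit range boundaries is an assumption.

import numpy as np

def plus_minus_one_embed(cover, message_bits, rng=None):
    # Illustrative +-1 (LSB matching) embedding: if an element's LSB already
    # matches the message bit, leave it; otherwise add or subtract 1 at
    # random. Each element is modified independently of all others.
    rng = rng or np.random.default_rng()
    stego = cover.astype(np.int32).copy()
    for i, bit in enumerate(message_bits):
        if (stego[i] & 1) != bit:
            step = rng.choice([-1, 1])
            if stego[i] == 0:        # clip at the 8-bit boundaries (assumption)
                step = 1
            elif stego[i] == 255:
                step = -1
            stego[i] += step
    return stego.astype(np.uint8)
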
Fast BCH Syndrome Coding for Steganography
Abstract
This paper presents an improved data hiding technique based on BCH (n,k,t) syndrome coding. The proposed method embeds data into a block of input data (for example, image pixels, or wavelet or DCT coefficients) by modifying some coefficients in the block so as to null the syndrome of the BCH code. The proposed method can hide the same amount of data in less computational time than existing methods. The contributions of this paper include reductions in both time and storage complexity: storage complexity is linear, whereas that of other methods is exponential, and the time complexity of our method is almost negligible and constant for any n, whereas that of existing methods is exponential. Since the time complexity is constant and the storage complexity is linear, the method extends easily to large n, which allows data to be hidden at small embedding capacity. Note that small capacities are highly recommended for steganography to survive steganalysis. The proposed scheme shows that BCH syndrome coding for data hiding is now practical thanks to the reduced complexity.
Rongyue Zhang, Vasiliy Sachnev, Hyoung Joong Kim
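
The syndrome-coding principle is easiest to see with a small stand-in code. The sketch below uses the (7,4) Hamming code rather than BCH, embedding 3 message bits into 7 cover LSBs with at most one change; it illustrates the general mechanism, not the paper's algorithm.

import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j encodes j+1 in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def syndrome_embed(lsbs, msg):
    # Embed 3 message bits into 7 cover LSBs, changing at most one bit:
    # flip the position whose H-column equals the required syndrome correction.
    s = (H @ lsbs + msg) % 2
    stego = lsbs.copy()
    if s.any():
        pos = int(''.join(map(str, s)), 2) - 1
        stego[pos] ^= 1
    return stego

def syndrome_extract(lsbs):
    return (H @ lsbs) % 2

Extraction is simply the syndrome of the received LSBs, so syndrome_extract(syndrome_embed(lsbs, msg)) recovers msg.
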

Steganalysis

Embedding Ratio Estimating for Each Bit Plane of Image
Abstract
MLSB replacement steganography has attracted researchers' attention. However, existing steganalysis methods for MLSB replacement steganography were designed under the assumption that the embedding ratios in all stego bit planes are equal. For the case where messages of different lengths are embedded into different bit planes independently, a new, principled method is therefore introduced to estimate the embedding ratio in each stego bit plane based on a sample pair model. The new method estimates the embedding ratios bit plane by bit plane, in order of each bit plane's significance. A series of experiments shows that the presented steganalysis method has significantly smaller bias than directly applying the SPA method, a typical steganalysis for LSB steganography, to estimate the embedding ratio in each bit plane.
Chunfang Yang, Xiangyang Luo, Fenlin Liu
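
For reference, the embedding operation this steganalysis targets can be sketched as follows; the function and parameter names are hypothetical.

import numpy as np

def mlsb_replace(cover, plane, bits, rng=None):
    # Overwrite bit plane `plane` (0 = LSB) of randomly chosen pixels with
    # message bits; len(bits) / cover.size is the per-plane embedding ratio
    # that the steganalyser tries to estimate.
    rng = rng or np.random.default_rng(0)
    stego = cover.copy()
    flat = stego.reshape(-1)
    idx = rng.choice(flat.size, size=len(bits), replace=False)
    mask = np.uint8(1 << plane)
    flat[idx] = (flat[idx] & ~mask) | (np.asarray(bits, dtype=np.uint8) << plane)
    return stego
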
Estimating Steganographic Fisher Information in Real Images
Abstract
This paper is concerned with the estimation of steganographic capacity in digital images, using information theoretic bounds and very large-scale experiments to approximate the distributions of genuine covers. The complete distribution cannot be estimated, but with carefully-chosen algorithms and a large corpus we can make local approximations by considering groups of pixels. A simple estimator for the local quadratic term of Kullback-Leibler divergence (Steganographic Fisher Information) is presented, validated on some synthetic images, and computed for a corpus of covers. The results are interesting not so much for their concrete capacity estimates but for the comparisons they provide between different embedding operations, between the information found in differently-sized and -shaped pixel groups, and the results of DC normalization within pixel groups. This work suggests lessons for the future design of spatial-domain steganalysis, and also the optimization of embedding functions.
Andrew D. Ker

Watermarking

Fast Determination of Sensitivity in the Presence of Countermeasures in BOWS-2
Abstract
The second Break Our Watermarking System (BOWS-2) contest exposed a watermarking algorithm named Broken Arrows (BA) to worldwide attacks. In its second episode, the previously existing daily limit of 30 oracle calls per IP address was lifted to allow for sensitivity analysis. Often disrespected because of their extensive oracle use, sensitivity attacks can reveal up to one bit of information about the watermark in each experiment. In this paper we describe how we circumvented BA’s countermeasures against sensitivity attacks.
Andreas Westfeld
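
Stripped of BA's specifics, one step of a sensitivity attack is a line search for the detection boundary. A generic sketch follows, with a hypothetical oracle interface standing in for the contest detector.

def boundary_point(oracle, marked, direction, hi=1.0, iters=20):
    # Binary-search the smallest amplitude t at which the (hypothetical)
    # detector stops reporting the watermark; marked + t*direction is then a
    # point on the detection boundary, leaking roughly one bit about the mark.
    assert oracle(marked) and not oracle(marked + hi * direction)
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if oracle(marked + mid * direction):
            lo = mid
        else:
            hi = mid
    return marked + hi * direction
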
A Phase Modulation Audio Watermarking Technique
Abstract
Audio watermarking is a technique that can be used to embed information into the digital representation of audio signals. The main challenge is to hide data representing some information without compromising the quality of the watermarked track, while at the same time ensuring that the embedded watermark is robust against removal attacks. In particular, providing perfect audio quality combined with high robustness against a wide variety of attacks is not adequately addressed and evaluated in current watermarking systems. In this paper, we present a new phase modulation audio watermarking technique which, among other features, provides evidence of high audio quality. The system combines the alteration of the phase with the spread spectrum concept and is referred to as Adaptive Spread Phase Modulation (ASPM). Extensive benchmarking provides evidence of the inaudibility of the embedded watermark and of its good robustness.
Michael Arnold, Peter G. Baum, Walter Voeßing
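
A toy version of the phase-modulation idea, assuming a frame-wise real FFT and key-driven bin selection; ASPM's adaptive components are omitted, and all names and parameters are illustrative.

import numpy as np

def embed_phase(frame, bits, key=42, delta=0.1):
    # Each bit is spread over 8 key-selected FFT bins whose phases are
    # shifted by +-delta radians following spread-spectrum chips; magnitudes
    # stay untouched. Assumes len(frame)//2 - 1 >= 8 * len(bits).
    spec = np.fft.rfft(frame)
    rng = np.random.default_rng(key)
    bins = rng.permutation(len(spec) - 2)[:8 * len(bits)].reshape(len(bits), 8) + 1
    chips = rng.choice([-1.0, 1.0], size=bins.shape)
    for b, (idx, c) in enumerate(zip(bins, chips)):
        sign = 1.0 if bits[b] else -1.0
        spec[idx] *= np.exp(1j * sign * c * delta)
    return np.fft.irfft(spec, n=len(frame))
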
Forensic Tracking Watermarking against In-theater Piracy
Abstract
Many illegal copies of digital movies made by camcorder capture are found on the Internet or on the black market before their official release. Due to the angle of the camcorder relative to the screen, the copied movies are captured with perspective distortion. In this paper, we present a watermarking scheme for tracking the pirate that uses a local auto-correlation function (LACF) to estimate the geometric distortion. The goals of the watermarking are to find the suspected position of the camcorder in the theater and to extract the embedded forensic marking data, which specifies theater information and a time stamp. Our watermarking system thus provides conclusive evidence for taking the pirate to court. Experimental results demonstrate the robustness of the LACF and the accuracy of the proposed modeling.
Min-Jeong Lee, Kyung-Su Kim, Heung-Kyu Lee
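
The core LACF ingredient is plain autocorrelation; a heavily simplified FFT-based version, not the paper's estimator, might look like this.

import numpy as np

def local_autocorrelation(patch):
    # Autocorrelation of an image patch via the Wiener-Khinchin relation;
    # a periodic watermark pattern yields regular peaks whose displacement
    # under perspective distortion can be measured.
    f = np.fft.fft2(patch - patch.mean())
    ac = np.fft.ifft2(f * np.conj(f)).real
    return np.fft.fftshift(ac / (ac[0, 0] + 1e-12))
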
Self-recovery Fragile Watermarking Using Block-Neighborhood Tampering Characterization
Abstract
In this paper, a self-recovery fragile watermarking scheme for image authentication is proposed to improve the performance of tamper detection and tamper recovery. The proposed scheme embeds an encrypted feature comprising 6 bits of recovery data and 2 bits of key-based data for each image block into the least significant bits (LSBs) of its mapping block. The validity of a test block is determined by comparing the number of inconsistent blocks in the 3×3 block-neighborhood of the test block with that of its mapping block. Moreover, to improve the quality of the recovered image, the 3×3 block-neighborhood is also used to recover tampered blocks whose feature, hidden in another block, is corrupted. Experimental results demonstrate that the proposed method outperforms conventional self-recovery fragile watermarking algorithms in tamper detection and tamper recovery under various attacks. Additionally, the proposed scheme is not vulnerable to the collage attack, the constant-average attack, or the four-scanning attack.
Hong-Jie He, Jia-Shu Zhang, Heng-Ming Tai
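
Two pieces of the scheme are simple enough to sketch from the abstract alone: a key-based block mapping and the neighborhood-comparison decision rule. Both sketches are illustrative; the paper's exact mapping and rule may differ.

import numpy as np

def block_mapping(num_blocks, key=7):
    # Key-based one-to-one block mapping: block i hides its 8-bit feature
    # in the LSBs of block perm[i].
    return np.random.default_rng(key).permutation(num_blocks)

def block_invalid(inconsistent_here, inconsistent_at_mapping):
    # A test block is judged tampered when its 3x3 neighborhood holds more
    # inconsistent blocks than the neighborhood of its mapping block.
    return inconsistent_here > inconsistent_at_mapping
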
Perception-Based Audio Authentication Watermarking in the Time-Frequency Domain
Abstract
Current systems and protocols based on cryptographic methods for integrity and authenticity verification of media data do not distinguish between legitimate signal transformations and malicious tampering that manipulates the content. Furthermore, they usually provide no localization or assessment of the relevance of such manipulations with respect to human perception or semantics. We present an authentication audio watermarking algorithm that uses a perception-based robust hash function in combination with robust watermarking to verify the integrity of audio recordings. Experimental results show that the proposed system provides both a high level of distinction between perceptually different audio data and high robustness against signal transformations that do not change the perceived information.
Sascha Zmudzinski, Martin Steinebach
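
For intuition, a perception-oriented robust hash can be as simple as comparing adjacent band energies per frame; the sketch below is a generic example of the idea, not the hash used in the paper.

import numpy as np

def robust_audio_hash(samples, frame=1024, bands=16):
    # One bit per comparison of adjacent frequency-band energies, so
    # admissible signal transformations flip few bits while content
    # manipulations flip many.
    bits = []
    for start in range(0, len(samples) - frame + 1, frame):
        mag = np.abs(np.fft.rfft(samples[start:start + frame]))
        energies = np.array([b.sum() for b in np.array_split(mag, bands)])
        bits.append((energies[1:] > energies[:-1]).astype(np.uint8))
    return np.concatenate(bits)
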

Fingerprinting

An Improvement of Short 2-Secure Fingerprint Codes Strongly Avoiding False-Positive
Abstract
A 2-secure fingerprint code proposed by Nuida et al. (IEEE CCNC 2007) has the very desirable characteristics that false positives never occur under the Marking Assumption against at most two pirates, and that false positives are very unlikely even in the absence of these assumptions. However, its code length can be further reduced; in fact, another 2-secure code proposed in the same work has significantly shorter code length. In this article, we demonstrate how to mix these two codes to inherit both of their advantages. The resulting 2-secure codes have short lengths and retain the above characteristics whenever the number of pirates (which may exceed two) is not too large.
Koji Nuida
Estimating the Minimal Length of Tardos Code
Abstract
This paper estimates the minimal length of a binary probabilistic traitor tracing code. We consider the code construction proposed by G. Tardos in 2003, with the symmetric accusation function as improved by B. Skoric et al. The length estimation rests on two pillars. First, we consider the worst-case attack that a group of c colluders can mount; this attack minimizes the mutual information between the code sequence of a colluder and the pirated sequence. Second, an algorithm from the field of rare event analysis is presented to estimate the probabilities of error: the probability that an innocent user is framed, and the probability that all colluders are missed. For a given collusion size, we are thus able to estimate the minimal code length satisfying given error probability constraints. This estimate is far lower than the known lower bounds.
Teddy Furon, Luis Pérez-Freire, Arnaud Guyader, Frédéric Cérou
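
The construction under study is compact enough to sketch: per-position biases drawn from the arcsine density, i.i.d. code bits, and the symmetric accusation sum of Skoric et al. How to set the accusation threshold is exactly what the paper's length estimates govern; the code below only illustrates the construction.

import numpy as np

def tardos_code(n_users, length, t=0.01, key=1):
    # Tardos code generation: per-position bias p_i drawn from the arcsine
    # density on [t, 1-t]; user bits are independent Bernoulli(p_i).
    rng = np.random.default_rng(key)
    r = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), size=length)
    p = np.sin(r) ** 2
    X = (rng.random((n_users, length)) < p).astype(np.uint8)
    return X, p

def symmetric_score(x, y, p):
    # Symmetric accusation sum for one user with codeword x, given the
    # pirated sequence y; a user is accused when the score exceeds a
    # threshold calibrated to the target error probabilities.
    g_agree1 = np.sqrt((1 - p) / p)      # x = y = 1
    g_agree0 = np.sqrt(p / (1 - p))      # x = y = 0
    return np.where(x == y,
                    np.where(y == 1, g_agree1, g_agree0),
                    np.where(y == 1, -g_agree0, -g_agree1)).sum()
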

Hiding in Unusual Content, Novel Applications

Roughness-Adaptive 3D Watermarking of Polygonal Meshes
Abstract
We present a general method to improve watermark robustness by exploiting the masking effect of surface roughness on watermark visibility, which, to the best of our knowledge, has not been studied in 3D digital watermarking. Our idea is to adapt the watermark strength to the local surface roughness, based on the knowledge that human eyes are less sensitive to changes on a rough surface patch than to those on a smooth one. We implemented this idea in a modified version of the well-known method proposed by Benedens [3]. As an additional contribution, we modified Benedens's method in two ways to improve its performance. The first improvement yields a blind version of Benedens's method that no longer requires any key that depends on the surface mesh of the cover 3D object. The second improvement concerns the robustness of bit '1' in the watermark. Experimental results show that our new method improves watermark robustness by 41% to 56% compared to the original Benedens's method. Further analyses indicate that the average watermark strength of our roughness-adaptive method is larger than that of the original method while still ensuring watermark imperceptibility; this is the main reason for the improvement in robustness observed in our experiments. We conclude that exploiting the masking property of human vision is a viable way to improve the robustness of 3D watermarks in general, and could therefore be applied to other 3D digital watermarking techniques.
Kwangtaek Kim, Mauro Barni, Hong Z. Tan
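
A toy roughness measure, to fix ideas: it is one of many possible choices and not the measure used by the authors or by Benedens.

import numpy as np

def vertex_roughness(normals, neighbors):
    # One minus the mean cosine between a vertex normal and its neighbors'
    # normals (normals assumed unit-length; neighbors maps a vertex index to
    # its adjacent vertex indices). Watermark strength would then be scaled
    # up where this value is high.
    rough = np.zeros(len(normals))
    for v, nbrs in neighbors.items():
        rough[v] = 1.0 - (normals[nbrs] @ normals[v]).mean()
    return rough
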
Hardware-Based Public-Key Cryptography with Public Physically Unclonable Functions
Abstract
A physically unclonable function (PUF) is a multiple-input, multiple-output, large-entropy physical system that is unreproducible due to its structural complexity. A public physically unclonable function (PPUF) is a PUF created so that its simulation is feasible but requires a very long time, even when ample computational resources are available. Using PPUFs, we have developed conceptually new secret key exchange and public key protocols that are resilient against physical and side channel attacks and do not rely on unproven mathematical conjectures. Judicious use of PPUF hardware sharing, parallelism, and provably correct partial simulation gives the communicating parties a 10^16 advantage over an attacker, requiring over 500 years of computation even if the attacker uses all global computation resources.
Nathan Beckmann, Miodrag Potkonjak
SVD-Based Ghost Circuitry Detection
Abstract
Ghost circuitry (GC) insertion is the malicious addition of hardware in the specification and/or implementation of an IC by an attacker intending to change circuit functionality. There are numerous GC insertion sources, including untrusted foundries, synthesis tools and libraries, testing and verification tools, and configuration scripts. Moreover, GC attacks can greatly compromise the security and privacy of hardware users, either directly or through interaction with pertinent systems, application software, or data. GC detection is a particularly difficult task in modern and pending deep submicron technologies due to intrinsic manufacturing variability. Here, we provide algebraic and statistical approaches for the detection of ghost circuitry. A singular value decomposition (SVD)-based technique for gate characteristic recovery is applied to solve a system of equations created using fast and non-destructive measurements of leakage power and/or delay. This is then combined with statistical constraint manipulation techniques to detect embedded ghost circuitry. The effectiveness of the approach is demonstrated on the ISCAS 85 benchmarks.
Michael Nelson, Ani Nahapetian, Farinaz Koushanfar, Miodrag Potkonjak
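
The core detection idea can be sketched as a linear inverse problem, with details differing from the paper's technique: total leakage under each input vector is modeled as a linear combination of per-gate leakages, and the system is solved by least squares (SVD under the hood).

import numpy as np

def characterize_gates(A, measured, z_thresh=3.0):
    # Model measured ~= A @ g, where row v of A encodes which gates are
    # active under input vector v and g holds per-gate leakages. lstsq
    # solves the system via SVD; measurements with unusually large residuals
    # hint at power drawn by circuitry absent from the specification.
    g, *_ = np.linalg.lstsq(A, measured, rcond=None)
    resid = measured - A @ g
    z = (resid - resid.mean()) / (resid.std() + 1e-12)
    return g, np.flatnonzero(np.abs(z) > z_thresh)
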

Forensics

Microphone Classification Using Fourier Coefficients
Abstract
Media forensics tries to determine the originating device of a signal. We apply this paradigm to microphone forensics, determining the microphone model used to record a given audio sample. Our approach is to extract a Fourier coefficient histogram of near-silence segments of the recording as the feature vector and to use machine learning techniques for the classification. Our test goals are to determine whether attempting microphone forensics is indeed a sensible approach and which one of the six different classification techniques tested is the most suitable one for that task. The experimental results, achieved using two different FFT window sizes (256 and 2048 frequency coefficients) and nine different thresholds for near-silence detection, show a high accuracy of up to 93.5% correct classifications for the case of 2048 frequency coefficients in a test set of seven microphones classified with linear logistic regression models. This positive tendency motivates further experiments with larger test sets and further studies for microphone identification.
Robert Buchholz, Christian Kraetzer, Jana Dittmann
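
One plausible reading of the feature extraction described above, sketched in Python; the exact histogram construction and thresholds in the paper may differ.

import numpy as np

def mic_features(samples, frame=2048, silence_db=-50.0):
    # Average the FFT magnitude spectrum over near-silence frames (RMS below
    # a threshold; samples assumed normalized to [-1, 1]). The resulting
    # coefficient vector is fed to an off-the-shelf classifier.
    frames = []
    for start in range(0, len(samples) - frame + 1, frame):
        f = samples[start:start + frame]
        rms = np.sqrt(np.mean(f ** 2)) + 1e-12
        if 20 * np.log10(rms) < silence_db:
            frames.append(np.abs(np.fft.rfft(f)))
    return np.mean(frames, axis=0) if frames else None
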
Detect Digital Image Splicing with Visual Cues
Abstract
Image splicing detection has been considered one of the most challenging problems in passive image authentication. In this paper, we propose an automatic detection framework to identify a spliced image. In contrast to existing methods, the proposed system is based on a human visual system (HVS) model in which visual saliency and fixation are used to guide the feature extraction mechanism. An interesting and important insight of this work is that there is a high correlation between the splicing borders and the first few fixation points predicted by a visual attention model using edge sharpness as a visual cue. We exploit this idea to develop a digital image splicing detection system with high performance. We present experimental results showing that the proposed system outperforms prior art. An additional advantage of the proposed system is that it provides a convenient way of localizing the splicing boundaries.
Zhenhua Qu, Guoping Qiu, Jiwu Huang
Feature-Based Camera Model Identification Works in Practice
Results of a Comprehensive Evaluation Study
Abstract
Feature-based camera model identification plays an important role in the toolbox for image source identification. It enables the forensic investigator to discover the probable source model employed to acquire an image under investigation. However, little is known about the performance on large sets of cameras that include multiple devices of the same model. Following the process of a forensic investigation, this paper tackles important questions for the application of feature-based camera model identification in real world scenarios. More than 9,000 images were acquired under controlled conditions using 44 digital cameras of 12 different models. This forms the basis for an in-depth analysis of a) intra-camera model similarity, b) the number of required devices and images for training the identification method, and c) the influence of camera settings. All experiments in this paper suggest: feature-based camera model identification works in practice and provides reliable results even if only one device for each camera model under investigation is available to the forensic investigator.
Thomas Gloe, Karsten Borowka, Antje Winkler
Backmatter
Metadata
Title
Information Hiding
Edited by
Stefan Katzenbeisser
Ahmad-Reza Sadeghi
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-04431-1
Print ISBN
978-3-642-04430-4
DOI
https://doi.org/10.1007/978-3-642-04431-1