
2011 | Book

Digital Watermarking

9th International Workshop, IWDW 2010, Seoul, Korea, October 1-3, 2010, Revised Selected Papers

Editors: Hyoung-Joong Kim, Yun Qing Shi, Mauro Barni

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 9th International Workshop on Digital Watermarking, IWDW 2010, held in Seoul, Korea, in October 2010. The 26 revised full papers presented were carefully reviewed and selected from 48 submissions. The papers are organized in topical sections on forensics, visual cryptography, robust watermarking, steganography, fingerprinting, and steganalysis.

Table of Contents

Frontmatter
Passive Detection of Paint-Doctored JPEG Images
Abstract
Image painting is an image doctoring method used to remove particular objects. In this paper, a novel passive detection method for paint-doctored JPEG images is proposed for the case where the doctored image is saved in an uncompressed format or in the JPEG compressed format. We detect the doctored region by averaging the absolute difference images between the doctored image and re-saved JPEG-compressed versions of it at different quality factors. The proposed method has several advantages: first, it can detect the doctored region accurately even if the doctored region is small; second, it can detect multiple doctored regions in the same image; third, it detects the doctored region automatically and needs no manual operation; finally, the computation is simple. Experimental results show that the proposed method can detect paint-doctored regions efficiently and accurately.
Yu Qian Zhao, Frank Y. Shih, Yun Q. Shi
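A minimal sketch of the resave-and-difference idea summarized in the abstract (not the authors' exact pipeline): the suspect image is re-saved as JPEG at several quality factors and the per-pixel absolute differences are averaged; doctored regions tend to stand out in the averaged map. The file name, grayscale conversion, and quality-factor range are illustrative assumptions.

```python
import numpy as np
from io import BytesIO
from PIL import Image

def average_difference_map(path, qualities=range(50, 96, 5)):
    """Average of absolute differences between the suspect image and its
    JPEG re-saves at several quality factors (illustrative sketch)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    diff_sum = np.zeros_like(img)
    for q in qualities:
        buf = BytesIO()
        Image.fromarray(img.astype(np.uint8)).save(buf, format="JPEG", quality=q)
        resaved = np.asarray(Image.open(buf).convert("L"), dtype=np.float64)
        diff_sum += np.abs(img - resaved)
    return diff_sum / len(list(qualities))

# Usage (hypothetical file): regions whose averaged difference deviates
# strongly from the background are candidate paint-doctored areas.
# dmap = average_difference_map("suspect.jpg")
```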
Detecting Digital Image Splicing in Chroma Spaces
Abstract
Detecting splicing traces in the color space where the tampering was performed is usually difficult. However, splicing that is hard to detect in one color space may be much easier to detect in another. In this paper, an efficient approach for passive color image splicing detection is proposed. Chroma spaces are introduced in our work, in contrast to the commonly used RGB and luminance spaces. Four gray-level run-length run-number (RLRN) vectors with different directions, extracted from de-correlated chroma channels, are employed as distinguishing features for image splicing detection. A support vector machine (SVM) is used as a classifier to demonstrate the performance of the proposed feature extraction method. Experimental results show that RLRN features extracted from chroma channels provide much better performance than those extracted from R, G, B and luminance channels.
Xudong Zhao, Jianhua Li, Shenghong Li, Shilin Wang
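A hedged sketch of one way to compute a run-length run-number vector along the horizontal direction from a chroma channel. The paper uses four directions, de-correlated Cb/Cr channels, and an SVM; the quantization level, run-length cap, and single-direction scan here are simplifying assumptions.

```python
import numpy as np

def rlrn_vector(channel, levels=8, max_run=20):
    """Run-length run-number vector along the horizontal direction:
    element r counts runs of length r+1, summed over all gray levels
    (simplified, single-direction illustration)."""
    q = (channel.astype(np.float64) / 256 * levels).astype(np.int32)  # coarse quantization
    hist = np.zeros(max_run, dtype=np.float64)
    for row in q:
        run = 1
        for a, b in zip(row[:-1], row[1:]):
            if a == b:
                run += 1
            else:
                hist[min(run, max_run) - 1] += 1
                run = 1
        hist[min(run, max_run) - 1] += 1
    return hist / hist.sum()

# The four directional RLRN vectors would then be concatenated and fed to
# an SVM (e.g. sklearn.svm.SVC) trained on authentic vs. spliced images.
```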
Discriminating Computer Graphics Images and Natural Images Using Hidden Markov Tree Model
Abstract
People can make highly photorealistic images using computer graphics rendering technology, and it is difficult for the human eye to distinguish these images from real photographs. If an image is photorealistic graphics, it is highly possible that its content was made up by a human, so its reliability is low. This research field belongs to passive-blind image authentication, and identifying computer graphics images is also an important problem in image classification. In this paper, we propose using a hidden Markov tree (HMT) model to classify natural images and computer graphics images. A set of features is derived from the HMT model parameters, and its effectiveness is verified by experiment. The average accuracy is up to 84.6%.
Feng Pan, Jiwu Huang
A New Scrambling Evaluation Scheme Based on Spatial Distribution Entropy and Centroid Difference of Bit-Plane
Abstract
Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of our society. Nowadays, some image scrambling schemes are used as part of a watermarking algorithm to enhance security. Therefore, how to select an image scrambling scheme, and what kind of scrambling scheme is suitable for watermarking, are key problems. An evaluation method for image scrambling schemes can serve as a test tool that reveals the properties or flaws of a scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and centroid difference of bit-planes is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results (Figs. 6 and 7) show that, for a general gray-scale image, the evaluation degree of the corresponding cipher image computed from the first 4 significant bit-planes is nearly the same as that computed from all 8 bit-planes. Hence, instead of all 8 bit-planes of a gray-scale image, it is sufficient to use only the first 4 significant bit-planes to determine the scrambling degree. This roughly 50% reduction in computational cost makes our scheme efficient.
Liang Zhao, Avishek Adhikari, Kouichi Sakurai
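A hedged sketch of the bit-plane step described above: extract the four most significant bit-planes of an 8-bit image and compute a simplified stand-in for the spatial distribution entropy (the paper's exact entropy and centroid-difference definitions are not reproduced; the block grid size is an assumption, and image dimensions are assumed divisible by it).

```python
import numpy as np

def bit_planes(gray, planes=4):
    """Return the `planes` most significant bit-planes of an 8-bit image."""
    return [(gray >> (7 - k)) & 1 for k in range(planes)]

def spatial_distribution_entropy(plane, grid=8):
    """Simplified stand-in for the paper's spatial distribution entropy:
    distribute the plane's 1-bits over a grid of blocks and measure the
    entropy of that distribution (uniform spread -> high entropy).
    Assumes plane dimensions are divisible by `grid`."""
    h, w = plane.shape
    counts = plane.reshape(grid, h // grid, grid, w // grid).sum(axis=(1, 3)).ravel()
    p = counts / max(counts.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Per the reported experiments, evaluating only the first 4 significant
# bit-planes gives nearly the same scrambling degree as using all 8,
# at roughly half the computational cost.
```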
Cryptanalysis on an Image Scrambling Encryption Scheme Based on Pixel Bit
Abstract
Recently, an image scrambling encryption algorithm that uses a one-dimensional chaotic system to shuffle pixel bits was proposed in [G.-D. Ye, Pattern Recognition Lett. 31 (2010) 347-354]. Through this scrambling algorithm, the pixel locations and values can be encrypted at the same time. The scheme can be regarded as a typical binary image scrambling encryption operating on a bit-plane of size M × 8N. In [Li C.Q., Lo K.T., http://arxiv.org/PS_cache/arxiv/pdf/0912/0912.1918v2.pdf], Li et al. proposed an attack that uses more than ⌈log2(8MN − 1)⌉ known-plaintext images to recover the original plain image up to noise of size M × N. The same principle also applies to a chosen-plaintext attack, which can recover the exact plain image. In the current paper, a simple chosen-plaintext attack on the original scheme is presented. Using our attack, the encryption vectors TM and TN and the decryption vectors TM′ and TN′ can be recovered completely. Experimental simulations on two standard images of size 128 × 128 and 256 × 256 justify our analysis and show that the recovered images are identical to the corresponding original images. For these two images, the number of chosen-plaintext images required by our scheme is 9, whereas the attack proposed by Li et al. requires at least 17 and 19 chosen-plaintext images, respectively. Moreover, the same method can also be used as a chosen-ciphertext attack, which reveals the decryption vectors TM′ and TN′ directly. Note that our attacks also succeed when the scheme is iterated, as remarked in the conclusions.
Liang Zhao, Avishek Adhikari, Di Xiao, Kouichi Sakurai
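A hedged illustration of the generic chosen-plaintext principle behind such attacks (not the paper's specific 9-image construction for TM and TN): if the scrambler acts as a fixed permutation of the 8MN bit positions, then ⌈log2(8MN)⌉ chosen plaintexts, each carrying one bit of every position's index, expose the whole permutation. All names and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128 * 128 * 8                    # number of bit positions (M x 8N for a 128x128 image)
perm = rng.permutation(n)            # stand-in for the secret scrambling

def encrypt_bits(bits):              # the scrambler, viewed as a bit permutation
    return bits[perm]

# Chosen plaintexts: image k carries bit k of each position's index.
k_imgs = int(np.ceil(np.log2(n)))    # 17 images for a 128x128 cover
plain = np.array([(np.arange(n) >> k) & 1 for k in range(k_imgs)], dtype=np.uint8)
cipher = np.array([encrypt_bits(p) for p in plain])

# Each ciphertext position now spells out the index it came from.
recovered = sum(cipher[k].astype(np.int64) << k for k in range(k_imgs))
assert np.array_equal(recovered, perm)   # the secret permutation is fully exposed
```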
Plane Transform Visual Cryptography
Abstract
Plane transformation visual cryptography takes a unique approach to some of the shortcomings of current visual cryptography techniques. Typically, the orientation and placement of the encrypted shares is critical when attempting to recover the secret, and many schemes are highly dependent on this stacking order. The scheme presented in this paper loosens this restriction: the number of acceptable alignment points is increased by performing a simple plane transform on one of the shares, so that the same secret is recovered whenever the shares are correctly aligned. The technique has also been extended to encompass multiple secrets, each of which can be recovered depending on the type of transformation performed on the shares.
Jonathan Weir, WeiQi Yan
A Statistical Model for Quantized AC Block DCT Coefficients in JPEG Compression and its Application to Detecting Potential Compression History in Bitmap Images
Abstract
We first develop a probability mass function (PMF) for quantized block discrete cosine transform (DCT) coefficients in JPEG compression using a statistical analysis of quantization, with a generalized Gaussian model taken as the PDF of the non-quantized block DCT coefficients. We subsequently propose a novel method to detect potential JPEG compression history in bitmap images using the developed PMF. We show that this method outperforms a classical approach to compression-history detection in terms of effectiveness, and that it detects compression history with both Independent JPEG Group (IJG) and custom quantization tables.
Gopal Narayanan, Yun Qing Shi
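A hedged sketch of how a PMF for quantized coefficients follows from a generalized Gaussian model: integrate the continuous density over each quantization bin. scipy's gennorm is used as the GG model; the shape, scale, and quantization step values are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import gennorm

def quantized_dct_pmf(beta, scale, q, kmax=50):
    """PMF of a quantized AC block-DCT coefficient under a zero-mean
    generalized Gaussian model: P(k) = F((k + 1/2) q) - F((k - 1/2) q),
    where F is the GG CDF and q the quantization step."""
    k = np.arange(-kmax, kmax + 1)
    upper = gennorm.cdf((k + 0.5) * q, beta, scale=scale)
    lower = gennorm.cdf((k - 0.5) * q, beta, scale=scale)
    return k, upper - lower

# A bitmap whose block-DCT histograms match such a comb-shaped PMF for some
# quantization table is a candidate for having a JPEG compression history.
k, pmf = quantized_dct_pmf(beta=0.7, scale=8.0, q=10)
```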
A Smart Phone Image Database for Single Image Recapture Detection
Abstract
Image recapture detection (IRD) aims to distinguish real-scene images from recaptured ones. With reliable recaptured-image detection, a single-image countermeasure against rebroadcast attacks on face authentication systems becomes feasible; general object recognition can differentiate objects on a poster from the real ones, making robot vision more intelligent; and composite images can be detected even when recapturing is used to cover up compositing clues. As more and more methods are proposed for IRD, an open database is indispensable to provide a common platform for comparing the performance of different methods and to expedite further research and collaboration in the field.

This paper describes a recaptured image database captured by smart phone cameras. The cameras of smart phones represent the middle- to low-end consumer camera market. The database includes real-scene images and the corresponding recaptured ones; it is intended both for evaluating the performance of image recapture detection classifiers and for providing a reliable data source for modeling the physical process that produces recaptured images. There are three main contributions in this work. Firstly, we construct a challenging database of recaptured images, which is the only publicly open database to date. Secondly, the database is constructed with smart phone cameras, which will promote research on algorithms suitable for consumer electronic applications. Thirdly, the real-scene images and the recaptured images are paired by content, which makes modeling of the recapturing process possible.
Xinting Gao, Bo Qiu, JingJing Shen, Tian-Tsong Ng, Yun Qing Shi
Detection of Tampering Inconsistencies on Mobile Photos
Abstract
The fast proliferation of mobile cameras and the deteriorating trust in digital images have created a need to determine the integrity of photos captured by mobile devices. As tampering often creates some inconsistencies, we propose in this paper a novel framework to statistically detect image tampering inconsistencies using accurately detected demosaicing-weights features. By first cropping four non-overlapping blocks, each from one of the four quadrants of the mobile photo, we extract a set of demosaicing-weights features from each block based on a partial-derivative correlation model. Through regularizing the eigenspectrum of the within-photo covariance matrix and performing an eigenfeature transformation, we further derive a compact set of eigen demosaicing-weights features, which are sensitive to image signal mixing from different photo sources. A metric is then proposed to quantify the inconsistency based on the eigen weights features among the blocks cropped from different regions of the mobile photo. Through comparison, we show that our eigen weights features perform better than eigen features extracted from several other conventional sets of statistical forensics features in detecting the presence of tampering. Experimentally, our method shows good confidence in tampering detection, especially when one of the four cropped blocks comes from a different camera model or brand with a different demosaicing process.
Hong Cao, Alex C. Kot
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
Abstract
With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e. low-, medium- and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results prove the effectiveness of the proposed method.
Wei Wang, Jing Dong, Tieniu Tan
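A simplified, hedged sketch of the core observation above: re-compress the suspect image with JPEG and measure the blockwise high-frequency energy of the resulting residual. The paper separates frequency bands with PCA; here a plain Laplacian high-pass filter stands in, and the quality factor, block size, and file name are assumptions.

```python
import numpy as np
from io import BytesIO
from PIL import Image
from scipy.ndimage import laplace, uniform_filter

def hf_noise_map(path, quality=90, block=16):
    """Blockwise high-frequency energy of the JPEG re-compression residual.
    Tampered (never-JPEG) regions tend to show stronger high-frequency
    quantization noise than the untouched, formerly-JPEG background."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    buf = BytesIO()
    Image.fromarray(img.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    residual = img - np.asarray(Image.open(buf).convert("L"), dtype=np.float64)
    hf = laplace(residual)                       # crude high-pass, in place of PCA
    return uniform_filter(hf ** 2, size=block)   # local high-frequency energy

# A simple threshold (plus morphological post-processing) on this map gives
# a rough localization of the tampered region(s).
```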
Robust Audio Watermarking by Using Low-Frequency Histogram
Abstract
In continuation of earlier work, where the problem of time-scale modification (TSM) was studied [1] by modifying the shape of the audio time-domain histogram, here we consider the additional requirement of resisting additive noise-like operations, such as Gaussian noise, lossy compression and low-pass filtering. In other words, we study the problem of making the watermark robust against both TSM and additive noise. To this end, we extract the histogram from a Gaussian-filtered low-frequency component of the audio for watermarking. The watermark is inserted by shaping the histogram: two consecutive bins are used as a group to hide one bit by reassigning their populations. The watermarked signals are perceptually similar to the original. Compared with the previous time-domain watermarking scheme [1], the proposed method is more robust against additive noise, MP3 compression, low-pass filtering, etc.
Shijun Xiang
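A hedged sketch of the bin-pair embedding rule described above: one bit is hidden in a pair of consecutive histogram bins by moving samples between them until their populations satisfy the desired inequality. The Gaussian low-frequency filtering and the paper's exact reassignment rule are omitted; the bin boundaries and population ratio are illustrative assumptions.

```python
import numpy as np

def embed_bit_in_bin_pair(x, lo, width, bit, ratio=1.1):
    """Embed one bit into the pair of consecutive histogram bins
    [lo, lo+width) and [lo+width, lo+2*width): bit 1 -> first bin more
    populated, bit 0 -> second bin more populated (simplified sketch)."""
    x = x.copy()
    in_a = (x >= lo) & (x < lo + width)
    in_b = (x >= lo + width) & (x < lo + 2 * width)
    na, nb = int(in_a.sum()), int(in_b.sum())
    want_a_bigger = (bit == 1)
    while (na < ratio * nb) if want_a_bigger else (nb < ratio * na):
        if want_a_bigger:                      # move one sample from bin B to bin A
            i = np.flatnonzero(in_b)[0]
            x[i] -= width; in_b[i] = False; nb -= 1; na += 1
        else:                                  # move one sample from bin A to bin B
            i = np.flatnonzero(in_a)[0]
            x[i] += width; in_a[i] = False; na -= 1; nb += 1
    return x

# Detection re-computes the two bin counts and compares their ratio; because
# the histogram is taken from a low-frequency component, the relation is
# intended to survive TSM and mild additive distortions.
```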
Robust Blind Watermarking Scheme Using Wave Atoms
Abstract
In this paper, a robust blind watermarking scheme using Multiple Descriptions (MD) is proposed. The watermark is embedded in the Wave Atom Transform domain by modifying one of the scale bands. The detection and extraction procedures do not need the original host image. We tested the proposed algorithm against nine types of attacks, such as JPEG compression, Gaussian noise addition, median filtering, and salt-and-pepper noise, implemented in MATLAB. The experimental results demonstrate that the proposed algorithm is highly robust against various imaging attacks.
H. Y. Leung, L. M. Cheng
Robust Watermarking of H.264/SVC-Encoded Video: Quality and Resolution Scalability
Abstract
In this paper we investigate robust watermarking integrated with H.264/SVC video coding and address coarse-grain quality and spatial resolution scalability features according to Annex G of the H.264 standard. We show that watermark embedding in the base layer of the video is insufficient to protect the decoded video content when enhancement layers are employed. The problem is mitigated by propagating the base-layer watermark signal when encoding the enhancement layer. In the case of spatial resolution scalability, the base-layer watermark signal is upsampled to match the resolution of the enhancement-layer data. We demonstrate blind watermark detection in the full- and low-resolution decoded video for the same adapted H.264/SVC bitstream and, surprisingly, can report bit-rate savings when extending the base-layer watermark to the enhancement layer.
Peter Meerwald, Andreas Uhl
Reversible Watermarking Using Prediction Error Histogram and Blocking
Abstract
This paper presents a novel reversible watermarking method in the spatial domain using a block-based prediction-error histogram. The algorithm employs prediction errors to embed data into the cover image. A Euclidean-distance-based pixel predictor uses four neighboring pixels to predict the current pixel, and data are hidden in pixels with a selected prediction error via histogram shifting. Unlike existing histogram-shifting schemes, where only one pair of peak points is used for the whole image histogram, we divide the image into non-overlapping blocks and select a pair of peak points for each block to take full advantage of the embedding capacity. The set of peak-point values is compressed by arithmetic coding as part of the overhead information. Compared with existing spatial-domain reversible watermarking methods, the proposed method achieves a much better PSNR value at high embedding rates. Experimental results prove the effectiveness of the proposed watermarking scheme.
Bo Ou, Yao Zhao, Rongrong Ni
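A minimal, hedged sketch of prediction-error histogram shifting (whole-image rather than block-based): a plain four-neighbor mean predictor stands in for the paper's Euclidean-distance-weighted predictor, and overflow handling, per-block peak selection, and overhead coding are omitted. Embedding only on one checkerboard phase keeps the predictor context unmodified, which is what makes the sketch invertible.

```python
import numpy as np

def embed_pe_histogram_shift(img, bits, peak=0):
    """Reversible-embedding sketch: on one checkerboard phase only, predict
    each pixel from its four (untouched) neighbors, shift prediction errors
    above `peak` up by one, and hide one bit in pixels whose error equals
    `peak` (illustrative simplification of the block-based scheme)."""
    out = img.astype(np.int32).copy()
    it = iter(bits)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            if (i + j) % 2:            # leave the other phase untouched so the
                continue               # decoder sees the same predictor context
            pred = (int(img[i-1, j]) + int(img[i+1, j]) +
                    int(img[i, j-1]) + int(img[i, j+1])) // 4
            e = int(img[i, j]) - pred
            if e > peak:
                out[i, j] += 1                  # shift to make room
            elif e == peak:
                out[i, j] += next(it, 0)        # embed one payload bit
    return out
```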
An Efficient Pattern Substitution Watermarking Method for Binary Images
Abstract
In this paper, a method to decrease the size of the location map for the non-overlapping pattern substitution method is presented. The original pattern substitution (PS) method was proposed by Ho et al. [1] as a reversible watermarking scheme for binary images; it uses a pair of two patterns to embed data. Unfortunately, its location map is huge. We propose an efficient mechanism which considerably decreases the size of the location map for the non-overlapping version of the PS method. Experimental results show that our method works well in decreasing the size of the location map. Comparison with the original PS method demonstrates that the proposed method achieves more embedding capacity and a higher PSNR value due to the reduced size of the location map.
Keming Dong, Hyoung-Joong Kim
New JPEG Steganographic Scheme with High Security Performance
Abstract
In this paper, we present a new JPEG steganographic scheme. Three measures are taken in our method: 1) the secret message bits are not spread over the quantized block discrete cosine transform (BDCT) coefficients of all frequencies; only those coefficients (including those of value 0) belonging to relatively low frequencies are selected for data embedding; 2) for any coefficient selected for embedding, the rounding error from JPEG quantization is used directly to guide the data embedding; 3) matrix embedding is applied. The experiments have demonstrated that these three measures help to achieve small distortion in the spatial domain, preserve the histogram of quantized BDCT coefficients, and enhance the embedding efficiency of matrix embedding. Consequently, the proposed steganographic scheme achieves high security performance and effectively resists today's most powerful JPEG steganalyzers.
Fangjun Huang, Yun Qing Shi, Jiwu Huang
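A hedged illustration of the matrix-embedding component (measure 3) using a binary (2^k − 1, k) Hamming code: k message bits are carried by a block of 2^k − 1 coefficient LSBs while changing at most one of them. The coefficient selection and rounding-error guidance of measures 1 and 2 are not shown; the example block and message are arbitrary.

```python
import numpy as np

def matrix_embed(lsbs, msg_bits):
    """Hamming-code matrix embedding: hide k bits in n = 2**k - 1 LSBs by
    modifying at most one of them (illustrative of the matrix-embedding step)."""
    k = len(msg_bits)
    n = 2 ** k - 1
    H = np.array([[(c >> r) & 1 for c in range(1, n + 1)] for r in range(k)])
    syndrome = (H @ lsbs) % 2
    diff = np.array(msg_bits) ^ syndrome
    out = lsbs.copy()
    pos = int("".join(map(str, diff[::-1])), 2)     # 1-based index of column to flip
    if pos:
        out[pos - 1] ^= 1
    return out

lsbs = np.array([0, 1, 1, 0, 0, 1, 0])              # LSBs of 7 selected coefficients
stego = matrix_embed(lsbs, [1, 0, 1])               # embeds 3 bits with <= 1 change
H = np.array([[(c >> r) & 1 for c in range(1, 8)] for r in range(3)])
assert list((H @ stego) % 2) == [1, 0, 1]           # extractor recovers the message
```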
Ternary Data Hiding Technique for JPEG Steganography
Abstract
In this paper we present a JPEG steganography method based on hiding data in a stream of ternary coefficients. In the proposed method, each nonzero DCT coefficient is converted to a corresponding ternary coefficient. A block of 3^m − 1 ternary coefficients is used to hide m ternary message digits by modifying one or two coefficients. Due to the higher information density of ternary coefficients, the proposed method has many possible ways of hiding the required data, and this large choice makes it possible to pick the coefficients with the lowest distortion impact. As a result, the proposed method has better data-hiding performance than existing steganographic methods based on hiding data in a stream of binary coefficients, such as matrix encoding (F5) and modified matrix encoding (MME). The proposed method was tested with the steganalysis method proposed by T. Pevný and J. Fridrich. The experimental results show that the proposed method is less detectable than MME.
Vasily Sachnev, Hyoung-Joong Kim
Interleaving Embedding Scheme for ECC-Based Multimedia Fingerprinting
Abstract
In this paper, we focus on improving the collusion resistance of error correcting code (ECC) based multimedia fingerprinting. Although the permuted subsegment embedding (PSE) scheme provides better resistance to interleaving collusion than the conventional scheme, our study shows that this resistance is still weaker than the resistance to averaging collusion at moderate-to-high watermark-to-noise ratio (WNR). We therefore propose an interleaving embedding (ILE) scheme to strengthen interleaving-collusion resistance, in which the user's original fingerprint is block-interleaved before being embedded into the host signal. Simulation results show that the ILE scheme can resist more colluders under interleaving collusion than the PSE scheme at moderate-to-high WNR. Theoretical analysis and experimental results demonstrate that the performance of ECC-based fingerprinting with the ILE scheme under interleaving collusion is comparable to that under averaging collusion.
Xuping Zheng, Aixin Zhang, Shenghong Li, Bo Jin, Junhua Tang
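A hedged sketch of the block-interleaving idea used by ILE, written as a plain write-rows/read-columns interleaver so that a run of positions hit by interleaving collusion is spread across many codeword segments. The block dimensions and the integer stand-in for the fingerprint are illustrative.

```python
import numpy as np

def block_interleave(fingerprint, rows):
    """Write the ECC-coded fingerprint row-wise into a matrix and read it
    column-wise, spreading any burst over many codeword segments."""
    cols = len(fingerprint) // rows
    return fingerprint[:rows * cols].reshape(rows, cols).T.ravel()

def block_deinterleave(seq, rows):
    cols = len(seq) // rows
    return seq.reshape(cols, rows).T.ravel()

fp = np.arange(24)                       # stand-in for an ECC-coded fingerprint
assert np.array_equal(block_deinterleave(block_interleave(fp, 4), 4), fp)
```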
A Novel Collusion Attack Strategy for Digital Fingerprinting
Abstract
Digital fingerprinting is a technology which embeds unique, traceable marks into multimedia content in order to identify users who use the content for unintended purposes. A cost-efficient attack against digital fingerprinting, known as a collusion attack, involves a group of users who combine their fingerprinted content to attenuate or remove the fingerprints. In this paper, we analyze and simulate the effect of Gaussian noise of different energies, added to the noise-free forgery, on both the detection performance of a correlation-based detector and the perceptual quality of the attacked content. Based on this analysis and the principle of informed watermark embedding, we propose a novel collusion attack strategy: the self-adaptive noise optimization (SANO) collusion attack. The experimental results, under the assumption that orthogonal fingerprints are used, show that the proposed collusion attack performs more effectively than most existing collusion attacks. Fewer than three pieces of fingerprinted content are sufficient to defeat orthogonal fingerprints that accommodate many thousands of users, while high fidelity of the attacked content is retained after the proposed attack.
Hefei Ling, Hui Feng, Fuhao Zou, Weiqi Yan, Zhengding Lu
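A hedged sketch of the collusion setting analyzed above: a plain averaging collusion with added Gaussian noise against orthogonal fingerprints and a correlation detector. The SANO adaptation of the noise energy is not reproduced; the noise level, signal sizes, and colluder count below are fixed illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n = 1000, 10_000
host = rng.normal(0, 10, n)
fingerprints = rng.normal(0, 1, (n_users, n))        # (near-)orthogonal fingerprints
copies = host + fingerprints                          # one fingerprinted copy per user

colluders = rng.choice(n_users, size=3, replace=False)
forged = copies[colluders].mean(axis=0)               # averaging collusion
forged += rng.normal(0, 1.0, n)                       # added Gaussian noise

# Correlation-based detector: accuse the user with the highest correlation.
scores = fingerprints @ (forged - host)
accused = int(np.argmax(scores))
print(accused in set(colluders.tolist()), float(scores[accused]))
```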
Privacy Preserving Facial and Fingerprint Multi-biometric Authentication
Abstract
The incidence of identity theft can be mitigated by the adoption of secure authentication methods. Biohashing and its variants, which combine secret keys with biometrics, are promising methods for secure authentication; however, their shortcoming is degraded performance under the assumption that the secret keys are compromised. In this paper, we extend the concept of Biohashing to multi-biometrics, using facial and fingerprint traits. We chose these traits because they are widely used, yet little research attention has been given to designing privacy-preserving multi-biometric systems based on them. Instead of using a single modality (face or fingerprint), we present a framework that uses both. The improved performance of the proposed method, using face and fingerprint together rather than either trait in isolation, is evaluated using two chimerical bimodal databases formed from publicly available facial and fingerprint databases.
Esla Timothy Anzaku, Hosik Sohn, Yong Man Ro
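A minimal, hedged Biohashing-style sketch: project a biometric feature vector onto a key-seeded random orthonormal basis and threshold at zero to obtain a bit string; the paper's specific multi-biometric framework is not reproduced, and the fusion-by-concatenation usage note, key handling, and dimensions are assumptions.

```python
import numpy as np

def biohash(features, key, n_bits=64):
    """Biohashing-style template: key-seeded random orthonormal projection of
    the feature vector, thresholded at zero into a bit string.
    Assumes len(features) >= n_bits."""
    rng = np.random.default_rng(key)
    R = rng.normal(size=(len(features), n_bits))
    Q, _ = np.linalg.qr(R)                        # orthonormalize the projection basis
    return (features @ Q > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Fusion sketch: concatenate the facial and fingerprint Biohash codes and
# match with a Hamming-distance threshold (hypothetical vectors and key).
# code = np.concatenate([biohash(face_vec, key), biohash(finger_vec, key)])
```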
Blind Linguistic Steganalysis against Translation Based Steganography
Abstract
Translation based steganography (TBS) is a relatively new and secure kind of linguistic steganography. It takes advantage of the "noise" created by automatic translation of natural language text to encode the secret information. To date, there has been little research on steganalysis against this kind of linguistic steganography. In this paper, a blind steganalytic method named natural frequency zoned word distribution analysis (NFZ-WDA) is presented. The method improves on a previously proposed linguistic steganalysis method based on word distribution, which targeted the detection of linguistic steganography schemes such as NICETEXT and TEXTO. The new method aims to detect the application of TBS and uses no TBS-specific information; its only resource is a word frequency dictionary obtained from a large corpus, a so-called natural frequency dictionary, so it is totally blind. To verify the effectiveness of NFZ-WDA, two experiments with two-class and multi-class SVM classifiers, respectively, are carried out. The experimental results show that the steganalytic method is quite promising.
Zhili Chen, Liusheng Huang, Peng Meng, Wei Yang, Haibo Miao
Blind Quantitative Steganalysis Based on Feature Fusion and Gradient Boosting
Abstract
Blind quantitative steganalysis aims to reveal details about hidden information without any prior knowledge of the steganography used. Machine learning can be used to estimate properties of the hidden message for blind quantitative steganalysis. We propose a quantitative steganalysis method based on the fusion of different steganalysis features, with an estimator that relies on gradient boosting. Experimental results show that the proposed method performs well for quantitative steganalysis.
Qingxiao Guan, Jing Dong, Tieniu Tan
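A hedged sketch of the pipeline described above: feature fusion as simple concatenation of two steganalysis feature sets, with scikit-learn's GradientBoostingRegressor estimating the embedding rate. The feature sets, data, and hyperparameters are illustrative placeholders, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# X_a, X_b: two different steganalysis feature sets extracted from the same
# training images (random stand-ins here); y: known embedding rates.
rng = np.random.default_rng(2)
X_a, X_b = rng.normal(size=(500, 100)), rng.normal(size=(500, 50))
y = rng.uniform(0, 1, 500)

X = np.hstack([X_a, X_b])                 # feature fusion by concatenation
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X[:400], y[:400])
estimated_rates = model.predict(X[400:])  # estimated payload for unseen images
```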
IR Hiding: A Method to Prevent Video Re-shooting by Exploiting Differences between Human Perceptions and Recording Device Characteristics
Abstract
A method is described to prevent images and videos displayed on screens from being re-shot by digital cameras and camcorders. Conventional methods using digital watermarking for re-shooting prevention embed content IDs into images and videos, and they help to identify the place and time where the actual content was shot. However, these methods do not actually prevent digital content from being re-shot by camcorders. We developed a countermeasure that stops re-shooting by exploiting the differences between the sensory characteristics of humans and recording devices. The countermeasure requires no additional functions on user-side devices. It uses infrared light (IR) to corrupt the content recorded by CCD or CMOS sensors, so that re-shot content becomes unusable. To validate the method, we developed a prototype system and installed it on a 100-inch cinema screen. Experimental evaluations showed that the method effectively prevents re-shooting.
Takayuki Yamada, Seiichi Gohshi, Isao Echizen
On Limits of Embedding in 3D Images Based on 2D Watson’s Model
Abstract
We extend the Watson image quality metric to 3D images through the concept of integral imaging. In Watson's model, perceptual thresholds for changes to the DCT coefficients of a 2D image are given for information hiding; these thresholds are estimated so that the resulting distortion in the 2D image remains undetectable by the human eye. In this paper, the same perceptual thresholds are estimated for a 3D scene in the integral imaging method. These thresholds are obtained from Watson's model using the relation between the 2D elemental images and the resulting 3D image. The proposed model is evaluated through subjective tests in a typical image steganography scheme.
Zahra Kavehvash, Shahrokh Ghaemmaghami
A Reversible Acoustic Steganography for Integrity Verification
Abstract
Advanced signal-processing technology has provided alternative countermeasures against malicious attacks on and tampering with digital multimedia, which are serious issues. We propose a reversible acoustic steganography scheme to verify the integrity of acoustic data of probative importance and to protect it from illegal use. A hash function is used as a feature value that is embedded into the original acoustic data as a checksum of the data's originality. We transform the original signal with an integer discrete cosine transform (intDCT), which has low computational complexity. Embedding space in the DCT domain is reserved for the feature values and extra payload data by amplitude expansion in the high-frequency spectrum of the cover data. Countermeasures against overflow/underflow are taken with adaptive gain optimization. Experimental evaluation has shown that the distortion caused by embedding is kept below a perceptible level. A lossless hiding algorithm ensures the scheme is reversible.
Xuping Huang, Akira Nishimura, Isao Echizen
Backmatter
Metadata
Title
Digital Watermarking
Editors
Hyoung-Joong Kim
Yun Qing Shi
Mauro Barni
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-18405-5
Print ISBN
978-3-642-18404-8
DOI
https://doi.org/10.1007/978-3-642-18405-5
