2014 | Book

Digital-Forensics and Watermarking

12th International Workshop, IWDW 2013, Auckland, New Zealand, October 1-4, 2013. Revised Selected Papers

Edited by: Yun Qing Shi, Hyoung-Joong Kim, Fernando Pérez-González

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-proceedings of the 12th International Workshop on Digital-Forensics and Watermarking, IWDW 2013, held in Auckland, New Zealand, during October 2013. The 24 full and 13 poster papers, presented together with 2 abstracts, were carefully reviewed and selected from 55 submissions. The papers are organized in topical sections on steganography and steganalysis; visual cryptography; reversible data hiding; forensics; watermarking; anonymizing and plate recognition.

Table of Contents

Frontmatter

Steganography and Steganalysis

Frontmatter
Bitspotting: Detecting Optimal Adaptive Steganography

We analyze a two-player zero-sum game between a steganographer, Alice, and a steganalyst, Eve. In this game, Alice wants to hide a secret message of length $k$ in a binary sequence, and Eve wants to detect whether a secret message is present. The individual positions of all binary sequences are independently distributed, but have different levels of predictability. Using knowledge of this distribution, Alice randomizes over all possible size-$k$ subsets of embedding positions. Eve uses an optimal (possibly randomized) decision rule that considers all positions, and incorporates knowledge of both the sequence distribution and Alice’s embedding strategy.

Our model extends prior work by removing restrictions on Eve’s detection power. The earlier work determined where Alice should hide the bits when Eve can only look in one position. Here, we expand Eve’s capacity to spot these bits by allowing her to consider all positions. We give defining formulas for each player’s best response strategy and minimax strategy; and we present additional structural constraints on the game’s equilibria. For the special case of length-two binary sequences, we compute explicit equilibria and provide numerical illustrations.

Benjamin Johnson, Pascal Schöttle, Aron Laszka, Jens Grossklags, Rainer Böhme
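The mixed-strategy equilibria the paper characterizes can be illustrated on the smallest possible case. The sketch below is not from the paper; it is the standard closed-form solution for a 2×2 zero-sum game (function name and interface are my own), the same kind of object the authors compute explicitly for length-two sequences:

```python
def solve_2x2_zero_sum(A):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with payoff
    matrix A for the row player (who maximizes). Closed form, assuming
    the game has no saddle point in pure strategies."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # row player's probability of row 0
    q = (d - b) / denom          # column player's probability of column 0
    v = (a * d - b * c) / denom  # value of the game
    return p, q, v
```

For matching pennies, both players randomize uniformly and the game value is zero, mirroring how Alice spreads embedding positions to leave Eve no exploitable bias.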
Improved Algorithm of Edge Adaptive Image Steganography Based on LSB Matching Revisited Algorithm

In edge adaptive image steganography based on the LSB matching revisited algorithm (EAMR for short in this paper), the secret message bits are embedded into those consecutive pixel pairs whose absolute difference of grey values is larger than or equal to a threshold $T$. Tan et al. [1] pointed out that since those adjacent pixel pairs can be located by potential attackers, the pulse distortion introduced in the histogram of absolute difference of pixel pairs (HADPP for short in this paper) can easily be discovered, and a targeted steganalyzer for revealing this pulse distortion is presented in [1]. In this paper, we propose an improved algorithm for EAMR, in which the adjacent pixel pairs for data hiding are selected in a new random way. Thus the attackers cannot accurately locate the pixel pairs selected for data hiding, and the abnormality that exists in HADPP can no longer be discovered. Experimental results demonstrate that our improved EAMR (I-EAMR) can efficiently defeat the targeted steganalyzer presented by Tan et al. [1]. Furthermore, it can still preserve the statistics of the carrier image well enough to resist today’s blind steganalyzers.

Fangjun Huang, Yane Zhong, Jiwu Huang
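The edge-adaptive embedding rule described above can be sketched in a few lines. This is a toy illustration, not the authors’ I-EAMR: it keeps the threshold test on pixel-pair differences and the ±1 LSB-matching change, but uses a plain seeded RNG in place of the paper’s random pair selection, and omits the readjustment needed so the extractor finds the same pairs:

```python
import random

def lsbm_embed(pixels, bits, T=16, seed=42):
    """Toy LSB-matching embedder: hides one bit per pixel, but only in
    consecutive pixel pairs whose grey-value difference is >= T."""
    rng = random.Random(seed)
    out = list(pixels)
    it = iter(bits)
    for i in range(0, len(out) - 1, 2):
        if abs(out[i] - out[i + 1]) < T:
            continue  # smooth pair: skipped by the edge-adaptive rule
        for j in (i, i + 1):
            b = next(it, None)
            if b is None:
                return out
            if out[j] % 2 != b:
                # LSB matching: +/-1 at random rather than flipping the LSB,
                # clamped at the grey-value boundaries
                if out[j] == 0:
                    out[j] = 1
                elif out[j] == 255:
                    out[j] = 254
                else:
                    out[j] += rng.choice((-1, 1))
    return out
```

Only the pairs (10, 60) and (200, 150) in `[10, 60, 100, 101, 200, 150]` carry data at `T=16`; the smooth pair (100, 101) is left untouched.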
Steganography Based on Adaptive Pixel-Value Differencing Scheme Revisited

The adaptive pixel-value differencing (APVD) based steganography proposed by Luo et al. is a state-of-the-art steganographic approach in the spatial domain. It can resist blind steganalysis but is vulnerable to the targeted attack proposed by Tan et al. In this paper, we introduce an improved version of APVD. Like APVD, we first divide an image into non-overlapping squares and then rotate each of them by a random degree of 0, 90, 180, or 270. For an embedding unit constructed from three consecutive pixels of the resulting image, we randomly select one pixel for data embedding. Furthermore, the local statistical features are well preserved by exploiting neighborhood information of pixels. Experimental results on a large image database reveal that our method significantly improves resistance against the APVD-specific steganalytic attack, and has impressive performance in resisting blind steganalysis compared with previous PVD-based steganographic methods.

Hong Zhang, Qingxiao Guan, Xianfeng Zhao
Non-uniform Quantization in Breaking HUGO

In breaking HUGO (Highly Undetectable Steganography), an advanced steganographic scheme recently developed for uncompressed images, research on steganalysis has made rapid progress recently: increasingly advanced statistical models, often utilizing high-dimensional features, have been adopted. One thing is common to all of these newly developed steganalytic schemes: uniform quantization has been applied to residual images in order to reduce the feature dimensionality. In this paper, non-uniform quantization is proposed, developed, and utilized to break HUGO. In constructing the non-uniform quantizers, a small portion of the available samples from both cover and stego images is utilized to provide the needed statistics. Using non-uniform quantization, we can achieve better steganalytic performance than with uniform quantization under an otherwise identical framework.

Licong Chen, Yun Q. Shi, Patchara Sutthiwan, Xinxin Niu
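One common way to build a data-driven non-uniform quantizer is to place bin edges at empirical quantiles of the residual samples, so every bin holds roughly the same number of samples. The sketch below is my own minimal illustration of that idea (the paper does not specify this exact construction):

```python
import bisect

def quantile_edges(samples, n_bins):
    """Non-uniform bin edges placed at empirical quantiles of the
    residual samples, so each bin is roughly equally populated."""
    s = sorted(samples)
    return [s[(k * len(s)) // n_bins] for k in range(1, n_bins)]

def quantize(x, edges):
    """Bin index of x under the (sorted) edge list."""
    return bisect.bisect_right(edges, x)
```

A uniform quantizer would instead use equally spaced edges regardless of where the residual mass actually concentrates, which is exactly the contrast the abstract draws.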
Steganalysis of Compressed Speech Based on Markov and Entropy

Compressed domain based steganography (CDBS) is a relatively new and secure kind of audio steganography. To date, there has been little research on steganalysis against this kind of audio steganography. In this paper, we introduce two methods to detect various CDBS schemes on ACELP speech: a Markov method and an entropy method. Both are based on the observation that the steganographic behavior has certain effects on the relationships among the pulses in the same track. Markov transition probabilities are utilized to evaluate the interrelationships between adjacent pulses, and entropy is employed to measure the “disorder” of combined pulse distributions. First, Markov transition probability, joint entropy, and conditional entropy features of the track pulses are extracted; a support vector machine (SVM) is then applied to the respective features to discover the existence of hidden data in compressed speech signals. Some well-known CDBS methods on ACELP-encoded speech are considered. Experimental results demonstrate the effectiveness of the two methods.

Haibo Miao, Liusheng Huang, Yao Shen, Xiaorong Lu, Zhili Chen
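The two feature families named above, transition probabilities and pair entropies, are both simple to compute from a symbol sequence. This is a generic sketch on an abstract pulse sequence, not the paper’s track-specific extraction (state count and sequence shape are assumptions):

```python
import math
from collections import Counter

def markov_features(pulses, n_states):
    """First-order Markov transition probabilities of a symbol sequence,
    flattened row-by-row into a feature vector."""
    pairs = Counter(zip(pulses, pulses[1:]))
    rows = Counter(pulses[:-1])
    return [pairs[(i, j)] / rows[i] if rows[i] else 0.0
            for i in range(n_states) for j in range(n_states)]

def joint_entropy(pulses):
    """Shannon entropy (bits) of the adjacent-pulse pair distribution."""
    pairs = Counter(zip(pulses, pulses[1:]))
    n = sum(pairs.values())
    return -sum(c / n * math.log2(c / n) for c in pairs.values())
```

In a steganalysis pipeline, vectors like these would be concatenated per track and fed to the SVM mentioned in the abstract.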

Visual Cryptography

Frontmatter
Improved Tagged Visual Cryptograms by Using Random Grids

Tagged visual cryptography (TVC) is a brand new type of visual cryptography (VC) in which additional tags are concealed in each generated share. By folding up a single share, the associated tag pattern is visually revealed. Such additional tag patterns greatly enrich the extra abilities of VC, such as an augmented message carried in a single share, a user-friendly interface to manage the shares, and/or evidence for verifying consistency among the shares cooperating in a decryption instance. However, the reported $(k,n)$ TVC proposed by Wang and Hsu still suffers from defects such as pixel expansion, the code book required in the encoding phase, and low image quality. In this work, a $(k,n)$ TVC adopting the concept of random grids (RG) is introduced. The proposed method solves the pixel expansion and code book problems. Further, better visual quality of both the recovered secret image and the reconstructed tag image is provided, according to theoretical analysis and the demonstrated experiments.

Duanhao Ou, Xiaotian Wu, Lu Dai, Wei Sun
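The random-grid idea underlying this scheme is easy to demonstrate in its basic (2, 2) form (Kafri-style; this is the textbook construction, not the paper’s tagged $(k,n)$ generalization). One share is fully random; the second copies it on white secret pixels and complements it on black ones, so stacking (pixel-wise OR) turns every black pixel fully black while white areas stay half-black on average:

```python
import random

def rg_encrypt(secret, seed=1):
    """(2, 2) random-grid VSS for a binary image (0 = white, 1 = black).
    Share 1 is random; share 2 agrees with it on white secret pixels and
    complements it on black ones."""
    rng = random.Random(seed)
    share1 = [[rng.randint(0, 1) for _ in row] for row in secret]
    share2 = [[s if p == 0 else 1 - s for p, s in zip(row, r1)]
              for row, r1 in zip(secret, share1)]
    return share1, share2

def rg_stack(share1, share2):
    """Stacking two transparencies = pixel-wise OR."""
    return [[a | b for a, b in zip(r1, r2)]
            for r1, r2 in zip(share1, share2)]
```

No codebook and no pixel expansion are involved, which is exactly the advantage the abstract claims over the Wang–Hsu TVC.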
Cheating Immune Block-Based Progressive Visual Cryptography

Recently, Hou et al. introduced a (2, $n$) block-based progressive visual cryptographic scheme (BPVCS), in which the image blocks can be gradually recovered step by step. In Hou et al.’s (2, $n$)-BPVCS, a secret image is subdivided into $n$ non-overlapping image blocks. When stacking any $t$ ($2 \le t \le n$) shadows, the image blocks associated with these $t$ participants will be recovered. Unfortunately, Hou et al.’s (2, $n$)-BPVCS suffers from a cheating problem: any two dishonest participants may collude to tamper with the image blocks shared with honest participants. Also, they can impersonate an honest participant to force other honest participants to reconstruct the wrong secret. In this paper, we solve the cheating problem and propose a cheating-immune (2, $n$)-BPVCS.

Ching-Nung Yang, Yi-Chin Lin, Chih-Cheng Wu
Visual Cryptography and Random Grids Schemes

Visual Cryptography (VC) and Random Grids (RG) are both visual secret sharing (VSS) techniques, which decode the secret by stacking authorized shares. It is claimed that RG schemes benefit more than VC schemes by removing the problems of pixel expansion, tailor-made codebook design, and aspect ratio change. However, we find that the encryption rules of an RG scheme (RGS) are actually the matrix sets of a probabilistic VC scheme (PVCS). The transformation from RGS to PVCS is proved and shown by means of theoretical analysis and the construction of some specific schemes. The relationship between codebook and computational complexity is analyzed for PVCS and RGS. Furthermore, the contrast of PVCS is no less than that of RGS under the same access structure, as shown by experimental results.

Zheng-xin Fu, Bin Yu
Secret Sharing in Images Based on Error-Diffused Block Truncation Coding and Error Diffusion

This paper presents a novel $(n,n)$-threshold secret sharing in images using error-diffused block truncation coding (EDBTC) and error diffusion. The proposed scheme is designed to share a secret binary image, such as a text image or natural image, into $n$ EDBTC-compressed images with meaningful contents. The compressed shadows generated by our proposed scheme have good visual quality and no pixel expansion, which helps reduce suspicion from invaders. Each value in the secret image is reconstructed using lightweight Boolean XOR operations, so the recovery process is simple and fast. In addition, EDBTC-compressed images rather than images in their original format are selected as shadows, which improves the efficiency of data transmission and storage. The experimental results demonstrate that the proposed scheme offers a highly secure and effective mechanism for secret image sharing.

Duanhao Ou, Xiaotian Wu, Lu Dai, Wei Sun
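The XOR recovery at the core of an $(n,n)$ scheme can be shown without the EDBTC machinery. This sketch shares a byte sequence rather than a compressed image (my own simplification): $n-1$ shares are random and the last one XORs everything back to the secret, so all $n$ shares are required:

```python
import random

def xor_share(secret, n, seed=7):
    """(n, n) XOR secret sharing over byte values: n-1 random shares
    plus one share chosen so the XOR of all n equals the secret."""
    rng = random.Random(seed)
    shares = [[rng.randrange(256) for _ in secret] for _ in range(n - 1)]
    last = list(secret)
    for sh in shares:
        last = [a ^ b for a, b in zip(last, sh)]
    shares.append(last)
    return shares

def xor_recover(shares):
    """XOR all shares together to rebuild the secret."""
    out = [0] * len(shares[0])
    for sh in shares:
        out = [a ^ b for a, b in zip(out, sh)]
    return out
```

Any subset of fewer than $n$ shares is statistically independent of the secret, which is why the compressed shadows can travel separately without raising suspicion.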

Reversible Data Hiding

Frontmatter
Reversible Data Hiding in Encrypted H.264/AVC Video Streams

Reversible data hiding in the encrypted domain is an emerging technology because of the privacy-preserving requirements from cloud data management. In this paper, a reversible data hiding scheme in encrypted H.264/AVC video streams is proposed. During H.264/AVC encoding, the intra-prediction mode (IPM), motion vector difference (MVD), and residue coefficients’ signs are encrypted using a standard stream cipher. Then, the data-hider, who does not know the original video content, may reversibly embed secret data into the encrypted H.264/AVC video based on histogram-shifting of residue coefficients. With an encrypted video containing hidden data, data extraction can be carried out either in encrypted or decrypted domain. In addition, real reversibility is realized, that is, data extraction and video recovery are free of any error. Experimental results demonstrate the feasibility and efficiency of the proposed scheme.

Dawen Xu, Rangding Wang, Yun Qing Shi
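Histogram shifting, the embedding primitive named above, is simple enough to sketch end to end. This is the classic grey-level formulation (Ni et al. style) on raw pixels, not the paper’s version on encrypted H.264/AVC residue coefficients; it assumes an empty histogram bin exists above the peak:

```python
from collections import Counter

def hs_embed(pixels, bits):
    """Histogram-shifting embed: values strictly between the peak bin and
    the next empty bin shift up by one to make room; each peak pixel then
    carries one bit (stay for 0, +1 for 1)."""
    hist = Counter(pixels)
    peak = max(hist, key=hist.get)
    zero = next(v for v in range(peak + 1, 256) if hist[v] == 0)
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)
        elif p == peak:
            out.append(p + next(it, 0))
        else:
            out.append(p)
    return out, peak, zero

def hs_extract(stego, peak, zero):
    """Inverse: read bits at peak/peak+1 and undo the shift exactly."""
    bits, orig = [], []
    for p in stego:
        if p == peak:
            bits.append(0); orig.append(p)
        elif p == peak + 1:
            bits.append(1); orig.append(peak)
        elif peak + 1 < p <= zero:
            orig.append(p - 1)
        else:
            orig.append(p)
    return bits, orig
```

Because every operation is invertible, extraction returns both the payload and the exact original values, the “real reversibility” the abstract emphasizes.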
Using RZL Coding to Enhance Histogram-Pair Based Image Reversible Data Hiding

An improvement of histogram-pair based image reversible data hiding using RZL (Reverse Zero-run Length) coding is proposed in this paper. Pre-processing that compresses the payload as much as possible is usually adopted to raise the PSNR (Peak Signal to Noise Ratio) in data hiding. Recently, however, it has been argued that a better PSNR can be obtained by applying RZL coding after compression. We show that our histogram-pair based image reversible data hiding is well suited to RZL coding: the PSNR can be raised across different RZL methods, parameters, embedding capacities, and images. Applying RZL to original data of arbitrary length is difficult; we solve this by padding the original data with zeros to form a complete block, with an attached mark required by RZL for lossless recovery. Our experiments show that the PSNR of histogram-pair based reversible data hiding with RZL is higher than without it when the embedding rate is not high. Zhang et al.’s RZL performs better than Wong et al.’s in most cases. The average PSNR gain is about 1 dB for five test images at different payloads with the RZL used in this paper.

Xuefeng Tong, Guorong Xuan, Guangce Shen, Xiaoli Huan, Yun Qing Shi

Forensics

Frontmatter
Detecting Non-aligned Double JPEG Compression Based on Refined Intensity Difference and Calibration

The detection of non-aligned double JPEG (NA-DJPEG) compression is one of the most important topics in JPEG image forensics. In this paper, we propose a novel feature set to detect NA-DJPEG compression based on refined intensity difference (RID), a new measure for blocking artifacts. Refined intensity difference is essentially intensity difference with compensation, which takes the negative effect of image texture into consideration when measuring blocking artifacts. The extraction pipeline of the proposed feature set mainly includes two steps. First, two groups of RID histograms (sixteen histograms in total) with respect to the horizontal and vertical directions are computed to describe the possible blocking artifacts in each row and column, and the bin values of these histograms are arranged to form an RID feature vector. Then, in order to make the RID feature vector less dependent on image texture and more discriminative, we calibrate it by a reference feature vector to generate a calibrated RID (C-RID) feature vector for final binary classification. Experiments have been conducted to validate the effectiveness of the C-RID feature set, and the results show that it outperforms the compared feature sets in most cases.

Jianquan Yang, Guopu Zhu, Junlong Wang, Yun Qing Shi
A Novel Method for Detecting Image Sharpening Based on Local Binary Pattern

In image forensics, determining the image editing history plays an important role, as most digital images are edited for various purposes. Image sharpening, which aims to enhance edge contrast for a clearer view, is considered one of the most fundamental editing techniques. However, only a few works have been reported on the detection of image sharpening. From the perspective of texture analysis, the over-shoot artifact caused by image sharpening can be regarded as a special kind of texture modification. We find that this kind of texture modification can be characterized by local binary patterns (LBP), one of the most widely used methods for texture classification. Therefore, in this paper we propose a novel LBP-based method to detect the application of sharpening in digital images. First, we employ the Canny operator for edge detection. The rotation-invariant LBP is then applied to the detected edge pixels for feature extraction. Finally, features extracted from sharpened and unsharpened images are fed into a support vector machine (SVM) classifier for classification. Experimental results on digital images sharpened with different coefficients demonstrate the capability of this method. Compared with the state of the art, the proposed method is validated to perform better in sharpening detection.

Feng Ding, Guopu Zhu, Yun Qing Shi
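A rotation-invariant LBP code, the descriptor used above, is computed per pixel as follows (a minimal sketch on a nested-list greyscale image; the paper’s exact variant and radius are not specified here): threshold the 8-neighbour ring against the centre, then take the minimum code over all cyclic rotations of the bit ring.

```python
def lbp_ri(img, y, x):
    """Rotation-invariant 8-neighbour LBP code at (y, x): binarize the
    ring against the centre pixel, then minimize over rotations."""
    ring = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    bits = [1 if v >= img[y][x] else 0 for v in ring]
    return min(sum(b << i for i, b in enumerate(bits[r:] + bits[:r]))
               for r in range(8))
```

A histogram of these codes over the Canny edge pixels would form the feature vector fed to the SVM.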
Camera Source Identification Game with Incomplete Information

Image forensics in the presence of an adversary has attracted more and more attention recently. A typical case is the interplay between sensor-based camera source identification and the fingerprint-copy attack. This paper gives a game-theoretic analysis of such an adversarial environment. We use a counter-anti-forensic method based on noise level estimation to detect the possible forgery (forgery test). Next, we introduce a game theory model to evaluate the ultimate performance when both the investigator and the forger have complete information. Finally, for the more practical scenario in which one of the parties has incomplete information, a Bayesian game is introduced and the ultimate performance is compared with that of the complete information game.

Hui Zeng, Xiangui Kang
Detecting Traitors in Re-publishing Updated Datasets

The application of fingerprinting techniques to relational data cannot protect personal information against a collusion attack, in which the attacker has access to a set of published data. General fingerprinting techniques such as Li et al.’s, Guo et al.’s, and Schrittwieser et al.’s focus on detecting the traitor who leaked the data. Among them, Schrittwieser et al.’s fingerprinting technique combines $k$-anonymity and full-domain generalization in order to not only detect traitors but also protect personal records. However, the technique has two main limitations. First, it does not allow the data provider to insert or delete records from the original data. Second, it does not create enough fingerprints for data recipients. To overcome these limitations, in this paper we propose an $(\alpha, k)$-privacy protection model, an extension of $m$-invariance and $(\alpha, k)$-anonymity, and a new top-down $(\alpha, k)$-privacy fingerprinting algorithm based on that model. The model not only protects sensitive personal information against collusion attacks but also allows data providers to republish their updated original data without degrading the privacy protection. The algorithm embeds fingerprints in the generalized data and extracts them from leaked data to detect the traitors. We extensively evaluate the proposed algorithm on our own software. The evaluation results show that our algorithm creates more fingerprints than Schrittwieser et al.’s algorithm (64,000 vs. 1,536) while achieving the same generalized data quality. Moreover, our $(\alpha, k)$-privacy algorithm creates generalized data even when there are only a small number of distinct sensitive values in the original data, without adding fake records as in $m$-invariance.

Anh-Tu Hoang, Hoang-Quoc Nguyen-Son, Minh-Triet Tran, Isao Echizen
On User Interaction Behavior as Evidence for Computer Forensic Analysis

Demographic information has a rich context from which to make decisions about how to filter or individualize computer users in forensic analysis. Although current explorations into technologies such as face and fingerprint analysis have seen varying rates of success, two main problems limit their applicability in the context of computer crimes: they can be intrusive, and they can require costly equipment. Our solution is to determine users’ demographic traits by analyzing the interactions between users and computers. We conducted a field study that gathered users’ keystroke and mouse data during interaction with a computer. From user interaction data, we extracted keystroke timing and mouse movement features, and developed weighted random forest classifiers for five demographic traits: gender, age, ethnicity, handedness, and language. Experiments showed that these demographics can be accurately inferred from user interaction behavior, with recognition rates expressed by the area under the ROC curve (AUC) ranging from 82.11 % to 87.32 %.

Chao Shen, Zhongmin Cai, Roy A. Maxion, Xiaohong Guan
Effective Video Copy Detection Using Statistics of Quantized Zernike Moments

Video copy detection has found wide application in digital multimedia forensics and copyright protection. With video copy detection, one can not only determine the presence of a query video in a massive video database, but also locate it precisely. This paper presents an effective video copy detection scheme based on the statistics of quantized Zernike moments. In our approach, each video frame is partitioned into non-overlapping blocks. The Zernike moments of the first few orders are then calculated for each block. Finally, the frame-level feature is generated by aggregating statistics of the quantized Zernike moments of all the blocks in the video frame. Through extensive experiments on a public video database, this frame-level feature is demonstrated to be robust against geometric transformation, color adjustment, noise contamination, and many other commonly used content-preserving operations. Compared with existing schemes in the literature, the proposed method yields better, or at least comparable, performance in a series of experiments.

Jiehao Chen, Chenglong Chen, Jiangqun Ni
Identifying Video Forgery Process Using Optical Flow

With the widespread deployment of surveillance systems, assessing the integrity of surveillance videos is of vital importance. In this paper, an algorithm based on optical flow and anomaly detection is proposed to authenticate digital videos and further identify the inter-frame forgery process (i.e., frame deletion, insertion, and duplication). The method relies on the fact that a forgery operation introduces discontinuity points into the optical flow variation sequence, and these points show different characteristics depending on the type of forgery. An anomaly detection scheme is adopted to distinguish the discontinuity points. Experiments were performed on several real-world surveillance videos carefully forged by volunteers. The results show that the proposed algorithm is effective at identifying the forgery process with localization, and is robust to a certain degree of MPEG compression.

Wan Wang, Xinghao Jiang, Shilin Wang, Meng Wan, Tanfeng Sun
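The discontinuity-detection step can be illustrated without a real optical flow solver. In this sketch (my own stand-in, not the paper’s method), mean absolute frame difference serves as a crude motion magnitude, and a frame-pair is flagged as anomalous when its magnitude deviates from the sequence mean by more than `k` standard deviations:

```python
def motion_anomalies(frames, k=3.0):
    """Flag indices of frame pairs whose motion magnitude (here just the
    mean absolute difference of flattened frames) is a k-sigma outlier.
    A toy proxy for discontinuities in an optical-flow variation sequence."""
    mags = [sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)
            for f1, f2 in zip(frames, frames[1:])]
    mu = sum(mags) / len(mags)
    sd = (sum((m - mu) ** 2 for m in mags) / len(mags)) ** 0.5 or 1.0
    return [i for i, m in enumerate(mags) if abs(m - mu) > k * sd]
```

A frame deletion shows up as a single spike in the magnitude sequence, exactly the kind of discontinuity point the abstract describes.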
A Huffman Table Index Based Approach to Detect Double MP3 Compression

MP3 is the most widely used audio format in daily life nowadays, yet MP3 audio is often forged for the forger’s own benefit in significant events, which causes double MP3 compression. In this paper, statistical frequencies and Markov-model transition probabilities are computed over the Huffman code table indices, and a support vector machine is applied for classification to detect double MP3 compression. Experimental results demonstrate that the proposed method has low complexity and high accuracy, and it also fills the gap of detecting double MP3 compression at the same bitrate.

Pengfei Ma, Rangding Wang, Diqun Yan, Chao Jin

Poster Session

Frontmatter
Reversible and Robust Audio Watermarking Based on Quantization Index Modulation and Amplitude Expansion

Existing techniques for reversible hiding of data in audio signals are so fragile that no data can be extracted from a modified stego audio signal. The present study proposes a reversible and robust technique for hiding data in audio. A robust payload is embedded based on quantization index modulation (QIM) of the averaged root-mean-square levels of the segmented stego waveforms. Simultaneously, a reversible payload is embedded into the apertures in the amplitude histogram created by amplitude expansion in QIM. Computer simulation was conducted to evaluate the robustness and the size of the reversible payload for 20 music pieces. MP3 coding, tandem MP3 coding, MPEG-4 AAC coding, and bandpass filtering of the stego signals yielded a maximum bit error rate of less than 16 % for a robust payload of 6 bits per second. Objective measurement of the stego audio quality using the perceptual evaluation of audio quality method revealed that the mean objective difference grade was higher than ‘perceptible, but not annoying’. The amount of reversible payload was above several kilobits per second.

Akira Nishimura
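The QIM primitive the scheme builds on is compact enough to sketch. This is the standard binary, undithered form on a single scalar (the paper applies it to averaged RMS levels, and its step size and dithering are not specified here): bit 0 projects onto the lattice $\Delta\mathbb{Z}$, bit 1 onto the shifted lattice $\Delta\mathbb{Z} + \Delta/2$, and decoding picks the nearer lattice.

```python
def qim_embed(x, bit, delta=4.0):
    """Binary QIM: quantize x onto delta*Z (bit 0) or delta*Z + delta/2
    (bit 1)."""
    d = bit * delta / 2
    return delta * round((x - d) / delta) + d

def qim_extract(y, delta=4.0):
    """Decode by the nearest-lattice rule; robust to perturbations
    smaller than delta/4."""
    e0 = abs(y - delta * round(y / delta))
    e1 = abs(y - (delta * round((y - delta / 2) / delta) + delta / 2))
    return 0 if e0 <= e1 else 1
```

The decoder tolerates any distortion below a quarter of the step size, which is the source of the robustness to MP3 and AAC coding reported above.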
Visual Cryptography Schemes Based in $$k$$ -Linear Maps

Properties of a $k$-vector space $\mathrm{Hom}_{k}(U_{0}, V_{0})$ of linear maps between fixed $k$-vector spaces $U_{0}$ and $V_{0}$ are used to define perfect black visual cryptography schemes for sharing secret images; such images can be revealed by stacking qualified sets of transparencies induced by a linear combination of some basic $k$-linear maps. The use of this type of morphism allows us to generalize some of the schemes used to date for sharing multiple secrets.

Agustín Moreno Cañadas, Nelly Paola Palma Vanegas, Margoth Hernández Quitián
A Hybrid Feature Based Method for Distinguishing Computer Graphics and Photo-Graphic Image

This article describes a mathematical method to distinguish computer graphics (CG) from photographic images (PG). Because white balance, CFA, and PRNU noise artifacts are intrinsic properties of optical imaging, we can use these artifacts to characterize camera imaging to some extent. In our experiments, we design a 135-D feature set and perform the distinguishing process to capture these artifacts. Images selected from the relevant Columbia University image database form our experimental database. The experimental results indicate that our method is capable of separating computer-generated images from camera-produced images with 95.43 % accuracy.

Shang Gao, Cong Zhang, Chan-Le Wu, Gang Ye, Lei Huang
A Distributed Scheme for Image Splicing Detection

In order to capture more splicing traces and to improve robustness against anti-forensics, combinations of different kinds of features have been adopted for image splicing detection in recent years. However, the combined features inevitably increase the feature dimensionality and the computational complexity. In this paper, we propose a distributed approach to reducing the computational complexity introduced by high-dimensional features in image splicing detection. We introduce a first-order noncausal model to the splicing detection task and give a distributed solution to this model. The noncausal model is split into several small tasks which are solved simultaneously by the distributed scheme. Experimental results on the public Columbia Image Splicing Detection Evaluation Dataset show that the distributed noncausal model can differentiate between spliced images and natural ones effectively.

Xudong Zhao, Shilin Wang, Shenghong Li, Jianhua Li, Xiang Lin
A New Reversible Data Hiding Scheme Based on Efficient Prediction

This paper presents a new reversible data hiding scheme based on a popular technique, namely prediction error expansion (PEE). Prediction accuracy is important to the efficiency of this kind of scheme. We predict each pixel from its six surrounding neighbors, and gradient information is also taken into consideration. As a result, the proposed prediction method helps us obtain a large data hiding space. Furthermore, a sorting strategy that reduces overflow/underflow problems is employed to improve the algorithm’s efficiency. Experimental results show that the proposed reversible data hiding scheme outperforms most prior art.

Jian Li, Xiaolong Li, Xingming Sun
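The PEE core used by this family of schemes works per pixel and is exactly invertible. A minimal sketch (the predictor itself, the six-neighbor scheme of the paper, is abstracted away into a `pred` argument, and overflow handling is omitted):

```python
def pee_embed(x, pred, bit):
    """Prediction-error expansion: the error e = x - pred is expanded to
    2e + bit, shifting the pixel to pred + 2e + bit."""
    return pred + 2 * (x - pred) + bit

def pee_extract(y, pred):
    """Invert: the bit is the LSB of the expanded error; the original
    pixel is recovered exactly."""
    e2 = y - pred
    bit = e2 & 1
    return bit, pred + (e2 - bit) // 2
```

The better the predictor, the smaller the errors and hence the smaller the distortion `2e + bit` introduces, which is why the abstract stresses prediction accuracy.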
Digital Forensics of Printed Source Identification for Chinese Characters

Recently, digital forensics, which involves collecting and analyzing evidence about the originating digital device, has become an important issue. Digital content can play a crucial role in identifying the source device, for example by serving as evidence in court. To achieve this goal, we use different texture feature extraction methods, such as the gray-level co-occurrence matrix (GLCM) and the discrete wavelet transform (DWT), to analyze Chinese printed sources in order to find the impact of different output devices. Furthermore, we explore the optimum feature subset by using feature selection techniques and use a support vector machine (SVM) to identify the source model of the documents. The experimental results attain an average identification rate of 98.64 %, which is superior to the existing known method by 1.27 %. The superior testing performance demonstrates that the proposed identification method is very useful for source laser printer identification.

Min-Jen Tsai, Jung Liu, Jin-Sheng Yin, Imam Yuadi
A Cheat-Prevention Visual Secret Sharing Scheme with Minimum Pixel Expansion

A visual secret sharing (VSS) scheme with minimum pixel expansion is proposed to prevent malicious participants from deceiving an honest participant. A VSS scheme encrypts a secret image into pieces referred to as shares, where each participant keeps a share, so that stacking a sufficient number of shares recovers the secret image. A cheat-prevention VSS scheme provides an additional piece for each participant for verifying whether the shares presented by other participants are genuine. The proposed scheme improves the contrast of the recovered image and the cheat-prevention functionality by introducing randomness into the production of the pieces for verification. Experimental results show the effectiveness of the proposed scheme.

Shenchuan Liu, Masaaki Fujiyoshi, Hitoshi Kiya
Reversible Audio Information Hiding Based on Integer DCT Coefficients with Adaptive Hiding Locations

This paper presents a reversible audio information hiding method based on the integer Discrete Cosine Transform (intDCT) and investigates the effectiveness of adaptive hiding locations. In this work, audio data is first divided into frames of fixed length, and each frame is transformed into DCT coefficients by the integer DCT in a lossless manner. The DCT coefficients are then divided into several segments, and coefficients in selected segments are expanded to reserve hiding space. In our previous work, the payload was embedded into the higher DCT coefficients. In this paper, we focus on reducing the distortion caused by embedding and on making the hiding positions more difficult to guess; the adaptive embedding locations are determined by estimating distortion. Experimental evaluation of the stego data shows that the proposed adaptive hiding performs slightly better on the segmental SNR (segSNR) criterion, while MOS-LQO (Mean Opinion Score – Listening Quality Objective), a widely used objective criterion for speech quality, degrades, which implies that overlapping frame analysis of the audio data is needed to improve the sound quality. The spectrum of the stego data shows that it is difficult to detect the hiding locations of the proposed method.

Xuping Huang, Nobutaka Ono, Isao Echizen, Akira Nishimura

Watermarking

Frontmatter
A Restorable Semi-fragile Watermarking Combined DCT with Interpolation

In this paper, a semi-fragile watermarking scheme with an interpolation method is proposed to improve recovery performance with a smaller recovery watermark capacity. The original image is 2-fold down-sampled to reduce the watermark payload, and the DCT (Discrete Cosine Transform) is calculated on each 4 × 4 block of the down-sampled image. The DC coefficient and the first two AC coefficients in each 4 × 4 block are quantized with selected quantization steps and encoded with 11 bits to generate the recovery watermark corresponding to an 8 × 8 block of the original image. The recovery watermark of each 8 × 8 block is embedded in the quantized DCT coefficients of other blocks. At the recovery side, the low-resolution image is first reconstructed from the extracted valid recovery watermark, and the high-resolution image is then reconstructed from it by interpolation. Tampered blocks are recovered from the corresponding blocks of the high-resolution image. An image inpainting method is also used to recover coinciding tampered blocks. Experimental results show that the proposed restorable semi-fragile watermarking method achieves better recovery performance under JPEG (Joint Photographic Experts Group) compression with superior invisibility.

Yaoran Huo, Hongjie He, Fan Chen
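The 11-bit encoding of a 4 × 4 block described above can be sketched as follows. The quantization steps (q_dc, q_ac) and the 6 + 3 + 2 bit split are illustrative assumptions; the paper's exact parameters are not given in the abstract:

```python
import numpy as np

def dct_matrix(n=4):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def recovery_bits(block4, q_dc=16, q_ac=32):
    """Encode one 4x4 block of the 2x-downsampled image as 11 bits:
    6 bits for the quantized DC coefficient and 3 + 2 bits for the
    first two AC coefficients (offset binary so negatives fit)."""
    C = dct_matrix(4)
    coeffs = C @ block4 @ C.T              # 2-D DCT of the block
    dc = int(np.clip(np.round(coeffs[0, 0] / q_dc), 0, 63))       # 6 bits
    ac1 = int(np.clip(np.round(coeffs[0, 1] / q_ac) + 4, 0, 7))   # 3 bits
    ac2 = int(np.clip(np.round(coeffs[1, 0] / q_ac) + 2, 0, 3))   # 2 bits
    return f"{dc:06b}{ac1:03b}{ac2:02b}"   # 11-bit recovery watermark
```

Keeping only the DC and two lowest AC coefficients is what makes the recovery payload small: 11 bits per 8 × 8 original-image block instead of a full pixel copy.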
A Compressive Sensing Based Quantized Watermarking Scheme with Statistical Transparency Constraint

Quantization-based watermarking schemes are widely used in multimedia protection processes, where they offer robust approaches to copyright protection. Unfortunately, in key-less solutions the additional hidden information (the watermark) is statistically detectable by unauthorized users, who are thereby correctly informed about which documents to attack. In this paper, we present a compressive sensing based watermarking solution that marks digital pictures while increasing statistical invisibility against attackers: with high probability, an attacker will falsely conclude that the document is not watermarked. We discuss how compressive sensing can be applied to the host signal for watermarking purposes, and we describe a solution that yields a compressive sensing based watermarking scheme with attractive properties for image protection. The watermarking performance is assessed against three criteria, robustness, statistical invisibility, and capacity, in order to find the best trade-off. All analyses are validated on digital image databases.

Claude Delpha, Said Hijazi, Remy Boyer
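The quantization-based embedding that this line of work builds on is quantization index modulation (QIM): each coefficient is snapped to one of two interleaved lattices depending on the bit to embed. The following sketch shows scalar QIM with an illustrative step size; the paper's compressive-sensing domain and transparency constraint are not reproduced here:

```python
import numpy as np

def qim_embed(x, bits, delta=8.0):
    """Scalar QIM: quantize each coefficient to even multiples of
    delta for bit 0, odd half-offset multiples for bit 1."""
    x = np.asarray(x, dtype=np.float64)
    b = np.asarray(bits, dtype=np.float64)
    # shift by b*delta/2, quantize with step delta, shift back
    return delta * np.round((x - b * delta / 2.0) / delta) + b * delta / 2.0

def qim_detect(y, delta=8.0):
    """Minimum-distance decoding: pick the closer of the two lattices."""
    y = np.asarray(y, dtype=np.float64)
    d0 = np.abs(y - delta * np.round(y / delta))
    d1 = np.abs(y - (delta * np.round((y - delta / 2) / delta) + delta / 2))
    return (d1 < d0).astype(int)
```

Decoding is correct as long as the attack noise on each coefficient stays below delta/4, which is the robustness/transparency trade-off the step size controls.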
Watermarking-Based Perceptual Hashing Search Over Encrypted Speech

Privacy-preserving search over encrypted speech has become an important and urgent research topic in cloud storage. In this paper, a speech scrambling encryption method based on a memristive Chua's-circuit chaotic system is proposed, and a watermarking-based perceptual hashing search algorithm over encrypted speech is then built on the zero-crossing rate. In the proposed scheme, the zero-crossing rate is extracted from the digital speech to generate a perceptual hash serving as the search digest. This perceptual hash digest is embedded as a watermark into the encrypted speech signal. Search results are obtained by matching and computing the normalized Hamming distance between the perceptual hash digests of the search target and the extracted watermark, without loading or decrypting the encrypted speech. Experimental results show that the proposed scheme has good discrimination, uniqueness, and perceptual robustness to common speech processing. In addition, both the security and the computational complexity are satisfactory. The precision ratio is 100 %, and the recall ratio is more than 98 %.

Hongxia Wang, Linna Zhou, Wei Zhang, Shuang Liu
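The zero-crossing-rate hash and the normalized Hamming distance used for matching can be sketched as below. The frame length and median thresholding are illustrative choices, not the paper's exact construction:

```python
import numpy as np

def zcr_hash(speech, frame_len=320):
    """Per-frame zero-crossing rate, binarized against the median
    ZCR to form a perceptual hash bit per frame."""
    n = len(speech) // frame_len
    frames = np.reshape(speech[:n * frame_len], (n, frame_len))
    # fraction of adjacent sample pairs whose sign changes
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return (zcr > np.median(zcr)).astype(np.uint8)

def normalized_hamming(h1, h2):
    """Normalized Hamming distance between two equal-length hashes."""
    return float(np.mean(h1 != h2))
```

A small normalized Hamming distance between the query's hash and an extracted hash declares a match, which is what lets the server search without ever decrypting the speech.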

Anonymizing and Plate Recognition

Frontmatter
Anonymizing Temporal Phrases in Natural Language Text to be Posted on Social Networking Services

Time-related information in text posted online is one type of personal information targeted by attackers, and one reason that sharing information online can be risky. Therefore, time information should be anonymized before it is posted on social networking services. One approach is to replace sensitive phrases with anonymous phrases, but attackers can usually spot such anonymization because of its unnaturalness. Another approach is to detect temporal passages in the text, but removing these passages can make the meaning of the text unnatural. We have developed an algorithm that anonymizes time-related personal information by removing temporal passages only when doing so does not change the natural meaning of the message. The temporal phrases are detected using machine-learned patterns, each represented by a subtree of the sentence parse tree. Temporal phrases in the parse tree are distinguished from other parts of the tree by temporal taggers integrated into the algorithm. In an experiment with 4008 sentences posted on a social network, 84.53 % of them were anonymized without changing their intended meaning, significantly better than the 72.88 % rate of the best previous temporal phrase detection algorithm. Of the learned patterns, the ten most common ones detected 87.78 % of the temporal phrases, meaning that a small set of common patterns suffices to anonymize the temporal phrases in most messages posted on an SNS. The algorithm works well not only for temporal phrases in text posted on social networks but also for other types of phrases (such as location and objective ones), other domains (religion, politics, military, etc.), and other languages.

Hoang-Quoc Nguyen-Son, Anh-Tu Hoang, Minh-Triet Tran, Hiroshi Yoshiura, Noboru Sonehara, Isao Echizen
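To make the removal step concrete, here is a deliberately crude regex stand-in for the paper's machine-learned parse-tree patterns. It strips a few common English temporal phrases and does not attempt the paper's check that removal preserves the sentence's natural meaning:

```python
import re

# Hypothetical pattern list covering a handful of temporal phrase shapes;
# the actual algorithm learns subtree patterns from parsed sentences.
TEMPORAL = re.compile(
    r"\s*\b(?:at \d{1,2}(?::\d{2})?\s*(?:am|pm)?"
    r"|on (?:Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)"
    r"|(?:yesterday|today|tomorrow|tonight)"
    r"|last (?:week|month|year|night))\b",
    re.IGNORECASE,
)

def anonymize_temporal(sentence):
    """Remove matched temporal phrases and tidy the whitespace."""
    return re.sub(r"\s{2,}", " ", TEMPORAL.sub("", sentence)).strip()
```

The paper's result that ten patterns already cover 87.78 % of temporal phrases suggests why even a small pattern set like this catches many everyday cases.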
Improved License Plate Recognition for Low-Resolution CCTV Forensics by Integrating Sparse Representation-Based Super-Resolution

Automatic license plate recognition (LPR) is an important functionality for closed-circuit television (CCTV) forensics. However, uncontrolled capture conditions still make effective LPR difficult in practice. In this paper, we propose a novel method for robust LPR in real-world imagery that leverages sparse representation-based (SR-based) super-resolution. To that end, we make use of high-resolution license plate (LP) images for both (1) the construction of a dictionary for SR-based super-resolution and (2) the training of LP character classifiers. Comparative experimental results indicate that the proposed SR-based super-resolution method enables effective LPR in low-resolution imagery captured by long-distance CCTV cameras.

Hyun-seok Min, Seung Ho Lee, Wesley De Neve, Yong Man Ro

Poster Session

Frontmatter
Hiding a Secret Pattern into Color Halftone Images

This paper proposes an effective visual cryptography method for color halftone images, Data Hiding by Dual Color Conjugate Dot Diffusion (DCCDD), which embeds a binary secret pattern into dot-diffused color halftone images. DCCDD takes inter-channel correlation into account in order to keep the embedding distortion across channels within an acceptable range. Compared with the previous method, the proposed method can hide a secret pattern in two color halftone images derived from different original multitone images. Experimental results show that DCCDD can embed a binary secret pattern into two color halftone images generated from identical or different original multitone color images. When the two halftone images are overlaid, the secret pattern is revealed.

Yuanfang Guo, Oscar C. Au, Ketan Tang, Jiahao Pang
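The overlay-reveal principle behind such schemes can be sketched with the bare conjugate-pairing idea: where the secret pattern is present, the second halftone takes the complement of the first, so stacking the two sheets yields solid black there. This omits DCCDD's dot diffusion, inter-channel handling, and its ability to start from two different multitone images:

```python
import numpy as np

def conjugate_embed(h1, pattern):
    """Build a second binary halftone so that stacking reveals the
    secret: h2 is the conjugate (complement) of h1 where pattern == 1
    and a copy of h1 elsewhere (0 = black, 1 = white)."""
    h2 = h1.copy()
    h2[pattern == 1] = 1 - h1[pattern == 1]
    return h2

def overlay(h1, h2):
    """Simulate physically stacking two printed halftones:
    black wins at each location."""
    return np.minimum(h1, h2)
```

Pattern regions come out fully black in the overlay while the rest keeps the halftone texture, so the secret is visible to the eye without any computation.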
An Image Authentication Scheme for Accurate Localization and Restoration

This paper presents a novel watermarking scheme for image tampering localization and restoration. The authentication data is generated from the 5 most significant bits of all pixels and then compressed to reduce the space it occupies. The main content of each image block is transformed into recovery data used to restore tampered regions, and error correction coding is introduced to improve the quality of the restored version. The compressed authentication data and the recovery data are concatenated and embedded into the 3 least significant bits of all pixels. Experimental results show that the proposed scheme can accurately localize the tampered pixels of the watermarked image and perfectly recover the corresponding tampered image areas.

Qunting Yang, Tiegang Gao
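The MSB-authenticate / LSB-embed skeleton of such schemes can be sketched as follows. This toy version omits the paper's compression, recovery data, and error correction, and the choice of SHA-256 and the cyclic bit expansion are assumptions for illustration:

```python
import hashlib
import numpy as np

def embed_authentication(img):
    """Derive authentication data from the 5 MSBs of each pixel and
    write it into the 3 LSBs (img is a uint8 grayscale array)."""
    msb = (img >> 3).astype(np.uint8)                # 5 most significant bits
    digest = hashlib.sha256(msb.tobytes()).digest()  # auth data from MSBs
    # expand the digest to one 3-bit code per pixel (cyclic, illustrative)
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    payload = np.resize(bits, img.size * 3).reshape(img.size, 3)
    codes = (payload[:, 0] * 4 + payload[:, 1] * 2 + payload[:, 2])
    codes = codes.reshape(img.shape).astype(np.uint8)
    return (msb << 3) | codes

def verify_authentication(marked):
    """Recompute the digest from the MSBs and compare it to the LSBs;
    any MSB tampering breaks the match almost surely."""
    msb = (marked >> 3).astype(np.uint8)
    digest = hashlib.sha256(msb.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    payload = np.resize(bits, marked.size * 3).reshape(marked.size, 3)
    codes = (payload[:, 0] * 4 + payload[:, 1] * 2 + payload[:, 2])
    return np.array_equal(marked & 7, codes.reshape(marked.shape))
```

Because the authentication data depends only on the 5 MSBs, embedding it into the 3 LSBs does not invalidate it; a global hash like this only detects tampering, whereas the paper's per-block data also localizes and restores it.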
Generalized Histogram Shifting-Based Blind Reversible Data Hiding with Balanced and Guarded Double Side Modification

This paper proposes a reversible data hiding method based on generalized histogram shifting that is free from memorizing embedding parameters. A generalized histogram shifting-based reversible data hiding (GHS-RDH) method increases (or decreases) particular pixel values in the image by $(q-1)$, based on the tonal distribution of the image, to hide $q$-ary data symbols in the image. The method not only extracts the hidden data but also restores the original image from the distorted image carrying the hidden data. Whereas conventional GHS-RDH must memorize a set of image-dependent parameters for hidden data extraction and original image recovery, the proposed method requires neither parameter memorization nor embedding the parameters in the image, thanks to three mechanisms: guard zero histogram bins, double-side modification, and histogram peak shifting. The proposed method does not need to identify the distorted image conveying hidden data among all possible images before hidden data extraction, which makes generalized HS-RDH feasible. In addition, the proposed method is naturally free from the overflow/underflow problem. Experimental results show the effectiveness of the proposed method.

Masaaki Fujiyoshi
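For reference, here is the classic binary ($q = 2$) histogram-shifting RDH that the generalized method builds on. Note that this baseline must hand the peak and zero bins to the extractor, which is precisely the parameter-memorization issue the paper removes; the zero bin is assumed to exist to the right of the peak:

```python
import numpy as np

def hs_embed(img, bits):
    """Binary histogram-shifting RDH: shift pixels between the peak
    and zero bins by one to free the slot next to the peak, then let
    peak-valued pixels carry one bit each."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    zero = int(np.argmin(hist[peak + 1:]) + peak + 1)  # assumed empty bin
    out = img.astype(np.int32).copy()
    out[(out > peak) & (out < zero)] += 1              # open a gap at peak+1
    flat = out.ravel()
    carriers = np.flatnonzero(flat == peak)
    assert len(bits) <= len(carriers), "payload exceeds capacity"
    flat[carriers[:len(bits)]] += np.asarray(bits, dtype=np.int32)
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

def hs_extract(marked, peak, zero, n_bits):
    """Recover the bits and perfectly restore the original image."""
    flat = marked.astype(np.int32).ravel()
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))
    bits = (flat[carriers[:n_bits]] == peak + 1).astype(int).tolist()
    flat[carriers[:n_bits]] = peak                     # undo embedding
    flat[(flat > peak + 1) & (flat <= zero)] -= 1      # undo shifting
    return bits, flat.reshape(marked.shape).astype(np.uint8)
```

Shifting only pixels strictly between the peak and a genuinely empty bin is what keeps the scheme reversible; when no empty bin exists, real schemes must record and restore the affected pixels.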
New Forensic Methods for OOXML Format Documents

MS Office 2007–2013 documents, which use the new Office Open XML (OOXML) format, can be illegally used as cover media for transmitting secret information, because they do not easily arouse suspicion. Based on a study of the potential information hiding methods, this paper proposes five forensic methods for OOXML format documents. The proposed methods fall into two groups: those based on document structure and those based on document format. The aim is to provide security detection technology for electronic documents downloaded by users, and thereby prevent the damage caused by secret information embedded by offenders. Extensive experiments on a real data set demonstrate the effectiveness of the proposed methods.

Zhangjie Fu, Xingming Sun, Lu Zhou, Jiangang Shu
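A document-structure check in the spirit of the paper can be sketched quickly: an OOXML file is a ZIP package, so files smuggled into the archive but not declared in [Content_Types].xml are a natural red flag. This is a simplified heuristic for illustration, not one of the paper's five methods verbatim:

```python
import zipfile

def list_unreferenced_parts(path):
    """Flag ZIP entries whose extension (Default declaration) or part
    name (Override declaration) is absent from [Content_Types].xml."""
    with zipfile.ZipFile(path) as z:
        content_types = z.read("[Content_Types].xml").decode("utf-8", "replace")
        suspicious = []
        for name in z.namelist():
            if name == "[Content_Types].xml":
                continue
            ext = name.rsplit(".", 1)[-1].lower() if "." in name else ""
            declared = (f'Extension="{ext}"' in content_types
                        or f'PartName="/{name}"' in content_types)
            if not declared:
                suspicious.append(name)
        return suspicious
```

A robust tool would parse the XML properly and also cross-check the relationship (.rels) files, but even this substring check exposes naive extra-file hiding.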
High Capacity Data Hiding Scheme for Binary Images Based on Minimizing Flipping Distortion

This paper proposes a binary image data hiding scheme with high capacity that minimizes the flipping distortion measured by a proposed distortion function. The distortion function considers both pixel clustering and boundary connectivity to systematically evaluate the distortion caused by flipping a pixel. Pre-processing and post-processing steps are presented to handle the unexpected distortion of embedding. Experimental results demonstrate that the proposed scheme produces stego images of good visual quality in both low- and high-capacity settings.

Bingwen Feng, Wei Lu, Wei Sun
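A toy distortion score combining the two ingredients named above (pixel clustering and boundary connectivity) might look like this; the weights and the exact formulation are illustrative assumptions, not the paper's function:

```python
import numpy as np

def flipping_distortion(img, y, x):
    """Score the visual cost of flipping pixel (y, x) in a binary
    image: interior pixels of smooth regions (high clustering, few
    sign changes around the pixel) are costly to flip, pixels on busy
    boundaries are cheap."""
    h, w = img.shape
    p = img[y, x]
    # 8-neighbourhood ring, clockwise; out-of-image positions take
    # the pixel's own value so borders are not penalized
    ring = []
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]:
        ny, nx = y + dy, x + dx
        ring.append(img[ny, nx] if 0 <= ny < h and 0 <= nx < w else p)
    ring = np.array(ring)
    cluster = np.sum(ring == p) / 8.0               # local smoothness
    transitions = np.sum(ring != np.roll(ring, 1))  # boundary complexity
    connectivity = transitions / 8.0
    return 0.5 * cluster + 0.5 * (1.0 - connectivity)
```

An embedder would rank candidate pixels by such a score and flip the cheapest ones first, which is the "minimizing flipping distortion" strategy in miniature.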
Backmatter
Metadata
Title
Digital-Forensics and Watermarking
Edited by
Yun Qing Shi
Hyoung-Joong Kim
Fernando Pérez-González
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-43886-2
Print ISBN
978-3-662-43885-5
DOI
https://doi.org/10.1007/978-3-662-43886-2