
2005 | Book

Digital Watermarking

4th International Workshop, IWDW 2005, Siena, Italy, September 15-17, 2005. Proceedings

Editors: Mauro Barni, Ingemar Cox, Ton Kalker, Hyoung-Joong Kim

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

We are delighted to welcome the attendees of the Fourth International Workshop on Digital Watermarking (IWDW). Watermarking continues to generate strong academic interest. Commercialization of the technology is proceeding at a steady pace. We have seen watermarking adopted for DVD audio. Fingerprinting technology was successfully used to determine the source of pirated video material. Furthermore, a number of companies are using watermarking as an enabling technology for broadcast monitoring services. Watermarking of digital cinema content is anticipated. Future applications may also come from areas unrelated to digital rights management. For example, the use of watermarking to enhance legacy broadcast and communication systems is now being considered. IWDW 2005 offers an opportunity to reflect upon the state of the art in digital watermarking as well as discuss directions for future research and applications. This year we accepted 31 papers from 74 submissions. This 42% acceptance rate indicates our commitment to ensuring a very high quality conference. We thank the members of the Technical Program Committee for making this possible by their timely and insightful reviews. Thanks to their hard work, this is the first IWDW at which the final proceedings are available to the participants at the time of the workshop as a Springer LNCS publication.

Table of Contents

Frontmatter

Session I: Steganography and Steganalysis

A New Approach to Estimating Hidden Message Length in Stochastic Modulation Steganography

Stochastic modulation steganography hides a secret message within the cover image by adding a weak noise signal with a specified probabilistic distribution. Its advantages include high capacity and improved security. Current steganalysis methods that can detect hidden messages in traditional least significant bit (LSB) or additive-noise-model-based steganography cannot reliably detect the existence of a hidden message in stochastic modulation steganography. In this paper, we present a new steganalysis approach which can reliably detect the existence and accurately estimate the length of the hidden message in stochastic modulation steganography. By analyzing the distributions of the horizontal pixel differences of images before and after stochastic modulation embedding, we show that for non-adaptive steganography the distribution of the stego-image’s pixel differences can be modeled as the convolution of the distribution of the cover image’s pixel differences and that of the quantized stego-noise differences, and that the hidden message length can be estimated by estimating the variance of the stego-noise. To estimate the variance of the stego-noise, and hence determine the existence and length of the hidden message, we first model the distribution of the cover image’s pixel differences as a generalized Gaussian and estimate the parameters of this distribution using a grid search and a Chi-square goodness-of-fit test, and then exploit the relationship between the distribution variance of the cover image’s pixel differences and that of the stego-noise differences. We present experimental results demonstrating that the new approach is effective for steganalyzing stochastic modulation steganography. Our method provides a general theoretical framework and is applicable to other non-adaptive embedding algorithms for which the distribution model of the stego-noise is known or can be estimated.
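
As a rough illustration of the parameter-estimation step, the sketch below fits a generalized Gaussian to the histogram of horizontal pixel differences by a grid search that minimizes a Chi-square goodness-of-fit statistic. It is an editor's sketch under simplifying assumptions (bin range, search grid, and the handling of the stego-noise relation are placeholders), not the authors' implementation.

    import numpy as np
    from scipy.special import gamma

    def ggd_pdf(x, alpha, beta):
        # Generalized Gaussian density with scale alpha and shape beta.
        c = beta / (2.0 * alpha * gamma(1.0 / beta))
        return c * np.exp(-(np.abs(x) / alpha) ** beta)

    def fit_ggd_grid(diffs, alphas, betas, bins=101, rng=(-50.0, 50.0)):
        # Grid search over (alpha, beta) minimizing a Chi-square fit to the
        # histogram of pixel differences.
        hist, edges = np.histogram(diffs, bins=bins, range=rng, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        width = edges[1] - edges[0]
        observed = hist * width                      # empirical probability mass per bin
        best, best_chi2 = None, np.inf
        for a in alphas:
            for b in betas:
                expected = ggd_pdf(centers, a, b) * width
                mask = expected > 1e-8
                chi2 = np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask])
                if chi2 < best_chi2:
                    best, best_chi2 = (a, b), chi2
        return best, best_chi2

    # Usage on a grayscale image `img` (2-D uint8 array):
    #   diffs = np.diff(img.astype(np.float64), axis=1).ravel()
    #   (alpha, beta), chi2 = fit_ggd_grid(diffs, np.linspace(1, 20, 40), np.linspace(0.3, 2.0, 18))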

Junhui He, Jiwu Huang, Guoping Qiu
Information Transmission and Steganography

Recently there has been strong interest in developing models of steganography based on information theory. Previous work has considered under what conditions the security of the stegosystem can be guaranteed and the number of bits that can then be embedded in a cover Work. This work implicitly assumes that the hidden message is uncorrelated with the cover Work, the latter simply being used to conceal the hidden message. Here, we consider the case in which the cover Work is chosen such that it is correlated with the covert message. In this situation, the number of bits needed to encode the hidden message can be considerably reduced. We discuss the information that can then be transmitted and show that it is substantially greater than simply the number of embedded bits. We also note that the security of the system as defined by Cachin need not be compromised. However, the Shannon security may be compromised, though it remains unclear to what extent. Experimental results are presented that demonstrate the fundamental concepts.

Ingemar J. Cox, Ton Kalker, Georg Pakura, Mathias Scheel
On the Existence of Perfect Stegosystems

There are several steganography techniques (e.g., linguistic or least significant bit embedding) that provide security but no robustness against an active adversary. On the other hand, it is rather well known that the spread-spectrum based technique is robust against an active adversary but appears insecure against statistical detection of the stegosignal. We prove in this paper that this is actually not the case and that there exists a stegosystem that is asymptotically both secure against statistical detection and robust against jamming of the stegosignal by an active adversary. We call such stegosystems quasiperfect, and we call them perfect if, in addition, the data rate of the secret information is asymptotically constant. We prove that perfect stegosystems do not exist for either blind or informed decoders. Some examples using the simplex and Reed-Muller codes jointly with stegosystems are given.

Valery Korzhik, Guillermo Morales-Luna, Moon Ho Lee
Towards Multi-class Blind Steganalyzer for JPEG Images

In this paper, we use the previously proposed calibrated DCT features [9] to construct a Support Vector Machine classifier for JPEG images capable of recognizing which steganographic algorithm was used for embedding. This work also constitutes a more detailed evaluation of the performance of DCT features as in [9] only a linear classifier was used. The DCT features transformed using Principal Component Analysis enable an interesting visualization of different stego programs in a three-dimensional space. This paper demonstrates that, at least under some simplifying assumptions in which the effects of double compression are ignored, it is possible to reliably classify stego images to their embedding techniques. The classifier is capable of generalizing to previously unseen techniques.
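
The classification pipeline described above can be pictured with a small scikit-learn sketch: feature vectors (here replaced by stand-in random data rather than the paper's calibrated DCT features) are projected to three dimensions with PCA for visualization and fed to a multi-class SVM. Feature dimensionality, labels, and kernel settings are illustrative assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X = np.random.randn(200, 23)             # stand-in for calibrated DCT feature vectors
    y = np.random.randint(0, 4, size=200)    # stand-in labels: cover plus three stego algorithms

    # 3-D PCA projection, used only to visualize how the stego programs separate.
    X3 = PCA(n_components=3).fit_transform(X)

    # Multi-class SVM classifier (scikit-learn's SVC uses one-vs-one internally).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))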

Tomáš Pevný, Jessica Fridrich

Session II: Fingerprinting

An Efficient Fingerprinting Scheme with Symmetric and Commutative Encryption

The illegal copying and redistribution of digital content is a crucial problem for distributors who sell digital content electronically. A fingerprinting scheme is a technique which supports copyright protection by tracking redistributors of digital content using cryptographic techniques. An anonymous fingerprinting scheme prevents the content provider from framing the buyer by making the fingerprinted version known to the buyer only. In designing a fingerprinting scheme, it is important to make it practical and efficient.

In this paper, we propose a fingerprinting protocol that addresses this problem using cryptographic techniques and a watermarking scheme. The digital content is encrypted using symmetric encryption, and the keys used to decrypt the encrypted content are double-locked by two encryption keys kept separately by the buyer and the content provider. In the protocol, the buyer obtains only a few of the keys and can decrypt only a few of the fingerprinted digital contents in a transaction, while the content provider has no idea how the fingerprint is formed. This enables the authority to determine the unethical party in case of illegal distribution of digital content.

Seunglim Yong, Sang-Ho Lee
Collusion Secure Convolutional Spread Spectrum Fingerprinting

Digital fingerprinting is a technique that lets a merchant embed unique buyer identity marks into each copy of digital media, making it possible to identify ’traitors’ who redistribute their illegal copies. This paper first discusses the collusion-resistant properties of spread-spectrum sequences against malicious attacks such as collusion by combination, collusion by averaging, and additive noise. A novel two-layer secure fingerprinting scheme is then presented by concatenating a spread-spectrum code with a convolutional code. Moreover, the Viterbi algorithm is improved by using an Optional Code Set. The code length, collusion security and performance are proved and analyzed. As a result, the proposed scheme for perceptual media has a shorter fingerprint length and achieves optimal traitor searching.

Yan Zhu, Dengguo Feng, Wei Zou
Performance Study on Multimedia Fingerprinting Employing Traceability Codes

Digital fingerprinting is a tool to protect multimedia content from illegal redistribution by uniquely marking the copies of the content distributed to each user. A collusion attack is a powerful attack whereby several differently fingerprinted copies of the same content are combined to attenuate or even remove the fingerprint. Coded fingerprinting is one major category of fingerprinting techniques against collusion. Many fingerprinting codes with tracing capability and collusion resistance have been proposed, such as Traceability (TA) codes and Identifiable Parent Property (IPP) codes. Most of these works treat the important embedding issue with a set of simplified and abstract assumptions, and do not examine the end-to-end performance of coded multimedia fingerprinting. In this paper we jointly consider the coding and embedding issues and examine the collusion resistance of coded fingerprinting systems with various code parameters. Our results show that TA codes generally offer better collusion resistance than IPP codes, and that a TA code with a larger alphabet size and a longer code length is preferred.

Shan He, Min Wu
Regular Simplex Fingerprints and Their Optimality Properties

This paper addresses the design of additive fingerprints that are maximally resilient against Gaussian averaging collusion attacks. The detector performs a binary hypothesis test in order to decide whether a user of interest is among the colluders. The encoder (fingerprint designer) is to embed additive fingerprints that minimize the probability of error of the test. Both the encoder and the attackers are subject to squared-error distortion constraints. We show that n-simplex fingerprints are optimal in the sense of maximizing a geometric figure of merit for the detection test; these fingerprints outperform orthogonal fingerprints. They are also optimal in terms of maximizing the error exponent of the detection test, and of maximizing the deflection criterion at the detector when the attacker’s noise is non-Gaussian. Reliable detection is guaranteed provided that the number of colluders $K \ll \sqrt{N}$, where N is the length of the host vector.
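
To make the geometry concrete, the sketch below constructs a regular simplex fingerprint set (N + 1 unit-norm vectors in R^N with pairwise correlation -1/N) and applies a plain correlation detector to an averaging collusion. It is an illustrative construction with made-up parameter values and noise level, not the authors' code or detector.

    import numpy as np

    def simplex_fingerprints(N):
        # Rows: N + 1 unit-norm fingerprints in R^N with pairwise correlation -1/N.
        E = np.eye(N + 1)
        centered = E - E.mean(axis=0)                 # subtract the centroid of the basis vectors
        U, s, _ = np.linalg.svd(centered, full_matrices=False)
        F = U[:, :N] * s[:N]                          # drop the (numerically zero) last dimension
        return F / np.linalg.norm(F, axis=1, keepdims=True)

    N = 63
    F = simplex_fingerprints(N)                       # 64 fingerprints of length 63
    K = 4                                             # colluders 0..K-1 average their copies
    forgery = F[:K].mean(axis=0) + 0.1 * np.random.randn(N)
    scores = F @ forgery                              # correlation statistic per user
    print("top suspects:", np.argsort(scores)[::-1][:K])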

Negar Kiyavash, Pierre Moulin

Session III: Watermarking I

A Robust Multi-bit Image Watermarking Algorithm Based on HMM in Wavelet Domain

Robustness is the key issue in the development of multi-bit watermarking algorithms. A new algorithm for robust multi-bit image watermarking based on a Hidden Markov Model (HMM) in the wavelet domain is proposed in this paper. The algorithm is characterized as follows: (1) the proposed blind detector based on a vector HMM, which describes the statistics of wavelet coefficients, achieves a significant performance improvement over the conventional correlation detector; (2) an adaptive watermark embedding scheme is applied to achieve low distortion according to the Human Visual System (HVS); (3) an optimal multi-bit watermark embedding strategy and maximum-likelihood detection for the tree structure of the vector HMM are proposed through a system robustness analysis. Simulation results show that relatively high capacity for watermark embedding in the low-frequency subbands of the wavelet domain is achieved with the proposed algorithm, and high robustness is observed against StirMark attacks such as JPEG compression, additive noise, median cut and filtering.

Jiangqun Ni, Rongyue Zhang, Jiwu Huang, Chuntao Wang
Robust Detection of Transform Domain Additive Watermarks

Deviations of the actual coefficient distributions from the idealized theoretical models due to inherent modeling errors and possible attacks are big challenges for watermark detection. These uncertain deviations may degrade or even upset the performance of existing optimum detectors that are optimized at idealized models. In this paper, we present a new detection structure for transform domain additive watermarks based on Huber’s robust hypothesis testing theory. The statistical behaviors of the image subband coefficients are modeled by a contaminated generalized Gaussian distribution (GGD), which tries to capture small deviations of the actual situation from the idealized GGD. The robust detector is a min-max solution of the contamination model and turns out to be a censored version of the optimum probability ratio test. Experimental results on real images confirm the superiority of the proposed detector with respect to the classical optimum detector.
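
As a reading aid only, here is a toy censored likelihood-ratio detector in the spirit described above: per-coefficient log-likelihood ratios computed under a generalized Gaussian host model are clipped to a fixed interval before being summed. The GGD parameters, the clipping level and the threshold are placeholders; the paper's actual min-max detector is derived from its contamination model, not from this sketch.

    import numpy as np

    def censored_llr_detect(y, w, alpha, c, k, threshold):
        # Per-coefficient log-likelihood ratios under a GGD host model
        # f(x) ~ exp(-(|x|/alpha)^c), testing "watermark w present" vs "absent".
        llr = (np.abs(y) ** c - np.abs(y - w) ** c) / (alpha ** c)
        stat = np.sum(np.clip(llr, -k, k))            # censor (clip) large contributions
        return stat > threshold, stat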

Xingliang Huang, Bo Zhang
Multi-band Wavelet Based Digital Watermarking Using Principal Component Analysis

This paper presents a novel watermarking scheme based on multi-band wavelets. Unlike many other watermarking schemes, in which the watermark detection threshold is chosen empirically, the false positive rate of the proposed scheme can be calculated analytically, so the detection threshold can be chosen based solely on the targeted false positive rate. Compared with conventional watermarking schemes implemented in the two-band wavelet domain, by incorporating the principal component analysis (PCA) technique the proposed blind watermarking in the multi-band wavelet domain achieves higher perceptual transparency and stronger robustness. Specifically, the developed watermarking scheme successfully resists common signal processing such as JPEG compression with a quality factor as low as 15, and some geometric distortions such as cropping (by 50%). In addition, the proposed multi-band wavelet based watermarking scheme can be parameterized, resulting in additional security: an attacker who does not know the parameter may not be able to detect the embedded watermark.
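
The threshold-from-false-positive idea can be illustrated generically: if the detection statistic is approximately zero-mean Gaussian under the no-watermark hypothesis (an assumption of this sketch, not necessarily the paper's exact analysis), the threshold follows directly from the target false-positive probability.

    from scipy.stats import norm

    def threshold_for_pfa(stat_std_h0, p_fa):
        # Threshold giving false-positive probability p_fa when the detection statistic
        # is zero-mean Gaussian with standard deviation stat_std_h0 under H0.
        return stat_std_h0 * norm.isf(p_fa)

    print(threshold_for_pfa(1.0, 1e-6))               # about 4.75 standard deviations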

Xiangui Kang, Yun Q. Shi, Jiwu Huang, Wenjun Zeng

Session IV: Attacks

A New Inter-frame Collusion Attack and a Countermeasure

One of the challenging issues in video watermarking is its robustness to inter-frame collusion attacks. Inter-frame collusion attacks exploit the inherent redundancy in the video frames or in the watermark to produce an unwatermarked copy of the video. A basic inter-frame collusion attack is the frame temporal filtering (FTF) attack, where temporal low-pass filtering is applied to the watermarked frames in order to remove temporally uncorrelated watermarks. If the video frames contain moving objects or camera motion, temporal low-pass filtering introduces visually annoying ghosting artifacts in the attacked video; the applicability of the FTF attack is thus limited to static scenes. We propose an extended FTF attack which overcomes this limitation by exploiting the motion within the video frames. Experimental results presented in this paper confirm the effectiveness of the proposed attack over the FTF attack. A countermeasure to this extended FTF attack is also presented.
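
For reference, the basic FTF attack amounts to a temporal moving average over the watermarked frames. The sliding-window sketch below shows only this baseline (the window length is an arbitrary choice), not the motion-compensated extension proposed in the paper.

    import numpy as np

    def ftf_attack(frames, window=3):
        # frames: array of shape (T, H, W); returns the temporally low-pass filtered video.
        T = frames.shape[0]
        half = window // 2
        out = np.empty(frames.shape, dtype=np.float64)
        for t in range(T):
            lo, hi = max(0, t - half), min(T, t + half + 1)
            out[t] = frames[lo:hi].mean(axis=0)       # average each frame with its neighbours
        return out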

P. Vinod, P. K. Bora
Effectiveness of ST-DM Watermarking Against Intra-video Collusion

The impact of intra-video collusion on ST-DM watermarking is considered by analyzing the robustness of a constant watermark with respect to Temporal Frame Averaging (TFA). We show theoretically that, as opposed to spread spectrum watermarking, in the ST-DM case it is not sufficient to insert the same watermark message within each video frame to ensure resistance against TFA. However, robustness can still be achieved by increasing the spreading factor r; moreover, the higher the correlation between video frames, the better the performance of ST-DM. We also evaluate the impact of the dithering factor d upon watermark robustness. As a last contribution, we evaluate the impact of TFA on the quality of the attacked video, demonstrating that, unless motion-compensated averaging is used, only a few frames can be averaged without introducing annoying artifacts.
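
For readers unfamiliar with ST-DM, a minimal scalar sketch of binary spread-transform dither modulation follows: the host block is projected onto a unit spreading vector of length r, the projection is quantized with a bit-dependent dithered quantizer of step delta, and the correction is added back along the spreading direction. The step size, the spreading vector and the simple minimum-distance detector are illustrative choices, not the paper's exact setup.

    import numpy as np

    def stdm_embed(x, bit, u, delta):
        # x: host block of length r, u: unit-norm spreading vector, delta: quantizer step.
        proj = float(x @ u)
        d = (delta / 2.0) * bit                       # dither: 0 for bit 0, delta/2 for bit 1
        q = delta * np.round((proj - d) / delta) + d  # dithered uniform quantizer
        return x + (q - proj) * u                     # correct only along the spreading direction

    def stdm_detect(y, u, delta):
        proj = float(y @ u)
        err0 = abs(proj - delta * np.round(proj / delta))
        err1 = abs(proj - (delta * np.round((proj - delta / 2.0) / delta) + delta / 2.0))
        return 0 if err0 < err1 else 1                # nearest dithered lattice decides the bit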

Roberto Caldelli, Alessandro Piva, Mauro Barni, Andrea Carboni
Oracle Attacks and Covert Channels

In this paper, the well-known attacks named oracle attacks are formulated within a realistic network communication model, where they turn out to rely on suitable covert channels that we name oracle channels. By exploiting information-theoretic notions, we show how to modify detection/authentication watermarking algorithms in order to counteract oracle attacks. We present three proposals: one based on randomization, another based on time delay, and a third based on both randomization and delay.

Ilaria Venturini
Security of DM Quantization Watermarking Schemes: A Practical Study for Digital Images

In this paper, the security of Dither Modulation Quantization Index Modulation schemes for digital images is analyzed. Both pixel and DCT coefficient quantization schemes are investigated. Related works that deal with the security of spread spectrum and quantization schemes are presented and their limits are outlined. The use of independent component analysis (ICA) for natural images is introduced. We show that ICA can be an efficient tool to estimate the quantization noise, which is by definition independent of the host signal. We present both a method for estimating the carrier and an attack that relies on the ICA decomposition of image patches; our attack scheme is also compared with another classical attack. The results reported in this paper demonstrate how changes in natural image statistics can be used to detect watermarks and devise attacks. Such natural-image-statistics-based attacks may pose a serious threat against watermarking schemes based on quantization techniques.

Patrick Bas, Jarmo Hurri

Session V: Special Session on Watermarking Security

A Survey of Watermarking Security

Digital watermarking studies have always been driven by the improvement of robustness. Most articles in this field deal with this criterion, presenting increasingly impressive experimental assessments. Some key events in this quest are the use of spread spectrum, the invention of resynchronization schemes, the discovery of the side-information channel, and the formulation of the opponent’s actions as a game.

Security, on the contrary, has received little attention in the watermarking community. This paper presents a comprehensive overview of this recent topic. We list the typical applications which require a secure watermarking technique. For each context, a threat analysis is proposed. This presentation allows us to illustrate the certainties the community has on the subject, browsing the key papers. The end of the paper is devoted to what remains unclear: intuitions and future studies.

Teddy Furon
Countermeasures for Collusion Attacks Exploiting Host Signal Redundancy

Multimedia digital data is highly redundant: successive video frames are very similar in a movie clip, most songs contain some repetitive patterns, etc. This property can consequently be exploited to successively replace each part of the signal with a similar one taken from another location in the same signal or with a combination of similar parts. Such an approach is all the more pertinent when video content is considered since such signals exhibit both temporal and spatial self-similarities. To counter such attacking strategies, it is necessary to ensure that embedded watermarks are coherent with the redundancy of the host content. To this end, both motion-compensated watermarking and self-similarities inheritance will be surveyed.

Gwenaël Doërr, Jean-Luc Dugelay
Fingerprinting Schemes. Identifying the Guilty Sources Using Side Information

In a fingerprinting scheme a distributor places marks in each copy of a digital object. Placing different marks in different copies uniquely identifies the recipient of each copy, and therefore allows tracing the source of an unauthorized redistribution. A widely used approach to the fingerprinting problem is the use of error-correcting codes with a suitable minimum distance. With this approach, the set of marks embedded in a given copy is precisely a codeword of the error-correcting code. We present two different approaches that use side information for the tracing process. The first one uses the Guruswami-Sudan soft-decision list decoding algorithm and the second one a modified version of the Viterbi algorithm.

Miguel Soriano, Marcel Fernandez, Josep Cotrina
Practical Data-Hiding: Additive Attacks Performance Analysis

The main goal of this tutorial is to review the theory and design of the worst-case additive attack (WCAA) for $\mid{\mathcal{M}}\mid$-ary quantization-based data-hiding methods, using as performance criteria the error probability and the maximum achievable rate of reliable communications. Our analysis focuses on the practical scheme known as distortion-compensated dither modulation (DC-DM). From the mathematical point of view, the problem of designing the worst-case attack (WCA) with the probability of error as a cost function is formulated as the maximization of the average probability of error subject to the introduced distortion for a given decoding rule. When mutual information is selected as the cost function, a solution to the minimization problem should provide an attacking noise probability density function (pdf) that maximally decreases the rate of reliable communications for an arbitrary decoder structure. The obtained results demonstrate that, within the class of additive attacks, the developed attack leads to a stronger performance decrease for the considered class of embedding techniques than additive white Gaussian or uniform noise attacks.
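
For orientation, the scalar binary DC-DM embedder targeted by this kind of analysis can be sketched as follows; the quantization step, the compensation factor alpha and the key-dependent dither are placeholder parameters, and the scheme studied in the paper is M-ary rather than binary.

    import numpy as np

    def dcdm_embed(x, bits, delta, alpha, key_dither):
        # x, key_dither: 1-D arrays; bits: 0/1 array of the same length.
        d = key_dither + (delta / 2.0) * bits          # message-dependent dither
        q = delta * np.round((x - d) / delta) + d      # quantize onto the dithered lattice
        return x + alpha * (q - x)                     # move only a fraction alpha toward it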

J. E. Vila-Forcén, S. Voloshynovskiy, O. Koval, F. Pérez-González, T. Pun
The Return of the Sensitivity Attack

The sensitivity attack is considered a serious threat to the security of spread-spectrum-based schemes, since it provides a practical method of removing watermarks with minimum attacking distortion. This paper is intended as a tutorial on the problem, presenting an overview of previous research and introducing a new method based on a general formulation. The new method requires no knowledge of the detection function or of any other system parameter, only the binary output of the detector, making it suitable for attacking most known watermarking methods. Finally, the soundness of this new approach is tested by attacking several of those methods.

Pedro Comesaña, Luis Pérez-Freire, Fernando Pérez-González

Session VI: Watermarking of Unconventional Media

Look Up Table(LUT) Method for Halftone Image Watermarking

In this paper, we introduce a LUT-based watermarking method for halftone images. Watermark bits are hidden at pseudo-random locations of the halftone image during a halftoning process that is based on a look-up table (LUT). The pixel values of the halftone image are determined from the LUT entry indexed by both the neighboring halftone pixels and the current grayscale value. The LUT is trained on a set of grayscale images and their halftone counterparts. The advantage of the LUT method is that it executes very fast compared with other watermarking methods for halftone images, so the LUT watermarking algorithm can be embedded in a printer. Experiments using real scanned images show that the proposed method can hide a large amount of data within a halftone image without noticeable distortion, and that the watermark is robust to cropping and rotation.
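
A minimal sketch of the idea is given below: the output halftone pixel is read from a table indexed by an already-halftoned causal neighbourhood and the current grayscale value, except at pseudo-random positions where watermark bits override the table output. The 3-pixel neighbourhood shape, the table layout (8 x 256) and the key handling are simplified assumptions, not the paper's exact design, and LUT training is omitted.

    import numpy as np

    def lut_halftone(gray, lut, wm_bits, wm_locs):
        # gray: HxW uint8 image; lut: table of shape (8, 256) indexed by a 3-pixel causal
        # neighbourhood pattern and the current grayscale value; wm_locs: pixel positions
        # where the watermark bits override the table output.
        H, W = gray.shape
        out = np.zeros((H, W), dtype=np.uint8)
        forced = dict(zip(map(tuple, wm_locs), wm_bits))
        for i in range(H):
            for j in range(W):
                left = out[i, j - 1] if j > 0 else 0
                up = out[i - 1, j] if i > 0 else 0
                upleft = out[i - 1, j - 1] if i > 0 and j > 0 else 0
                pattern = left | (up << 1) | (upleft << 2)
                out[i, j] = forced.get((i, j), lut[pattern, gray[i, j]])
        return out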

InGook Chun
New Public-Key Authentication Watermarking for JBIG2 Resistant to Parity Attacks

An authentication watermark is hidden data inserted into an image that allows detecting any alteration made to the image. AWTs (Authentication Watermarking Techniques) normally make use of a secret- or public-key cryptographic cipher to compute the authentication signature of the image, and insert it into the image itself. Many previous public-key AWTs for uncompressed binary images can be attacked by an image adulterating technique named “parity attack.” JBIG2 is an international standard for compressing bi-level images (both lossy and lossless). The creation of secure AWTs for compressed binary images is an important practical problem; however, it seems that no AWT for JBIG2 resistant to parity attacks has ever been proposed. This paper proposes a new data-hiding method to embed information in the text region of JBIG2 files. We then use this technique to design a new AWT for JBIG2-encoded images resistant to parity attacks. Both the secret- and public-key versions of the proposed AWT are completely immune to parity attacks. Moreover, watermarked images are visually pleasant, without visible salt-and-pepper noise. Image authenticity verification can be performed on either the JBIG2 file itself or the binary image obtained by decoding the JBIG2 file.

Sergio Vicente Denser Pamboukian, Hae Yong Kim
Software Watermarking as a Proof of Identity: A Study of Zero Knowledge Proof Based Software Watermarking

Software watermarking has been proposed as a way to prove ownership of software intellectual property in order to contain software piracy. In this paper, we propose a novel watermarking technique based on Zero Knowledge Proofs. The advantages are multi-fold. The watermark recognizer can now be distributed publicly, which allows the watermark to serve as a proof of both authorship and authentication of the software. The watermark is revealed as a mathematical proof which varies with every run, instead of a fixed watermark string as in previous techniques. This watermarking scheme not only has a high degree of tamper resistance but also allows the protocol to point out the tampered subset of the embedded secret data. We present potential attacks on the protocol, discuss the strength of the watermarking scheme, and report empirical results based on our implementation.

Balaji Venkatachalam
Watermarking of 3D Irregular Meshes Based on Wavelet Multiresolution Analysis

In this paper, we propose a robust watermarking method for 3-D triangle surface meshes. Most previous methods based on wavelet analysis can process only semi-regular meshes. Our proposal can be applied to irregular as well as regular meshes by using a recently introduced irregular wavelet analysis scheme. The L2-norm of the wavelet coefficients is modified at various multiresolution levels to embed the watermark. We also introduce a vertex and face re-ordering process as pre-processing in both watermark embedding and extraction for robustness against connectivity reordering attacks. In addition, our proposal employs a blind watermark detection scheme, which can extract the watermark without reference to the cover mesh model. Through simulations, we show that our approach is robust against connectivity reordering as well as various kinds of geometrical attacks such as lossy compression and affine transforms.

Min-Su Kim, Sébastien Valette, Ho-Youl Jung, Rémy Prost

Session VII: Channel Coding and Watermarking

Digital Watermarking Robustness and Fragility Characteristics: New Modelling and Coding Influence

This paper introduces a new methodology for the design and analysis of digital watermarking systems which, from an information-theoretic point of view, incorporates robustness and fragility. The proposed methodology is developed by focusing on the probability of error versus watermark-to-noise ratio curve, which describes the technique’s performance, and on a scenario for coded techniques which takes into account not only the coding gain but also the robustness or fragility of the system. This new concept requires that the design of coded digital watermarking systems be revisited to also include the robustness and fragility requirements. Turbo codes, which meet these requirements well, can be used straightforwardly to construct robust watermarking systems. Fragile systems can also be constructed by introducing the idea of a polarization scheme. This new idea has allowed the implementation of hybrid techniques achieving fragility and robustness with a single watermark embedding. Moreover, we present (turbo) coded techniques which can also be used in a semi-fragile mode.

Marcos de Castro Pacitti, Weiler Alves Finamore
New Geometric Analysis of Spread-Spectrum Data Hiding with Repetition Coding, with Implications for Side-Informed Schemes

In this paper we initially provide a new geometric interpretation of additive and multiplicative spread-spectrum (SS) watermarking with repetition coding and ML decoding. The interpretation gives an intuitive rationale for why the multiplicative scheme performs better under additive independent attacks, and it is also used to produce a novel quantitative performance analysis. Furthermore, the geometric considerations which explain the advantages of multiplicative SS with repetition afford the proposal of a novel side-informed STDM-like method, which we name Sphere-hardening Dither Modulation (SHDM). This method is the side-informed counterpart of multiplicative SS with repetition coding, in the same sense that STDM is the side-informed counterpart of additive SS with repetition coding.

Félix Balado
Trellis-Coded Rational Dither Modulation for Digital Watermarking

Rational Dither Modulation has been proposed as an effective QIM watermarking algorithm which is robust against value-metric scaling. Invariance is obtained by quantizing a rational function of the host features instead of the features themselves. In this paper we propose a vector extension of the basic RDM scheme. Specifically, a sequence of feature ratios is quantized vectorially with the aid of a properly designed dirty trellis code. A fast sub-optimum embedding algorithm is proposed ensuring fast watermark insertion and good distortion properties. Preliminary results show that a significant advantage is obtained with respect to conventional RDM.
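
As background for the vector extension above, a scalar RDM-style embedder can be sketched as follows: each host sample is normalized by a function of the previously watermarked samples (here their mean absolute value over a sliding window, one common choice), the ratio is quantized with a bit-dependent dithered quantizer, and the result is scaled back, which is what makes the scheme invariant to a constant gain. The window length and step size are illustrative placeholders, not the parameters used in the paper.

    import numpy as np

    def rdm_embed(x, bits, delta=0.1, L=10):
        # x: host samples; bits: 0/1 message, one bit per sample after a warm-up of L samples.
        y = np.asarray(x, dtype=np.float64).copy()
        for k in range(L, len(x)):
            g = np.mean(np.abs(y[k - L:k])) + 1e-12    # normalizer built from past watermarked samples
            d = (delta / 2.0) * bits[k]                # bit-dependent dither
            r = x[k] / g                               # gain-invariant ratio
            y[k] = g * (delta * np.round((r - d) / delta) + d)
        return y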

A. Abrardo, M. Barni, F. Pérez-González, C. Mosquera

Session VIII: Theory

Closed-Form Formulas for Private Watermarking Capacities of Laplacian Sources with the Magnitude-Error Distortion Measure and Under Additive Attacks

Calculation of watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under fixed attacks is addressed. First, in the case of an additive Laplacian attack, a nice closed-form formula for the watermarking capacities is derived, which involves only the distortion level and the parameter of the Laplacian attack. Second, in the case of an arbitrary additive attack, a general, but slightly more complicated formula for the watermarking capacities is given. Finally, calculation of the joint compression and private watermarking rate region of Laplacian watermarking systems with an additive Laplacian attack is considered.

Wei Sun, En-hui Yang
Improved QIM Strategies for Gaussian Watermarking

This paper revisits the problem of watermarking a Gaussian host, where the embedder and attacker are subject to mean-squared distortion constraints. The worst (nonadditive) attack and unconstrained capacity have been identified in previous work. Here we constrain the encoding function to lie in a given family of encoding functions — such as spread-spectrum or fixed-dimensional Quantization Index Modulation (QIM), with or without time-sharing, with or without external dithering. This gives rise to the notion of constrained capacity. Several such families are considered in this paper, and the one that is best under the worst attack is identified for each admissible value of the watermark-to-noise ratio (WNR) and the noise-to-host ratio (NHR). With suitable improvements, even scalar QIM can outperform any (improved) spread-spectrum scheme, for any value of WNR and NHR. The remaining gap to unconstrained capacity can be bridged using higher-dimensional lattice QIM.

Pierre Moulin, Ying Wang
On the Achievable Rate of Side Informed Embedding Techniques with Steganographic Constraints

The development of watermarking schemes in the literature is generally guided by a power constraint on the watermark to be embedded into the host. In a steganographic framework there is an additional constraint on the embedding procedure. It states that, for a scheme to be undetectable by statistical means, the pdf of the host signal must be approximately or exactly equal to that of the stegotext. In this work we examine this additional constraint when coupled with DC-DM. An analysis of the embedding scheme Stochastic QIM, which automatically meets the condition under certain assumptions, is presented and finally the capacity of the steganographic channel is examined.

Mark T. Hogan, Félix Balado, Neil J. Hurley, Guénolé C. M. Silvestre
Performance Lower Bounds for Existing and New Uncoded Digital Watermarking Modulation Techniques

The development of many coded digital watermarking systems requires first the selection of an (uncoded) modulation technique to be part of a coded architecture. Therefore, performance bounds for uncoded techniques are an important tool for coded system optimization aiming at operation close to capacity. This paper introduces a new performance lower bound for uncoded binary watermarking modulation techniques, based on a simple equivalence with a binary communication system and considering an additive Gaussian attack model. Compared with other results, the proposed performance lower bound is more accurate and general. New M-ary unidimensional and multidimensional spread-spectrum based modulation techniques are introduced, including their improved forms. The performance of the proposed techniques is determined, and the performance lower bounds for the corresponding classes of techniques are determined as well.

Marcos de Castro Pacitti, Weiler Alves Finamore

Session IX: Watermarking II

Evaluation of Feature Extraction Techniques for Robust Watermarking

This paper addresses feature extraction techniques for robust watermarking. Geometric distortion attacks desynchronize the location of the inserted watermark and hence prevent watermark detection. Watermark synchronization, the process of finding the location for watermark insertion and detection, is therefore crucial for designing robust watermarking. One solution is to use image features. This paper reviews feature extraction techniques that have been used in feature-based watermarking: the Harris corner detector and the Mexican Hat wavelet scale interaction method. We also evaluate the scale-invariant keypoint extractor in comparison with these techniques from a watermarking perspective. After feature extraction, a set of triangles is generated by Delaunay tessellation; these triangles are the locations for watermark insertion and detection. The re-detection ratio of the triangles is evaluated against geometric distortion attacks as well as signal processing attacks. Experimental results show that the scale-invariant keypoint extractor is well suited for robust watermarking.
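
The synchronization step described above can be illustrated with a short sketch: given feature points extracted from the image (here stand-in coordinates; the paper compares Harris, Mexican Hat wavelet and scale-invariant keypoint extractors), a Delaunay tessellation yields the triangles used as embedding and detection regions.

    import numpy as np
    from scipy.spatial import Delaunay

    points = np.random.rand(50, 2) * 512               # stand-in keypoint coordinates (x, y)
    tri = Delaunay(points)
    triangles = points[tri.simplices]                  # (num_triangles, 3, 2) vertex coordinates
    print("candidate embedding regions:", len(triangles))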

Hae-Yeoun Lee, In Koo Kang, Heung-Kyu Lee, Young-Ho Suh
Perceptual Video Watermarking in the 3D-DWT Domain Using a Multiplicative Approach

A video watermarking method operating in the three-dimensional discrete wavelet transform (3D DWT) domain and relying on a novel video perceptual mask, applied in the 3D DWT domain, is proposed here. Specifically, the method partitions the video sequence into spatio-temporal units of fixed length; the video shots then undergo a one-level 3D DWT. The mark is embedded by means of a multiplicative approach using perceptual masking on the 3D DWT coefficients in order to trade off the mark's robustness against its imperceptibility. The proposed mask takes into account the spatio-temporal frequency content by means of the spatio-temporal contrast sensitivity function, the luminance, and the variance of the 3D subbands which host the mark. The effectiveness of the proposed mask is verified experimentally, guaranteeing a high imperceptibility of the mark. Moreover, experimental results show the robustness of the proposed approach against MPEG-2 compression, MPEG-4 compression, gain attacks, collusion, and transcoding.

Patrizio Campisi, Alessandro Neri
Robustness Enhancement of Content-Based Watermarks Using Entropy Masking Effect

Image-adaptive watermarking systems exploit visual models to adapt the watermark to local properties of the host image. This leads to a watermark power enhancement, and hence improved resilience against different attacks, while keeping the mark imperceptible. Visual models consider different properties of the human visual system, such as frequency sensitivity, luminance sensitivity and contrast masking. Entropy masking is another characteristic of the human visual system which has rarely been addressed in visual models. In this paper we utilize this masking effect to improve the robustness of image-adaptive watermarks while keeping their transparency. Experimental results show a significant enhancement of the watermark power. The work is extended to video watermarking, considering special properties of the entropy masking effect.

Amir Houman Sadr, Shahrokh Ghaemmaghami

Session X: Applications

Secure Mutual Distrust Transaction Tracking Using Cryptographic Elements

This paper presents a novel approach to secure transaction tracking. The focus of the proposed scheme is on preventing insider attacks, which are particularly prevalent in multimedia transactions, assuming both parties involved in a transaction are mutually distrustful. To achieve authentication and non-repudiation, the proposed system, called staining, is composed of two key components: public-key cryptography and basic watermarking. The concept is to watermark after encryption, thereby introducing a stain on the watermark due to decryption. Watermarking and cryptography are not usually combined in such a manner, due to several issues involved, which are also discussed.

Angela S. L. Wong, Matthew Sorell, Robert Clarke
ViWiD : Visible Watermarking Based Defense Against Phishing

In this paper, we present a watermarking based approach, and its implementation, for mitigating phishing attacks – a form of web based identity theft. ViWiD is an integrity check mechanism based on visible watermarking of logo images. ViWiD performs all of the computation on the company’s web server and it does not require installation of any tool or storage of any data, such as keys or history logs, on the user’s machine. The watermark message is designed to be unique for every user and carries a shared secret between the company and the user in order to thwart the “one size fits all” attacks. The main challenge in visible watermarking of logo images is to maintain the aesthetics of the watermarked logo to avoid damage to its marketing purpose yet be able to insert a robust and readable watermark into it. Logo images have large uniform areas and very few objects in them, which is a challenge for robust visible watermarking. We tested our scheme with two different visible watermarking techniques on various randomly selected logo images.

Mercan Topkara, Ashish Kamra, Mikhail J. Atallah, Cristina Nita-Rotaru
Backmatter
Metadata

Title: Digital Watermarking
Editors: Mauro Barni, Ingemar Cox, Ton Kalker, Hyoung-Joong Kim
Copyright Year: 2005
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-540-32052-4
Print ISBN: 978-3-540-28768-1
DOI: https://doi.org/10.1007/11551492
