2012 | Original Paper | Book Chapter
Modern Trends in Steganography and Steganalysis
Author: Jessica Fridrich
Published in: Digital Forensics and Watermarking
Publisher: Springer Berlin Heidelberg
Only recently have researchers working in steganography realized how much the assumptions made about the cover source and about the information available to Alice, Bob, and the Warden influence the most fundamental aspects of the field, including how a steganographic system is built and broken and how much information can be securely embedded in a given object. While the problem of embedding messages undetectably has been resolved for simple artificial sources, it remains wide open for empirical covers, such as digital images, video, and audio. The fact that empirical media is fundamentally incognizable brings serious complications, but it also gives researchers plenty of opportunities to uncover interesting and sometimes quite surprising results. One example is the square root law of imperfect steganography, which states that the secure payload in empirical objects grows only with the square root of the cover size.

Since steganographic methods designed to be undetectable with respect to a given model are usually easy to attack by going outside of the model, modern steganography works with complex cover models within which the embedding distortion is minimized, in the hope that the Warden will find it difficult to work "outside of the model." The problem of cover-source modeling is equally important in steganalysis. However, while working with complex models is feasible in steganography, learning the relationship between cover and stego objects in a high-dimensional model space can be quite challenging due to the rapidly increasing complexity of classifier training, the lack of training data, and the loss of robustness.

In my talk, I will provide a retrospective view of the field and point out some of the recent achievements, as well as bottlenecks for future development, in source-model building and machine learning.
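The square root law admits a compact formal statement. The following paraphrase is my wording of the published form of the law (Filler, Ker, and Fridrich); the symbols n and m_n are introduced here for illustration and do not appear in the abstract. For covers of n elements embedded by an imperfect stegosystem:

\[
  \lim_{n\to\infty}\frac{m_n}{\sqrt{n}} = 0 \;\Longrightarrow\; \text{the payload of } m_n \text{ bits is asymptotically undetectable},
  \qquad
  \lim_{n\to\infty}\frac{m_n}{\sqrt{n}} = \infty \;\Longrightarrow\; \text{the Warden detects it with probability approaching } 1.
\]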
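To build intuition for why the scaling is a square root, here is a toy Monte Carlo sketch. It is my own construction for illustration only, not a model from the talk: the cover is i.i.d. N(0,1) noise and embedding adds +1 to m of the n samples, so the sum of a cover is N(0, n), the sum of a stego object is N(m, n), and a Warden thresholding the sum at m/2 is the minimum-error detector for equal priors.

import numpy as np

rng = np.random.default_rng(0)

def detection_accuracy(n, m, trials=100_000):
    # The sum of n i.i.d. N(0,1) cover samples is N(0, n); sample it
    # directly instead of materializing n-element covers.
    cover_sums = rng.normal(0.0, np.sqrt(n), trials)  # no payload
    stego_sums = cover_sums + m                       # +1 at m positions shifts the sum by m
    thr = m / 2.0                                     # midpoint threshold
    return ((cover_sums < thr).mean() + (stego_sums >= thr).mean()) / 2

for n in (10_000, 40_000, 160_000):
    m_sqrt = int(2 * np.sqrt(n))  # payload growing as sqrt(n)
    m_lin = n // 100              # payload growing linearly (1% of cover size)
    print(f"n={n:>7}: sqrt-payload accuracy={detection_accuracy(n, m_sqrt):.3f}, "
          f"linear-payload accuracy={detection_accuracy(n, m_lin):.3f}")

The detector's deflection in this model is m/sqrt(n), so the square-root payload keeps the Warden's accuracy pinned near 0.84 at every n, while the fixed-rate payload drives it toward 1: exactly the dichotomy the law describes.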
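The phrase "embedding distortion is minimized" also has a standard concrete form that a short sketch can illustrate. In the additive-distortion framework from the literature (my sketch under that framework's assumptions; the costs are synthetic), each cover element i carries a cost rho_i of being changed, and the change probabilities that maximize payload for a given expected distortion have the Gibbs form p_i = 1/(1 + exp(lambda * rho_i)), with lambda tuned to hit the target payload:

import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

def binary_entropy(p):
    # Entropy in bits of a Bernoulli(p) change, clipped for numerical stability.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def change_probs(lmbda, rho):
    # Gibbs-form optimal change probabilities: 1 / (1 + exp(lmbda * rho)).
    return expit(-lmbda * rho)

rho = np.abs(np.random.default_rng(1).standard_normal(10_000))  # synthetic costs
target_bits = 2_000.0

# The total payload sum_i H(p_i) decreases monotonically in lambda,
# so a bracketed root finder recovers the lambda matching the target.
lmbda = brentq(lambda l: binary_entropy(change_probs(l, rho)).sum() - target_bits,
               1e-6, 100.0)
p = change_probs(lmbda, rho)
print(f"lambda={lmbda:.3f}  payload={binary_entropy(p).sum():.0f} bits  "
      f"expected distortion={(p * rho).sum():.1f}")

In practice, near-optimal coding schemes such as syndrome-trellis codes realize this bound; the sketch only computes the payload-distortion operating point, not an actual embedding.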