ABSTRACT
There has been growing interest in using convolutional neural networks (CNNs) for image forensics and steganalysis, and some promising results have been reported recently. These works mainly focus on the architectural design of CNNs; usually, a single CNN model is trained and then tested in experiments. It is well known that neural networks, including CNNs, lend themselves to forming ensembles. From this perspective, in this paper we employ CNNs as base learners and test several different ensemble strategies. In our study, a recently proposed CNN architecture is first adopted to build a group of CNNs, each trained on a random subsample of the training dataset. The output probabilities, or some intermediate feature representations, of each CNN are then extracted from the original data and pooled together to form new features for a second level of classification. To make the best use of the trained CNN models, we partially recover the information lost to spatial subsampling in the pooling layers when forming the feature vectors. The performance of the ensemble methods is evaluated on BOSSbase by detecting S-UNIWARD at a 0.4 bpp embedding rate. The results indicate that both recovering the lost information and learning from intermediate CNN representations instead of output probabilities lead to performance improvements.
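The following is a minimal sketch of the stacking idea described above, not the authors' implementation (which is based on Caffe and the Xu et al. architecture): several CNN base learners are each trained on a random subsample of the training set, their intermediate feature representations are pooled into a single feature vector, and a second-level classifier is trained on that vector. The toy network shape, the synthetic data, and the logistic-regression meta-classifier are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class SmallCNN(nn.Module):
    """Toy stand-in for a steganalysis CNN (cover vs. stego)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)

    def forward(self, x, return_features=False):
        f = self.features(x).flatten(1)   # intermediate representation
        return f if return_features else self.classifier(f)

def train_base_learner(images, labels, epochs=2):
    """Train one CNN on a bootstrap subsample of the training data."""
    idx = np.random.choice(len(images), size=len(images), replace=True)
    x, y = images[idx], labels[idx]
    net = SmallCNN()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    return net

# Synthetic stand-in data: 64 single-channel 32x32 "images".
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,))

# 1) Train several base CNNs on random subsamples (bagging-style).
base_learners = [train_base_learner(images, labels) for _ in range(3)]

# 2) Pool intermediate features (or output probabilities) from all CNNs.
with torch.no_grad():
    pooled = torch.cat([net(images, return_features=True) for net in base_learners], dim=1)

# 3) Second-level classifier on the pooled representation.
meta = LogisticRegression(max_iter=1000).fit(pooled.numpy(), labels.numpy())
print("training accuracy of the stacked ensemble:", meta.score(pooled.numpy(), labels.numpy()))
```

In the same spirit, the paper's recovery of information lost to spatial subsampling would correspond to reading the feature maps before the final pooling stage rather than after it; the sketch above only illustrates the general two-level ensemble structure.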