Abstract
Traditional comics are increasingly being augmented with digital effects such as recoloring, stereoscopy, and animation. An open question in this endeavor is identifying where in a comic panel such effects should be placed. We propose a fast, semi-automatic technique that identifies effects-worthy segments in a comic panel by using gaze locations as a proxy for the importance of a region. We take advantage of the fact that comic artists deliberately direct viewer gaze toward narratively important regions. By capturing gaze locations from multiple viewers, we can identify important regions and direct a computer vision segmentation algorithm to extract them. The challenge is that these gaze data are noisy and difficult to process. Our key contribution is to leverage a theoretical breakthrough from the computer networks community to cluster gaze locations into semantic regions robustly and meaningfully, without requiring the user to specify the number of clusters. We present a method, based on the concept of relative eigen quality, that takes a scanned comic image and a set of gaze points and produces an image segmentation. We demonstrate a variety of effects, such as defocus, recoloring, stereoscopy, and animation. We also investigate substituting artificially generated gaze locations from saliency models for actual gaze locations.
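The clustering step described above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it shows the general spectral approach under stated assumptions: gaze points are clustered through a Gaussian-affinity graph, and the number of clusters is chosen from the spectrum of the normalized Laplacian via the largest eigengap, a simple stand-in for the relative eigenvalue quality criterion of Shea and Macker (2013). The function name `cluster_gaze` and the parameters `sigma` and `max_k` are illustrative choices, not names from the paper.

```python
import numpy as np

def cluster_gaze(points, sigma=30.0, max_k=8):
    """Cluster 2D gaze points without a preset cluster count.

    Builds a Gaussian-affinity graph over the gaze points, then picks the
    cluster count k from the eigenvalues of the symmetric normalized
    Laplacian using the largest eigengap (an eigengap heuristic standing
    in for relative eigenvalue quality).
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Pairwise Gaussian affinities; sigma sets the spatial scale (pixels).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    dinv = 1.0 / np.sqrt(w.sum(1) + 1e-12)
    lap = np.eye(n) - dinv[:, None] * w * dinv[None, :]
    evals, evecs = np.linalg.eigh(lap)
    # The largest gap among the smallest eigenvalues suggests k.
    k = int(np.argmax(np.diff(evals[: max_k + 1]))) + 1
    # Embed points in the first k eigenvectors, then run a tiny k-means
    # with deterministic farthest-point initialization.
    emb = evecs[:, :k]
    centers = [emb[0]]
    for _ in range(1, k):
        dist = ((emb[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(emb[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(50):
        labels = np.argmin(((emb[:, None] - centers[None]) ** 2).sum(-1), 1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = emb[labels == j].mean(0)
    return k, labels
```

For well-separated fixation groups, the Laplacian is nearly block-diagonal, so the eigengap cleanly recovers the number of groups; the resulting cluster centers could then seed a segmentation algorithm on the panel image.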
Supplemental Material
Supplemental movie, appendix, image, and software files for "Creating Segments and Effects on Comics by Clustering Gaze Data" are available for download.