Automatic Assessment of Depression and Anxiety through Encoding Pupil-wave from HCI in VR Scenes

Published: 25 September 2023

Abstract

Many studies have used deep learning regression models to assess depression severity from behavioral signals such as facial expression, speech, and language; by contrast, deep-learning-based assessment of anxiety severity remains largely unexplored. In this article, the pupil-wave, a physiological signal collected through human-computer interaction (HCI) that directly reflects emotional state, is used for the first time to assess both depression and anxiety levels. To distinguish between different depression and anxiety levels, we induce emotional experiences in participants through three virtual reality (VR) scenes (joyful, sad, and calm) and construct two differential pupil-waves, joyful and sad, using the calm pupil-wave as the baseline. Correspondingly, we build a dual-channel fusion model for depression and anxiety level assessment from an improved multi-scale convolution module and our proposed width-channel attention module for one-dimensional signal processing. Test results show that the proposed method achieves an MAE/RMSE of 3.05/4.11 for depression and 2.49/1.85 for anxiety, outperforming related methods. This study provides an automatic assessment technique based on human-computer interaction and virtual reality for mental health screening.
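As a hedged illustration of the pipeline the abstract sketches, the PyTorch code below shows one plausible shape for it: differential pupil-waves formed against the calm baseline, a multi-scale 1D convolution block, a squeeze-and-excitation-style channel attention module standing in for the paper's width-channel attention module (whose exact design is not given here), and a dual-channel fusion head for severity regression. All class names, kernel widths, and the signal length are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch (PyTorch): names, kernel widths, and signal length
# are illustrative assumptions, not the paper's published architecture.
import torch
import torch.nn as nn


def differential_wave(emotional: torch.Tensor, calm: torch.Tensor) -> torch.Tensor:
    """Differential pupil-wave: emotional response minus the calm baseline."""
    return emotional - calm


class MultiScaleConv1d(nn.Module):
    """Parallel 1D convolutions at several kernel widths, concatenated on channels."""
    def __init__(self, in_ch: int, out_ch: int, kernels=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernels]
        )

    def forward(self, x):                       # x: (batch, in_ch, length)
        return torch.cat([b(x) for b in self.branches], dim=1)


class ChannelAttention1d(nn.Module):
    """Squeeze-and-excitation-style channel attention for 1D features,
    used here as a stand-in for the paper's width-channel attention module."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, ch, length)
        weights = self.fc(x.mean(dim=-1))       # global average pool -> per-channel weights
        return x * weights.unsqueeze(-1)


class DualChannelRegressor(nn.Module):
    """One branch per differential wave (joyful, sad), fused for severity regression."""
    def __init__(self, feat_ch: int = 16):
        super().__init__()

        def branch():
            return nn.Sequential(
                MultiScaleConv1d(1, feat_ch),           # -> 3 * feat_ch channels
                ChannelAttention1d(3 * feat_ch),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),  # -> (batch, 3 * feat_ch)
            )

        self.joy, self.sad = branch(), branch()
        self.head = nn.Linear(2 * 3 * feat_ch, 1)       # predicted severity score

    def forward(self, joy_wave, sad_wave):
        fused = torch.cat([self.joy(joy_wave), self.sad(sad_wave)], dim=1)
        return self.head(fused)


model = DualChannelRegressor()
joy = torch.randn(8, 1, 512)   # 8 participants, 512-sample joyful differential waves
sad = torch.randn(8, 1, 512)
scores = model(joy, sad)       # (8, 1) predicted depression or anxiety scores
```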
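The reported MAE and RMSE are the standard regression errors between predicted and ground-truth scale scores; a minimal computation, with hypothetical values, is:

```python
import torch


def mae_rmse(pred: torch.Tensor, target: torch.Tensor) -> tuple[float, float]:
    """Mean absolute error and root-mean-square error over severity scores."""
    err = pred - target
    return err.abs().mean().item(), err.pow(2).mean().sqrt().item()


# Hypothetical predicted vs. true questionnaire scores for five participants.
mae, rmse = mae_rmse(torch.tensor([3.0, 8.0, 15.0, 6.0, 11.0]),
                     torch.tensor([4.0, 7.0, 13.0, 6.0, 12.0]))
```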



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 20, Issue 2
  February 2024
  548 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3613570
  • Editor: Abdulmotaleb El Saddik

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 25 September 2023
      • Online AM: 30 April 2022
      • Accepted: 22 January 2022
      • Revised: 21 December 2021
      • Received: 15 October 2021
Published in TOMM Volume 20, Issue 2


      Qualifiers

      • research-article
