Published in: Cognitive Computation 5/2021

27.09.2021

Training Affective Computer Vision Models by Crowdsourcing Soft-Target Labels

Authors: Peter Washington, Haik Kalantarian, Jack Kent, Arman Husic, Aaron Kline, Emilie Leblanc, Cathy Hou, Cezmi Mutlu, Kaitlyn Dunlap, Yordan Penev, Nate Stockham, Brianna Chrisman, Kelley Paskov, Jae-Yoon Jung, Catalin Voss, Nick Haber, Dennis P. Wall



Abstract

Emotion detection classifiers traditionally predict discrete emotions. However, emotion expressions are often subjective, requiring a method to handle compound and ambiguous labels. We explore the feasibility of using crowdsourcing to acquire reliable soft-target labels and evaluate an emotion detection classifier trained with these labels. We hypothesize that training with labels representative of the diversity of human interpretation of an image will yield predictions that are similarly representative on a disjoint test set, and that crowdsourcing can generate distributions mirroring those generated in a lab setting. We center our study on the Child Affective Facial Expression (CAFE) dataset, a gold-standard collection of images depicting pediatric facial expressions, each accompanied by 100 human labels. To test the feasibility of crowdsourcing for generating these labels, we used Microworkers to acquire labels for 207 CAFE images, evaluating both unfiltered workers and workers selected through a short crowd filtration process. We then train two versions of a ResNet-152 neural network on CAFE labels derived from the original 100 annotations provided with the dataset: (1) a classifier trained with traditional one-hot encoded labels and (2) a classifier trained with vector labels representing the distribution of CAFE annotator responses. We compare the resulting softmax output distributions of the two classifiers with a 2-sample independent t-test of L1 distances between each classifier's output probability distribution and the distribution of human labels. While agreement with CAFE is weak for unfiltered crowd workers, the filtered crowd agrees with the CAFE labels 100% of the time for happy, neutral, sad, and "fear + surprise," and 88.8% of the time for "anger + disgust." While the F1-score of the one-hot encoded classifier is much higher with respect to the ground-truth CAFE labels (94.33% vs. 78.68%), the output probability vector of the crowd-trained classifier more closely resembles the distribution of human labels (t = 3.2827, p = 0.0014). For many applications of affective computing, reporting an emotion probability distribution that accounts for the subjectivity of human interpretation can be more useful than an absolute label. Crowdsourcing, paired with a sufficient filtering mechanism for selecting reliable crowd workers, is a feasible solution for acquiring soft-target labels.
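The two label encodings compared in the abstract can be sketched as follows; the emotion categories match the groupings named above, but the annotator counts are illustrative, not taken from CAFE. Standard categorical cross-entropy accepts either encoding as a target, which is what makes soft-target training a drop-in change:

```python
import numpy as np

# Emotion groupings used in the study (order here is an assumption).
EMOTIONS = ["happy", "neutral", "sad", "fear+surprise", "anger+disgust"]

def soft_target(counts):
    """Turn raw annotator counts into a probability vector (soft target)."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def one_hot(counts):
    """Traditional hard label: 1 at the modal emotion, 0 elsewhere."""
    v = np.zeros(len(counts))
    v[int(np.argmax(np.asarray(counts)))] = 1.0
    return v

def cross_entropy(target, softmax_out):
    """Categorical cross-entropy; works identically for hard and soft targets."""
    return float(-np.sum(np.asarray(target) * np.log(np.asarray(softmax_out))))

# Hypothetical image: 100 annotators split mainly between "sad" and "anger+disgust".
counts = [2, 5, 58, 3, 32]
print(soft_target(counts))  # [0.02 0.05 0.58 0.03 0.32]
print(one_hot(counts))      # [0. 0. 1. 0. 0.]
```

Training the soft-label classifier then amounts to minimizing `cross_entropy(soft_target(counts), model_output)` instead of the one-hot variant, with no change to the network architecture.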
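The evaluation described above computes, for each test image, the L1 distance between a classifier's softmax output and the human label distribution, then applies a 2-sample independent t-test to the two classifiers' distance sets. A minimal sketch with synthetic stand-in distributions (not the actual CAFE data or model outputs):

```python
import numpy as np
from scipy.stats import ttest_ind

def l1_distance(pred, human):
    """Sum of absolute differences between two probability vectors."""
    return float(np.abs(np.asarray(pred) - np.asarray(human)).sum())

rng = np.random.default_rng(0)
# Stand-in human label distributions for 50 test images, 5 emotion classes.
human = rng.dirichlet(np.ones(5), size=50)
# Soft-trained model: outputs near the human distribution (small perturbation).
soft_preds = human + rng.normal(0.0, 0.02, human.shape)
# One-hot-trained model: confident point mass on the modal emotion.
hard_preds = np.eye(5)[human.argmax(axis=1)]

d_soft = [l1_distance(p, h) for p, h in zip(soft_preds, human)]
d_hard = [l1_distance(p, h) for p, h in zip(hard_preds, human)]

# 2-sample independent t-test on the per-image L1 distances.
t, p = ttest_ind(d_soft, d_hard)
print(f"mean L1 soft={np.mean(d_soft):.3f}, hard={np.mean(d_hard):.3f}, p={p:.2e}")
```

With these synthetic inputs the soft-prediction distances are far smaller than the one-hot distances, mirroring the paper's finding that the crowd-trained classifier's outputs sit closer to the human label distribution.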


Literatur
1.
Zurück zum Zitat Cambria E, Dipankar D, Sivaji B, Antonio F. Affective computing and sentiment analysis. In A practical guide to sentiment analysis, pp. 1-10. Springer, Cham, 2017. Cambria E, Dipankar D, Sivaji B, Antonio F. Affective computing and sentiment analysis. In A practical guide to sentiment analysis, pp. 1-10. Springer, Cham, 2017.
2.
Zurück zum Zitat Hupont I, Sandra B, Eva C, Rafael DH. Advanced human affect visualization. In 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 2700-2705. IEEE, 2013. Hupont I, Sandra B, Eva C, Rafael DH. Advanced human affect visualization. In 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 2700-2705. IEEE, 2013.
3.
Zurück zum Zitat Jerauld R. Wearable emotion detection and feedback system. U.S. Patent 9,019,174, issued April 28, 2015. Jerauld R. Wearable emotion detection and feedback system. U.S. Patent 9,019,174, issued April 28, 2015.
4.
Zurück zum Zitat Liu R, Salisbury JP, Vahabzadeh A, Sahin NT. Feasibility of an autism-focused augmented reality smartglasses system for social communication and behavioral coaching. Front Pediatr. 2017;5:145.CrossRef Liu R, Salisbury JP, Vahabzadeh A, Sahin NT. Feasibility of an autism-focused augmented reality smartglasses system for social communication and behavioral coaching. Front Pediatr. 2017;5:145.CrossRef
5.
Zurück zum Zitat Völkel ST, Julia G, Ramona S, Renate H, Clemens S, Quay A, Heinrich H. I drive my car and my states drive me: visualizing driver’s emotional and physical states. In Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 198-203. 2018. Völkel ST, Julia G, Ramona S, Renate H, Clemens S, Quay A, Heinrich H. I drive my car and my states drive me: visualizing driver’s emotional and physical states. In Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 198-203. 2018.
6.
Zurück zum Zitat Kaur R, Sandeep K. Multimodal sentiment analysis: a survey and comparison.International Journal of Service Science, Management, Engineering, and Technology (IJSSMET). 2019;10(2):38-58. Kaur R, Sandeep K. Multimodal sentiment analysis: a survey and comparison.International Journal of Service Science, Management, Engineering, and Technology (IJSSMET). 2019;10(2):38-58.
7.
Zurück zum Zitat Poria S, Erik C, Alexander G. Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 2539-2544. 2015. Poria S, Erik C, Alexander G. Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of the 2015 conference on empirical methods in natural language processing, pp. 2539-2544. 2015.
8.
Zurück zum Zitat Poria S, Cambria E, Howard N, Huang G-B, Hussain A. Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing. 2016;174:50–9.CrossRef Poria S, Cambria E, Howard N, Huang G-B, Hussain A. Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing. 2016;174:50–9.CrossRef
9.
Zurück zum Zitat Tahir M, Abdallah T, Feras AO, Babar S, Zahid H, Muhammad W. A novel binary chaotic genetic algorithm for feature selection and its utility in affective computing and healthcare. Neur Comp Appl. 2020:1-22. Tahir M, Abdallah T, Feras AO, Babar S, Zahid H, Muhammad W. A novel binary chaotic genetic algorithm for feature selection and its utility in affective computing and healthcare. Neur Comp Appl. 2020:1-22.
10.
Zurück zum Zitat Yannakakis GN. Enhancing health care via affective computing. 2018. Yannakakis GN. Enhancing health care via affective computing. 2018.
11.
Zurück zum Zitat Eyben F, Martin W, Tony P, Björn S, Christoph B, Berthold F, Nhu NT. Emotion on the road—necessity, acceptance, and feasibility of affective computing in the car. Advances in human-computer interaction. 2010. Eyben F, Martin W, Tony P, Björn S, Christoph B, Berthold F, Nhu NT. Emotion on the road—necessity, acceptance, and feasibility of affective computing in the car. Advances in human-computer interaction. 2010.
12.
Zurück zum Zitat Devillers L, Vidrascu L, Lamel L. Challenges in real-life emotion annotation and machine learning based detection. Neural Netw. 2005;18(4):407–22.CrossRef Devillers L, Vidrascu L, Lamel L. Challenges in real-life emotion annotation and machine learning based detection. Neural Netw. 2005;18(4):407–22.CrossRef
13.
Zurück zum Zitat Zhang L, Steffen W, Xueyao M, Philipp W, Ayoub AH, Harald CT, Sascha G. BioVid Emo DB: a multimodal database for emotion analyses validated by subjective ratings. In2016 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1-6. IEEE, 2016. Zhang L, Steffen W, Xueyao M, Philipp W, Ayoub AH, Harald CT, Sascha G. BioVid Emo DB: a multimodal database for emotion analyses validated by subjective ratings. In2016 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1-6. IEEE, 2016.
15.
Zurück zum Zitat Magdin M, Prikler F. Real time facial expression recognition using webcam and SDK affectiva. IJIMAI5. 2018;1:7-15. Magdin M, Prikler F. Real time facial expression recognition using webcam and SDK affectiva. IJIMAI5. 2018;1:7-15.
16.
Zurück zum Zitat McDuff D, Rana K, Thibaud S, May A, Jeffrey C, Rosalind P. Affectiva-mit facial expression dataset (am-fed): Naturalistic and spontaneous facial expressions collected. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 881-888. 2013. McDuff D, Rana K, Thibaud S, May A, Jeffrey C, Rosalind P. Affectiva-mit facial expression dataset (am-fed): Naturalistic and spontaneous facial expressions collected. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 881-888. 2013.
17.
Zurück zum Zitat Ando A, Satoshi K, Hosana K, Ryo M, Yusuke I, Yushi A. Soft-target training with ambiguous emotional utterances for DNN-based speech emotion classification. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2018. pp. 4964-4968. IEEE. Ando A, Satoshi K, Hosana K, Ryo M, Yusuke I, Yushi A. Soft-target training with ambiguous emotional utterances for DNN-based speech emotion classification. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2018. pp. 4964-4968. IEEE.
18.
Zurück zum Zitat Fang X, Jiancheng Y, Bingbing N. Stochastic label refinery: toward better target label distribution. In 2020 25th International Conference on Pattern Recognition (ICPR), pp. 9115-9121. IEEE, 2021. Fang X, Jiancheng Y, Bingbing N. Stochastic label refinery: toward better target label distribution. In 2020 25th International Conference on Pattern Recognition (ICPR), pp. 9115-9121. IEEE, 2021.
19.
Zurück zum Zitat Yin Da, Liu X, Xiuyu Wu, Chang B. A soft label strategy for target-level sentiment classification. Sentiment and Social Media Analysis: In Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity; 2019. p. 6–15. Yin Da, Liu X, Xiuyu Wu, Chang B. A soft label strategy for target-level sentiment classification. Sentiment and Social Media Analysis: In Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity; 2019. p. 6–15.
20.
Zurück zum Zitat Turing AM. Computing machinery and intelligence. In Parsing the turing test, pp. 23-65. Springer, Dordrecht, 2009. Turing AM. Computing machinery and intelligence. In Parsing the turing test, pp. 23-65. Springer, Dordrecht, 2009.
21.
Zurück zum Zitat Zeng Z, Jilin Tu, Liu M, Huang TS, Pianfetti B, Roth D, Levinson S. Audio-visual affect recognition. IEEE Trans Multimedia. 2007;9(2):424–8.CrossRef Zeng Z, Jilin Tu, Liu M, Huang TS, Pianfetti B, Roth D, Levinson S. Audio-visual affect recognition. IEEE Trans Multimedia. 2007;9(2):424–8.CrossRef
22.
Zurück zum Zitat Kratzwald B, Ilić S, Kraus M, Feuerriegel S, Prendinger H. Deep learning for affective computing: text-based emotion recognition in decision support. Decis Support Syst. 2018;115:24–35.CrossRef Kratzwald B, Ilić S, Kraus M, Feuerriegel S, Prendinger H. Deep learning for affective computing: text-based emotion recognition in decision support. Decis Support Syst. 2018;115:24–35.CrossRef
23.
Zurück zum Zitat Tao J, Tieniu T. Affective computing: a review. In International Conference on Affective computing and intelligent interaction, pp. 981-995. Springer, Berlin, Heidelberg, 2005. Tao J, Tieniu T. Affective computing: a review. In International Conference on Affective computing and intelligent interaction, pp. 981-995. Springer, Berlin, Heidelberg, 2005.
24.
Zurück zum Zitat Haber N, Catalin V, Azar F, Terry W, Dennis PW. A practical approach to real-time neutral feature subtraction for facial expression recognition. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1-9. IEEE, 2016. Haber N, Catalin V, Azar F, Terry W, Dennis PW. A practical approach to real-time neutral feature subtraction for facial expression recognition. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1-9. IEEE, 2016.
25.
Zurück zum Zitat Voss C, Jessey S, Jena D, Aaron K, Nick H, Peter W, Qandeel T et al. Effect of wearable digital intervention for improving socialization in children with autism spectrum disorder: a randomized clinical trial. JAMA pediatrics. 2019;173(5):446-454. Voss C, Jessey S, Jena D, Aaron K, Nick H, Peter W, Qandeel T et al. Effect of wearable digital intervention for improving socialization in children with autism spectrum disorder: a randomized clinical trial. JAMA pediatrics. 2019;173(5):446-454.
26.
Zurück zum Zitat Voss C, Peter W, Nick H, Aaron K, Jena D, Azar F, Titas D et al. Superpower glass: delivering unobtrusive real-time social cues in wearable systems. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, pp. 1218-1226. 2016. Voss C, Peter W, Nick H, Aaron K, Jena D, Azar F, Titas D et al. Superpower glass: delivering unobtrusive real-time social cues in wearable systems. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, pp. 1218-1226. 2016.
27.
Zurück zum Zitat Dinculescu A, Andra B, Carmen S, Livia P, Cristian V, Alexandru M, Nicoară T, Vlad V. Automatic identification of anthropological face landmarks for emotion detection. In 2019 9th International Conference on Recent Advances in Space Technologies (RAST), pp. 585-590. IEEE, 2019. Dinculescu A, Andra B, Carmen S, Livia P, Cristian V, Alexandru M, Nicoară T, Vlad V. Automatic identification of anthropological face landmarks for emotion detection. In 2019 9th International Conference on Recent Advances in Space Technologies (RAST), pp. 585-590. IEEE, 2019.
28.
Zurück zum Zitat Nguyen BT, Minh HT, Tan VP, Hien DN. An efficient real-time emotion detection using camera and facial landmarks. In 2017 seventh international conference on information science and technology (ICIST), pp. 251-255. IEEE, 2017. Nguyen BT, Minh HT, Tan VP, Hien DN. An efficient real-time emotion detection using camera and facial landmarks. In 2017 seventh international conference on information science and technology (ICIST), pp. 251-255. IEEE, 2017.
29.
Zurück zum Zitat Sharma M, Anand SJ, Aamir K. Emotion recognition using facial expression by fusing key points descriptor and texture features. Multi Tools Appl. 2019;78(12):16195-16219. Sharma M, Anand SJ, Aamir K. Emotion recognition using facial expression by fusing key points descriptor and texture features. Multi Tools Appl. 2019;78(12):16195-16219.
30.
Zurück zum Zitat Fan Y, Jacqueline CKL, Victor OKL. Multi-region ensemble convolutional neural network for facial expression recognition. In International Conference on Artificial Neural Networks, pp. 84-94. Springer, Cham, 2018. Fan Y, Jacqueline CKL, Victor OKL. Multi-region ensemble convolutional neural network for facial expression recognition. In International Conference on Artificial Neural Networks, pp. 84-94. Springer, Cham, 2018.
31.
Zurück zum Zitat Washington P, Haik K, Jack K, Arman H, Aaron K, Emilie L, Cathy H et al. Training an emotion detection classifier using frames from a mobile therapeutic game for children with developmental disorders. 2020. arXiv preprint https://arxiv.org/abs/2012.08678. Washington P, Haik K, Jack K, Arman H, Aaron K, Emilie L, Cathy H et al. Training an emotion detection classifier using frames from a mobile therapeutic game for children with developmental disorders. 2020. arXiv preprint https://​arxiv.​org/​abs/​2012.​08678.
32.
Zurück zum Zitat Ekman P. "Are there basic emotions?." 1992:550. Ekman P. "Are there basic emotions?." 1992:550.
33.
Zurück zum Zitat Ekman P. Basic emotions. Handbook of cognition and emotion. 1999;98(45–60):16. Ekman P. Basic emotions. Handbook of cognition and emotion. 1999;98(45–60):16.
34.
Zurück zum Zitat Du S, Tao Y, Martinez AM. Compound facial expressions of emotion. Proc Natl Acad Sci. 2014;111(15):E1454–62.CrossRef Du S, Tao Y, Martinez AM. Compound facial expressions of emotion. Proc Natl Acad Sci. 2014;111(15):E1454–62.CrossRef
35.
Zurück zum Zitat Zhang X, Wenzhong Li Xu, Chen, and Sanglu Lu. Moodexplorer: towards compound emotion detection via smartphone sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 2018;1(4):1–30. Zhang X, Wenzhong Li Xu, Chen, and Sanglu Lu. Moodexplorer: towards compound emotion detection via smartphone sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 2018;1(4):1–30.
36.
Zurück zum Zitat Lotfian R, Carlos B. Over-sampling emotional speech data based on subjective evaluations provided by multiple individuals. IEEE Transactions on Affective Computing (2019). Lotfian R, Carlos B. Over-sampling emotional speech data based on subjective evaluations provided by multiple individuals. IEEE Transactions on Affective Computing (2019).
37.
Zurück zum Zitat McFarland DJ, Muhammad AP, William AS, Rita ZG, Jonathan RW. Prediction of subjective ratings of emotional pictures by EEG features. J Neur Eng. 2016;14(1):016009. McFarland DJ, Muhammad AP, William AS, Rita ZG, Jonathan RW. Prediction of subjective ratings of emotional pictures by EEG features. J Neur Eng. 2016;14(1):016009.
38.
Zurück zum Zitat Rizos G, Björn WS. Average Jane, where art thou?–Recent avenues in efficient machine learning under subjectivity uncertainty. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 42-55. Springer, Cham, 2020. Rizos G, Björn WS. Average Jane, where art thou?–Recent avenues in efficient machine learning under subjectivity uncertainty. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 42-55. Springer, Cham, 2020.
39.
Zurück zum Zitat Villon O, Christine L. Toward recognizing individual’s subjective emotion from physiological signals in practical application. In Twentieth IEEE International Symposium on Computer-Based Medical Systems (CBMS'07), pp. 357-362. IEEE, 2007. Villon O, Christine L. Toward recognizing individual’s subjective emotion from physiological signals in practical application. In Twentieth IEEE International Symposium on Computer-Based Medical Systems (CBMS'07), pp. 357-362. IEEE, 2007.
40.
Zurück zum Zitat Mower E, Matarić MJ, Narayanan S. A framework for automatic human emotion classification using emotion profiles. IEEE Transactions on Audio, Speech, and Language Processing. 2010;19(5):1057–70.CrossRef Mower E, Matarić MJ, Narayanan S. A framework for automatic human emotion classification using emotion profiles. IEEE Transactions on Audio, Speech, and Language Processing. 2010;19(5):1057–70.CrossRef
41.
Zurück zum Zitat Mower E, Angeliki M, Chi CL, Abe K, Carlos B, Sungbok L, Shrikanth N. Interpreting ambiguous emotional expressions. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pp. 1-8. IEEE, 2009. Mower E, Angeliki M, Chi CL, Abe K, Carlos B, Sungbok L, Shrikanth N. Interpreting ambiguous emotional expressions. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pp. 1-8. IEEE, 2009.
43.
Zurück zum Zitat Thiel C. Classification on soft labels is robust against label noise. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, pp. 65-73. Springer, Berlin, Heidelberg, 2008. Thiel C. Classification on soft labels is robust against label noise. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, pp. 65-73. Springer, Berlin, Heidelberg, 2008.
44.
Zurück zum Zitat Yang Z, Liu T, Liu J, Wang Li, Zhao S. A novel soft margin loss function for deep discriminative embedding learning. IEEE Access. 2020;8:202785–94.CrossRef Yang Z, Liu T, Liu J, Wang Li, Zhao S. A novel soft margin loss function for deep discriminative embedding learning. IEEE Access. 2020;8:202785–94.CrossRef
45.
Zurück zum Zitat Peterson JC, Ruairidh MB, Thomas LG, Olga R. Human uncertainty makes classification more robust. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9617-9626. 2019. Peterson JC, Ruairidh MB, Thomas LG, Olga R. Human uncertainty makes classification more robust. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9617-9626. 2019.
46.
Zurück zum Zitat Uma A, Fornaciari T, Hovy D, Paun S, Plank B, Poesio M. A case for soft loss functions. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. 2020;8(1):173–7. Uma A, Fornaciari T, Hovy D, Paun S, Plank B, Poesio M. A case for soft loss functions. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. 2020;8(1):173–7.
47.
Zurück zum Zitat Chaturvedi I, Satapathy R, Cavallari S, Cambria E. Fuzzy commonsense reasoning for multimodal sentiment analysis. Pattern Recogn Lett. 2019;125:264–70.CrossRef Chaturvedi I, Satapathy R, Cavallari S, Cambria E. Fuzzy commonsense reasoning for multimodal sentiment analysis. Pattern Recogn Lett. 2019;125:264–70.CrossRef
48.
Zurück zum Zitat Nicolaou MA, Gunes H, Pantic M. Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space. IEEE Trans Affect Comput. 2011;2(2):92–105.CrossRef Nicolaou MA, Gunes H, Pantic M. Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space. IEEE Trans Affect Comput. 2011;2(2):92–105.CrossRef
49.
Zurück zum Zitat Parthasarathy S, Busso C. Jointly Predicting Arousal, Valence and Dominance with Multi-Task Learning. In Interspeech. 2017;2017:1103–7.CrossRef Parthasarathy S, Busso C. Jointly Predicting Arousal, Valence and Dominance with Multi-Task Learning. In Interspeech. 2017;2017:1103–7.CrossRef
50.
Zurück zum Zitat Stappen L, Baird A, Cambria E, Schuller Björn W. Sentiment analysis and topic recognition in video transcriptions. IEEE Intell Syst. 2021;36(2):88–95.CrossRef Stappen L, Baird A, Cambria E, Schuller Björn W. Sentiment analysis and topic recognition in video transcriptions. IEEE Intell Syst. 2021;36(2):88–95.CrossRef
51.
Zurück zum Zitat Yu LC, Jin W, Robert LK, Xue-jie Z. Predicting valence-arousal ratings of words using a weighted graph method. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pp. 788-793. 2015. Yu LC, Jin W, Robert LK, Xue-jie Z. Predicting valence-arousal ratings of words using a weighted graph method. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pp. 788-793. 2015.
52.
Zurück zum Zitat Zhao S, Hongxun Y, Xiaolei J. Predicting continuous probability distribution of image emotions in valence-arousal space. In Proceedings of the 23rd ACM international conference on Multimedia, pp. 879-882. 2015. Zhao S, Hongxun Y, Xiaolei J. Predicting continuous probability distribution of image emotions in valence-arousal space. In Proceedings of the 23rd ACM international conference on Multimedia, pp. 879-882. 2015.
53.
Zurück zum Zitat Kairam S, Jeffrey H. Parting crowds: characterizing divergent interpretations in crowdsourced annotation tasks. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 1637-1648. 2016. Kairam S, Jeffrey H. Parting crowds: characterizing divergent interpretations in crowdsourced annotation tasks. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 1637-1648. 2016.
54.
Zurück zum Zitat Rodrigues F, Francisco P. Deep learning from crowds. In Proceedings of the AAAI Conference on Artificial Intelligence. 2018;32(1). Rodrigues F, Francisco P. Deep learning from crowds. In Proceedings of the AAAI Conference on Artificial Intelligence. 2018;32(1).
55.
Zurück zum Zitat Korovina O, Fabio C, Radoslaw N, Marcos B, Olga B. Investigating crowdsourcing as a method to collect emotion labels for images. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-6. 2018. Korovina O, Fabio C, Radoslaw N, Marcos B, Olga B. Investigating crowdsourcing as a method to collect emotion labels for images. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-6. 2018.
56.
Zurück zum Zitat Korovina O, Baez M, Casati F. Reliability of crowdsourcing as a method for collecting emotions labels on pictures. BMC Res Notes. 2019;12(1):1–6.CrossRef Korovina O, Baez M, Casati F. Reliability of crowdsourcing as a method for collecting emotions labels on pictures. BMC Res Notes. 2019;12(1):1–6.CrossRef
57.
Zurück zum Zitat LoBue V, Baker L, Thrasher C. Through the eyes of a child: preschoolers’ identification of emotional expressions from the child affective facial expression (CAFE) set. Cogn Emot. 2018;32(5):1122–30.CrossRef LoBue V, Baker L, Thrasher C. Through the eyes of a child: preschoolers’ identification of emotional expressions from the child affective facial expression (CAFE) set. Cogn Emot. 2018;32(5):1122–30.CrossRef
58.
Zurück zum Zitat LoBue V, Thrasher C. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults. Front Psychol. 2015;5:1532.CrossRef LoBue V, Thrasher C. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults. Front Psychol. 2015;5:1532.CrossRef
59.
Zurück zum Zitat Paolacci G, Chandler J, Ipeirotis PG. Running experiments on amazon mechanical turk. Judgm Decis Mak. 2010;5(5):411–9. Paolacci G, Chandler J, Ipeirotis PG. Running experiments on amazon mechanical turk. Judgm Decis Mak. 2010;5(5):411–9.
60.
Zurück zum Zitat Hirth M, Tobias H, Phuoc TG. Anatomy of a crowdsourcing platform-using the example of microworkers. com. In 2011 Fifth international conference on innovative mobile and internet services in ubiquitous computing, pp. 322-329. IEEE, 2011. Hirth M, Tobias H, Phuoc TG. Anatomy of a crowdsourcing platform-using the example of microworkers. com. In 2011 Fifth international conference on innovative mobile and internet services in ubiquitous computing, pp. 322-329. IEEE, 2011.
61.
Zurück zum Zitat Coolican J, Eskes GA, McMullen PA, Lecky E. Perceptual biases in processing facial identity and emotion. Brain Cogn. 2008;66(2):176–87.CrossRef Coolican J, Eskes GA, McMullen PA, Lecky E. Perceptual biases in processing facial identity and emotion. Brain Cogn. 2008;66(2):176–87.CrossRef
62.
Zurück zum Zitat Coren S, Russell JA. The relative dominance of different facial expressions of emotion under conditions of perceptual ambiguity. Cogn Emot. 1992;6(5):339–56.CrossRef Coren S, Russell JA. The relative dominance of different facial expressions of emotion under conditions of perceptual ambiguity. Cogn Emot. 1992;6(5):339–56.CrossRef
63.
Zurück zum Zitat Gray, Katie LH, Wendy JA, Nicholas H, Kristiana E. Newton, and Matthew Garner. Faces and awareness: low-level, not emotional factors determine perceptual dominance. Emotion. 2013;13(3):537. Gray, Katie LH, Wendy JA, Nicholas H, Kristiana E. Newton, and Matthew Garner. Faces and awareness: low-level, not emotional factors determine perceptual dominance. Emotion. 2013;13(3):537.
64.
Zurück zum Zitat Allahbakhsh M, Boualem B, Aleksandar I, Hamid RMN, Elisa B, Schahram D. Quality control in crowdsourcing systems: issues and directions. IEEE Internet Computing 2013;17(2):76-81. Allahbakhsh M, Boualem B, Aleksandar I, Hamid RMN, Elisa B, Schahram D. Quality control in crowdsourcing systems: issues and directions. IEEE Internet Computing 2013;17(2):76-81.
65.
Zurück zum Zitat Buchholz S, Javier L. Crowdsourcing preference tests, and how to detect cheating. In 12th Annual Conference of the International Speech Communication Association. 2011. Buchholz S, Javier L. Crowdsourcing preference tests, and how to detect cheating. In 12th Annual Conference of the International Speech Communication Association. 2011.
66.
Metadata
Title: Training Affective Computer Vision Models by Crowdsourcing Soft-Target Labels
Authors: Peter Washington, Haik Kalantarian, Jack Kent, Arman Husic, Aaron Kline, Emilie Leblanc, Cathy Hou, Cezmi Mutlu, Kaitlyn Dunlap, Yordan Penev, Nate Stockham, Brianna Chrisman, Kelley Paskov, Jae-Yoon Jung, Catalin Voss, Nick Haber, Dennis P. Wall
Publication date: 27.09.2021
Publisher: Springer US
Published in: Cognitive Computation, Issue 5/2021
Print ISSN: 1866-9956
Electronic ISSN: 1866-9964
DOI: https://doi.org/10.1007/s12559-021-09936-4