Open Access 20-05-2020

Using emojis in mobile web surveys for Millennials? A study in Spain and Mexico

Authors: Oriol J. Bosch, Melanie Revilla

Published in: Quality & Quantity | Issue 1/2021


Abstract

To involve Millennials in survey participation, and to obtain high-quality answers from them, survey designers may require new tools that better catch Millennials' interest and attention. One key new tool that could improve communication and make survey participation more attractive to young respondents is emojis. We used data from a survey conducted among Millennials by the online fieldwork company Netquest in Spain and Mexico (n = 1614) to determine how emojis can be used in mobile web surveys, in particular in open-ended questions, and how their use can affect data quality, completion time, and survey evaluation. Overall, results show a high willingness of Millennials to use emojis in surveys (both stated and actual use) and a positive impact of encouraging Millennials to use emojis in open-ended questions on the amount of information conveyed, the completion time and the survey enjoyment.
Notes

Electronic supplementary material

The online version of this article (https://doi.org/10.1007/s11135-020-00994-8) contains supplementary material, which is available to authorized users.

1 Introduction

Strauss and Howe (1991) define the Millennials as the cohort of individuals born between 1982 and 2003. Millennials represent the first generation to have had access to the Internet during their formative years (Pew Research Center 2014). They are the generation with the highest technology exposure (Hartman and McCambridge 2011). Although Millennials' communication skills have been found to be of lower quality than those of previous generations (Hartman and McCambridge 2011), this cohort has a greater affinity for Computer-Mediated Communication tools (Myers and Sadaghiani 2010). For instance, 97% of this cohort owns at least one smartphone (Nielsen 2016). In addition, a Pew Research Center study (2014) found that, in the USA, 89% have a profile on a Social Network Site (SNS), which is 16 percentage points higher than for Generation X (born in 1961–1981).
Consequently, Millennials differ from other generations at several levels. Thus, approaches that were useful for older generations may no longer work for this one. In particular, their survey participation behaviors are different, with lower participation rates and higher proportions of surveys answered using smartphones (Bosch et al. 2019b).
Therefore, to improve this generation's survey participation, and to obtain high-quality answers from them, survey designers may require new tools that make the survey experience natural, interesting and enjoyable for this age cohort (van Heerden et al. 2014). In recent years, visually appealing or gamified elements have been included to increase the engagement of web survey respondents (Cechanowicz et al. 2013; Hamari et al. 2014), especially young ones (Mavletova 2015). The idea is to create a piece of "creative communication" using the appropriate language to connect with respondents (Puleston 2011).
In addition, optimizing surveys for mobile devices can also contribute to improving the survey participation of young respondents, since the average proportion of surveys answered through smartphones is very high for Millennials (up to 78.8% in the USA, see Bosch et al. 2018b).
Furthermore, mobile web surveys make it easier to introduce new tools to interact with respondents and gather new types of data, such as images (Bosch et al. 2019a) or voice (Lütters et al. 2018; Revilla et al. 2020). One key new tool that could make survey participation more attractive to Millennials is emojis. Emojis are picture characters or pictographs treated by computers as (non-Western) letters, which means that the software has to explicitly support them. They should not be confused with emoticons, i.e. typographic displays of a facial representation used to convey emotions in a text-only medium (see Fig. 1 for a few examples).
In just a few years, emojis have become omnipresent both in the online and the offline world: in 2015, six billion emojis were used every day (SwiftKey 2015). In addition, emojis are used all over the world (Lu et al. 2016), which makes them the first international language. Their use has been increasing in recent years (Barbieri et al. 2016), primarily in association with the rise of SNSs, where text-based communication needed a tool for expressing nonverbal information (Lo 2008). Moreover, the worldwide adoption of emojis increased when virtual keyboards started to incorporate a standard international emoji keyboard, Apple being the first mainstream company to make the emoji menu standard with iOS 5 (Riordan 2017).
Millennials are the generation with the highest emoji usage (Emogi Research Team 2016). In the United Kingdom, half of the Millennials consider that emojis have improved their ability to interact with others (Evans 2015). In Spain and Mexico, 84% of the Millennials studied declared using emojis daily, and half of them used emojis in 40% or more of the messages they sent (Bosch Jover and Revilla 2018). Thus, integrating the use of emojis in surveys seems an interesting way to make surveys more natural for Millennials. However, past research testing the effect of asking participants to answer using new data collection approaches (e.g. images or voice) on nonresponse, data quality, survey evaluation and completion time has found potential drawbacks (Bosch et al. 2019a; Revilla et al. 2020). Besides, the semantic and emotional interpretation of some emojis differs between individuals (Miller et al. 2016). This could affect the interpretability of the answers and, consequently, the quality of the data collected.
Therefore, in this paper, we study: (1) whether Millennials would like to use emojis in surveys, and for which types of questions; (2) whether Millennials use emojis to answer open-ended questions when encouraged to do so; and (3) whether encouraging Millennials to use emojis to answer open-ended questions has an impact on data quality, completion time and survey evaluation.
To assess the robustness of the results, we collected data in two countries that share the same language but differ on other aspects that could affect both their use of emojis and their survey behavior: Spain and Mexico. First, 96.5% of Millennials use the Internet daily in Spain versus 84.0% in Mexico (Statista 2016a, b). Second, Millennials living in Mexico send significantly more emojis weekly than those living in Spain (Bosch Jover and Revilla 2018). Finally, Millennials in Spain present break-off rates similar to those in Mexico, but answer surveys through smartphones in a higher proportion and evaluate surveys more positively (Bosch et al. 2019b).

2 Background

2.1 Visual elements and non-verbal labels in surveys

Visual elements are used in questionnaires to help respondents and connect with them (Christian and Dillman 2004; Couper et al. 2006, 2007). In particular, faces or "smileys" have been used as symbolic labels in rating scales long before the appearance of emojis, to measure emotions and satisfaction (Kunin 1955; Herman et al. 1975; Wanous et al. 1997; Jäger and Bortz 2001), especially for children (De Leeuw et al. 2004; Reynolds-Keefer and Johnson 2011; Hall et al. 2016), but also to measure pain levels (Chambers et al. 1999) and to survey adults using hedonic scales (Hox et al. 2012).
Unlike these previously used visual elements, emojis are nowadays adopted to communicate in everyday life. However, the use of emojis in surveys has been little explored. Recently, emoji scales have been proposed to measure product-elicited emotional associations (selecting all the emojis that apply), with promising results (Jaeger et al. 2018a; Swaney-Stueve et al. 2018). Nevertheless, emojis can represent not only complex facial expressions but also a huge range of objects, animals, foods, etc. (Novak et al. 2015). Thus, they can be used as non-verbal labels for more types of scales than hedonic or satisfaction ones.
However, emojis represent a tool outside researchers' control. The criteria for including or excluding emojis, their design, as well as the updates of these aspects, are decided by third parties (e.g. Unicode). Therefore, compared to research-based non-verbal tools, emojis have not been pre-tested at the design stage to understand the meaning that participants attribute to them and whether they are suited for specific types of research, such as cross-national research (e.g. Lokman et al. 2012). Researchers might do such testing afterwards, but with no possibility of changing the emojis' design characteristics. This is especially important if we consider that, although a similar tendency in the affective interpretation of emojis has been found between Spain and Mexico, differences in terms of semantic interpretation are still present (Bosch Jover and Revilla 2018).

2.2 Emojis and open-ended questions

Mobile respondents represent a high percentage of online respondents (Revilla et al. 2016; Liu et al. 2017). However, smartphone respondents tend to provide shorter responses to open-ended questions (Mavletova 2013; Buskirk and Andrus 2014; Revilla and Ochoa 2016). This may be linked to the data entry method of smartphone devices (mainly small screens with digital keyboards), which can affect the results by reducing the willingness to convey more information (Lugtig and Toepoel 2016). Furthermore, respondents answering from a mobile web environment take longer to complete the survey (Mavletova 2013). Most mobile device keyboards integrate emojis. Hence, emojis offer new opportunities for open-ended questions and could help increase the quality of responses.
Previous research has studied why and how people use emojis. The use of emojis increases when having a conversation, and in socio-emotional rather than task-oriented contexts (Sampietro 2016). Bosch Jover and Revilla (2018) found that the main reason Millennials use emojis is to better communicate emotions. Kaye et al. (2017) found that emojis serve to disambiguate the intent behind messages in the absence of non-verbal cues. Since individuals use emojis primarily to emphasize the emotional intent of their messages, questions about feelings may benefit more from the implementation of emojis than factual ones. Nevertheless, some emojis present a high degree of sentiment and/or semantic misconstrual, i.e. different people interpret the sentiment and/or meaning of the same emoji rendering differently (Miller et al. 2016). The ambiguity of emojis could make survey answers more difficult to interpret.

2.3 The effect of emojis on data quality, completion time and survey evaluation

Emojis represent a complementary tool for non-verbal cues that can help disambiguate the communicative intent behind a message (Kaye et al. 2016), implying that the use of emojis is related to an enhanced capacity to express the real meaning of the message. Besides, emojis are mainly used to express the emotional intent of the message (Bosch Jover and Revilla 2018). Hence, encouraging respondents to answer using emojis may allow gathering extra information from the answers. For instance, instead of answering "I like this advert", a respondent may answer "I like this advert (laughing emoji)". Moreover, emojis can be used instead of words, communicating by themselves (Barbieri et al. 2016), for example, sending a thumbs-up emoji instead of writing "I like this advert". This may reduce the respondent burden of writing. Nevertheless, some emojis present a high degree of sentiment and/or semantic misconstrual (Miller et al. 2016). Although emojis are mainly used to disambiguate the meaning of textual messages (Kaye et al. 2017), answers based only on emojis could be problematic considering the ambiguity of their meaning. This can be even more problematic for cross-national studies, since differences in semantic interpretation have been found even for countries sharing the same language (Bosch Jover and Revilla 2018).
Therefore, emojis have the potential both to improve and to harm data quality. On the one hand, considering the quality of open-ended questions as "the amount and type of information contained in responses as well as response time and item nonresponse" (Smyth et al. 2009, p. 1), emojis could improve data quality by helping respondents convey extra information, such as the emotional intent of the message. On the other hand, considering the ambiguity of emojis' meaning, answers could be less interpretable, eventually harming data quality.
Furthermore, the use of emojis may affect completion time. On the one hand, it can reduce completion times by making it easier to express and describe ideas and objects. On the other hand, it can increase completion times if respondents have difficulties searching for a specific emoji within the vast number of options available.
Finally, using emojis in surveys can influence survey evaluation. The use of facial expressions or emoticons as a means to report the emotional response of participants has been found to be enjoyable, more effective and more intuitive than verbal expressions (Etcoff and Magee 1992; Desmet 2005; Emde and Fuchs 2013), maybe because emoticons make the communication process easier, more interactive and funnier (Huang et al. 2008). Derham (2011) found that emoticon scale presentations were enjoyed by more respondents than surveys containing words or numbers. Regarding emojis, Bacon et al. (2017) found that respondents replying to emoji-based scales enjoyed the survey more than those responding to text-based scales. Hence, the use of emojis in surveys, in particular in open-ended questions, could improve the survey evaluation. However, if encouraging participants to answer with emojis is perceived as an extra burden, this could instead worsen the survey experience.

2.4 Current study

Past research has studied how to use visual elements, smileys and, to a lesser extent, emojis as labels for some specific questions. However, further research is needed to determine how emojis can be used in mobile web surveys, in particular in open-ended questions, when Millennials are the target population, and how this use can affect data quality, completion time, and survey evaluation.
Therefore, the current study has three main goals: (1) study the willingness of Millennials to use emojis in mobile web surveys for different types of questions; (2) analyze the real use of emojis by Millennials when encouraged to answer open-ended questions using emojis; and (3) assess the impact of encouraging Millennials to use emojis in open-ended questions on data quality (measured as the average item nonresponse and the average information conveyed), completion time (measured using paradata about focus times) and survey evaluation (measured as the self-reported satisfaction and usability). Besides, to improve the robustness of our study, we look at the results for Spain and Mexico. To the best of our knowledge, this is the first study investigating the use of emojis to answer open-ended questions in mobile web surveys.

3 Methods

3.1 Data collection

We collected data in Spain and Mexico through the online fieldwork company Netquest (www.netquest.com). Our target population included all individuals between 16 and 34 years old who had regular internet access through a smartphone. The survey had to be completed through smartphones, since the use of emojis on these devices is more common and easier. Cross quotas for age and gender were used in each country to guarantee that the sample was similar on these variables to the general internet population between 16 and 34 years old. The objective was to get 800 respondents finishing the survey per country.
Data collection took place between the 2nd and the 19th of June 2017. In total, 1614 respondents completed the survey: 808 in Spain and 806 in Mexico. This corresponds respectively to 66.4% and 59.7% of those who started the survey, and 97.3% and 97.7% of those who answered the first main survey question. Thus, only 2.7% and 2.3% dropped out during the survey, while the others were filtered out because they were older than 35, did not have a smartphone, accessed the internet from a smartphone once a week or less, or dropped out on the very first page. The average completion time for those who completed the survey was 18 min in Spain and 22 min in Mexico. The proportion of women was 50% and the mean age 25, in both countries. For more information about the sample composition, we refer to "Appendix 1". No differences were found between the Spain and Mexico samples in terms of age, gender and frequency of internet use, whereas differences were found for the number of smartphones used and urbanization.

3.2 The questionnaire

The questionnaire comprised a total of 62 questions about emoji use, the conditions and reasons for that use, the meaning and interpretation of common emojis, background variables and questions about personality. In this paper, we focus on a set of 15 questions about the willingness to use emojis within surveys, and on a set of six open-ended questions which were part of an experiment included in the survey.
In this experimental part, respondents in each country were randomly assigned to a control or a treatment group. There were no significant differences between the control and treatment groups in terms of age, gender, urbanization and number of smartphones used, and only a small difference in terms of frequency of internet use (see "Appendix 1"). The treatment group received an introduction about emojis and was then explicitly encouraged (see "Appendix 2") to answer a set of six open-ended questions using text but also emojis, whereas the control group directly received the open-ended questions with no mention of emojis at all. Moreover, respondents in the treatment group were provided with an emoji keyboard placed below the text box (see "Appendix 2"), to guarantee that all respondents saw the same rendering across different devices and operating systems. The experimental part was placed close to the beginning of the survey, before any other question related to the use of emojis, to avoid possible effects of previous questions on the answers.
The design was optimized for mobile devices and skipping questions was allowed, but going back was not. The full questionnaire in Spanish1 is available at: https://test.nicequest.com/respondent/esnpa/ebdd996c-d9ae-4708-9e6e-7dda4313d9a3.

3.3 The analyses

3.3.1 Stated willingness to use emojis in surveys

First, to investigate whether Millennials would like to use emojis in surveys and for what types of questions, we asked respondents, after the experimental part, three separate questions about whether they would be willing to use emojis in different survey contexts: (1) in general (yes, no, I already use it, I do not know), (2) as labels for specific scales (yes, no) and (3) in open-ended questions (yes, no). Moreover, we report the percentage of respondents answering in a check-all-that-apply question that they would be willing to use emojis to express (1) emotions and feelings, (2) opinions about products, services and campaigns and (3) how much they like or dislike something. We should note that the different question format (separate questions versus check-all-that-apply) could lead to lower percentages of respondents being willing in the last three cases. The question about the use of emojis in general was not dichotomous: respondents answering "I already use it" were considered as willing, and the "I do not know" answers were treated as missing values. The exact formulation of these questions can be found in the Electronic Supplementary Material. All these questions were included after the experimental questions. This could produce a priming effect, affecting the willingness of those asked to answer with emojis in the experimental part. The significance of the differences between willingness levels for the control and treatment groups was tested using Z-tests. We found that the willingness to use emojis was significantly higher for the experimental group participants for open-ended questions (79.0% vs. 86.5%, Z = − 3.67, p < 0.001) and scales (86.4% vs. 91.0%, Z = − 2.69, p = 0.007). No other significant difference was found.2 We report the proportions of respondents stating that they would indeed be willing to use emojis in each case. Analyses were done by country and based only on those who answered the questions (i.e. missing values were excluded). Differences between countries were tested using Z-tests.
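To make this kind of comparison concrete, the following minimal sketch (Python with statsmodels) shows how a two-sample Z-test of proportions can be computed; the counts are illustrative and not the study's raw data.

```python
# Two-sample Z-test of proportions, as used above to compare willingness
# levels between groups (counts below are illustrative, not the study's data).
from statsmodels.stats.proportion import proportions_ztest

willing = [319, 349]   # respondents willing to use emojis (control, treatment)
n = [404, 404]         # group sizes

z, p = proportions_ztest(count=willing, nobs=n)
print(f"Z = {z:.2f}, p = {p:.3f}")
```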
Second, we studied whether Millennials would prefer to use emoji scales instead of: (1) a dichotomous Like/Dislike type of scale: the thumbs up/down emojis were opposed to the verbal labels "Like" and "Dislike"; (2) an emotion scale: emoji labels were compared to verbal labels and to a reduced set of PrEmo3 labels, a non-verbal tool used to measure consumer emotions ("Appendix 3" shows the PrEmo scale presented in the survey); (3) a satisfaction scale: smiley-emoji labels, similar to the ones used in hedonic scales for children, were compared to verbal labels from "very satisfied" to "very unsatisfied"; (4) a specific check-all-that-apply scale about what respondents consider before choosing a trip destination: a wide range of different verbal labels (e.g. price) were compared with the associated emoji labels representing these reasons; and (5) a specific check-all-that-apply scale about the means of transport respondents use to go to their job: verbal labels (e.g. car) were opposed to the emoji labels representing those means. Screenshots of the questions and the scales, as well as their English translations, can be found in the Electronic Supplementary Material. We report the proportions of respondents that prefer emojis or consider them equivalent in each case. Again, analyses were done by country and based only on those who answered the questions (i.e. missing values were excluded). Differences in distributions between countries were tested using Pearson's Chi-squared tests. Since Pearson's Chi-squared test is sensitive to sample size, we additionally computed Cramer's V to measure the strength of the association between scale preference and country. Cramer's V ranges from 0 (no association) to 1 (perfect association).
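As an illustration, a Pearson's Chi-squared test and Cramer's V can be computed as sketched below (Python with scipy); the counts are approximate values reconstructed from the Like/Dislike percentages reported in Table 2, shown only for illustration.

```python
# Chi-squared test of scale preference by country, plus Cramer's V.
# Counts are approximate, reconstructed from Table 2's Like/Dislike row
# (Spain n = 805, Mexico n = 801), for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# rows: Spain, Mexico; columns: prefer emoji, equivalent, prefer traditional
table = np.array([[539, 178, 88],
                  [648, 107, 46]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}, Cramer's V = {cramers_v:.2f}")
```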

3.3.2 Emoji use in the current study

To study if Millennials are really using emojis in the survey context when encouraged to do so, we looked at the answers to the six open-ended questions in the experimental part, which were proposed before any other questions about emojis. We analyzed to what extent respondents in the treatment group used emojis in these open-ended questions and to express what (emotions, ideas, opinions, etc.). For this purpose, we quantified the percentage of respondents using emojis: (1) in general (i.e. in at least one of the six open-ended questions); (2) to answer the like/dislike questions (at least one of the three questions); (3) to answer the question about opinion of a product; and (4) to answer the question about emotions and feelings experienced with an advertising. Differences between countries were tested using Student’s T-tests (averages) or Z-tests (proportions).
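The paper does not detail how emoji use was detected in the answers; one possible automated approach, sketched here under the assumption that answers are available as Unicode strings, is to match characters against the main emoji code-point blocks. This is an illustrative assumption, not the authors' procedure.

```python
# Flag emoji use in open-ended answers via a regex over common emoji blocks
# (an illustrative approach, not necessarily the procedure used in the study).
import re

EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F5FF"   # symbols & pictographs
    "\U0001F600-\U0001F64F"   # emoji faces
    "\U0001F680-\U0001F6FF"   # transport & map symbols
    "\U00002600-\U000027BF"   # miscellaneous symbols & dingbats
    "]+"
)

def count_emojis(answer: str) -> int:
    """Approximate count of emoji characters in an answer."""
    return sum(len(match) for match in EMOJI_PATTERN.findall(answer))

answers = ["I like this advert \U0001F602", "double meaning", "\U0001F44D\U0001F44D"]
share_using = sum(count_emojis(a) > 0 for a in answers) / len(answers)
print(f"{100 * share_using:.1f}% of these answers contain at least one emoji")
```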

3.3.3 Impact on data quality, completion time and survey evaluation

To assess the impact of encouraging Millennials to use emojis in open-ended survey answers on data quality, completion time and survey evaluation, we compared the treatment and control groups on each aspect using data from the experimental part. The statistical significance of the differences between the control and treatment groups was assessed using Student's T-tests (averages) or two-sample Z-tests of proportions.
First, we focus on data quality. Different indicators of data quality have been used in previous research, such as item nonresponse, response latency and multitasking, primacy effects, non-differentiation and straight-lining (e.g. Mavletova 2013; de Bruijne and Wijnant 2014). However, many of them (e.g. non-differentiation and straight-lining) are designed for closed-ended questions (presenting an answer scale). Nevertheless, different indicators have also been used to assess quality in the case of open-ended questions. In particular: (1) the proportion of valid/substantive answers, or its complement, item nonresponse (e.g. Mavletova 2013; Toepoel and Lugtig 2014) and (2) the amount of information conveyed by each answer (e.g. Smyth et al. 2009; Revilla and Ochoa 2016). Item nonresponse is considered an indicator of low data quality since it suggests respondents did not put the necessary effort into answering the question. Moreover, item nonresponse results in a loss of information, which can make estimates less efficient and can threaten the representativeness of the answers if there is a systematic bias in who decides to answer versus not. However, providing an answer is not enough. Open answers can vary a lot in terms of their content. Thus, it is important to also consider other aspects when evaluating the quality of open answers. The amount of information is a key one: indeed, one of the main reasons for using open questions is to obtain more developed answers and thus more salient and detailed information (Geer 1991). The amount of information is often measured by the length of the answers (e.g. number of characters; Mavletova 2013; Revilla and Ochoa 2016). However, the number of characters does not always represent well how much information is provided. Longer sentences can be due to a repetitive writing style and not provide further information. Thus, some authors (see Smyth et al. 2009) proposed to assess the amount of information conveyed in open-ended responses by counting the number of themes, described as concepts or subjects that answer the question but are independent of all other concepts. This approach seems better suited for measuring the quality of open-ended answers.
Therefore, we use two indicators in order to measure data quality: item nonresponse and amount of information conveyed.
Item nonresponse was calculated as the number of experimental questions without an intelligible answer per person (maximum of six). The averages for each group (treatment/control) were then compared.
Concerning the amount of information conveyed, we adapted Smyth et al.'s (2009) approach to emojis. Hence, we computed the amount of information as the number of ideas, opinions, emotions or feelings, independent from each other, conveyed in each answer. For example, the answer "I like this advert" would count as one theme, and a laughing emoji, which would convey the information "it is funny", would count as another theme. "Appendix 4" presents some real examples. This was coded manually by a researcher. Then, we took the average across all six questions. Finally, we compared the average for each group. To test inter-rater reliability (IRR), a second researcher coded a random 20% sample of answers for each question. We then computed Spearman's ρ for the number of themes, for each question. Spearman's ρ was 0.80 for question 1 (opinion), 0.89 for question 2 (emotion), 0.81, 0.79 and 0.88 for questions 3, 4 and 5 (reactions to different slogans), and 0.78 for question 6 (personality). Both coders were native Spanish speakers.
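A minimal sketch of the inter-rater reliability computation follows (Python with scipy); the theme counts are invented for illustration.

```python
# Inter-rater reliability on the number of themes coded per answer:
# Spearman's rho between two coders (theme counts invented for illustration).
from scipy.stats import spearmanr

coder_1 = [2, 1, 3, 0, 2, 1, 4, 2]
coder_2 = [2, 1, 2, 0, 2, 1, 4, 3]

rho, p = spearmanr(coder_1, coder_2)
print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")
```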
Second, to compute the completion time, we used an adapted version of the paradata tool "SurveyFocus" (Höhne et al. 2017), which allows us to determine how often and for how long respondents focus on the survey page, and therefore to control for multitasking within the same device. However, we could not control for multitasking on a different device or offline. Therefore, to deal with outliers, for each page, we applied the same method as Revilla and Ochoa (2015): substituting the values of the 1% of respondents with the highest times (considered as the ones that clearly were multitasking) with the average time4 spent by the other 99% to answer the questions on that same page. We used the average time and not the maximum time of the other 99% because, following Revilla and Ochoa (2015), we believe that very long completion times do not indicate extremely slow respondents but respondents who interrupted the survey. Hence, there is no reason to expect these respondents to spend a longer time on the page once they come back to it. We report and compare the average completion times per question.
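A sketch of this outlier rule, assuming page completion times are available as an array of seconds, is shown below; the example data are simulated, not the study's paradata.

```python
# Outlier rule described above: on each page, the 1% of respondents with the
# highest times are assigned the mean time of the remaining 99%.
# (Simulated data, not the study's paradata.)
import numpy as np

def cap_slowest(times, pct=1.0):
    """Replace the top `pct`% of values with the mean of the remaining values."""
    times = np.asarray(times, dtype=float).copy()
    cutoff = np.percentile(times, 100 - pct)
    slow = times > cutoff
    times[slow] = times[~slow].mean()
    return times

page_times = np.random.default_rng(0).exponential(scale=30, size=400)  # seconds
print(cap_slowest(page_times).mean())
```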
Third, to measure the survey evaluation, we used two indicators: the level of satisfaction (proportion of respondents saying that they liked (a lot) answering the experimental questions) and the usability perceived by the users (proportion of respondents saying that it was (very) easy to fill in the experimental questions).
Finally, we conducted linear and logistic regressions for the treatment group to determine the impact of the number of emojis used in open-ended questions on different dependent variables: the amount of information conveyed and the completion time in seconds (multivariable linear regressions); and the stated enjoyment (liked (a lot) = 1, the rest = 0) and usability ((very) easy = 1, the rest = 0) (logistic regressions). We expect that the higher the number of emojis used in the experimental part, the higher the quality, the completion time and the survey evaluation levels. In addition, we included some control variables. First, we included socio-demographic variables: gender (women = 1) and country of residence (Mexico = 1). Moreover, we included the average number of hours of internet use per day through the smartphone (from 1 = "less than 30 min" to 7 = "5h01 or more"), expecting that more familiarity with the internet may be related to a higher number of emojis used, as well as a higher fluency in answering online surveys, making it easier and quicker for respondents to complete the survey. Furthermore, personality traits could be related to the number of emojis used. For instance, extroverted, creative and lazy individuals may use more emojis when given the opportunity, for different reasons. This, in turn, can make them convey more information, as well as enjoy the experience more. Thus, we included the personality traits of extroversion, creativity and laziness (composite scores5 ranging from − 9 to 9). In addition, we included whether others were present while respondents completed the survey (others = 1). We expect respondents answering when others are present to convey less information and to find it more difficult, as well as to be more prone to use emojis to answer in an easier and faster way. Finally, we included the stated number of emojis sent weekly (from 1 = "none" to 12 = "more than 100"), expecting that this would affect how much respondents like having to use emojis. This number might not accurately measure real emoji use because of human memory limitations and recall issues when reporting online behaviors (e.g. Revilla et al. 2017). However, it can give an estimate of respondents' perceived use of emojis, a proxy for their actual behavior.
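A minimal sketch of how such models could be fitted is given below (Python with statsmodels); the data frame, variable names and reduced set of controls are synthetic placeholders, not the study's data or coding.

```python
# OLS model for information conveyed and logistic model for enjoyment,
# with odds ratios for the latter. Synthetic data and a reduced set of
# controls are used purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "n_emojis": rng.poisson(3, n),
    "women": rng.integers(0, 2, n),
    "age": rng.integers(16, 35, n),
    "mexico": rng.integers(0, 2, n),
})
df["information_conveyed"] = 1 + 0.15 * df["n_emojis"] + rng.normal(0, 0.5, n)
df["liked"] = rng.integers(0, 2, n)

controls = "n_emojis + women + age + mexico"
ols_info = smf.ols(f"information_conveyed ~ {controls}", data=df).fit()
logit_liked = smf.logit(f"liked ~ {controls}", data=df).fit()

print(ols_info.params)             # linear coefficients
print(np.exp(logit_liked.params))  # odds ratios, as reported in Table 5
```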

4 Results

4.1 Willingness to use emojis in surveys

4.1.1 Stated willingness to use emojis in surveys

First, we studied the stated willingness of Millennials to use emojis. Table 1 presents, for Spain and Mexico, the proportions of respondents that would like to use emojis in surveys in general, for different types of questions (closed or open-ended), and for different purposes (express emotions, opinions or that they like a product).
Table 1
Proportions of panelists willing to use emojis in different survey contexts

| Context | Spain: % would like | Spain: n | Mexico: % would like | Mexico: n |
| --- | --- | --- | --- | --- |
| Surveys in general | 75.5** | 666 | 88.1 | 734 |
| Scales based on emojis | 86.9 | 649 | 90.3 | 730 |
| Open-ended questions | 83.2 | 648 | 82.4 | 728 |
| Expressing emotions | 69.0 | 645 | 65.6 | 730 |
| Expressing opinions | 51.0 | 645 | 55.8 | 730 |
| Expressing like/dislike | 58.3 | 645 | 53.8 | 730 |

The question regarding surveys in general was a filter question: those who answered "no" were not asked the other questions. The asterisks in the "% would like" column for Spain indicate when distributions are significantly different between Spain and Mexico
*p < 0.05; **p < 0.01
Overall, a high proportion of the Millennials would like to use emojis in surveys in both countries: 88.1% in Mexico and 75.5% in Spain. Furthermore, in both countries, a higher proportion of respondents would like to use emojis as labels for scales in closed-ended questions than in open-ended questions. However, there is still a high proportion of respondents who would like to use emojis in open-ended questions: 83.2% in Spain and 82.4% in Mexico. Concerning the purposes, for both countries, a higher proportion of respondents would like to use emojis in order to express emotions (69.0% in Spain, 65.6% in Mexico). However, still more than 50% would also like to use them for expressing opinions and like/dislike. Overall, this suggests a clear interest from Millennials in using emojis in surveys. Differences between Spain and Mexico are not statistically significant except for the general willingness.
Next, Table 2 presents the proportions of respondents who stated that they would prefer to use emoji scales instead of scales based on verbal labels or would consider it equivalent.
Table 2
Preference to use each scale (in %)

| Type of scale | Spain: Emoji | Spain: Equivalent | Spain: PrEmo | Spain: Traditional | Spain: n | Mexico: Emoji | Mexico: Equivalent | Mexico: PrEmo | Mexico: Traditional | Mexico: n |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Like/Dislike | 67.0** | 22.1 | – | 10.9 | 805 | 80.9 | 13.4 | – | 5.7 | 801 |
| Emotion | 50.2** | 7.0 | 25.9 | 16.9 | 803 | 55.7 | 6.1 | 15.3 | 22.9 | 803 |
| Satisfaction | 65.1** | 16.3 | – | 18.6 | 802 | 71.9 | 11.9 | – | 16.2 | 800 |
| Travel (specific) | 33.0** | 16.2 | – | 50.8 | 797 | 45.2 | 14.6 | – | 40.2 | 799 |
| Mobility (specific) | 52.3** | 22.3 | – | 25.4 | 782 | 67.5 | 15.6 | – | 16.9 | 790 |

The PrEmo option was offered only for the Emotion scale. The asterisks in the Emoji column for Spain indicate when distributions are significantly different between Spain and Mexico
*p < 0.05; **p < 0.01
First, an emoji scale is preferred by a majority of respondents for all types of scales, with the exception of the travel one: Millennials show a clear interest in using emojis in surveys. The Like/Dislike scale and the satisfaction scale are the ones with the highest preference levels. Moreover, the emotion scale was also compared with the PrEmo scale. The results of this three-way comparison show that the emoji scale is preferred even when compared with another visual scale. This could be because emojis are a more natural way for Millennial respondents to communicate. In addition, we can see that when the concepts of a scale are more complex and less attached to feelings and emotions, the levels of preference are lower (the case of the specific scales).
Although all trends are similar in both countries, Mexico presents significantly higher proportions of participants preferring to use emojis instead of the different types of verbal scales. This could be linked to the fact that people living in Mexico use a higher number of emojis daily and use them more frequently in non-dialogic contexts (Bosch Jover and Revilla 2018). However, the Cramer's V values are low,6 meaning that the association between country and preference is weak. Differences between countries, although significant, are thus relatively small.

4.1.2 Emoji use in the current study

Because what people say can differ from what they do, we then investigated the actual use of emojis in the current study. Table 3 shows the proportions of respondents in the experimental group who used emojis to answer, in general and for different types of questions, as well as the average number of emojis they used in each case.
Table 3
Proportions of panelists using emojis in different contexts and average number of emojis used

| Context | Spain: % using | Spain: Avg. number | Spain: n | Mexico: % using | Mexico: Avg. number | Mexico: n |
| --- | --- | --- | --- | --- | --- | --- |
| In general | 98.5 | 2.7** | 403 | 98.5 | 3.6 | 401 |
| To express opinion | 88.8** | 2.3** | 376 | 94.1 | 3.3 | 391 |
| To express emotions | 83.9** | 1.8** | 353 | 92.5 | 2.8 | 343 |
| To express like/dislike | 94.0 | 1.9** | 403 | 96.0 | 2.5 | 401 |

The asterisks in the Spain columns indicate when proportions and means are significantly different between Spain and Mexico
*p < 0.05; **p < 0.01
Overall, Table 3 shows that 98.5% of panelists in the treatment group used emojis to answer at least one of the six open-ended questions, in both countries. This is even more than what was expected based on the stated willingness: some Millennials who did not state that they would be willing to use emojis still used them when encouraged to do so. For instance, for the question that asked about emotions and feelings, 88.4% of those who said they would like to use emojis (prefer or equivalent) answered the question about emotions in the experimental part using emojis, whereas 91.6% of those who said they would not like to use them answered using emojis. We should mention that although the control group was not encouraged and did not have a specific emoji keyboard, 2.0% of the respondents in this group also used emojis. In addition, contrary to our expectations, the question asking about emotions and feelings is the one with the lowest percentage of emoji use. However, these percentages are still high (83.9% in Spain, 92.5% in Mexico), and even higher than the stated willingness. Furthermore, a significantly higher proportion of participants from Mexico used emojis for the question asking for an opinion and for the one asking for an emotional reaction. Besides, participants from Mexico, on average, sent a significantly higher number of emojis. Again, this might be linked to the fact that people living in Mexico use a higher number of emojis daily (Bosch Jover and Revilla 2018).

4.2 Impact on data quality, completion time and survey evaluation

Our last goal is to assess the impact of encouraging Millennials to use emojis in open-ended survey answers on three aspects: data quality, completion time and survey evaluation, measured as explained in Sect. 3.3.3. Table 4 presents a comparison of the control and treatment groups on these three aspects.
Table 4
Comparing quality, completion time and survey evaluation between treatment and control groups

| Aspects compared | Spain: Control (n = 404) | Spain: Treatment (n = 404) | Mexico: Control (n = 405) | Mexico: Treatment (n = 401) |
| --- | --- | --- | --- | --- |
| Data quality | | | | |
| Average information conveyed | 1.4** | 1.9 | 1.4** | 2.3 |
| Average item nonresponse | 0.4 | 0.4 | 0.2 | 0.2 |
| Completion time | | | | |
| Average completion time (per question) | 28.0** | 40.6 | 31.4** | 45.2 |
| Survey evaluation | | | | |
| (Very) easy (%) | 89.9 | 86.4 | 90.2 | 86.6 |
| Liked (a lot) (%) | 66.9** | 80.1 | 79.7 | 84.3 |

The asterisks indicate when differences between proportions and means are statistically significant between the treatment and control groups. Bold in the Spain columns indicates a significant difference between countries for a given group
*p < 0.05; **p < 0.01
Concerning data quality, there is some improvement when encouraging Millennials to use emojis. Indeed, the average item nonresponse is the same for the treatment and control groups, but the average information conveyed is significantly higher for the treatment group in both countries. Concretely, it shows an increase of 35.7% for Spain and 64.3% for Mexico.
As an example, one of the like/dislike questions was about a slogan with an evident double meaning ("Licking has never been so much fun"), and most of the control group respondents just answered "double meaning", whereas in the treatment group the majority added an emoji to this answer. Those emojis were the key elements to understand the emotional impact of the slogan (found it funny, did not like it, etc.). Nevertheless, overall, 37.0% of the treatment group respondents answered only with emojis, which may hamper the interpretation of the answers. Comparing the results for both countries, we can see that participants from Mexico conveyed significantly more information than those from Spain when encouraged to answer with emojis.
Concerning the completion time per question, it is significantly higher for the treatment group in both countries (increases of 38.6% for Spain and 39.6% for Mexico). This could be because looking for the right emoji actually takes some time, and/or because respondents provide more information (e.g. same text plus an emoji), which therefore takes them longer.
Next, concerning survey evaluation, the proportion of respondents that liked (a lot) answering this set of open-ended questions is also higher for the treatment group, with an increase of 19.7% in Spain and 5.8% in Mexico. Besides, no significant differences are found in the proportions of respondents reporting that answering the experimental questions was (very) easy. Overall, this suggests a positive effect of encouraging respondents to use emojis. Besides, since a high proportion of respondents in the treatment group used emojis when encouraged, most of the effect can be attributed to the use of emojis.
Finally, to analyze if the use of emojis affects the quality, completion time and survey evaluation when controlling for several variables, Table 5 presents the coefficients of the two linear regressions and two binary logistic regressions conducted.
Table 5
Determinants of information conveyed, completion time and survey evaluation (liked and easy)

| Independent variables | Information conveyed | Completion time | Liked | Easy |
| --- | --- | --- | --- | --- |
| No. of emojis used | .15** | 3.53** | 1.04 | 1.09 |
| Women | .19** | 5.94 | 1.80** | 1.10 |
| Age | − .01 | .49 | .95* | 1.01 |
| Mexico | .19** | 4.99 | 1.00 | .99 |
| Internet usage | .02* | .73 | 1.03 | .92 |
| Extroversion | .01* | .06 | 1.05 | 1.00 |
| Creativity | .02** | .91 | 1.05 | 1.03 |
| Laziness | − .02** | − .47 | .99 | 1.03 |
| Emoji usage | .00* | − .79 | 1.07 | 1.01 |
| Others | − .02 | 4.36 | 1.28 | 1.43 |
| Constant | .97** | − .35 | 3.79 | 4.98 |
| No. observations | 642 | 677 | 684 | 684 |
| Adjusted/Nagelkerke R² | .49 | .07 | .09 | .02 |

Information conveyed and completion time are OLS models. Liked and easy are logistic models. For the logistic models, results are presented as odds ratios
*p < 0.05; **p < 0.01
Table 5 shows that a higher number of emojis used to answer the open questions significantly increases the amount of information conveyed and the completion time: for each additional emoji used, respondents conveyed 0.15 more units of information and spent 3.5 more seconds. Other variables also have significant effects: all except the presence of others and age for the information conveyed, and only age and gender for the enjoyment.

5 Discussion and conclusions

In this paper, our goal was to study the implementation of emojis in mobile web surveys for Millennials. We studied first the stated willingness of Millennials to use emojis in different survey contexts. Then, we analyzed the real use of emojis by Millennials when encouraged to answer open-ended questions using emojis. Finally, we assessed the impact of encouraging Millennials to use emojis in open-ended survey questions on data quality, completion time and survey evaluation.

5.1 Main results

Overall, we found a high willingness of Millennials to use emojis in surveys: 76.9% in Spain and 89.6% in Mexico. Furthermore, more than 80% of the respondents would like to use emojis in open-ended questions, and a majority of them would prefer using emoji scales instead of traditional scales with verbal labels or other visual scales such as PrEmo. These results confirm that Millennials show a clear interest in using emojis in surveys in both countries.
Next, regarding the actual use of emojis in open-ended questions, overall, 98.5% of panelists in the treatment group used emojis to answer at least one open-ended question. Moreover, the actual use was not always in line with what was expected based on the stated willingness: Millennials who did not state that they would be willing to use emojis still used them when encouraged to do so (this could be linked to the fact that we provided a specific keyboard), and the question with the lowest proportion of respondents using emojis was the one about emotions. One explanation is that emojis are used to communicate the emotional intent of a message (Kaye et al. 2017). When explicitly asked about emotions, an emoji may be redundant. However, when asked for an opinion, an emoji may be useful to express the underlying emotion of the opinion.
Finally, we found an overall positive impact of encouraging Millennials to use emojis in open-ended questions. Indeed, the average information conveyed and the completion time per question are significantly higher for the treatment group in both countries. Nevertheless, the fact that emojis help to convey more information does not necessarily mean that data quality is higher. The ambiguity of emojis must be taken into account. Since some emojis present a high degree of sentiment and/or semantic misconstrual (Miller et al. 2016), answers with emojis could be harder for researchers to interpret. This is especially problematic for messages based only on emojis. For instance, in our study, 37% of the experimental group answers contained only emojis and no text. How this affects substantive conclusions is unclear, but it has the potential to harm data quality. Furthermore, in order to use emojis for cross-national studies, researchers should take into account that semantic differences might exist between countries (Bosch Jover and Revilla 2018). Besides, we cannot know if the longer completion time is due to more thoughtful answers or because looking for the right emoji actually takes time.
In terms of survey evaluation, a higher proportion of respondents liked (a lot) answering this set of open-ended questions when encouraged to use emojis. Nevertheless, contrary to what Jaeger et al. (2017) found using check-all-that-apply questions, no significant differences were found in the proportions of respondents reporting that answering the experimental questions was (very) easy, nor in the item nonresponse. Finally, a higher number of emojis used increases the amount of information conveyed and the completion times.

5.2 Limits and further research

Nevertheless, these results have limits. First, we used data from an online opt-in panel, targeting only Millennials in Spain and Mexico. Thus, results cannot be extrapolated to other populations. Moreover, we tested only a limited number of scales (to compare them with emoji scales) and a limited number of open-ended questions: questions about different topics or using different contents might have led to a higher/lower use of emojis than in the current experiment. Also, we only considered a limited set of indicators for data quality, completion time and survey evaluation, and different indicators could provide different insights. Furthermore, although a high IRR was found for the random subsample of messages coded by two native Spanish-speaking researchers, the full coding of the messages was done by only one researcher. Moreover, we cannot know if longer completion times in this study are a positive or a negative result. In addition, we only studied one rendering of the emojis, the one selected for the keyboard. Finally, the regression analyses presented for completion time and survey evaluation (liked and easy) have very low explanatory power (see adjusted and Nagelkerke R²), suggesting that we might be missing important variables not measured in this study.
Therefore, further research is needed, in particular, to study the real use of emojis and its impact for other types of questions, topics and age cohorts. Equal ability to use emoji questionnaires in food-related research has been found among younger and older people (Jaeger et al. 2018b). Nevertheless, this has not been tested for open-ended questions or for other topics. In addition, since emoji keyboards can be added to the survey, the implementation could also be tested on other devices such as tablets or computers. Moreover, as Bacon et al. (2017) showed, for some questions emojis lead to more emotional responses, leading to higher satisfaction ratings. Thus, further research could focus on potential response bias when answering with emojis: for instance, when evaluating a product or an advertisement, respondents answering with emojis might tend to be more positive than when answering only with text.
Furthermore, more research is needed to understand the meaning of emojis in the context of survey answers, and how these meanings might differ between respondents. This is especially necessary for cross-national research, to understand when emojis have similar meanings across countries. In addition, considering that emojis need to be interpreted by researchers, further research should explore how emojis affect substantive conclusions and whether there are coder effects.
Finally, another line of research is to assess the comparability between respondents’ meaning of the emojis used to answer. Jaeger et al. (2018b) found that the interpretation of emojis is largely independent of gender, age and frequency of emoji use for most emojis. Nevertheless, does the meaning of emojis differ when answering a specific question? Are these differences higher when conducting cross-national research?

5.3 Conclusions

This research demonstrates that Millennials in Spain and Mexico show a clear interest in using emojis in surveys. The actual use of emojis to answer the experimental open-ended questions is even higher than the stated willingness. Providing respondents with a specific emoji keyboard in the survey, and encouraging them to use emojis, may have increased their willingness to use emojis.
Encouraging participants to use emojis to answer open-ended questions can have an overall positive impact. Emojis have the potential to increase data quality. Besides, by improving the survey evaluation, emojis might help reduce survey breakoff, which is higher for Millennials than for other cohorts (Bosch et al. 2019b). However, researchers must consider that emojis are ambiguous and harder to interpret than words, and that the meaning of emojis might differ between participants. Besides, a high percentage of messages might be answered only with emojis, which might complicate the interpretability of results. This might be less problematic if researchers are interested in collecting the emotional intention of the answers, since the affective meaning of emojis is clearer.
Therefore, taking these potential limits into consideration, allowing and/or encouraging respondents to use emojis when answering open-ended questions has the potential to improve data quality and survey evaluation, but at the risk of making open-ended answers harder to interpret.

Acknowledgements

We are very thankful to Netquest for providing us with the data, and in particular to Gerardo Ortiz for programming the survey and Carlos Ochoa for his feedback during the survey preparation.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Appendix

Appendices

Appendix 1: Sample composition of the experimental groups, for each country

 
| | Spain: Control | Spain: Treatment | Mexico: Control | Mexico: Treatment |
| --- | --- | --- | --- | --- |
| Women (%) | 50.2 | 49.8 | 49.9 | 49.9 |
| Age (average) | 25.7 | 25.8 | 25.5 | 25.4 |
| Living in a city (%) | 33.7 | 32.8 | 49.6 | 51.4 |
| Internet daily (%) | 94.6* | 97.5 | 97.3 | 96.0 |
| One smartphone (%) | 88.1 | 87.4 | 81.7 | 84.8 |
| N | 404 | 404 | 401 | 405 |

Appendix 2: Introduction, encouragement and emoji keyboard proposed to respondents in the treatment group

Introduction

In the following questions we will ask you to use emojis to express your opinions, attitudes, feelings, etc.
Emojis are pictograms or images that are used to represent faces, people, animals, signs and so on. Some examples:
Emojis are not the same as emoticons, which generally represent facial expressions using combinations of symbols, numbers and letters. Examples could be: :) ;) :o ^^ :$ xD

Encouragement

To answer this question and the following ones, you can use as many emojis as you want, apart from text.

Emoji keyboard

Appendix 3: PrEmo scale

Appendix 4: Real answers, their codification and explanation

Note: The question asked: "We are going to show you several slogans for Oreo; please tell us which reactions they provoke in you". The slogan was: "Liking has never been so funny."

Electronic supplementary material

Below is the link to the electronic supplementary material.
Footnotes
1
An English translation of the questions used in this study is available upon request to the first author. We also provide some screenshots in the Electronic Supplementary Material and the English translation of the main questions.
 
2
Z = − 1.14 (p = .26) for surveys in general, Z = − .46 (p = .65) for expressing emotion, Z = 1.84 (p = .07) for expressing opinion, Z = .84 (p = .40) for expressing like/dislike. Df = 1 for all tests.
 
3
PrEmo is a commercial tool used in market research to get insight into consumer emotions. Find more on their webpage: https://www.premotool.com/.
 
4
Results were also tested using median times instead of averages. Completion times for the median are one second lower, but the differences and their statistical significance remain the same.
 
5
The composite score for each personality trait is computed as the sum of the values of three questions (each ranging from − 3 to 3) asking to what extent respondents resemble different statements (e.g. "I feel comfortable around people").
 
6
Cramer’s V values are .16 for Like/Dislike, .14 for Emotion, .08 for Satisfaction, .13 for Travel (Specific) and .15 for Mobility (specific).
 
Literature
Barbieri, F., Ronzano, F., Saggion, H.: What does this emoji mean? A vector space skip-gram model for twitter emojis. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation, pp. 3967–3972 (2016)
Cechanowicz, J., Gutwin, C., Brownell, B., Goodfellow, L.: Effects of gamification on participation and data quality in a real-world market research domain. In: Proceedings of the First International Conference on Gameful Design, Research, and Applications–Gamification '13, pp. 58–65 (2013)
De Leeuw, E., Borgers, N., Smits, A.: Pretesting questionnaires for children and adolescents. In: Presser, S., Rothgeb, J.M., Couper, M.P., Lessler, J.T., Martin, E., Martin, M., Singer, E. (eds.) Methods for Testing and Evaluating Survey Questionnaires, Chap. 20, pp. 409–429. John Wiley & Sons Inc, Hoboken, NJ, USA (2004)
Derham, P.A.J.: Using preferred, understood or effective scales? How scale presentations effect online survey data collection. Aust. J. Mark. Soc. Res. 19, 13–26 (2011)
Desmet, P.: Measuring emotions: development and application of an instrument to measure emotional responses to products. In: Funology: From Usability to Enjoyment, pp. 111–123 (2005)
Emde, M., Fuchs, M.: Exploring animated faces scales in web surveys: drawbacks and prospects. Surv. Pract. 5, 3–5 (2013)
Emogi Research Team: 2016 Emoji Report (2016)
Hamari, J., Koivisto, J., Sarsa, H.: Does gamification work? A literature review of empirical studies on gamification. In: Proceedings of the Annual Hawaii International Conference on System Sciences, pp. 3025–3034 (2014)
Jaeger, S.R., Xia, Y., Lee, P.Y., et al.: Emoji questionnaires can be used with a range of population segments: findings relating to age, gender and frequency of emoji/emoticon use. Food Qual. Prefer. 68, 397–410 (2018)
Kaye, L.K., Malone, S.A., Wall, H.J.: Emojis: insights, affordances, and possibilities for psychological science. Trends Cogn. Sci. 21, 66–68 (2017)
Lokman, A.M., Ishak, K.K., Razak, F.H.A., Aziz, A.A.: The feasibility of PrEmo in cross-cultural Kansei measurement. In: SHUSER 2012—2012 IEEE Symposium on Humanities, Science and Engineering Research (2012)
Lu, X., Ai, W., Liu, X., et al.: Learning from the ubiquitous language. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing—UbiComp '16, pp. 770–780 (2016)
Lütters, H., Friedrich-Freksa, M., Egger, M.: Effects of speech assistance in online questionnaires. In: General Online Research Conference 18. Deutsche Gesellschaft für Online-Forschung, Cologne, Germany (2018)
Miller, H., Thebault-Spieker, J., Chang, S., et al.: "Blissfully happy" or "ready to fight": varying interpretations of emoji. In: International AAAI Conference on Web and Social Media, pp. 259–268 (2016)
Pew Research Center: Millennials in Adulthood (2014)
Puleston, J.: Online research–game on! A look at how gaming techniques can transform your online research. In: Birks, D. (ed.) Shifting the Boundaries of Research: Proceedings of the Sixth ASC International Conference. Association for Survey Computing, Berkeley, England, pp. 20–50 (2011)
Reynolds-Keefer, L., Johnson, R.: Is a picture is worth a thousand words? Creating effective questionnaires with pictures. Pract. Assess. Res. Eval. 16, 1–7 (2011)
Sampietro, A.: Emoticonos y emojis. Análisis de su historia, difusión y uso en la comunicación digital actual. Universitat de València (2016)
Strauss, W., Howe, N.: Generations: The History of America's Future, 1584 to 2069. William Morrow & Company, New York (1991)
