
Open Access 22-05-2023

Skills, availability, willingness, expected participation and burden of sharing visual data within the frame of web surveys

Authors: Patricia A. Iglesias, Melanie Revilla

Published in: Quality & Quantity | Issue 2/2024


Abstract

Although there is literature on the willingness to share visual data in the frame of web surveys and on the actual participation when asked to do so, no research has investigated participants’ skills to create and share visual data and the availability of such data, together with the willingness to share them. Furthermore, information on the burden associated with answering conventional questions and performing visual data-related tasks is also scarce. Our paper aims to fill these gaps, considering images and videos, smartphones and PCs, and visual data created before and during the survey. Results from a survey conducted among internet users in Spain (N = 857) show that most respondents know how to perform the studied tasks on their smartphone, while a lower proportion know how to do them on their PC. Also, respondents mainly store images of landscapes and activities on their smartphone, and their availability to create visual data during the survey is high when answering from home. Furthermore, more than half of the participants are willing to share visual data. When analyzing the three dimensions together, the highest expected participation is observed for visual data created during the survey with the smartphone, which is also associated with a lower perception of burden. Moreover, older and less educated respondents are less likely to capture and share visual data. Overall, asking for visual data seems feasible, especially when it is collected during the survey with the smartphone. However, researchers should reflect on whether the expected benefits outweigh the expected drawbacks on a case-by-case basis.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Web surveys have become increasingly used compared to other survey modes in the last two decades (Evans and Mathur 2018). Besides, more and more respondents are using mobile devices to answer web surveys (Couper et al. 2017; Revilla et al. 2016; Toepoel and Lugtig 2018), especially smartphones (see Jäckle et al. 2019; Read 2019). The omnipresence of smartphones (they are right in people’s pockets, anywhere, anytime; Peng and Zhu 2020) introduces new opportunities for collecting data through web surveys, in particular due to the sensors included in these devices. For instance, the GPS can be used to track respondents’ location and send ad hoc surveys (Toepoel et al. 2020), the microphone to answer open questions through voice input (Revilla et al. 2020), and the camera to send images (Bosch et al. 2019). These new tools could improve data quality by replacing or complementing conventional survey questions (Revilla 2022).
This paper focuses on requests to share visual data in the frame of web surveys. Even if the idea of requesting visual data in this frame has been mainly linked to the growth of smartphone participation, visual data can also be requested from PC participants. Visual data can be produced during the survey or can consist of files already stored by the participants.
Visual data produced during the survey can come from two main sources:
  • The device camera: Visual data coming from the camera are mostly obtained through the sensor in mobile devices, since cameras in computers (PCs) tend to be used for purposes other than taking pictures (e.g., video calls). Therefore, visual data produced during the survey include the photos and videos taken in the moment with the camera, principally by means of mobile devices.
  • Screenshots: A screenshot is a digital image of the contents displayed on the screen of a PC or mobile device. Thus, screenshots can cover any subject that respondents can access through these devices. The way a screenshot is captured and stored differs depending on the type of device. On PCs, they are made using tools specifically designed for that purpose [1] or by pressing specific buttons (such as “Print Screen”, labeled “Impr Pant” on Spanish keyboards, in Windows), and, depending on the operating system, they are buffered on the clipboard or automatically saved in a folder. On mobile devices, they are usually created by pressing a specific set of buttons and stored in the device’s library. [2]
Visual data already stored can include photos, videos, screenshots, and other visual content produced before answering the survey and stored in a place that can be accessed from the device used by respondents to answer the survey (e.g., in the smartphone's main gallery, in a folder saved in the PC memory or in the cloud). Such visual data may have been captured by the device’s camera, downloaded, received through messaging apps or social media, or produced using other devices (e.g., an analog camera).
In some cases, visual data could help improve data quality by providing more accurate information and/or reducing respondents’ burden (Herzing 2019) compared to conventional survey answers. Previous research suggests that, in particular, questions requiring recall might yield data of low quality due to the limitations of human memory (Revilla et al. 2017; Tourangeau 1999). Besides, a lack of awareness of certain topics might also be problematic for questions requiring some knowledge (de Leeuw et al. 2003).
Previous research on asking for visual data in surveys has focused mainly on respondents’ willingness to send images, on participation and compliance with the task, and, to a lesser extent, on the classification of the images. However, before asking respondents to answer survey questions with visual data, it is important to better understand when asking for such data in web surveys can be beneficial.

2 Scope, research questions and contribution

Different aspects must be considered by researchers when deciding whether asking for visual data in the frame of web surveys can be beneficial for their research. In this paper, we focus on four dimensions that we consider particularly relevant when it comes to survey participants: (a) the respondents’ skills to share visual data, (b) the availability of different types of such data, (c) the willingness to share them, and (d) the burden associated with the creation and sharing of visual data compared to conventional survey questions. We study these dimensions considering smartphones and PCs separately. Tablets are excluded because they are less used to answer surveys (Jäckle et al. 2019; Read 2019; Revilla et al. 2017). Other aspects, such as the quality of the information (visual data reducing errors compared to conventional questions) or the addition of new insights (getting information that is currently not provided through surveys), are also very important but beyond the scope of this study.
We expect asking for visual data to be beneficial for the research only if respondents know how to capture and share such data, since not having the skills to upload visual data will lead to missing information. Then, our first research question is:
  • RQ1: To what extent do respondents have the skills needed to capture and share different types of visual data?
Even if respondents know how to capture and share visual data, a high level of missingness might occur if the proportion of respondents who have the required data available is insufficient. Therefore, our second research question is:
  • RQ2: To what extent are different types of visual data available so respondents can share them within the frame of a web survey?
Even if most respondents have the skills and the visual data of interest are available, unwillingness could lead to high levels of break-off or item non-response, and, consequently, to a final number of observations that is too low to conduct reliable analyses. Moreover, it could compromise the representativeness of the results if only respondents with specific characteristics accept to share visual data. Thus, our next research question is:
  • RQ3: To what extent do respondents declare that they would be willing to share different types of visual data?
We focus on stated willingness, i.e., respondents are not asked to send visual data but to declare whether they would be willing to share them. This stated willingness is highly related to the likelihood of actually responding (Jäckle et al. 2019).
To the best of our knowledge, skills, availability and willingness to share visual data have not been studied together, although we consider the intersection of these three dimensions a proxy of actual participation, since respondents must (a) have the skills, (b) have visual data available, and (c) be willing to share them in order to successfully do so. Therefore, we refer to this intersection as “expected participation” throughout the paper and propose the following research question:
  • RQ4: What is the expected participation when asking for different types of visual data (i.e., the proportion of respondents who have the skills, have such visual data available and are willing to share them)?
Furthermore, asking for visual data is beneficial to participants if it reduces their burden compared to conventional questions. Previous research suggests that answering with images takes longer than answering a single open question (Bosch et al. 2022). However, the burden might become lower if one piece of visual data replaces the answers to several conventional questions. Thus, our fifth research question is:
  • RQ5: To what extent do respondents consider it burdensome to perform visual data-related tasks, compared to answering with conventional response formats?
Finally, the levels of skills, availability, willingness and burden may differ across respondents depending on their sociodemographic profile, which could affect the representativeness of the results obtained through visual data. Thus, our final research question is:
  • RQ6: How do sociodemographic variables such as age, gender and education affect the skills, availability, willingness, expected participation and burden to share different types of visual data?
By answering these research questions, we provide information that can be used by researchers to identify whether they should consider requesting visual data in their projects and which type of visual data.
Moreover, research so far has focused on the capture and sharing of images among smartphone users. In contrast, we consider not only photos but also videos and screenshots, shared from both smartphones and PCs.

3 Background

3.1 Skills

There is not much evidence related to the skills needed to share visual data in the frame of web surveys. Bosch et al. (2019) found that around 25% of respondents did not know how to upload an image within a web survey, both for images taken in the moment and for already stored ones. Furthermore, in the study of Ilic et al. (2022), 12–15% of participants reported technical difficulties in uploading images taken in the moment. Regarding screenshots, Ohme et al. (2020) found that 61.5% of those who finished the survey did not successfully upload a screenshot, with part of them sharing incomplete or imprecise content.
However, the lack of evidence on whether respondents have the skills to create and share photos, videos and screenshots from their smartphone and/or PC prevents us from knowing the extent to which respondents can be asked for visual data, especially considering the different ways to create and/or store them.

3.2 Availability

Regarding data produced during the survey by the device’s camera, the availability depends mainly on the possibility for respondents to create the visual data from the place where they answer the survey. The main limitation in this regard may be related to not being at the location from which the respondent is asked to provide visual data. However, since surveys are mostly completed from home, even when respondents answer from smartphones (Revilla et al. 2016), the availability to produce visual data of things within the home may be high if respondents consider that their context allows them to do so. For instance, if they are asked for a picture of the TV or the last electricity bill, being at home makes it more likely that they can take such a picture or look for the bill (either paper or electronic). Furthermore, the device used to answer the survey might also affect availability. In particular, in the case of PCs, the possibility to produce visual data in the moment might be lower.
The second kind of data produced during the survey is screenshots. Screenshots can be taken on most of the devices from which the survey can be answered, covering any topic that respondents can access through these devices. Availability may be limited by the size of the device screen, since screenshots only cover the content visible on the screen. Moreover, some mobile apps block screenshots by default for security reasons (e.g., some banking apps). However, content accessed through the browser and other apps can be extracted using this method, making it possible to capture screenshots of almost everything viewed on mobile devices and PCs.
Regarding visual data already stored, their availability depends mainly on how respondents store them. There are plenty of options to store data: files can be saved automatically in smartphone galleries or in the cloud, or purposely stored on hard disk drives or PCs. Therefore, the limitations have to do with the chosen storage support, which may vary across types of visual data (e.g., photos taken with the smartphone camera are stored on the device itself, while photos taken with a digital camera are on a hard disk drive).
Moreover, within each option, the specific organization implemented by the individual might vary. For instance, some people might create sub-folders within their smartphones, whereas others might keep everything in the same folder. Then, if respondents are asked for a photo of something that happened months ago, they might have problems finding it, especially if they have a lot of visual data in their gallery.
Finally, people may store visual data useful to answer questions on some topics but not others. Individuals mostly photograph other persons, food, pets, events (such as weddings, concerts and birthdays), cars and landscapes (Perry 2015). Also, nearly 15% of pictures are of practical things, such as receipts or shopping lists (Chennapragada 2018). Thus, although we expect a wide range of visual data to be available (especially on smartphones), we still need to learn more about the visual data that respondents store.

3.3 Stated willingness to share visual data in a survey

Previous studies about willingness have focused mainly on images taken by the device camera. In the United Kingdom, 65% of smartphone respondents in the Understanding Society Innovation Panel stated being willing to use their smartphone camera to take pictures or scan codes for a survey (Wenz et al. 2019). In Spain, 49.6% of respondents in the Netquest opt-in online panel declared that they would take photos with their smartphone and send them (Revilla et al. 2019). In the Netherlands, 38.2% of the LISS panel respondents stated being willing to share a photo of their house, 23.6% a video of the surroundings and 17.7% a selfie (Struminskaya et al. 2021).
The diversity of results does not allow conclusive statements regarding willingness. Moreover, videos and screenshots have not been studied. Thus, further research is needed.

3.4 Actual participation

Different studies have asked respondents to share visual data created within the frame of web surveys. First, in the Understanding Society Spending Study sample (a subsample of the Innovation Panel), of the 2,112 persons invited to download an app to scan and send receipts, only 10.2% did so at least once a week over the course of the five-week fieldwork (Jäckle et al. 2019).
Bosch et al. (2019), using the Netquest opt-in panel, asked millennials for images taken in the moment. In Spain, 48.6% of respondents sent an image, while 24.2% skipped the question. The rest said they did not know how to share them. In Mexico, 57.6% sent images and 16.6% skipped the question.
In the Respondi opt-in online panel in Germany, higher item non-response was found for those asked to provide images taken in the moment than for those asked to answer by text (Bosch et al. 2022). Also, showing a motivational message increased the likelihood that respondents would share an image.
In the Netherlands, two groups of respondents in the LISS panel were asked to answer by sharing a photo or by typing text, respectively, while a third group could choose the answering method (57% chose to share a photo). Between the two image groups there were no compliance differences: giving a choice did not affect task completion. However, the completion rates for the text condition were significantly higher than those of the image groups (Ilic et al. 2022).
Regarding screenshots taken in the moment, one study in the Netherlands asked respondents of an online panel to send screenshots of the iOS Screen Time function included in iPhones. Only 11.6% of the sample successfully shared a screenshot with the information asked (Ohme et al. 2020). Interestingly, in another study in the United States asking for this same task in several waves, 78% of the participants of the first wave had successfully uploaded screenshots by the fourth wave (Sewall et al. 2022).
Finally, Bosch et al. (2019, 2022) also asked for already stored images. The former found that 54.7% of their sample of millennials in Spain sent an image whereas 21.8% skipped the task, while in Mexico 62.5% complied with the task and 15.2% decided not to answer. The latter found that between 48.4% and 74.9% of the respondents included in their experimental groups complied with sending an image (versus 97.8% to 99.1% in the text-based groups).

3.5 Respondents’ burden

Respondents’ burden can be assessed in an objective (e.g., time and/or resources expended to provide answers) or subjective way (e.g., how respondents perceive the survey in terms of time, difficulty and stress). Burden can be affected by factors such as the survey length, the effort and capabilities required, and the respondents' motivation (Read 2019).
Regarding the burden associated with sending visual data, we expect it to be particularly related to respondents’ skills and to the availability of visual data. For instance, when asked for already stored data, depending on how respondents store their files and how many files they have stored, finding visual data may be perceived as easy and quick or as long and complex. When asked for data captured during the survey using the device camera, the burden may vary depending on respondents’ smartphone usage competence and the conditions in which they answer the survey (e.g., place, others present, etc.).
Overall, Bosch et al. (2022) found that respondents providing images took more time to answer than those typing text, both when answering from PCs and smartphones, but little is known about the perceived burden itself.

4 Methods and data

4.1 Questionnaire

The questionnaire included a maximum of 71 questions. [3] After a few sociodemographic and contextual questions, respondents were asked whether they would agree to participate in a survey if they could only complete it from a PC (yes/no; compulsory question) or only from a smartphone (yes/no; compulsory question). Respondents answering “yes” for both devices were presented with a set of similar questions about each device. The order of the sections was randomized: some respondents started with the questions about the PC, others with the ones about the smartphone. Respondents answering “yes” for only one of the two devices were shown only the questions about that device. Respondents answering “no” for both devices were filtered out. For the sake of simplicity, we call the individuals answering the block of questions about the smartphone “smartphone respondents”, and those answering the block about the PC “PC respondents”. In neither case does the denomination refer to the device respondents used to answer the survey.
The main part of the questionnaire covered the four dimensions studied in the research questions (skills, availability, willingness and burden). They were presented in the following order so that the questions flowed intuitively and to reduce order effects:
  • Skills to create and share visual data: “Smartphone respondents” were asked whether they knew how to take a photo, make a video, take a screenshot and find a file in order to share it using their smartphone (yes/no/not sure). “PC respondents” were asked only two of these questions (screenshots and sharing files), since many PCs (especially desktops) do not allow taking photos or making videos. Moreover, even when PCs have a camera, it might be complicated to capture the visual data required to answer survey questions. For instance, if respondents are asked to take a photo of their heating system (as in Ilic et al. 2022), they would need to move the PC around the house to do so, which would not be possible for desktops and might not be handy even for laptops.
  • Burden: Respondents who reported having the skills to do at least one of the tasks were asked how much effort it took them to perform these tasks [4] (0 = “no effort at all” to 4 = “a huge effort”). In addition, all respondents were asked how much effort it took them to answer one, five or ten conventional radio button questions and one open-ended question, from their smartphone and/or PC, using the same 5-point scale.
  • Availability: Availability has different meanings depending on whether visual data are already stored or must be captured during the survey.
    • For visual data captured during the survey: As explained in the first point on skills, we do not consider PCs. For smartphones, we consider that the data are available if the respondents are in a situation allowing them to take a photo or make a video of what the researchers ask for. [5] Thus, the place from which respondents answer the survey plays a key role. Other factors may also be important. For instance, if respondents are asked to take a photo of their balcony/garden/terrace/courtyard/patio (as done in the study of Ilic et al. 2022), respondents need to be at home, but, in addition, they need to be in a situation allowing them to take a picture of the outside area. For example, if the outside area does not have a proper lighting system, respondents should take the photo when there is enough daylight.
      Thus, to study the availability for visual data captured during the survey, besides asking for the place respondents were answering from, we asked respondents if they were in a situation allowing them to take a photo or make a video (at the very moment) of themselves and of something in the place from where they answered the survey, using their smartphone.
      These questions were asked to all “smartphone respondents”. However, for respondents who stated not having the skills or who did not answer the skills questions, it was explicitly mentioned that they should answer considering only the availability, not whether they had the skills. This was done to distinguish the lack of skills from the lack of available data: researchers could provide the necessary information within the survey to teach participants who do not have the skills how to create and/or share the visual data. Then, these participants might be in a situation to provide the data.
    • For already stored visual data: We expected a huge majority of participants to have some visual data already stored, both on smartphones and PCs. Thus, we decided to ask about the availability of visual data covering some specific topics. We focused on those that seemed less likely to be covered with visual data captured during the survey itself. For instance, if researchers were interested in asking for a photo of the respondents themselves (as done by Bosch et al. 2022), taking a picture in the moment with the smartphone used to answer the survey is expected to be possible in most cases. Thus, to keep the survey as short as possible, we decided to ask about the availability of visual data for four topics for which we expected that taking the photo or making the video in the moment would not be possible in most cases: (1) food and dishes, prepared and/or consumed, (2) products the respondents had bought or planned to buy, (3) landscapes and places visited, and (4) events and activities in which they participated. These questions were asked for both the smartphone and the PC, since respondents might have different kinds of visual data already stored depending on the device type. Moreover, we distinguished between images and videos, since we expected respondents to have fewer videos than images saved on their devices.
  • Willingness: Following previous research on stated willingness (Revilla et al. 2019; Struminskaya et al. 2021; Wenz et al. 2019), we directly asked respondents if they would be willing to complete the different tasks from their PC (take and share a screenshot, share an already stored image, share an already stored video) and/or smartphone (the three asked for the PC, but now with the files coming from the smartphone, plus two additional ones: take a photo and share it, and make a video and share it). The answer scale included three categories: “yes”, “no” and “it depends on the specific photo/video/screenshot”. Similarly to what was done for availability, respondents who stated not having the skills or who did not answer the skills questions were instructed to consider what they would do if they had the skills, since some respondents lacking the skills might be willing to perform the tasks if provided with information on how to do them.

4.2 Data collection

Data were collected through the Netquest opt-in online panel in Spain (www.netquest.com), where participants get rewards each time they participate in a survey.
The target population included all people aged 18 years or older living in Spain who had access to the internet. Quotas for age, gender and education were used to obtain a sample similar on these variables to the overall adult internet population living in Spain, based on the estimations made by the National Statistics Institute of Spain. [6] The survey was accessible from any device, and the layout was optimized for mobile devices. Respondents could continue without providing an answer to most questions. [7]
Data collection took place in May 2021. 1,581 individuals received the invitation to participate and 1,376 started the survey. Of those starting the survey, 1,296 (94.2%) provided informed consent and were presented with the first survey question. 421 respondents (30.6%) were filtered out because quotas were full and 5 (0.4%) due to not passing basic anti-fraud checks. 4 (0.3%) abandoned the survey before the end and 9 (0.7%) stated they would not participate in a survey that had to be answered exclusively from a smartphone or from a PC. This left 857 respondents (62.3% of those who started) for the analyses. 796 (92.9%) answered the smartphone block, 691 (80.6%) the PC block, and 630 (73.5%) both. 50.5% of the participants were female. The mean age was 45.8 years. 34.9% had a higher degree. 68.6% answered using smartphones, 28.7% using PCs and 2.7% using other devices. The median completion time was 7.1 min.

4.3 Analyses

The analyses were performed using R 4.0. The script is available at https://osf.io/vfxbg.
To answer RQ1 (skills), we report the percentages of respondents who stated knowing how to do each of the listed tasks with their smartphone and/or PC (i.e., answering “yes”), over those who saw the question (n = 796 for smartphone and n = 691 for PC).
For RQ2 (availability), we present the results separately for visual data produced during the survey and already stored. As for the first, we present the percentages of “smartphone respondents” who reported being in a situation allowing them to take a photo or make a video of themselves and of something in the place they were answering the survey from (considering the sample for each place). For instance, within those answering from home, we report the percentage who report being in a situation allowing them to take a photo of something in their house (n = 662).
Regarding visual data already stored, we report the percentages of availability for each topic (“yes”), making a distinction between the type of device (smartphone vs. PC) and kind of visual data (image vs. video). The proportions consider all respondents who saw this question (n = 796 for smartphone and n = 691 for PC).
To answer RQ3 (willingness), we report the proportions of respondents who stated being willing to create and/or share visual data during a survey, as well as the proportion answering “it depends on the photo/video/screenshot asked”. We distinguish between the type of device and visual data: in smartphone, screenshot, photo and video taken in the moment, and image and video already stored; in PC, screenshot created in the moment, and image and video already stored. The proportions were calculated over the total respondents for each block.
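As a concrete illustration of how these proportions are computed, the following is a minimal sketch in R (the language used for the analyses); the variable names are hypothetical stand-ins for the survey items, not the names used in the actual script.

```r
# Percentage answering "yes" among all respondents who saw the question:
# non-responses (NA) count in the denominator, as described in the text.
pct_yes <- function(x) 100 * sum(x == "yes", na.rm = TRUE) / length(x)

pct_yes(df$skill_screenshot_sp)    # e.g., skills: screenshot on the smartphone
pct_yes(df$will_stored_image_pc)   # e.g., willingness ("yes"), PC block
```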
To estimate the proportions of respondents expected to participate, that is, those who simultaneously have the skills, are in a situation to provide the visual data (availability), and are willing to share them (RQ4), we created dummy variables for visual data produced in the moment and already stored. In the first case, separate variables matched images and videos of the respondents themselves, the house, the work/study place, and other places (dummy = 1 when respondents simultaneously stated that they knew how to take a photo/make a video, that they were in a situation to create a piece of visual data of themselves/the place they were answering from, and that they were willing to create a photo/video/screenshot and send it). In the case of selfies, we used the general percentage (not separated by the place from which respondents answered the survey). As for visual data already stored, the variables matched images and videos of food, products, landscapes and activities on each device (dummy = 1 when respondents simultaneously stated that they knew how to share a file, that they had visual data of the given category already stored on the device(s), and that they were willing to send an already stored photo/video). Willingness only considered the “yes” answers (not “it depends on the file”). The analyses distinguish the moment the visual data were created, the device, and the type of visual data, following the criteria previously specified for each dimension. The construction of these dummies is illustrated in the sketch below.
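A minimal sketch of the dummy construction, again with hypothetical variable names (the actual script is available at https://osf.io/vfxbg):

```r
# Expected-participation dummies, assuming a data frame `df` with one row per
# respondent and "yes"/"no" (or NA) items; %in% treats NA (non-response) as
# not meeting the condition.

# Visual data created during the survey: a selfie (image).
# Skills, availability and unconditional willingness ("yes") must all hold.
df$exp_selfie_image <- as.integer(
  df$skill_take_photo %in% "yes" &   # knows how to take a photo with the smartphone
  df$avail_selfie     %in% "yes" &   # in a situation allowing a selfie at that moment
  df$will_photo       %in% "yes"     # willing to take and share a photo ("yes" only)
)

# Already stored visual data: an image of food on the smartphone.
df$exp_food_image_sp <- as.integer(
  df$skill_share_file_sp  %in% "yes" &  # knows how to share a stored file
  df$avail_food_image_sp  %in% "yes" &  # has images of food stored on the smartphone
  df$will_stored_image_sp %in% "yes"    # willing to share a stored image ("yes" only)
)
```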
To answer RQ5 (burden), the respondents’ mean burden of answering radio button and open questions was calculated and compared with the perceived burden of performing visual data-related tasks. The means were calculated considering only those respondents who answered all the burden questions, so comparisons are for the same sample within each device (n = 694 for smartphone and n = 434 for PC). We distinguish between smartphones and PCs, since specific tasks, such as typing an open answer, may have a different perceived burden depending on the device.
The significance of the differences (at the 5% level) for the first four research questions was tested using Z-tests when proportions were compared between different groups (e.g., those answering from home compared to those answering from their workplace) and McNemar’s test for comparisons within the same group (e.g., smartphone respondents) or between largely overlapping ones (e.g., between devices, with 630 respondents in common). Paired-samples t-tests were used in the case of means. Illustrative calls are sketched below.
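For concreteness, a sketch of the corresponding base R calls, with hypothetical variable names; for prop.test(), the counts are reconstructed from the percentages and group sizes reported in Table 2.

```r
# Z-test for two independent proportions (e.g., availability of visual data of
# the place, among those answering from home vs. from the workplace/study center);
# prop.test() without continuity correction is equivalent to a two-sample Z-test.
prop.test(x = c(549, 57), n = c(662, 86), correct = FALSE)  # approx. 82.9% vs. 66.3%

# McNemar's test for paired proportions within the same respondents
# (e.g., knowing how to take a screenshot vs. how to share a file, smartphone).
mcnemar.test(table(df$skill_screenshot_sp, df$skill_share_file_sp))

# Paired-samples t-test for mean burden (e.g., taking a photo vs. answering one
# radio button question, among respondents who answered both burden items).
t.test(df$burden_photo_sp, df$burden_1rb_sp, paired = TRUE)
```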
Regarding RQ6, we performed regression analyses to assess whether and to what extent each of the dependent variables (levels of skills, availability, willingness, expected participation and burden, as defined previously) is affected by the following respondents’ characteristics: gender (1 = woman), age (numerical), level of education (with primary education or less as the reference category against, separately, secondary education and tertiary education), and experience as a Netquest panelist (logarithm of the number of surveys completed in the last three months). We selected these variables because previous studies found that they sometimes affect the willingness to perform innovative tasks during a survey (Revilla et al. 2019; Wenz et al. 2019).
Logistic regressions were used for skills, availability and expected participation (1 = has the skills/is available/is expected to participate). For willingness, we also included the category “it depends on the file”, so we used a multinomial model. For burden, we performed linear regressions with the perceived burden of answering conventional survey questions and performing visual data-related tasks on the 0 to 4 scale as dependent variables, considering only respondents who answered all the burden questions. When an analysis had few cases (e.g., those answering from their workplace), regressions were not performed. Respondents who saw a question but did not answer it were excluded from the regression analyses. For the sake of simplicity, we present the mean and standard deviation of the regression coefficients per dimension, which allows assessing the average magnitude of the coefficients and whether the general effect was positive or negative. The full results of the 74 regressions performed (one for each type of skill, availability, willingness, expected participation, and burden, for both PC and smartphone), including the value of each coefficient and its significance, are available at https://osf.io/kygp2. The three model types are sketched below.
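A hedged sketch of the three model types and of the coefficient summary, with hypothetical variable names standing in for the variables defined above:

```r
# Sketch of the regression specifications described in the text;
# all variable names are hypothetical stand-ins.
library(nnet)  # multinom() for the multinomial logit

# Logistic regression for a binary outcome
# (skills / availability / expected participation, coded 0/1):
m_skill <- glm(skill_screenshot_pc ~ gender + age + educ_secondary +
                 educ_tertiary + log_surveys_3m,
               data = df, family = binomial)

# Multinomial logit for willingness, a factor with levels
# "no", "yes" and "it depends on the file":
m_will <- multinom(will_stored_image_sp ~ gender + age + educ_secondary +
                     educ_tertiary + log_surveys_3m, data = df)

# Linear regression for perceived burden (0-4 scale):
m_burden <- lm(burden_photo_sp ~ gender + age + educ_secondary +
                 educ_tertiary + log_surveys_3m, data = df)

# Summary as reported in Table 8: mean and SD of each predictor's coefficient
# across all models belonging to one dimension (in practice, one fitted model
# per task/topic/device; a single-model list is shown here only for brevity).
skill_models <- list(m_skill)
coefs <- sapply(skill_models, coef)   # matrix: predictors x models
apply(coefs, 1, function(b) c(mean = mean(b), sd = sd(b)))
```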

5 Results

5.1 Skills of the respondents to create and/or share visual data

To answer RQ1, Table 1 shows the percentage of respondents stating that they have the skills to create and share different types of visual data using their smartphone and/or PC.
Table 1
Proportion of respondents stating that they know how to do each task

Device | Screenshot, produced during the survey (a) | Photo, produced during the survey (b) | Video, produced during the survey (c) | Any file, already stored (d)
Smartphone (n = 796) | 91.8 b,c | 99.0 | 98.0 | 93.1 b,c
PC (n = 691) | 64.8 d | – | – | 86.8

The letters indicate significant differences between tasks. Bold indicates significant differences between devices
First, almost all participants know how to take a photo (99.0%) and make a video (98.0%) using their smartphone. Second, the proportions of respondents knowing how to create screenshots and send already stored files with their smartphone are significantly lower (91.8% and 93.1%, respectively), although at least 9 out of 10 know how to perform these tasks. All smartphone-related skills are thus widespread, even if there are differences among them.
Finally, significantly lower proportions of respondents declared knowing how to send an already stored file (86.8%) and create a screenshot (64.8%) using the PC than using the smartphone. The latter is the least widespread skill among respondents.

5.2 Availability

5.2.1 Visual data during the survey

To answer RQ2, Table 2 presents the percentages of respondents in a situation that would allow them to create visual data of themselves (“selfie”) and of something in the place they were when answering the survey.
Table 2
Proportion of respondents in a situation to produce visual data

Variable | Total | House (a) | Workplace/study center (b) | Other places (c)
In a situation to produce a selfie from… (%) | 73.6 | 74.3 | 72.1 | 66.7
In a situation to produce visual data of something in the place they are answering from (%) | 80.8 | 82.9 b | 66.3 | 77.1
n | 796 | 662 | 86 | 48

The letters indicate significant differences between categories. The three place columns refer to the place from which respondents were answering (“photo/video of something in…”)
Nearly three in four “smartphone respondents” would be in a situation allowing them to take a selfie, with no statistically significant differences based on the place from which they answered. This is lower than expected, especially for respondents answering from their house. However, an analysis of the open-ended answers to a question about reasons for not sending a selfie shows that a large proportion of respondents provided arguments related to willingness rather than availability (e.g., they did not like the way they looked at the time). Therefore, the availability to send selfies from home is likely higher than stated.
Furthermore, 83.2% completed the survey from home and 82.9% of them stated being in a situation allowing them to create a piece of visual data of something inside their house. Only a few respondents answered from other places. Moreover, the availability is lower for these other places, especially in the case of the workplace/study center.

5.2.2 Visual data already stored

Concerning the availability of already stored visual data, Table 3 presents the percentages of respondents storing images and/or videos of different topics in their device(s).
Table 3
Proportion of respondents storing different types of visual data

Device | Type of data | Food (a) | Products (b) | Landscapes/places visited (c) | Events/activities (d)
Smartphone (n = 796) | Images | 67.1* b | 63.3* | 90.7* a,b,d | 85.3* a,b
Smartphone (n = 796) | Videos | 37.7 b | 33.5 | 79.5 a,b,d | 75.3 a,b
PC (n = 691) | Images | 38.8* | 41.4* | 76.1* a,b,d | 72.9* a,b
PC (n = 691) | Videos | 23.6 | 24.3 | 62.7 a,b | 60.5 a,b

The letters indicate significant differences between categories. Bold indicates significant differences between devices. * indicates significant differences between types of visual data. The category “Images” includes photos and screenshots
First, respondents have more images available than videos for all topics, regardless of the type of device. The differences range from 10.0 percentage points (for events and activities in smartphone) to 29.8 (for products in smartphone). Thus, for any of these topics, it is more feasible to get images than videos for both devices.
Second, a higher proportion of respondents store visual data in these categories on their smartphone than on their PC, both considering images and videos. This suggests that a higher participation in sharing visual data can be expected for “smartphone respondents”, due to a higher availability of images and videos in these devices.
Lastly, more respondents store visual data related to landscapes and activities than to food and products, regardless of the device and the type of visual data.

5.3 Willingness to share visual data

As for RQ3, Table 4 presents the proportion of respondents who stated that they would be willing to share different types of visual data during a survey (“yes”), or that “it depends”.
Table 4
Proportion of respondents willing to share different types of visual data

Device | Willing | Screenshot, produced during the survey (a) | Photo, produced during the survey (b) | Video, produced during the survey (c) | Image, already stored (d) | Video, already stored (e)
Smartphone (n = 796) | Yes | 62.1 c,d,e | 61.9 c,d,e | 52.9 d,e | 43.6 e | 37.2
Smartphone (n = 796) | It depends | 30.0 c,d,e | 30.5 c,d,e | 34.8 d,e | 47.1 | 47.1
PC (n = 691) | Yes | 48.5 e | – | – | 49.1 e | 41.7
PC (n = 691) | It depends | 36.5 d,e | – | – | 39.4 | 41.4

The letters indicate significant differences between types of visual data. Bold indicates significant differences between devices. The category “Image” includes photo and screenshot
First, absolute willingness (“yes”) ranges from 37.2% (videos already stored on the smartphone) to 62.1% (screenshots on the smartphone). Lower figures are found for visual data already stored, whereas higher proportions of respondents state absolute willingness for visual data produced during the survey. For screenshots and photos created with the smartphone, absolute willingness is over 60%, even though respondents were presented with the option “it depends”. Thus, these two methods are perceived positively by most respondents. Regarding visual data already stored, larger values of conditional willingness (“it depends”) are found, especially among “smartphone respondents”.
Second, even if visual data produced during the survey have higher levels of “Yes” answers, the willingness to share screenshots (the only in-the-moment task studied for both devices) is significantly higher among “smartphone” than “PC respondents”.
Finally, more than 90% of respondents would be willing (always or under some conditions) to take a photo, take a screenshot, or send a stored image with their smartphone, whereas lower proportions would be willing to perform the tasks involving videos or PCs. However, it is worth mentioning that all activities show a combined willingness (“yes” plus “it depends”) higher than 80%.

5.4 Expected participation

5.4.1 Visual data during the survey

To answer RQ4, Table 5 presents the percentages of individuals that (a) stated to have the skills to take photos/videos, (b) were in a situation that would allow them to create a piece of visual data during the survey (i.e., have availability), and (c) reported a positive willingness to share a photo/video created during the survey.
Table 5
Proportion of respondents expected to participate for visual data created during the survey

Type of data | Selfie | Something in the house | Something in the workplace/study center | Something in other places
Images | 48.7 | 54.2 | 43.0 | 52.1
Videos | 42.7 | 46.7 | 43.0 | 45.8
n | 796 | 662 | 86 | 48

No significant differences between categories were found. Bold indicates significant differences between types of visual data. For willingness, we consider only the ones saying “yes”. The “Selfie” category is based on the general value for that category in Table 2
Nearly half of the “smartphone respondents” provided a positive response in the three dimensions at the same time when it comes to images. Thus, we can expect that around half of the respondents would actually participate, which is in line with previous research (Bosch et al. 2019). This is particularly relevant for selfies (48.7%) and photos of something inside the house (54.2%), given that most respondents answered from home.
Also, for both selfies and visual data from the house, significantly higher proportions of respondents are expected to participate if asked for images than for videos. However, if researchers have a particular interest in getting videos, it is likely that 4 in 10 participants would send them.

5.4.2 Already stored visual data

Next, Table 6 presents the percentage of respondents who have the skills to send files from their device(s), have images/videos stored in them, and are willing to share them within a web survey.
Table 6
Proportion of respondents expected to participate for already stored visual data

Device | Type of data | Food (a) | Products (b) | Landscapes/places visited (c) | Events/activities (d)
Smartphone (n = 796) | Images | 31.4* c,d | 29.7* c,d | 39.2* d | 36.9*
Smartphone (n = 796) | Videos | 18.2 c,d | 17.0 c,d | 32.2 | 30.5
PC (n = 691) | Images | 23.7* c,d | 24.0* c,d | 38.4* | 36.9*
PC (n = 691) | Videos | 14.0 c,d | 14.2 c,d | 28.7 | 28.2

The letters indicate significant differences between categories. Bold indicates significant differences between devices. * indicates significant differences between types of visual data. The category “Images” includes photos and screenshots. For willingness, we consider only the ones saying “yes”
Overall, the proportions of respondents meeting the three dimensions are lower than in the case of visual data created during the survey. Thus, we expect somewhat lower actual participation when asking for already stored data.
However, the level of actual participation is expected to vary depending on the topic, type of visual data and device. Landscapes is the category with the highest expected participation, followed by events, food and products. Moreover, in each category and device, the highest proportions are observed for images. Finally, higher proportions are observed for smartphones than for PCs: the differences between devices are significant in almost all categories.

5.5 Perceived burden

To answer RQ5, Table 7 presents the mean perceived burden of answering different types of survey questions and performing visual data-related tasks measured using a scale from 0 (“No effort at all”) to 4 (“A huge effort”).
Table 7
Mean burden of answering conventional questions and performing visual data-related tasks (0–4 scale)

Device | 1 RB (a) | 5 RB (b) | 10 RB (c) | Open narrative (d) | Screenshot (e) | Photo (f) | Video (g) | Already stored files (h)
Smartphone (n = 694) | .11 b,c,d | .22 c,d | .42 | .69 | .10 b,c,d,h | .08 a,b,c,d,g,h | .12 b,c,d,h | .24 a,c,d
PC (n = 434) | .13 b,c,d | .21 c,d | .44 d | .75 | .24 a,c,d,h | – | – | .15 c,d

Columns (a)–(d) are conventional survey questions; columns (e)–(h) are visual data-related tasks. The letters indicate significant differences between types of questions. Bold indicates significant differences between devices. RB stands for “radio button question(s)”
First, all questions and tasks present a very low perceived burden (max = 0.75). Second, even if the level of effort is perceived as small, open narrative questions have the highest perceived burden compared to visual data-related tasks on both devices. Third, making a video and taking a screenshot using the smartphone represent less effort than answering 5 or 10 radio button questions, whereas taking a photo is even perceived as less burdensome than answering one radio button question. In that sense, performing visual data-related tasks would be perceived as less burdensome than answering conventional questions in a survey. When it comes to PCs, producing screenshots has a lower perceived burden than answering 10 radio button questions.
Lastly, the perceived effort associated with sending visual data already stored is higher than that related to visual data created in the moment. Nevertheless, sharing a file is still perceived as less burdensome than answering 10 radio button questions or an open narrative question.

5.6 Impact of the sociodemographic attributes on the skills, availability, willingness, expected participation and burden

To answer our last research question, the full results of the 74 regressions performed, including the value of each coefficient and its significance, are available at https://osf.io/kygp2. As a way to summarize these results, Table 8 presents the mean of the regression coefficients for each of the dimensions (skills, availability, willingness: yes, willingness: it depends, expected participation, and burden). [8]
Table 8
Mean and standard deviation (SD) of the regression coefficients per dimension
[Table available as an image at https://static-content.springer.com/image/art%3A10.1007%2Fs11135-023-01670-3/MediaObjects/11135_2023_1670_Tab8_HTML.png]
First, gender has a negative effect on almost every dimension (the only exception being “willingness: it depends”), showing that women have an overall more negative disposition than men, although they might find survey questions and visual data-related tasks less burdensome. However, the effects are significant in only a few cases.
Second, age has a negative effect on all the dimensions and for both devices. Even if this is good news for burden (a negative sign means less burden), the mean coefficient for that dimension is too small (−0.001 for smartphone and −0.002 for PC) to suggest that this impact is relevant. Additionally, gender and age are the only variables negatively affecting the expected participation for both devices, but especially for smartphones: for this type of device, gender and age are the two variables with the most negative significant effects.
Third, middle and high education have overall positive effects for both devices compared to lower education. The effect of middle education is especially remarkable for participation from the PC, since it is always positive and significant for expected participation.
Higher participation in Netquest’s surveys during the last three months positively affects almost all the dimensions for both devices. As expected, since respondents already participate in a panel, this variable also increases the likelihood of expected participation (skills, availability and willingness together) for smartphones and PCs (with more significant effects for smartphones).

6 Conclusions

6.1 Summary of main results

Different aspects must be considered by researchers when deciding whether it can be beneficial for their research and for their participants to ask for visual data in the frame of web surveys. In this paper, we focused on four dimensions that we consider particularly relevant (and their combination): skills, availability, willingness and burden.
To shed some light on these four dimensions, we used data from an online survey implemented in an opt-in online panel in Spain. First, we found that most respondents have the required skills to capture and share visual data within the frame of a web survey (RQ1). The skills are especially high for smartphones, whereas the task with the lowest level of skills is taking a screenshot using the PC.
Second, most respondents were in a situation to create visual data in the moment, particularly for things inside the house (RQ2). This is highly relevant considering that 83% of participants answered from their house. For already stored visual data (also RQ2), it is more likely to have visual data available when considering images (vs. videos), smartphones (vs. PCs), and landscapes and activities (vs. food and products).
Third, we found high levels of willingness to share visual data (RQ3). Indeed, even when respondents were offered an “it depends” option, around 60% answered “yes”, showing an unconditional willingness to provide visual data. This is in line with what was previously found in the United Kingdom (Wenz et al. 2019). Nevertheless, previous research did not consider conditional willingness (“it depends”). When taking this option into account, our study hints at a higher disposition of respondents to share visual data in the frame of web surveys than previous studies (Revilla et al. 2019; Struminskaya et al. 2021), with at least 8 in 10 participants who would eventually be willing to share visual data from smartphones and/or PCs. The highest willingness relates to sharing photos and screenshots created with the smartphone during the survey, and images already stored (on the same device).
Fourth, when analyzing the expected participation (RQ4), we found higher proportions of respondents who have the skills, have data available and are willing to share for visual data produced during the survey (around 40–55%). For visual data already stored, these proportions are sometimes quite small (minimum = 14.0% for videos of food on PCs). If we consider that respondents would participate if they had the skills, had data available and were willing to, our results combining these three dimensions should be in line with previous studies about actual participation. We do find similar results to Bosch et al. (2019) for visual data created during the survey (around 50%). However, for already stored visual data our findings show lower figures. The difference could be related to the fact that we asked for different categories of images (food, nature, activities and products vs. an image of something that made respondents laugh in Bosch et al. 2019). Overall, even if there are differences across types of visual data and topics, our results suggest that asking for visual data might be a good option for researchers to consider, since most respondents have the skills and willingness to share them. Availability might be more problematic, depending on the type of visual data of interest (it is especially low for videos already stored).
Fifth, in terms of perceived burden (RQ5), taking photos has the lowest average perceived burden, even lower than answering one radio button question. For visual data already stored, respondents considered the burden to be on average lower than answering 10 radio button questions or an open narrative question.
Finally, regarding the impact of sociodemographic variables (RQ6), older and female respondents were less likely to eventually participate in a survey asking for visual data. On the contrary, more educated respondents and those with more recent participation in the Netquest panel are expected to show higher participation.

6.2 Limitations

First, we used only one (opt-in) panel in one country (Spain). Thus, robustness checks analyzing different types of samples or countries are needed, and the results from this paper cannot be extended to panels with different characteristics (such as probability-based panels). Further, we used an opt-in panel whose respondents already voluntarily participate in surveys and use a PC and/or smartphone to do so. Thus, the levels of skills, willingness and/or availability presented in this study could be higher than those of the general population, which is likely to be less familiar with these devices and less willing to participate in a survey than panel members. In any case, the results presented in this study still offer valuable insights, particularly for researchers who also plan to use opt-in panels. Moreover, we relied on respondents’ reports to measure skills, availability, willingness and burden. However, such reports might contain errors (e.g., social desirability could push respondents to answer that they know how to make a screenshot when they actually do not).
Also, analyzing skills, availability and willingness together works as a proxy of the expected participation, but there may still be other factors influencing actual participation. For instance, respondents may state that they know how to take a photo, are in a situation to take a selfie and are willing to share visual data in a survey, but if asked specifically for a selfie they might have concerns that prevent them from completing the task, such as privacy or how they look that day. In addition, even if respondents stated overall low levels of burden, we believe that researchers should be cautious about this result, since respondents might have underestimated the real burden, as they were not asked to actually perform the actions and subsequently evaluate the burden. Moreover, respondents were asked about the burden of doing some visual data-related tasks in general, but not within the frame of a web survey. Thus, different perceptions of burden might arise when asking for that specific type of task. Finally, there is still no evidence regarding the burden when asking for several pieces of visual data together.

6.3 Practical implications

Based on our results, we propose some recommendations. First, even if the level of skills is overall high, there are still participants who lack them. For specific tasks (mainly screenshots from PCs), this represents an important proportion. Thus, we recommend providing guidance on how to perform the tasks before individuals are asked to do so. In particular, if screenshots from the PC are going to be requested, an explanation seems necessary. This could take the form of a text, video or image displayed just before presenting the task.
Second, availability seems to be the most limiting factor, especially for already stored data. Moreover, the storage of visual data varies across topics and respondents. Thus, if researchers are interested in visual data that can hardly be produced during the survey itself, participants may be contacted first, explaining which kind of visual data the researchers would like to obtain and asking them to start collecting it. Then, respondents can be provided with a survey link allowing them to get in and out to submit the visual data whenever it is most convenient for them. For instance, if researchers are interested in getting videos of food that individuals have eaten in restaurants, in view of the low availability, we recommend contacting respondents beforehand and asking them to record videos of the dishes they consume in such establishments, so that they can share them with the researchers later on. Also, geolocation information could be used to send reminders every time the participant goes to a restaurant (Ochoa 2022).
Overall, researchers should carefully evaluate when to ask for visual data, and should do so only when the expected benefits for both the researchers and the participants outweigh the expected drawbacks. Moreover, other aspects (besides the ones studied in this paper) should also be considered when deciding on the use of visual data. In particular, data quality needs to be assessed: when measuring some concepts, visual data are expected to provide more accurate and complete insights than conventional questions, or even to allow measuring new concepts, but empirical evidence is needed to confirm such expectations. Also, the quality of visual data depends on both the inputs sent by respondents and the subsequent classification researchers make of the data (e.g., manual or automatic coding) (Iglesias et al. 2022). Furthermore, even if quality is increased, researchers also need to assess whether the gain in quality is worth the possible extra costs. In addition, ethical considerations should be part of the decision process. For instance, it is more likely to share information inadvertently when sharing visual data than when answering conventional survey questions (Revilla 2022) (e.g., if researchers ask for pictures of plants and an ID card with personal data lies next to the plant, the respondent might share this personal information without being conscious of it). All these aspects require further research. Finally, researchers should consider the age and level of education of their target sample when deciding. Indeed, older and less educated populations are less likely to send visual data in a survey, whether due to lack of skills, availability and/or willingness. Thus, the decision to use visual data might be different when targeting these groups.

Acknowledgements

We are very grateful to Carlos Ochoa and Oriol J. Bosch for their help in the development of this paper, and to the anonymous reviewers for their helpful comments to previous versions of this manuscript.

Declarations

Conflict of interest

The authors declared no potential conflicts of interest regarding the research, authorship, and/or publication of this article.

Ethical approval

This study was reviewed and approved by the Institutional Committee for Ethical Review of Projects from the Universitat Pompeu Fabra.
All participants were presented with an information sheet before starting and only those providing informed consent could participate in the survey.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Footnotes
2. Details on the creation and storing of screenshots in smartphones (the most common type of mobile devices) are available at https://support.google.com/android/answer/9075928?hl=en and https://support.apple.com/en-us/HT200289
3. The English translation of the questionnaire and screenshots with the full survey in Spanish (original language) are available at https://osf.io/am5cj and https://osf.io/zwjc8, respectively.
4. The question was also presented to those skipping any skills question. The number of respondents doing so ranged between 0 (taking a picture with the smartphone and sharing files from the PC) and 5 (sharing files from the smartphone).
5. Screenshots captured during the survey were not studied since their availability should be guaranteed if respondents are asked about something that can be visualized on their device(s).
7. A warning message was planned if participants tried to skip their fourth question (not necessarily in a row). However, this never occurred.
8. Some independent variables presented negative and positive coefficients for different dependent variables in the same dimension. See https://osf.io/kygp2 for details on the coefficient values and their signs.
Literature
Couper, M., Antoun, C., Mavletova, A.: Mobile web surveys: a total survey error perspective. In: Biemer, P., de Leeuw, E., Eckman, S., Edwards, B., Kreuter, F., Lyberg, L., Tucker, N.C., West, B. (eds.) Total Survey Error in Practice, pp. 133–154. Wiley (2017)
de Leeuw, E., Hox, J., Huisman, M.: Prevention and treatment of item nonresponse. J. Off. Stat. 19, 153 (2003)
Herzing, J.: Mobile web surveys. Guide No. 01, Version 1.0. Lausanne: Swiss Centre of Expertise in the Social Sciences (FORS) (2019)
Ilic, G., Lugtig, P., Schouten, B., Streefkerk, M., Mulder, J., Kumar, P., Höcük, S.: Pictures instead of survey questions: an experimental investigation of the feasibility of using pictures in a housing survey. J. R. Stat. Soc. Ser. A Stat. Soc. 185, S437–S460 (2022). https://doi.org/10.1111/rssa.12960
Toepoel, V., Lugtig, P., Schouten, B.: Active and passive measurement in mobile surveys. Surv. Stat. 82, 14–26 (2020)