1 Introduction
- The device camera: Visual data captured with a camera mostly come from mobile devices, since cameras in computers (PCs) tend to serve purposes other than taking pictures (e.g., video calls). Therefore, the visual data produced during the survey include photos and videos taken in the moment with the camera, mainly by means of mobile devices.
- Screenshots: A screenshot is a digital image of the contents displayed on the screen of a PC or mobile device. Thus, screenshots can cover any subject that respondents can access through these devices. How a screenshot is captured and stored differs depending on the type of device. On PCs, screenshots are taken using tools specifically designed for that purpose or by pressing specific keys (such as "Print Screen" in Windows) and, depending on the operating system, they are buffered on the clipboard or automatically saved in a folder. On mobile devices, they are usually created by pressing a specific combination of buttons and stored in the device's library.
2 Scope, research questions and contribution
- RQ1: To what extent do respondents have the skills needed to capture and share different types of visual data?
- RQ2: To what extent are different types of visual data available so that respondents can share them within the frame of a web survey?
- RQ3: To what extent do respondents declare that they would be willing to share different types of visual data?
- RQ4: What is the expected participation when asking for different types of visual data (i.e., the proportion of respondents who have the skills, have such visual data available, and are willing to share them)?
- RQ5: To what extent do respondents consider it burdensome to perform visual data-related tasks, compared to answering with conventional response formats?
- RQ6: How do sociodemographic variables such as age, gender, and education affect the skills, availability, willingness, expected participation, and burden to share different types of visual data?
3 Background
3.1 Skills
3.2 Availability
3.3 Stated willingness to share visual data in a survey
3.4 Actual participation
3.5 Respondents’ burden
4 Methods and data
4.1 Questionnaire
- Skills to create and share visual data: "Smartphone respondents" were asked whether they knew how to take a photo, make a video, take a screenshot, and find a file in order to share it using their smartphone (yes/no/not sure). "PC respondents" were asked only two of these questions (screenshots and sharing files), since many PCs (especially desktops) do not allow taking photos or making videos. Moreover, even when a PC has a camera, it might be complicated to capture the visual data required to answer survey questions. For instance, if respondents are asked to take a photo of their heating system (as in Ilic et al. 2022), they would need to move the PC around the house to do so, which would not be possible for desktops and might not be handy even for laptops.
- Burden: Respondents who reported having the skills to do at least one of the tasks were asked how much effort it took them to perform these tasks (0 = "no effort at all" to 4 = "a huge effort"). In addition, all respondents were asked how much effort it took them to answer one, five, or ten conventional radio button questions and one open-ended question, from their smartphone and/or PC, using the same 5-point scale.
- Availability: Availability has different meanings depending on whether visual data are already stored or must be captured during the survey.
- For visual data captured during the survey: As explained in the first point on skills, we do not consider PCs. For smartphones, we consider that the data are available if the respondents are in a situation allowing them to take a photo or make a video of what the researchers ask for. Thus, the place from where respondents answer the survey plays a key role. Other factors may also be important. For instance, if respondents are asked to take a photo of their balcony/garden/terrace/courtyard/patio (as done in the study of Ilic et al. 2022), respondents need to be at home, but in addition, they need to be in a situation allowing them to take a picture of the outside area. For example, if the outside area does not have proper lighting, respondents should take the photo when there is enough daylight. Thus, to study the availability of visual data captured during the survey, besides asking for the place respondents were answering from, we asked respondents whether they were in a situation allowing them to take a photo or make a video (at that very moment) of themselves and of something in the place from where they answered the survey, using their smartphone. These questions were asked of all "smartphone respondents". However, for respondents who stated not having the skills or who did not answer the skills questions, it was explicitly mentioned that they should answer considering only the availability, not whether they had the skills. This was done to distinguish the lack of skills from the lack of available data: researchers could provide the necessary information within the survey to teach participants who lack the skills how to create and/or share the visual data, and then those participants might be in a situation to provide the data.
- For already stored visual data: We expected a huge majority of participants to have some visual data already stored on both smartphones and PCs. Thus, we decided to ask about the availability of visual data covering some specific topics. We focused on those that seemed less likely to be covered with visual data captured during the survey itself. For instance, if researchers were interested in asking for a photo of the respondents themselves (as done by Bosch et al. 2022), taking a picture in the moment with the smartphone used to answer the survey is expected to be possible in most cases. Thus, to keep the survey as short as possible, we asked about the availability of visual data for four topics for which we expected that taking the photo or making the video in the moment would not be possible in most cases: (1) food and dishes, prepared and/or consumed; (2) products the respondents had bought or planned to buy; (3) landscapes and places visited; and (4) events and activities in which they participated. These questions were asked for both smartphone and PC, since respondents might have different kinds of visual data already stored depending on the device type. Moreover, we distinguished between images and videos, since we expected respondents to have fewer videos than images saved on their devices.
- Willingness: Following previous research on stated willingness (Revilla et al. 2019; Struminskaya et al. 2021; Wenz et al. 2019), we directly asked respondents whether they would be willing to complete the different tasks from their PC (take and share a screenshot, share an already stored image, share an already stored video) and/or smartphone (the three asked for the PC, but now the files should come from the smartphone, plus two additional ones: take a photo and share it, and make a video and share it). The answer scale included three categories: "yes", "no", and "it depends on the specific photo/video/screenshot". Similar to what was done for availability, respondents who stated not having the skills or who did not answer the skills questions were instructed to consider what they would do if they had the skills, since some respondents lacking the skills might be willing to perform the tasks if provided with information on how to do so.
4.2 Data collection
4.3 Analyses
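The expected participation defined in RQ4 combines the three preceding constructs: a respondent counts toward expected participation only if they have the skills, have the visual data available, and are willing to share it. A minimal sketch of this computation, using made-up respondent records rather than the study's data (the variable names are hypothetical, not from the questionnaire):

```python
# Hypothetical respondent records (illustrative only, not the study's data).
# Each flag marks whether the respondent has the skills for a given task,
# has such visual data available, and is willing to share it.
respondents = [
    {"skills": True,  "available": True,  "willing": True},
    {"skills": True,  "available": False, "willing": True},
    {"skills": False, "available": True,  "willing": True},
    {"skills": True,  "available": True,  "willing": False},
]

def expected_participation(records):
    """Proportion of respondents meeting all three conditions (RQ4)."""
    eligible = [
        r for r in records
        if r["skills"] and r["available"] and r["willing"]
    ]
    return len(eligible) / len(records)

# Only the first record satisfies all three conditions: 1/4 = 0.25.
print(expected_participation(respondents))
```

In this sketch the three conditions are intersected per respondent before aggregating, rather than multiplying the three marginal proportions, since skills, availability, and willingness are unlikely to be independent within respondents.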
5 Results
5.1 Skills of the respondents to create and/or share visual data
Screenshot, photo, and video refer to visual data produced during the survey; "any file" refers to already stored data.

| Device | Screenshot (a) | Photo (b) | Video (c) | Any file (d) |
|---|---|---|---|---|
| Smartphone (n = 796) | 91.8 b,c | 99.0 | 98.0 | 93.1 b,c |
| PC (n = 691) | 64.8 d | | | 86.8 |
5.2 Availability
5.2.1 Visual data during the survey
Columns (a)–(c) split respondents by the place from which they answered the survey.

| Variable | Total | House (a) | Workplace/study center (b) | Other places (c) |
|---|---|---|---|---|
| In a situation to produce a selfie (%) | 73.6 | 74.3 | 72.1 | 66.7 |
| In a situation to produce visual data of something in the place they are answering from (%) | 80.8 | 82.9 b | 66.3 | 77.1 |
| n | 796 | 662 | 86 | 48 |
5.2.2 Visual data already stored
| Device | Type of data | Food (a) | Products (b) | Landscapes/places visited (c) | Events/activities (d) |
|---|---|---|---|---|---|
| Smartphone (n = 796) | Images | 67.1* b | 63.3* | 90.7* a,b,d | 85.3* a,b |
| Smartphone (n = 796) | Videos | 37.7 b | 33.5 | 79.5 a,b,d | 75.3 a,b |
| PC (n = 691) | Images | 38.8* | 41.4* | 76.1* a,b,d | 72.9* a,b |
| PC (n = 691) | Videos | 23.6 | 24.3 | 62.7 a,b | 60.5 a,b |
5.3 Willingness to share visual data
Screenshot, photo, and video (a)–(c) refer to visual data produced during the survey; image and video (d)–(e) to already stored data. For the PC, only the screenshot and already stored tasks were asked.

| Device | Willing | Screenshot (a) | Photo (b) | Video (c) | Image (d) | Video (e) |
|---|---|---|---|---|---|---|
| Smartphone (n = 796) | Yes | 62.1 c,d,e | 61.9 c,d,e | 52.9 d,e | 43.6 e | 37.2 |
| Smartphone (n = 796) | It depends | 30.0 c,d,e | 30.5 c,d,e | 34.8 d,e | 47.1 | 47.1 |
| PC (n = 691) | Yes | 48.5 e | | | 49.1 e | 41.7 |
| PC (n = 691) | It depends | 36.5 d,e | | | 39.4 | 41.4 |
5.4 Expected participation
5.4.1 Visual data during the survey
The last three columns refer to a photo/video of something in the place from which respondents answered.

| Type of data | Selfie | The house | The workplace/study center | Other places |
|---|---|---|---|---|
| Images | 48.7 | 54.2 | 43.0 | 52.1 |
| Videos | 42.7 | 46.7 | 43.0 | 45.8 |
| n | 796 | 662 | 86 | 48 |
5.4.2 Already stored visual data
| Device | Type of data | Food (a) | Products (b) | Landscapes/places visited (c) | Events/activities (d) |
|---|---|---|---|---|---|
| Smartphone (n = 796) | Images | 31.4* c,d | 29.7* c,d | 39.2* d | 36.9* |
| Smartphone (n = 796) | Videos | 18.2 c,d | 17.0 c,d | 32.2 | 30.5 |
| PC (n = 691) | Images | 23.7* c,d | 24.0* c,d | 38.4* | 36.9* |
| PC (n = 691) | Videos | 14.0 c,d | 14.2 c,d | 28.7 | 28.2 |
5.5 Perceived burden
Columns (a)–(d) are conventional survey questions (RB = radio button question(s)); (e)–(g) are visual data-related tasks produced during the survey; (h) is sharing already stored files. For the PC, only the screenshot and already stored tasks were asked.

| Device | 1 RB (a) | 5 RB (b) | 10 RB (c) | Open narrative (d) | Screenshot (e) | Photo (f) | Video (g) | Already stored files (h) |
|---|---|---|---|---|---|---|---|---|
| Smartphone (n = 694) | .11 b,c,d | .22 c,d | .42 | .69 | .10 b,c,d,h | .08 a,b,c,d,g,h | .12 b,c,d,h | .24 a,c,d |
| PC (n = 434) | .13 b,c,d | .21 c,d | .44 d | .75 | .24 a,c,d,h | | | .15 c,d |