
Open Access 29.08.2020

ICT skills measurement in social surveys: Can we trust self-reports?

Authors: Marta Palczyńska, Maja Rynko

Published in: Quality & Quantity | Issue 3/2021


Abstract

Self-reports are the most common way of measuring information and communications technology (ICT) skills in social surveys. Studies comparing self-reported computer skills with objective assessments have found evidence of significant overreporting of skills, but were conducted only among non-representative groups of individuals. This paper fills an important gap by analysing the degree to which ICT skills are overreported in the working-age population of Poland, and the potential causes of this behaviour. We compare answers to Eurostat questions on ICT usage with direct assessments of the corresponding tasks. The results suggest that those individuals who are most likely to possess ICT skills are also most likely to overreport having these skills. The propensity to overreport decreases with age and increases with years of education and numeracy level; women are less likely than men to overestimate their skills. The positive relationship between a group's probability of overreporting its own skills and its expected skill level suggests that social desirability bias may explain this phenomenon.

1 Introduction

Skills are an important aspect of human capital, which is, in turn, one of the main factors necessary for economic and social development. In addition to economic growth discussions, human capital and skills measures are often embedded in discussions on the economics of education, ageing, employability, crime, well-being, and health (e.g., Stroombergen et al. 2002). Skills may be analysed from different perspectives: namely, supply and demand, skills mismatch, and skills development (European Commission and Statistical Office of the European Union 2016). Economists have investigated the returns to both cognitive and non-cognitive skills with respect to labour market outcomes and social behaviour (e.g., Heckman et al. 2006).
Estimates of individual or population skill levels are typically obtained using direct or indirect measures. Indirect measures of skills supply include information on the level, orientation (general or vocational), and field of educational attainment; i.e., the information that can be obtained from administrative registers or from social surveys, including the Labour Force Survey. Direct measures include self-assessments of skills (e.g., Adult Education Survey), self-reports of the ability to perform specific tasks (e.g., ICT Usage in Households and by Individuals, Skills and Jobs Survey), and objective skills assessment frameworks (e.g., Programme for International Student Assessment—PISA or Programme for the International Assessment of Adult Competencies—PIAAC) (European Commission and Statistical Office of the European Union 2016).
Here, we concentrate on skills related to ICT, and specifically on the ability to perform basic tasks on a computer. These skills have become increasingly important in people’s daily and professional lives since the computer and computer network revolutions began in the 1970s and 1980s. Thus, the early development of ICT or computer skills is increasingly perceived as being as critical as reading and writing; and has even been called a “new literacy” (Grant et al. 2009). ICT knowledge and usage are among the digital skills frequently cited as being necessary in the twenty-first century (e.g., van Laar et al. 2017). Levels of ICT skills and of computer and internet usage across populations are of interest to international organisations and institutions, as well as to national governments, as the numerous strategic documents published on these topics demonstrate.
Scientific analyses have attempted to quantify the gains associated with the acquisition of ICT skills. The results on individual returns to ICT skills in the labour market are mixed, with some studies showing substantial gains (Falck et al. 2016; Krueger 1993), and others questioning the causal link between ICT skills and higher earnings (Borghans and ter Weel 2004; DiNardo and Pischke 1997). As Falck et al. (2016) have pointed out, most of these analyses relied on self-reported measures of computer use and computer skills, which are rather imperfect proxies for the true skills level due to reporting biases, or to individuals’ lack of awareness of the skills distribution at the population level, and of where they are positioned within it.
Studies that compare self-assessments of ICT skills with analogous objective direct assessments have been limited. While some of these studies have shown that the problem of skills overestimation or overreporting is common, their findings referred only to groups of students at selected US or UK universities (Grant et al. 2009; Larres et al. 2003; Merritt et al. 2005). Our study has a much broader scope: namely, to analyse the phenomenon of the misreporting of ICT skills across the whole working-age population of Poland. To the best of the authors’ knowledge, there are no existing studies that examine the quality of ICT skills indicators and the problem of misreporting using a statistically representative sample of a population. Moreover, we use measures for self-assessments of ICT skills administered in international Eurostat surveys. Our results show that the problem of misreporting is prevalent across the surveyed group, but that different subgroups have different probabilities of misreporting their skills. Specifically, we find that people who belong to groups with higher average skill levels are also more likely to overreport their skills when they do not actually have them. This finding indicates that the observed discrepancies could be driven by social desirability bias: individuals who belong to groups with higher skills probably feel that they are expected to have higher skills, which could make them more likely to overreport their skills. The results of our analyses can help ensure that ICT skill levels are properly assessed at the population level, and that governments are able to develop public policies that promote social and economic growth.
The article is organised as follows. In the following section, we present a review of the literature on why respondents misreport. In the data and methods section, we introduce the Polish follow-up study to PIAAC (postPIAAC survey), in which ICT skills self-assessments were followed up with direct assessments of comparable skills. In Sect. 4, we outline the results of the analysis that models the probability of skills misreporting. In the final section, we discuss our results and conclusions.

2 Why do respondents misreport?

In surveys, there are many sources of non-sampling errors, including response and measurement errors. Respondents may provide untrue or incorrect information, which may result in biased estimates whose variability may also become impossible to assess. The process of answering questions can be seen as a multi-stage process. The Handbook of Recommended Practices for Questionnaire Development and Testing in the European Statistical System (Eurostat 2004) has identified a five-stage process of answering questions: encoding, comprehension, retrieval, judgement, and reporting. Each stage is subject to certain cognitive biases or distortions that may cause the response to deviate from the true answer. Some of these biases can be reduced by following appropriate statistical guidelines related to the survey mode, interviewer behaviour, question formulation, response formats, and approaches to dealing with sensitive topics or fear of disclosure. Three cognitive biases in particular are potentially important for reporting computer skills: information deficits, social desirability, and social approval. The first of these biases relates to the stages of comprehension, retrieval, and judgement; while the other two biases relate to the reporting stage.
Dunning et al. (2004) have provided a comprehensive overview of the psychological mechanisms that lead people to hold opinions of themselves that diverge from objective reality. People typically do not possess the full information that would allow them to engage in perfectly accurate self-assessment. Moreover, individuals often neglect or give too little weight to valuable information that could help them shape a more realistic view of themselves. Inaccurate self-assessments often occur because “the skills needed to produce correct responses are virtually identical to those needed to evaluate the accuracy of one’s responses” (Dunning et al. 2003, p. 85). This effect drives the so-called double curse of incompetence that has been observed in a number of significant social and intellectual domains (Kruger and Dunning 1999); and that can shape individuals’ careers and life course through their impressions of their own skills, talents, knowledge, or personality (Ehrlinger et al. 2008). The information deficits explanation is consistent with the results of a meta-analysis presented by Falchikov and Boud (1989), which showed that students in advanced courses provided more accurate self-assessments than students in introductory courses. Gibbs et al. (2017) went beyond the academic environment to analyse information deficits in the context of workplace end-user computing. Their study was among the first to investigate the existence of the curse of incompetence in the computing domain. Their results supported the existence of this bias.
The other cognitive biases mentioned above include social desirability and social approval. The first of these biases refers to a tendency to avoid criticism and to portray oneself in a favourable social light, whereas the second is defined as the need to obtain a positive response in a testing situation. Social desirability is related to the concept of “injunctive social norms” (Cialdini 2007), which has been defined as an individual’s perception or expectation of what most others approve or disapprove of. Social desirability effects are likely to exist when sensitive questions are asked, and are more likely to be observed in interviewer-administered than in self-administered surveys (Aquilino 1994; Aquilino and Lo Sciuto 1990; Felderer et al. 2019; Gnambs and Kaspar 2017; Kreuter et al. 2008; Tourangeau 2018). In addition, time use analyses have shown that in many cases, the diary method of data collection is more accurate than the use of questionnaires (Bonke 2005; Kan 2008; Niemi 1993). However, in a literature review, Bosnjak (2018) found no strong evidence of differences in misreporting tendencies across modes. Yet it should be noted that the understanding of what constitutes a sensitive question can differ across individuals and modes of data collection (Kreuter et al. 2008).
The social desirability and social approval mechanisms have, for example, been investigated in comparisons of self-assessed and objective measures of physical activity. In the U.S. context, estimates of the proportions of Americans who adhere to the official guidelines regarding the time people should spend on moderate to vigorous physical activity vary significantly across studies that rely on different approaches to collecting information. For example, using both objective and subjective information on physical activity, Adams et al. (2005) found a positive association between the social desirability of an activity (as measured on a specific scale) and the tendency of people to overreport the amount of time and energy they spent on the activity. Unexpectedly, they also detected a correlation between social approval and the tendency to underreport physical activity, but the magnitude of this association was small. Several other studies have stressed the role of social desirability in the overreporting of physical activity, and have found different propensities to overreport for subgroups distinguished by sex, age, education, weight, and other characteristics (Beyler and Beyler 2017; Ferrari et al. 2007; Hill et al. 1997; Nusser et al. 2012; Sallis and Saelens 2000; Troiano et al. 2008); or by the data collection procedures used (Brenner and DeLamater 2016, 2014).
It has also been observed that energy intake or diet survey data are subject to measurement errors. The self-reported data on food intake are usually biased downwards. This bias has been related to social desirability scores, and appears to differ by sex, eating habits (amount of fat consumed), and, potentially, education (Hebert et al. 1995, 1997, 2002). In addition, there is evidence that socially desirable answers are often given when reporting voting behaviour (Bernstein et al. 2001; Holbrook and Krosnick 2010) and religious service attendance (Brenner 2017; Brenner and DeLamater 2016). In these cases, overreporting is usually observed. Underreporting tends to occur when people are asked about contra-normative or embarrassing behaviours, like drug use or nonregistered employment (Aquilino 1994; Aquilino and Lo Sciuto 1990).
A vast body of research has examined the quality of self-reported measures on health. Johnston et al. (2009) investigated the reporting heterogeneity on hypertension in England. They found that the probability of false negative reporting was significantly income-graded, which implies that the true income-related inequalities in health were underestimated. Among the reasons cited for health status misreporting are that people have different concepts of the meaning of health; that individuals’ expectations of their own health and their use of health care can vary; and that people may have difficulties comprehending the survey questions asked. The last issue may be addressed with the anchoring vignettes methodology, which Bago d’Uva et al. (2008) applied to correct the disparities in health by education and income observed in selected Asian countries.
Finally, some studies have shown that personality and individual differences are related to the tendency to misjudge one’s own skills. MacIntyre et al. (1997) investigated the relationship between individuals’ perceived competence in a second foreign language and their actual competence, as well as their language anxiety. They found that a high anxiety level was correlated with a tendency to underestimate one’s skills, while a low anxiety level was correlated with a tendency to overestimate one’s skills. Anthony et al. (2000) and Korukonda (2005) analysed the relationship between computer anxiety, technophobia, and personality. They showed that neuroticism was positively correlated with technophobia, while openness was negatively correlated with it, which might affect the self-assessment of ICT skills. Moreover, Judge et al. (2002) showed that neuroticism, locus of control, self-esteem, and generalised self-efficacy were strongly related; and that a single factor explained this relationship: namely, core self-evaluation. It might be expected that individuals who have lower core self-evaluation levels (and are thus more neurotic) tend to underestimate their own skills.
Only a few studies have looked at the discrepancy between self-assessed and objective measures of computer skills. Grant et al. (2009) compared self-assessments through questionnaires with computer-based assessments of ICT skills among more than 200 business students enrolled in an introductory business computer applications course at a US university. Their results showed that the students tended to overestimate their ICT skills, and especially their spreadsheet and word processing skills. No differences were found between the students’ perceptions of their presentation skills (MS PowerPoint) and their actual performance. Larres et al. (2003) reported similar results for accounting students at two UK universities: i.e., they found that a majority of the students overestimated their computer literacy. Analogous findings have been reported for students in the US by Merritt et al. (2005). However, none of these results can be generalised to a total adult population, and none of these studies investigated the differences between the groups or their potential reasons for overreporting ICT skills.

3 Data, measures, and methods

3.1 Data

We use data from the postPIAAC study: the Polish Follow-up Study to PIAAC. PIAAC is an international survey of adult competencies coordinated by the OECD. The first round of the survey was conducted in 2011–2012 (Burski et al. 2013; OECD 2013a). It included direct assessments of key information-processing skills: i.e., literacy, numeracy, and problem-solving in technology-rich environments.
The postPIAAC study was conducted three years later (in 2014–2015) among the PIAAC respondents who were living in Poland during the fieldwork. The target population was aged 18–69 at the time of the survey. Of the initial sample of 9366 PIAAC respondents, 5224 completed postPIAAC interviews. Both the PIAAC and the postPIAAC surveys were coordinated in Poland by the Educational Research Institute (IBE).
The postPIAAC study collected additional background information on the PIAAC respondents that was not available in the international study, such as personality tests, a working memory test, and a coding speed test, and it also gathered some longitudinal information on the respondents' education and labour market situation. The background questionnaire (BQ) was a modified version of the PIAAC international questionnaire, and was administered as a computer-assisted personal interview (CAPI). Some parts of the direct assessment were administered on computer (a working memory test and a basic ICT skills test), while other parts were administered on paper (a coding speed test and a Big Five personality test) (Palczyńska and Świst 2018).
The weighting process of the postPIAAC sample was based on the PIAAC guidelines (OECD 2013b). Since the PIAAC sample design included both stratification (by regions and class size) and clustering, the replication approach is used for estimating the variances of full-sample statistics, for both the PIAAC and the postPIAAC data. The data analyses for Poland use the paired jackknife methodology, with replications conducted on 80 subsamples drawn from the full sample using the 80 replication weights included in the dataset. All of the postPIAAC analyses presented here follow this replication approach.
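As an illustration of this replication approach, the minimal Python sketch below computes a point estimate with the full-sample weight and re-computes it with each replicate weight to obtain a jackknife standard error. The column names (e.g., SPFWT0, SPFWT1, …) and the jackknife factor are assumptions for illustration only; the exact factor depends on the variance specification of the PIAAC/postPIAAC files.

```python
import numpy as np

def weighted_mean(y, w):
    """Weighted estimate of a population mean."""
    return np.sum(w * y) / np.sum(w)

def jackknife_se(df, outcome, full_weight, rep_weights, factor=1.0):
    """
    Replicate-weight variance estimation.

    theta_hat : estimate computed with the full-sample weight
    theta_r   : estimates re-computed with each of the R replicate weights
    variance  : factor * sum_r (theta_r - theta_hat)^2
                (factor = 1.0 for the paired jackknife; other jackknife
                 variants use (R - 1) / R instead)
    """
    theta_hat = weighted_mean(df[outcome], df[full_weight])
    theta_r = np.array([weighted_mean(df[outcome], df[w]) for w in rep_weights])
    variance = factor * np.sum((theta_r - theta_hat) ** 2)
    return theta_hat, np.sqrt(variance)

# Hypothetical usage on a pandas DataFrame with example weight names:
# est, se = jackknife_se(df.dropna(subset=["overrep_spread"]),
#                        "overrep_spread", "SPFWT0",
#                        [f"SPFWT{r}" for r in range(1, 81)])
```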
We use postPIAAC data combined with the proficiency estimates from the PIAAC. To increase the accuracy of the proficiency estimates, multiple imputations are used for the PIAAC sample (plausible values—PVs): i.e., 10 PVs are drawn for each respondent in each measured domain (for details, see OECD 2013b). The regression analyses are run separately for each of the 10 PVs, and the averaged results are reported with the imputation error added to the variance estimator. Of the 5224 postPIAAC respondents, 4459 had all of the variables of interest, 3986 had ever used a computer, and 3896 finished the ICT test. The final sample consists of 3896 individuals for whom we can assess the incidence of overreporting.
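Combining results across the 10 plausible values follows the usual multiple-imputation logic: point estimates are averaged, and the between-PV variance is added to the average sampling variance. A minimal sketch (the function name is our own, not part of any PIAAC tool):

```python
import numpy as np

def combine_plausible_values(estimates, sampling_variances):
    """
    Combine analyses run separately on each plausible value (PV).

    estimates          : M point estimates, one per PV
    sampling_variances : M sampling variances (e.g., from the jackknife
                         replication sketched above)

    Total variance = mean sampling variance + (1 + 1/M) * between-PV variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    sampling_variances = np.asarray(sampling_variances, dtype=float)
    M = len(estimates)

    point = estimates.mean()
    within = sampling_variances.mean()      # average sampling variance
    between = estimates.var(ddof=1)         # imputation (between-PV) variance
    total = within + (1.0 + 1.0 / M) * between
    return point, np.sqrt(total)

# e.g., for a coefficient estimated once per numeracy PV:
# point, se = combine_plausible_values(beta_per_pv, var_per_pv)
```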

3.2 Measures

The postPIAAC questionnaire included a self-reported question on computer usage from the 2013 Eurostat Community survey on ICT usage in Households and by Individuals (hereafter, Eurostat ICT survey): “Which of the following computer related activities have you ever carried out?” Respondents could answer “yes” or “no” for each activity, but only the respondents who declared that they had ever used a computer were asked this question. The question is presented in “Appendix”. Three of the listed tasks that are comparable with the direct assessment are analysed in this paper:
  • copying or moving a file or folder (hereafter, file managing),
  • using copy and paste tools to duplicate or move information within a document (hereafter, text editing), and
  • using basic arithmetic formulas in a spreadsheet (hereafter, spreadsheet calculations).
A test of basic ICT skills was also incorporated into the postPIAAC interview. It was based on the PIAAC ICT-Core test, which determined whether a respondent could proceed with the computer-based assessment on literacy, numeracy, and problem-solving (OECD 2013b). Moreover, the postPIAAC ICT test included three additional items measuring the respondent’s performance of tasks that had been previously self-assessed using Eurostat ICT questions: i.e., copying or moving a file or folder, using copy and paste tools to duplicate or move information within a document, and using basic arithmetic formulas in a spreadsheet. The environment resembled typical office applications, with all of the options to complete a task made available (e.g., using a mouse or a drop-down menu). Only respondents who declared that they had ever used a computer took the test. The screenshots of these items are presented in “Appendix”.
The self-reported question on computer usage was asked before the direct assessment of ICT skills, which was the last part of the interview. Between these two parts of the interview, there were three other blocks of questions: (i) general opinion questions, (ii) demographic questions, and (iii) self-assessments of literacy, numeracy, and language skills. The respondents were not informed that they would be tested on ICT skills, but the survey information leaflet attached to the invitation letter mentioned that the interview would include some interesting tasks. All of the postPIAAC respondents had participated in the PIAAC survey three years earlier, which included a competence test; thus, they could potentially expect to be tested again this time, which could limit overreporting in the self-assessment questions.
We define the overreporting of ICT skills as occurring when a respondent claimed in the BQ that s/he had carried out a task, but did not manage to perform the corresponding task in the direct assessment. There are three separate overreporting indicators, one for each of the analysed tasks. We model the probability of overreporting one’s own ICT skills among those individuals who failed the direct assessment of a given task, as only they could overreport. Individuals who completed a task have a missing value for this indicator. The indicator takes a value of one if a respondent declared s/he had carried out a task before but failed the direct assessment, and a value of zero if a respondent declared s/he had not carried out a task before and failed the direct assessment.

The underreporting indicator takes a value of one if a respondent declared s/he had not carried out a task before but succeeded in the direct assessment, and a value of zero if a respondent declared s/he had carried out a task before and succeeded in the direct assessment. Individuals who failed a task have a missing value for this indicator. There are three separate underreporting indicators, one for each of the analysed tasks.
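A minimal sketch of how these two indicators can be constructed for one task from the self-report and the direct-assessment result (all variable names are hypothetical):

```python
import numpy as np
import pandas as pd

def misreporting_indicators(self_report, passed_test):
    """
    Build the over- and underreporting indicators for one task.

    self_report : 1 if the respondent declared having carried out the task, 0 otherwise
    passed_test : 1 if the respondent completed the task in the direct assessment, 0 otherwise

    overreport  : defined only among those who failed the test
                  (1 = claimed the skill, 0 = did not claim it; NaN otherwise)
    underreport : defined only among those who passed the test
                  (1 = did not claim the skill, 0 = claimed it; NaN otherwise)
    """
    self_report = pd.Series(self_report)
    passed_test = pd.Series(passed_test)

    overreport = pd.Series(np.where(passed_test == 0, self_report, np.nan))
    underreport = pd.Series(np.where(passed_test == 1, 1 - self_report, np.nan))
    return overreport, underreport

# Hypothetical usage for the spreadsheet task:
# df["overrep_spread"], df["underrep_spread"] = misreporting_indicators(
#     df["declared_spreadsheet"], df["passed_spreadsheet"])
```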
The control variables include measures of human capital: years of education, level of cognitive skills (from the PIAAC), and non-formal and informal educational activities in the last 12 months. We also control for the basic sociodemographic characteristics: age, sex, size of the city, and employment status. Additionally, we are interested in looking at how personality is related to the propensity to misreport one’s skills. To do so, we use two available short scales: the Big Five Inventory-Short (BFI-S) (Gerlitz and Schupp 2005; John et al. 1991) and the short Grit scale (Grit-S) (Duckworth and Quinn 2009). The Big Five model identifies five dimensions of personality: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism (John and Srivastava 1999). Grit has two dimensions: perseverance of effort, defined as the “tendency to work hard even in the face of setbacks”; and consistency of interest, defined as “the tendency to not frequently change goals and interests” (Credé et al. 2017). The descriptive statistics of the overreporting and underreporting indicators, the task completion indicators, and the control variables are included in Table 7 in “Appendix”.

3.3 Method

We use logistic regression models to analyse the determinants of ICT skills overreporting and probabilities of completing the direct assessment of ICT tasks as all dependent variables are binary. However, odds ratios (OR) cannot be interpreted as substantive effects and compared across groups within samples as they also reflect unobserved heterogeneity (Mood 2010). Therefore, we also estimate linear probability models to be able to compare the effect measures between the three indicators. The results are qualitatively similar to those of logit models in terms of the sign and the statistical significance of the relationships (Tables 10 and 11 in “Appendix”).
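As a rough illustration of the two specifications (ignoring, for brevity, the replicate-weight and plausible-value machinery described in the data section), the sketch below fits the same binary outcome as a logit model reported as odds ratios and as a linear probability model with robust standard errors. All variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def fit_logit_and_lpm(df, outcome, predictors):
    """
    Fit one binary-outcome specification two ways:
    (i)  logistic regression, reported as odds ratios, and
    (ii) a linear probability model (OLS with heteroskedasticity-robust SEs).
    This sketch serves only to contrast the two specifications.
    """
    y = df[outcome]
    X = sm.add_constant(df[predictors])

    logit = sm.Logit(y, X).fit(disp=0)
    odds_ratios = np.exp(logit.params)

    lpm = sm.OLS(y, X).fit(cov_type="HC1")   # robust SEs for the binary outcome
    return odds_ratios, lpm.params

# Hypothetical usage on the overreporting subsample for one task:
# ors, lpm_coefs = fit_logit_and_lpm(
#     df[df["passed_spreadsheet"] == 0],
#     "overrep_spread",
#     ["age", "female", "years_educ", "numeracy_pv1"])
```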
We analyse the overreporting indicators in homogeneous groups with respect to actual skills measured: those who did not complete a task. This approach allows us to isolate the determinants of misreporting without modelling the factors that affect the actual level of ICT skills. In separate models, we also look at the determinants of completing each task to see if they are related to the determinants of overreporting. Acknowledging that underreporting is an interesting phenomenon, we include it in the descriptive analysis, but we do not analyse it in the multivariate framework because of the low incidence of underreporting.

4 Results

The share of individuals who were able to perform the tasks differed significantly across the tasks. The spreadsheet task was more difficult than the file managing and the text editing tasks: it was completed by 24% of respondents who had ever used a computer, while the file managing and the text editing tasks were completed by 73% and 74% of respondents, respectively (Table 1). The overreporting of skills was much more common than the underreporting of skills. For all of the tasks, most of the respondents who had the respective skill correctly reported it, while among the respondents without a given skill, half or more overreported their skill level. The classification measures are included in Table 2. Respondents correctly reported their skills in the most difficult task (spreadsheet calculations) less often than in the two easier ones (53% versus 79% and 81%). Only 32% of those who reported having used spreadsheet calculations before actually completed this task, while this proportion was 82% and 84% in the case of the other tasks.
Table 1
The distribution of results in self-assessments and direct assessments in three tasks

| Task | With a skill: total | With a skill: well-assessed | With a skill: underreported | Without a skill: total | Without a skill: well-assessed | Without a skill: overreported | N |
|---|---|---|---|---|---|---|---|
| File managing | 0.73*** (0.011) | 0.67*** (0.011) | 0.05*** (0.005) | 0.27*** (0.011) | 0.12*** (0.007) | 0.15*** (0.009) | 3896 |
| Text editing | 0.74*** (0.010) | 0.68*** (0.011) | 0.06*** (0.006) | 0.26*** (0.010) | 0.13*** (0.007) | 0.13*** (0.008) | 3896 |
| Spreadsheet calculations | 0.24*** (0.010) | 0.22*** (0.010) | 0.02*** (0.003) | 0.76*** (0.010) | 0.31*** (0.010) | 0.46*** (0.012) | 3896 |

SE in parentheses
*p < 0.05; **p < 0.01; ***p < 0.001
Table 2
Classification measures for three tasks

| Task | Accuracy^a | Precision | Sensitivity | Specificity | F-score |
|---|---|---|---|---|---|
| File managing | 0.79 | 0.82 | 0.93 | 0.44 | 0.87 |
| Text editing | 0.81 | 0.84 | 0.92 | 0.50 | 0.88 |
| Spreadsheet calculations | 0.53 | 0.32 | 0.92 | 0.40 | 0.48 |

^a Having a skill is considered as a positive case in the confusion matrix terminology
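The measures in Table 2 follow directly from the cross-tabulation of self-reports against direct-assessment results. A minimal sketch, illustrated with the (rounded, weighted) population shares from Table 1 for the file managing task:

```python
def classification_measures(tp, fp, fn, tn):
    """
    Classification measures treating 'has the skill' (passes the direct
    assessment) as the positive case and the self-report as the prediction.

    tp : reported the skill and passed the test
    fp : reported the skill but failed the test   (overreporting)
    fn : did not report the skill but passed      (underreporting)
    tn : did not report the skill and failed
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # recall
    specificity = tn / (tn + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f_score

# File managing shares from Table 1: tp ~ 0.67, fn ~ 0.05, tn ~ 0.12, fp ~ 0.15
# reproduces the Table 2 values approximately (up to rounding of the shares):
# accuracy ~ 0.79-0.80, precision ~ 0.82, sensitivity ~ 0.93, specificity ~ 0.44
```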
The proportions of individuals who could perform simple calculations in a spreadsheet, and who overreported their skill levels if they could not, differed by their socio-demographic characteristics (Table 3). The results for the file managing and text editing tasks reveal different levels of skills, but follow a similar pattern of differences between the groups, and are presented in Tables 8 and 9 in “Appendix”. Whereas 35% of the youngest group (aged 19–24) completed the task, < 5% of the oldest age group (aged 65 +) did so (F(1, 79) = 59.58, p = 0.000).¹ The propensity to overreport one’s skills decreased with age, from 85% in the youngest group to 35% in the oldest group (F(1, 79) = 31.60, p = 0.000). More men than women completed the task (28% versus 20%) (F(1, 79) = 17.73, p = 0.000), but there was no difference in their propensity to overreport (F(1, 79) = 0.06, p = 0.801). The fraction of individuals with this skill increased with the educational level, while the propensity to overreport decreased. The exception was the group with the lowest level of education (ISCED 2), because many of these individuals were still in education. The share of individuals who completed the task differed depending on whether they were living in a village or in a city. The size of the city did not affect the likelihood of completing the task, but the propensity to overreport increased slightly with the size of the city, from 58 to 76% (F(1, 79) = 6.48, p = 0.013). The individuals who had participated in a nonformal or informal educational activity were twice as likely to complete the task (F(1, 79) = 66.72, p = 0.000 and F(1, 79) = 59.34, p = 0.000, respectively) and had a higher propensity to overreport their skills (F(1, 79) = 80.61, p = 0.000 and F(1, 79) = 145.81, p = 0.000, respectively). Unemployed and inactive individuals were less likely than employed individuals to have completed the task (18% versus 24%) (F(1, 79) = 5.77, p = 0.019 and F(1, 79) = 8.81, p = 0.004, respectively), but the unemployed individuals were more similar to their employed counterparts when it came to their overreporting propensity: over 60% in both groups (F(1, 79) = 0.18, p = 0.675).
Table 3
The distribution of the results of direct assessments and overreporting and underreporting incidence in task 3 (spreadsheet calculations) by socio-demographic characteristics

| | With a skill (1) | Without a skill (2) | Tests of independence^a (3) | Overrep.^b (4) | Underrep.^c (5) |
|---|---|---|---|---|---|
| Total | 0.24 | 0.76 | | 0.60 | 0.07 |
| Age 19–24 | 0.35 | 0.65 | F(4.4, 347.4) = 33.7, p = .000, N = 3896 | 0.85 | 0.03 |
| Age 25–34 | 0.36 | 0.65 | | 0.73 | 0.07 |
| Age 35–44 | 0.23 | 0.77 | | 0.60 | 0.06 |
| Age 45–54 | 0.10 | 0.90 | | 0.45 | 0.15 |
| Age 55–65 | 0.03 | 0.97 | | 0.37 | 0.20 |
| Age 66–68 | 0.05 | 0.95 | | 0.35 | 0.00 |
| Male | 0.28 | 0.72 | F(1, 79) = 18.07, p = .000, N = 3896 | 0.60 | 0.07 |
| Female | 0.20 | 0.80 | | 0.59 | 0.07 |
| ISCED^d 2 | 0.07 | 0.93 | F(4.45, 351.88) = 42, p = .000, N = 3896 | 0.42 | 0.14 |
| ISCED 3B | 0.04 | 0.96 | | 0.32 | 0.43 |
| ISCED 3A voc. or ISCED 4 | 0.16 | 0.84 | | 0.54 | 0.13 |
| ISCED 3A | 0.24 | 0.76 | | 0.67 | 0.06 |
| ISCED 5 (bachelor) | 0.35 | 0.65 | | 0.86 | 0.02 |
| ISCED 5 (master) or ISCED 6 | 0.40 | 0.60 | | 0.82 | 0.04 |
| Village | 0.18 | 0.82 | F(3.65, 288.31) = 4.42, p = .002, N = 3896 | 0.54 | 0.09 |
| City < 20,000 | 0.23 | 0.77 | | 0.58 | 0.09 |
| City ≥ 20,000 and < 100,000 | 0.29 | 0.71 | | 0.60 | 0.08 |
| City ≥ 100,000 and < 1,000,000 | 0.28 | 0.72 | | 0.66 | 0.05 |
| City > 1,000,000 | 0.25 | 0.75 | | 0.76 | 0.01 |
| No nonformal activities | 0.16 | 0.84 | F(1, 79) = 77.52, p = .000, N = 3896 | 0.51 | 0.08 |
| Nonformal activities | 0.34 | 0.66 | | 0.74 | 0.06 |
| No informal activities | 0.17 | 0.83 | F(1, 79) = 61.08, p = .000, N = 3896 | 0.49 | 0.10 |
| Informal activities | 0.33 | 0.67 | | 0.77 | 0.05 |
| Employed | 0.26 | 0.74 | F(1.72, 135.93) = 6.45, p = .003, N = 3896 | 0.63 | 0.07 |
| Unemployed | 0.18 | 0.82 | | 0.61 | 0.08 |
| Inactive | 0.18 | 0.82 | | 0.47 | 0.04 |

^a The Pearson chi-square statistic is corrected for the survey design with the second-order correction of Rao and Scott (1984) and is converted into an F statistic
^b Proportion of overreporting among those who do not have a skill
^c Proportion of underreporting among those who have a skill
^d International Standard Classification of Education 1997 (UNESCO Institute for Statistics 2012)
In interpreting these results, it is important to keep in mind that the presented estimates characterise the population who had ever used a computer. If we included individuals who had never used a computer as being without a skill, the differences in the groups’ skill levels would increase.
The main result of the multivariate analysis of the probability of overreporting one’s skills is that those individuals who belonged to the groups with the highest average ICT skills levels were also the most likely to overreport their skills. The propensity to overreport decreased with age and increased with years of education and numeracy level. The analysis also showed that women were less likely than men to overreport their skills (Table 4). These observed gender differences are consistent with the results of Hargittai and Shafer (2006), who analysed the internet-use abilities of randomly selected US internet users, and showed that while the interviewed men and women did not differ much in their online abilities, the women’s self-assessed skills were significantly lower. In general, the level of overreporting was higher in groups who had a higher probability of completing the task: i.e., among men and among the younger and better educated (Table 5). Those characteristics have been shown to be positively related to the observed ICT or internet usage skills in other studies as well (van Deursen et al. 2011; van Deursen and van Dijk 2009). The positive relationship found between a group’s probability of overreporting its skills and its expected skill level suggests that social desirability bias may explain the overreporting phenomenon. The positive relationships observed between the propensity to overreport and numeracy and years of education call into question the information deficits explanation for overreporting suggested by the literature.
Table 4
Determinants of overreporting

| | File managing (1) | Text editing (2) | Spreadsheet calculations (3) |
|---|---|---|---|
| Age | 0.929*** (− 6.25) | 0.913*** (− 7.86) | 0.930*** (− 15.29) |
| Female | 0.812 (− 0.86) | 0.704 (− 1.41) | 0.739* (− 2.15) |
| Place of residence (ref.: village) | | | |
| City < 20,000 | 2.568 (1.92) | 1.398 (0.73) | 1.297 (1.20) |
| City ≥ 20,000 and < 100,000 | 1.296 (0.65) | 2.006* (2.00) | 1.490* (2.01) |
| City ≥ 100,000 and < 1,000,000 | 1.893* (2.11) | 2.897*** (3.94) | 1.906** (3.22) |
| City > 1,000,000 | 7.746** (2.91) | 10.409* (2.33) | 3.591** (2.79) |
| Years of education | 1.417*** (5.82) | 1.425*** (5.25) | 1.354*** (8.78) |
| Nonformal education | 1.016 (0.05) | 0.800 (− 0.62) | 1.307 (1.76) |
| Informal education | 1.259 (0.87) | 2.207** (2.68) | 2.082*** (6.45) |
| Employment status (ref.: employed) | | | |
| Unemployed | 1.098 (0.17) | 1.885 (1.01) | 1.014 (0.05) |
| Inactive | 0.593 (− 1.89) | 0.664 (− 1.44) | 1.053 (0.33) |
| Numeracy | 1.239 (1.41) | 1.213 (1.05) | 1.327** (2.68) |
| Personality traits | | | |
| Conscientiousness | 0.662 (− 1.42) | 0.946 (− 0.18) | 0.900 (− 0.64) |
| Extraversion | 1.028 (0.16) | 0.895 (− 0.94) | 1.244* (2.49) |
| Agreeableness | 0.830 (− 0.67) | 0.624 (− 1.55) | 1.024 (0.15) |
| Openness | 1.397 (1.56) | 1.556* (2.34) | 0.899 (− 0.95) |
| Neuroticism | 0.863 (− 1.26) | 0.894 (− 0.86) | 0.853* (− 2.15) |
| Perseverance of effort | 1.373* (2.12) | 1.180 (0.96) | 1.310* (2.36) |
| Consistency of interest | 0.981 (− 0.12) | 0.962 (− 0.22) | 0.889 (− 1.53) |
| Constant | 0.462 (− 1.09) | 0.612 (− 0.57) | 0.313* (− 2.48) |
| Observations | 697 | 717 | 2701 |

Exponentiated coefficients; t statistics in parentheses. Personality traits and numeracy measures are standardised within the total population
*p < 0.05; **p < 0.01; ***p < 0.001
Table 5
Probability of completing tasks

| | File managing (1) | Text editing (2) | Spreadsheet calculations (3) |
|---|---|---|---|
| Age | 0.920*** (− 16.39) | 0.934*** (− 13.52) | 0.922*** (− 10.94) |
| Female | 0.647** (− 3.01) | 1.004 (0.03) | 0.566*** (− 4.20) |
| Place of residence (ref.: village) | | | |
| City < 20,000 | 2.241** (2.89) | 1.772** (2.97) | 1.136 (0.55) |
| City ≥ 20,000 and < 100,000 | 1.809** (3.28) | 1.323 (1.62) | 1.635** (2.71) |
| City ≥ 100,000 and < 1,000,000 | 1.586** (2.90) | 1.240 (1.19) | 1.195 (1.19) |
| City > 1,000,000 | 2.410** (2.69) | 3.340** (3.14) | 1.062 (0.17) |
| Years of education | 1.172*** (6.29) | 1.218*** (6.88) | 1.333*** (9.29) |
| Nonformal education | 1.051 (0.33) | 1.170 (0.99) | 1.483** (2.87) |
| Informal education | 1.773*** (3.62) | 1.446* (2.32) | 1.103 (0.73) |
| Employment status (ref.: employed) | | | |
| Unemployed | 1.029 (0.09) | 0.815 (− 0.75) | 0.857 (− 0.61) |
| Inactive | 1.555* (2.44) | 1.042 (0.22) | 1.247 (1.22) |
| Numeracy | 1.389*** (3.40) | 1.437*** (3.94) | 1.836*** (6.78) |
| Personality traits | | | |
| Conscientiousness | 0.783 (− 1.50) | 0.803 (− 1.34) | 1.143 (0.71) |
| Extraversion | 0.949 (− 0.48) | 1.026 (0.24) | 0.983 (− 0.15) |
| Agreeableness | 1.031 (0.19) | 1.166 (0.86) | 0.792 (− 1.27) |
| Openness | 1.087 (0.74) | 0.977 (− 0.20) | 0.988 (− 0.10) |
| Neuroticism | 0.877* (− 2.00) | 0.949 (− 0.77) | 0.965 (− 0.46) |
| Perseverance of effort | 1.039 (0.42) | 0.963 (− 0.44) | 0.971 (− 0.37) |
| Consistency of interest | 1.151 (1.57) | 1.018 (0.20) | 0.963 (− 0.57) |
| Constant | 5.777*** (5.25) | 1.959 (1.76) | 0.055*** (− 6.91) |
| Observations | 3896 | 3896 | 3896 |

Exponentiated coefficients; t statistics in parentheses. Personality traits and numeracy measures are standardised within the total population
*p < 0.05; **p < 0.01; ***p < 0.001
Some of the personality traits were also found to be related to the propensity to overreport. Neurotic individuals were less likely to overreport their skills, probably because they are less self-confident. Other studies have also shown that neuroticism is positively correlated with technophobia (Anthony et al. 2000; Korukonda 2005), which could affect the self-assessment of ICT skills. By contrast, perseverance of effort was found to be positively related to the propensity to overreport. A potential explanation for this result is that individuals with high levels of perseverance of effort tend to have high expectations of themselves.

The relationships of overreporting with gender, numeracy, and personality traits were found to be significant only for the spreadsheet task, probably because this task had the largest estimation sample (i.e., the largest number of respondents who did not complete the task). For the other tasks, the estimates had the same direction and the same order of magnitude (see also Table 10 in “Appendix” with linear probability model estimates).

5 Discussion and conclusions

Our results showed that individuals who belonged to groups with higher ICT skills were also the most likely to overreport having these skills. The propensity to overreport one’s ICT skills decreased with age and increased with years of education and the size of city where the individual was living. These results might suggest that social desirability plays a big role in the process of forming the answers to questions related to ICT skills: i.e., individuals who belong to groups with higher skills probably feel that they are expected to have higher skills, which could make them more likely to overreport their skills. This assumption is consistent with the observation of Bernstein et al. (2001) related to the overreporting of voting behaviour: “people who are under the most pressure to vote are the ones most likely to misrepresent their behavior when they fail to do so” (p. 24). As our findings indicated that the propensity to overreport was positively related to numeracy level and years of education, it appears that information deficits did not play a significant role in skills reporting, or their effects were outweighed by the social desirability mechanism.
This study is not without limitations. First, the interviewer effect is usually smaller in the self-administered mode than in interviewer-administered interviews. It is therefore likely that the observed discrepancies between the subjective and objective ICT skills measures were reinforced to some extent by mode effects (in our study, the self-assessment was interviewer-mediated, while the direct measurement was self-administered). However, as we were not able to disentangle these two effects, the presented results capture both. Future experiments could compare self-administered self-reports with direct assessments, which would eliminate the pure mode effect. Second, a partial explanation for failure in the direct assessments could be that the respondents used the interviewer’s laptop instead of their own computer with familiar settings. This could have made completing the tasks more difficult for some of the respondents, but this effect should have been similar across socio-demographic groups. Finally, this study differs from typical cross-sectional studies with skills self-assessment in one important respect. The postPIAAC respondents were cognitively tested in the PIAAC survey three years earlier, so they could have suspected that this survey would also include a competence test. If this affected their answers to the self-reported skills questions, it probably limited the incidence of overreporting, so our estimates of this phenomenon can be considered a lower bound on what would be measured in a typical survey.
Generally, our results support the development of methodologies in which skills are measured directly; the OECD’s PIAAC survey is an example of such a project. Direct measurements of skills may be more comparable across countries, at least for some types of skills (like basic digital literacy). Moreover, this approach may be the only way to compare the skills of different populations. Figure 1 shows the discrepancies between the PIAAC’s and Eurostat’s ICT indicators of computer skills at the country level.
This comparison is limited to the 25–34 age group² (i.e., young individuals who are supposed to have the highest skill levels) and to countries for which the data were available in both surveys. In some countries, a significant share of PIAAC respondents refused to take the computer-based assessment (CBA), even though they had reported earlier that they had computer experience. The country with the highest share of such respondents was Poland. Nearly 30% of the PIAAC respondents aged 25–34 did not take the CBA in Poland, while in the Eurostat ICT survey, only 15% of respondents in this age group reported not having computer experience or not having done any of the six basic ICT activities. This pattern differs by country: for example, in the UK, the analysed Eurostat indicator is higher than the PIAAC indicator; while in Czechia and Belgium, the two indicators have similar levels. These findings suggest that the misreporting phenomenon may look different across populations.
The disadvantages of using direct assessments include the high costs and the limitations on the number of skills that can be measured. In the PIAAC, only basic skills related to literacy, numeracy, and problem-solving in technology-rich environments were subject to testing. But given that different job-specific skills are likely to be of interest to analysts and policy-makers, we expect that the direct assessment of skills will become more and more common. On the other hand, a potential alternative to relying on direct assessments is to use existing methodological approaches for measuring sensitive behaviour, such as randomised-response (Böckenholt and van der Heijden 2007; Kirchner 2015) or item-count techniques (Holbrook and Krosnick 2010). Although the level of ICT skills does not seem to be a sensitive topic, the presented results suggest that it should be treated as sensitive. In addition, the application of the abovementioned methods to learn more about this phenomenon could be tested.
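For concreteness, a minimal sketch of one classic technique of this kind, Warner's randomised-response design, is given below. It is not used in this paper and serves only to illustrate how a sensitive proportion can be estimated even though the interviewer never learns which question a respondent answered.

```python
import numpy as np

def warner_randomized_response(yes_share, n, p):
    """
    Warner's (1965) randomized-response estimator.

    Each respondent is randomly directed, with known probability p, to answer
    either "Do you have trait A?" or "Do you NOT have trait A?", so individual
    answers reveal nothing with certainty.

    yes_share : observed proportion of "yes" answers
    n         : sample size
    p         : probability of receiving the direct question (p != 0.5)

    Expected "yes" share: lambda = p*pi + (1 - p)*(1 - pi), which inverts to
    the estimator below; the variance follows from the binomial distribution.
    """
    pi_hat = (yes_share - (1.0 - p)) / (2.0 * p - 1.0)
    var = yes_share * (1.0 - yes_share) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, np.sqrt(var)

# e.g., with 60% "yes" answers, n = 1000 and p = 0.7:
# pi_hat, se = warner_randomized_response(0.60, 1000, 0.7)   # pi_hat = 0.75
```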
Finally, the importance of ICT skills should be stressed. The lack of an appropriate level of ICT skills can be considered from the perspective of the digital divide; a topic that has garnered considerable attention in the literature (e.g., van Deursen et al. 2011; van Deursen and van Dijk 2019; van Dijk 2006). While the concept of the digital divide initially referred to having physical access to computers and the internet, it has been extended to include possessing the right skills (digital skills, computer or media literacy), and to information inequality or information gaps. In the developed countries, the divides in material access to computers and other digital technologies are smaller than the divides in skills. Regardless of how large these gaps are, it is important to bear in mind that individuals who are unable to properly assess their skills may be less likely to believe that they need to improve their skills or seek professional guidance. Such an effect has been shown for financial literacy, with overconfident consumers being less likely to seek professional advice in some financial domains (Porto and Xiao 2016). Psychological experiments have revealed that an individual’s self-assessment of his/her knowledge increases after searching for information online, even about an unrelated topic (Fisher et al. 2015). It would be interesting to explore the computer and internet usage patterns of individuals who overreport their basic computer skills, such as the skills investigated in this paper.
Inequalities in ICT skills may be transferred to other life domains, and may have an impact on, or even reinforce, other types of inequalities, including immaterial, material, social, and educational gaps (van Dijk 2006; DiMaggio et al. 2001; Hargittai and Shafer 2006; Robinson et al. 2015; Selwyn 2006). Today, having access to information is critical, and it can shape an individual’s life chances in multiple ways. Public policies should be designed with the aim of ensuring that the number of actual or potential digital illiterates is as small as possible. But before individuals can acquire the right skills and physical access to ICT devices, they must be motivated to do so. At the same time, official statistical agencies should be able to correctly assess the level of ICT skills in the population, and public policy-makers should seek to mitigate existing and potential inequalities in these skills. We believe that the results of our analyses will be helpful in selecting the right direction for methodological developments in ICT skills measurement.

Acknowledgements

We would like to thank two anonymous referees for their useful comments and suggestions. We also wish to thank Tomasz Drabowicz and participants of the SGH research seminar in 2018, the European Conference on Quality in Official Statistics 2018, the 2nd Congress of Polish Statistics 2018, the Warsaw International Economic Meeting 2019, and the European Sociological Association Conference 2019 for their helpful comments. Maja Rynko acknowledges funding from the Warsaw School of Economics, Grant number KAE/S19/20/19.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Tables 6, 7, 8, 9, 10 and 11 and Figs. 2, 3 and 4.
Table 6
Question used in the self-assessment of ICT skills

Please consider your general experience with the computer both at work as well as in daily life. Which of the following computer related activities have you ever carried out? (YES/NO)

1. Copying or moving a file or folder
2. Using copy and paste tools to duplicate or move information within a document
3. Using basic arithmetic formulas in a spreadsheet
4. Compressing (or zipping) files using, e.g., WinZip, WinRAR
5. Connecting and installing new devices, e.g., a modem, a printer or scanner
6. Writing a computer program using a specialised programming language
Table 7
Descriptive statistics

| Variable | Mean | SD | Min | Max | N | Type of variable |
|---|---|---|---|---|---|---|
| Overreporting indicators^a | | | | | | |
| File managing | 0.55 | 0.50 | 0 | 1 | 697 | Binary |
| Text editing | 0.51 | 0.50 | 0 | 1 | 717 | Binary |
| Spreadsheet calculations | 0.60 | 0.49 | 0 | 1 | 2701 | Binary |
| Underreporting indicators^b | | | | | | |
| File managing | 0.07 | 0.26 | 0 | 1 | 3199 | Binary |
| Text editing | 0.08 | 0.27 | 0 | 1 | 3179 | Binary |
| Spreadsheet calculations | 0.07 | 0.25 | 0 | 1 | 1195 | Binary |
| Task completion indicators | | | | | | |
| File managing | 0.73 | 0.45 | 0 | 1 | 3896 | Binary |
| Text editing | 0.74 | 0.44 | 0 | 1 | 3896 | Binary |
| Spreadsheet calculations | 0.24 | 0.43 | 0 | 1 | 3896 | Binary |
| Age | 38.25 | 12.97 | 18 | 69 | 3896 | Continuous |
| Female | 1.52 | 0.50 | 1 | 2 | 3896 | Binary |
| Place of residence | | | | | | Ordinal |
| Village | 0.36 | 0.48 | 0 | 1 | 3896 | |
| City < 20,000 | 0.13 | 0.33 | 0 | 1 | 3896 | |
| City ≥ 20,000 and < 100,000 | 0.20 | 0.40 | 0 | 1 | 3896 | |
| City ≥ 100,000 and < 1,000,000 | 0.25 | 0.43 | 0 | 1 | 3896 | |
| City > 1,000,000 | 0.06 | 0.24 | 0 | 1 | 3896 | |
| Years of education | 13.84 | 2.67 | 6 | 21 | 3896 | |
| Nonformal education | 0.42 | 0.49 | 0 | 1 | 3896 | Binary |
| Informal education | 0.43 | 0.49 | 0 | 1 | 3896 | Binary |
| Employment status | | | | | | Nominal |
| Employed | 0.75 | 0.44 | 0 | 1 | 3896 | |
| Unemployed | 0.04 | 0.21 | 0 | 1 | 3896 | |
| Inactive | 0.21 | 0.41 | 0 | 1 | 3896 | |
| Numeracy | 0.25 | 0.90 | − 3.27 | 3.31 | 3896 | Continuous |
| Personality traits | | | | | | |
| Conscientiousness | − 0.06 | 0.93 | − 3.39 | 2.74 | 3896 | Continuous |
| Extraversion | − 0.01 | 0.96 | − 2.86 | 3.09 | 3896 | Continuous |
| Agreeableness | − 0.06 | 0.94 | − 3.89 | 3.10 | 3896 | Continuous |
| Openness | 0.02 | 0.94 | − 3.18 | 2.76 | 3896 | Continuous |
| Neuroticism | − 0.02 | 0.97 | − 2.57 | 3.15 | 3896 | Continuous |
| Perseverance of effort | − 0.03 | 0.97 | − 3.53 | 1.90 | 3896 | Continuous |
| Consistency of interest | 0.01 | 0.98 | − 3.24 | 2.45 | 3896 | Continuous |

^a Among individuals who did not complete a task
^b Among individuals who completed a task. Personality traits and numeracy measures are standardised within the total population
Table 8
The distribution of direct assessments’ results and overreporting and underreporting incidence by socio-demographic characteristics (task 1: file managing)

| | With a skill (1) | Without a skill (2) | Tests of independence^a (3) | Overrep.^b (4) | Underrep.^c (5) |
|---|---|---|---|---|---|
| Total | 0.73 | 0.27 | | 0.55 | 0.07 |
| Age 19–24 | 0.95 | 0.05 | F(4.32, 341.14) = 71.65, p = .000, N = 3896 | 0.93 | 0.01 |
| Age 25–34 | 0.87 | 0.13 | | 0.81 | 0.03 |
| Age 35–44 | 0.68 | 0.32 | | 0.57 | 0.07 |
| Age 45–54 | 0.58 | 0.42 | | 0.48 | 0.15 |
| Age 55–65 | 0.44 | 0.57 | | 0.45 | 0.35 |
| Age 66–68 | 0.29 | 0.71 | | 0.38 | 0.12 |
| Male | 0.76 | 0.24 | F(1, 79) = 9.19, p = .003, N = 3896 | 0.55 | 0.08 |
| Female | 0.69 | 0.31 | | 0.55 | 0.07 |
| ISCED^d 2 | 0.67 | 0.33 | F(4.53, 357.94) = 35.8, p = .000, N = 3896 | 0.37 | 0.16 |
| ISCED 3B | 0.50 | 0.50 | | 0.37 | 0.26 |
| ISCED 3A voc. or ISCED 4 | 0.66 | 0.34 | | 0.55 | 0.11 |
| ISCED 3A | 0.77 | 0.23 | | 0.65 | 0.04 |
| ISCED 5 (bachelor) | 0.88 | 0.12 | | 0.78 | 0.01 |
| ISCED 5 (master) or ISCED 6 | 0.85 | 0.15 | | 0.84 | 0.01 |
| Village | 0.67 | 0.33 | F(3.63, 286.42) = 3.42, p = .012, N = 3896 | 0.52 | 0.09 |
| City < 20,000 | 0.76 | 0.24 | | 0.61 | 0.09 |
| City ≥ 20,000 and < 100,000 | 0.76 | 0.24 | | 0.49 | 0.09 |
| City ≥ 100,000 and < 1,000,000 | 0.75 | 0.25 | | 0.60 | 0.05 |
| City > 1,000,000 | 0.80 | 0.21 | | 0.76 | 0.01 |
| No nonformal activities | 0.67 | 0.34 | F(1, 79) = 53.3, p = .000, N = 3896 | 0.51 | 0.12 |
| Nonformal activities | 0.81 | 0.19 | | 0.66 | 0.02 |
| No informal activities | 0.65 | 0.36 | F(1, 79) = 64.7, p = .000, N = 3896 | 0.51 | 0.12 |
| Informal activities | 0.84 | 0.16 | | 0.67 | 0.03 |
| Employed | 0.74 | 0.26 | F(1.95, 154.1) = 4.37, p = .015, N = 3896 | 0.61 | 0.06 |
| Unemployed | 0.74 | 0.26 | | 0.59 | 0.03 |
| Inactive | 0.67 | 0.33 | | 0.38 | 0.13 |

^a The Pearson chi-square statistic is corrected for the survey design with the second-order correction of Rao and Scott (1984) and is converted into an F statistic
^b Proportion of overreporting among those who do not have a skill
^c Proportion of underreporting among those who have a skill
^d International Standard Classification of Education 1997 (UNESCO Institute for Statistics 2012)
Table 9
The distribution of direct assessments’ results and overreporting and underreporting incidence by socio-demographic characteristics (task 2: text editing)

| | With a skill (1) | Without a skill (2) | Tests of independence^a (3) | Overrep.^b (4) | Underrep.^c (5) |
|---|---|---|---|---|---|
| Total | 0.74 | 0.26 | | 0.51 | 0.08 |
| Age 19–24 | 0.91 | 0.09 | F(4.39, 346.61) = 39.54, p = .000, N = 3896 | 0.97 | 0.01 |
| Age 25–34 | 0.85 | 0.15 | | 0.82 | 0.03 |
| Age 35–44 | 0.74 | 0.26 | | 0.49 | 0.11 |
| Age 45–54 | 0.62 | 0.38 | | 0.40 | 0.14 |
| Age 55–65 | 0.45 | 0.55 | | 0.37 | 0.26 |
| Age 66–68 | 0.50 | 0.50 | | 0.15 | 0.23 |
| Male | 0.74 | 0.26 | F(1, 79) = 0, p = .994, N = 3896 | 0.53 | 0.09 |
| Female | 0.74 | 0.26 | | 0.49 | 0.08 |
| ISCED^d 2 | 0.62 | 0.38 | F(4.78, 377.59) = 41.82, p = .000, N = 3896 | 0.18 | 0.11 |
| ISCED 3B | 0.51 | 0.49 | | 0.37 | 0.34 |
| ISCED 3A voc. or ISCED 4 | 0.66 | 0.34 | | 0.54 | 0.12 |
| ISCED 3A | 0.78 | 0.22 | | 0.63 | 0.04 |
| ISCED 5 (bachelor) | 0.87 | 0.13 | | 0.78 | 0.02 |
| ISCED 5 (master) or ISCED 6 | 0.89 | 0.11 | | 0.74 | 0.01 |
| Village | 0.69 | 0.31 | F(3.86, 305.32) = 3.86, p = .005, N = 3896 | 0.45 | 0.12 |
| City < 20,000 | 0.77 | 0.23 | | 0.41 | 0.11 |
| City ≥ 20,000 and < 100,000 | 0.74 | 0.26 | | 0.54 | 0.08 |
| City ≥ 100,000 and < 1,000,000 | 0.75 | 0.25 | | 0.61 | 0.04 |
| City > 1,000,000 | 0.87 | 0.13 | | 0.75 | 0.00 |
| No nonformal activities | 0.67 | 0.33 | F(1, 79) = 62.51, p = .000, N = 3896 | 0.47 | 0.14 |
| Nonformal activities | 0.83 | 0.17 | | 0.62 | 0.02 |
| No informal activities | 0.66 | 0.34 | F(1, 79) = 65, p = .000, N = 3896 | 0.45 | 0.14 |
| Informal activities | 0.84 | 0.16 | | 0.68 | 0.02 |
| Employed | 0.76 | 0.24 | F(1.86, 147.04) = 8.03, p = .001, N = 3896 | 0.57 | 0.07 |
| Unemployed | 0.72 | 0.28 | | 0.61 | 0.03 |
| Inactive | 0.66 | 0.34 | | 0.35 | 0.14 |

^a The Pearson chi-square statistic is corrected for the survey design with the second-order correction of Rao and Scott (1984) and is converted into an F statistic
^b Proportion of overreporting among those who do not have a skill
^c Proportion of underreporting among those who have a skill
^d International Standard Classification of Education 1997 (UNESCO Institute for Statistics 2012)
Table 10
Determinants of overreporting (OLS results)

| | File managing (1) | Text editing (2) | Spreadsheet calculations (3) |
|---|---|---|---|
| Age | − 0.013*** (− 7.24) | − 0.015*** (− 9.47) | − 0.013*** (− 17.55) |
| Female | − 0.033 (− 0.74) | − 0.060 (− 1.46) | − 0.054* (− 2.33) |
| Place of residence (ref.: village) | | | |
| City < 20,000 | 0.161* (2.02) | 0.065 (0.99) | 0.054 (1.51) |
| City ≥ 20,000 and < 100,000 | 0.042 (0.55) | 0.124* (2.14) | 0.070* (2.10) |
| City ≥ 100,000 and < 1,000,000 | 0.101 (1.93) | 0.175*** (3.86) | 0.105** (3.28) |
| City > 1,000,000 | 0.352*** (3.29) | 0.398** (2.79) | 0.197** (3.23) |
| Years of education | 0.057*** (6.64) | 0.054*** (6.26) | 0.052*** (10.15) |
| Nonformal education | 0.010 (0.18) | − 0.025 (− 0.42) | 0.049* (1.97) |
| Informal education | 0.048 (1.06) | 0.125** (2.60) | 0.124*** (6.47) |
| Employment status (ref.: employed) | | | |
| Unemployed | 0.010 (0.10) | 0.115 (1.07) | 0.011 (0.19) |
| Inactive | − 0.099 (− 1.95) | − 0.060 (− 1.31) | 0.010 (0.35) |
| Numeracy | 0.039 (1.44) | 0.025 (0.85) | 0.048** (2.80) |
| Personality traits | | | |
| Conscientiousness | − 0.077 (− 1.54) | − 0.019 (− 0.37) | − 0.021 (− 0.76) |
| Extraversion | 0.001 (0.05) | − 0.023 (− 1.14) | 0.033* (2.29) |
| Agreeableness | − 0.026 (− 0.51) | − 0.071 (− 1.41) | 0.005 (0.17) |
| Openness | 0.059 (1.53) | 0.079* (2.35) | − 0.014 (− 0.73) |
| Neuroticism | − 0.027 (− 1.27) | − 0.016 (− 0.74) | − 0.027* (− 2.11) |
| Perseverance of effort | 0.051 (1.92) | 0.031 (1.07) | 0.046* (2.36) |
| Consistency of interest | − 0.002 (− 0.07) | − 0.007 (− 0.22) | − 0.023 (− 1.78) |
| Constant | 0.409*** (3.34) | 0.469*** (3.44) | 0.322*** (4.10) |
| Observations | 697 | 717 | 2701 |
| R2 | 0.267 | 0.327 | 0.304 |

t statistics in parentheses. Personality traits and numeracy measures are standardised within the total population
*p < 0.05; **p < 0.01; ***p < 0.001
Table 11  Probability of completing tasks (OLS results)

                                       (1) File managing    (2) Text editing     (3) Spreadsheet calculations
Age                                    −0.014*** (−18.22)   −0.012*** (−14.83)   −0.009*** (−13.83)
Female                                 −0.057** (−2.75)     0.007 (0.37)         −0.086*** (−4.60)
Place of residence (ref.: village)
  City < 20,000                        0.101** (2.92)       0.074** (2.77)       0.039 (1.24)
  City ≥ 20,000 and < 100,000          0.085*** (3.38)      0.039 (1.51)         0.079** (3.19)
  City ≥ 100,000 and < 1,000,000       0.061** (2.68)       0.027 (1.01)         0.040 (1.95)
  City > 1,000,000                     0.123** (2.92)       0.147*** (3.61)      0.015 (0.31)
Years of education                     0.027*** (7.29)      0.032*** (7.56)      0.036*** (8.75)
Nonformal education                    0.011 (0.54)         0.024 (1.05)         0.055** (2.74)
Informal education                     0.071** (3.26)       0.050* (2.20)        0.019 (0.99)
Employment status (ref.: employed)
  Unemployed                           −0.000 (−0.00)       −0.030 (−0.73)       −0.020 (−0.60)
  Inactive                             0.041 (1.58)         −0.007 (−0.24)       0.054* (2.44)
Numeracy                               0.045** (3.28)       0.054*** (4.19)      0.081*** (7.18)
Personality traits
  Conscientiousness                    −0.033 (−1.41)       −0.029 (−1.19)       0.013 (0.57)
  Extraversion                         −0.006 (−0.38)       0.005 (0.29)         −0.004 (−0.27)
  Agreeableness                        −0.003 (−0.13)       0.018 (0.65)         −0.022 (−0.92)
  Openness                             0.011 (0.66)         −0.006 (−0.33)       0.004 (0.25)
  Neuroticism                          −0.022* (−2.34)      −0.011 (−1.05)       0.000 (0.04)
  Perseverance of effort               0.005 (0.37)         −0.007 (−0.52)       −0.004 (−0.34)
  Consistency of interest              0.020 (1.56)         0.003 (0.23)         −0.007 (−0.79)
Constant                               0.802*** (15.98)     0.664*** (10.80)     0.029 (0.52)
Observations                           3896                 3896                 3896
R²                                     0.268                0.230                0.228

Coefficients with t statistics in parentheses. Personality traits and numeracy measures are standardised within the total population.
*p < 0.05; **p < 0.01; ***p < 0.001
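Because Table 11 is a linear probability model, its coefficients can be read directly as percentage-point differences in the probability of completing a task, holding the other covariates fixed. A small worked example using the reported age coefficient from column (1); the age gaps chosen are arbitrary.

```python
# Table 11, column (1): age coefficient in the linear probability model.
beta_age_file = -0.014

# Each extra year of age lowers the predicted probability of completing the
# file-management task by about 1.4 percentage points, other covariates fixed.
for gap in (5, 10, 20):  # illustrative age differences, in years
    print(f"{gap}-year age gap: {beta_age_file * gap * 100:+.1f} pp "
          "in the predicted probability of completing the task")
```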
Footnotes
1. The F statistics reported in the text are calculated as the adjusted Wald test for survey data.
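As a rough, self-contained sketch of such a joint test (not the authors' implementation), statsmodels can report a Wald test of several coefficients on the F scale; here cluster-robust covariance by a hypothetical primary sampling unit stands in for the full survey-design adjustment, which would also account for strata and survey weights. All data and names below are synthetic and illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; variable and cluster names are purely illustrative.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 66, n),
    "female": rng.integers(0, 2, n),
    "years_edu": rng.integers(8, 21, n),
    "numeracy": rng.standard_normal(n),
    "psu": rng.integers(0, 50, n),          # hypothetical primary sampling unit id
})
df["overreport"] = (rng.random(n) < 0.3).astype(int)

# Cluster-robust covariance as a crude stand-in for the survey design.
res = smf.ols("overreport ~ age + female + years_edu + numeracy", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["psu"]}
)

# Joint hypothesis reported on the F scale, in the spirit of an adjusted Wald test.
print(res.wald_test("years_edu = 0, numeracy = 0", use_f=True))
```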
 
2. The statistical populations are defined differently for the two surveys, and the available aggregated statistics do not allow us to obtain comparable statistics for wider age groups. For this reason, we show the results for a selected 10-year age group.
 
References
Bosnjak, M.: Evidence-based survey operations: choosing and mixing modes. In: Vannette, D.L., Krosnick, J.A. (eds.) The Palgrave Handbook of Survey Research, pp. 319–330. Palgrave Macmillan, Cham (2018)
Burski, J., Chłoń-Domińczak, A., Palczyńska, M., Rynko, M., Śpiewanowski, P.: Umiejętności Polaków – wyniki Międzynarodowego Badania Kompetencji Osób Dorosłych (PIAAC). Instytut Badań Edukacyjnych, Warszawa (2013)
DiNardo, J.E., Pischke, J.-S.: The returns to computer use revisited: have pencils changed the wage structure too? Q. J. Econ. 112, 291–303 (1997)
Eurostat: Handbook of recommended practices for questionnaire development and testing in the European Statistical System. Eurostat (2004)
Fisher, M., Goddu, M.K., Keil, F.C.: Searching for explanations: how the Internet inflates estimates of internal knowledge. J. Exp. Psychol. Gen. 144, 674 (2015)
Gerlitz, J.-Y., Schupp, J.: Zur Erhebung der big-five-basierten Persönlichkeitsmerkmale im SOEP. DIW Res. Notes 4, 2005 (2005)
Grant, D.M., Malloy, A.D., Murphy, M.C.: A comparison of student perceptions of their computer skills to their actual abilities. J. Inf. Technol. Educ. Res. 8, 141–160 (2009)
Hebert, J.R., Ebbeling, C.B., Matthews, C.E., Hurley, T.G., Ma, Y., Druker, S., Clemow, L.: Systematic errors in middle-aged women’s estimates of energy intake: comparing three self-report measures to total energy expenditure from doubly labeled water. Ann. Epidemiol. 12, 577–586 (2002). https://doi.org/10.1016/S1047-2797(01)00297-6
John, O.P., Donahue, E.M., Kentle, R.L.: The Big Five Inventory—Versions 4a and 54. University of California, Institute of Personality and Social Research, Berkeley (1991)
John, O.P., Srivastava, S.: The Big Five trait taxonomy: history, measurement, and theoretical perspectives. Handb. Personal. Theory Res. 2, 102–138 (1999)
Merritt, D.K., Smith, K.D., Renzo, J.C.D.: An investigation of self-reported computer literacy: is it reliable? Issues Inf. Syst. 6, 289–295 (2005)
Mood, C.: Logistic regression: why we cannot do what we think we can do, and what we can do about it. Eur. Sociol. Rev. 26, 67–82 (2010)
OECD: Skills Outlook 2013: first results from the Survey of Adult Skills. OECD (2013a)
OECD: Technical report of the Survey of Adult Skills (PIAAC). OECD (2013b)
Porto, N., Xiao, J.J.: Financial literacy overconfidence and financial advice seeking. J. Financ. Serv. Prof. 70, 78–88 (2016)
Rao, J.N.K., Scott, A.J.: On chi-squared tests for multiway contingency tables with cell proportions estimated from survey data. Ann. Stat. 12, 46–60 (1984)
Stroombergen, A., Rose, D., Nana, G., Pink, B.: Statistics New Zealand review of the statistical measurement of human capital (2002)
Tourangeau, R.: Confidentiality, privacy and anonymity. In: Vannette, D.L., Krosnick, J.A. (eds.) The Palgrave Handbook of Survey Research, pp. 501–507. Palgrave Macmillan, Cham (2018)
UNESCO Institute for Statistics: International Standard Classification of Education: ISCED 2011. UNESCO Institute for Statistics, Montreal (2012)
Metadata
Title: ICT skills measurement in social surveys: Can we trust self-reports?
Authors: Marta Palczyńska, Maja Rynko
Publication date: 29.08.2020
Publisher: Springer Netherlands
Published in: Quality & Quantity, Issue 3/2021
Print ISSN: 0033-5177
Electronic ISSN: 1573-7845
DOI: https://doi.org/10.1007/s11135-020-01031-4
