
Open Access 19-02-2021 | Original Research

Measuring Non-electoral Political Participation: Bi-factor Model as a Tool to Extract Dimensions

Author: Piotr Koc

Published in: Social Indicators Research | Issue 1/2021


Abstract

Political participation is a mainstay of political behavior research. One of the main dilemmas many researchers face pertains to the number of dimensions of political participation, i.e. whether we should model political participation as a unidimensional or multidimensional latent construct. Over the years, scholars have usually favored solutions with more than one dimension of political participation, and they have backed the claim of multiple dimensions with a number of empirical tests. In this paper, I argue that the results from the frequently used testing procedures, which rely on model fit inspection and the Kaiser criterion, can be very misleading and may result in extracting too many dimensions. By applying bi-factor modeling to a European Social Survey dataset, I show that in a majority of countries political participation can be considered an essentially unidimensional latent quantity. I demonstrate that additional dimensions of political participation are very weak and unreliable and that we can neither regress them on external variables nor build composite scores based on them. These findings cast doubt on the conclusions of numerous previous studies in which researchers modeled more than one dimension of political participation.
Notes

Supplementary Information

The online version of this article (https://doi.org/10.1007/s11205-021-02637-3) contains supplementary material, which is available to authorized users.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

While conducting research on political participation, scholars oftentimes face the decision of how many dimensions to extract. This is a crucial step regardless of whether we want to model political participation as a latent variable in a structural equation model or to build a composite score and run a regression. Even though some authors argue for a unidimensional structure of political participation (e.g. van Deth 1986), the various participatory forms have usually been divided into two or more dimensions, also known as modes (Barnes and Kaase 1979; Inglehart and Catterberg 2002; Teorell et al. 2007; Marien et al. 2010). The preference for more than one dimension has been backed with numerous empirical tests. Yet, once we take a closer look at those testing procedures, we may doubt whether the alleged multiple dimensions are really reflected in the data. If they are not, the conclusions from numerous studies may need to be reconsidered.
To investigate the number of dimensions of political participation, researchers commonly use two methods: principal component analysis (PCA) and factor analysis. In the case of PCA, they usually run a model and check the number of eigenvalues greater than 1.0 (e.g. Teorell et al. 2007; Theocharis and van Deth 2018). But as shown by Russell (2002) and van der Eijk and Rose (2015), this method often leads to solutions which suffer from overdimensionalisation, i.e. extracting too many factors, and, therefore, should not be used. As to factor analysis, the procedure frequently consists of checking the model fit (e.g. Gibson and Cantijoch 2013; Ohme et al. 2018; Talò and Mannarini 2015). However, in the psychometric literature, where seeking dimensionality is a core enterprise, determining dimensionality via a test of model fit has been criticized for being oversimplistic (e.g. Bentler 2009; Berge and Sočan 2004) and for wrongly conflating model fit with model validity (Reise et al. 2018). The problem is that “every set of responses by real individuals is multidimensional to some degree” (Harrison 1986). In practice, whenever we have more than three items, perfect unidimensionality is impossible (Berge and Sočan 2004) and a one-factor model can rarely describe data with a reasonably large number of items (Bentler 2009). Still, this does not automatically imply that the construct we are interested in is multidimensional. To treat a construct as having two or more distinct dimensions, these dimensions should uniquely explain variation in item responses. However, even when a multidimensional model fits better than a model with a single dimension, the additional factors can offer very little unique (unshared) contribution (Gignac and Kretzschmar 2017). If we ignore this and model more than one dimension, we may end up modeling largely measurement error rather than the variance attributable to each dimension. Consequently, any estimate of the association between such a dimension and an external variable will be attenuated or inflated, and so will the measures of uncertainty of that estimate (Wiernik et al. 2015).
To find out how many dimensions of political participation to extract, I propose to use a bi-factor modeling strategy along with factor strength, internal consistency reliability, and construct replicability indices: Explained Common Variance, omega coefficients, and the H index. The bi-factor model allows us to inspect whether we are dealing with a truly multidimensional construct or with a construct that is essentially unidimensional, i.e. one with a single major latent dimension and a couple of minor dimensions that can be ignored (Stout 1990). With a bi-factor model, we can disentangle various sources of variance, which otherwise often remain indistinguishable. I apply this strategy to the European Social Survey, a dataset frequently employed by students of political participation.
The results show that, once we take into account the common variance among the indicators, the additional dimensions of political participation explain very little variance in most of the countries and are likely to be unreplicable across different samples. If we were to build separate subscales for each dimension and score individuals on them, the scores would be severely unreliable, as most of the variance in the unit-weighted composite scores is due to the general trait of political participation. Overall, in most cases the additional dimensions of political participation do not acquire sufficient specificity or stability to represent empirically identifiable constructs. They are virtually impossible to interpret, and when they are regressed on external variables (e.g., in a structural equation modeling context), the coefficients will be unreliable at best.
These findings have direct consequences for how to model political participation irrespective of whether this is done in a structural equation model framework or in the form of composite scores and regression analysis.

2 Dimensions of Political Participation

When trying to measure political participation, researchers are often not interested in just one form, like signing petitions, but want to reveal a broader picture of how citizens participate in politics. Among other things, they want to know whether young people participate differently than adults (García-Albacete 2014) or whether political trust impacts the choice of forms of participation (Hooghe and Marien 2013). To capture the whole phenomenon of political participation, scholars usually treat political participation as a continuous latent variable and the multiple participatory forms as manifestations of this latent quantity (e.g. Marsh 1974; Verba et al. 1978; Vráblíková 2014). The relation between an observable indicator, \({x}_{i}\), and the unobserved quantity, \({\xi }_{i}\), can be expressed with the following equation:
$${x}_{i}= {\xi }_{i }+ {\delta }_{i}$$
(1)
where \({\delta }_{i}\) denotes a measurement error and \(i\) refers to the unit of observation. The latent formulation of political participation allows us to incorporate into the analysis the idea that survey measures do not resemble the real-world behavior of individuals perfectly and that these measures come with an error. Additionally, and more importantly, the formulation reminds us that an indicator, \(x\), is not the latent variable of interest, \(\xi\) (Jackman 2008). In the case of political participation, a reported signing of a petition (\(x\)), for instance, even if accurately reported, is not political participation (\(\xi\)); it is related to political participation, and Eq. (1) shows this relationship. The latent construct of political participation comprises multiple observable indicators and not just one. If we believe that political participation is made up of many different acts and, still, we equate it with one indicator, we make a translation error, i.e. we fail to translate the theoretical concept into the measurement specification (Fariss et al. 2020; Trochim and Donnelly 2007).
The core issue in conceiving of political participation as a latent variable is the question of the number of dimensions. Definitions of political participation are agnostic as to whether there is a single dimension or there are multiple dimensions. Most of the definitional effort has been devoted to imposing clear boundaries on what political participation is and what it is not (Almond and Verba 1963; Conge 1988; Fox 2014; Pattie et al. 2004; Verba and Nie 1972). For example, in the recent work by van Deth (2014), political participation is defined as an activity or action that is carried out voluntarily by non-professional citizens and that is located within the realm of politics, government, or the state. Alternatively, the action should be targeted at a subject from one of those realms or aimed at solving a community problem. When even the target is unclear, one should look at the political context or the motives of participants (van Deth 2014; van Deth and Theocharis 2018). By doing so, we can capture all forms of political participation, both traditional ones like voting and new ones like online participation.
In practice, these various forms are often divided into two or more dimensions, also known as modes of participation. The underlying assumption is that people choose forms from one of those dimensions and that their participatory repertoires rarely consist of acts that stem from different modes. In other words, people tend to participate in forms from one mode or the other. This view of political participation as a multidimensional construct has been popularized by numerous authors, who differ as to the number of dimensions and the criteria according to which the forms should be divided (e.g., Barnes and Kaase 1979; Inglehart and Catterberg 2002; Teorell et al. 2007; Marien et al. 2010; Hooghe and Marien 2013). In this article, I concentrate on the division into institutionalized and non-institutionalized forms, which is prevalent in the empirical research and which builds on the proposition of Barnes and Kaase (1979). Institutionalized forms are directly related to the institutional process. These forms are usually regulated by the members of the political elite, primarily by political parties. By contrast, non-institutionalized forms are conceived of as having no direct link to the electoral process or to the functioning of the political institutions. They are used predominantly by actors who are not members of the political elite and who want to manifest their dissatisfaction with the elite's actions or to influence the decision-making process. The impact of non-institutionalized forms on the political system is usually indirect (Hooghe and Marien 2013; Marien et al. 2010).
However, some scholars indicate that talking about multiple dimensions is essentially dated. Today, the participatory repertoires of citizens are made up of multiple diverse forms, like taking part in demonstrations, consumerism, and petitioning (Norris 2007). Having multiple forms at their disposal, people opt for acts which allow them to express themselves and which match their interests and resources (Bang 2009; van Deth and Theocharis 2018). Politically skilled and policy-oriented citizens will choose whatever means are appropriate to influence the policies they care about (Dalton 2008). They may combine different forms, like voting and demonstrating, which is not a rare practice nowadays (Norris et al. 2005; van Aelst and Walgrave 2001). Given all this, it makes sense to treat political participation as an essentially unidimensional latent construct.

3 Data

To address the controversy over the number of dimensions of political participation, I analyze the battery of participatory items of the 8th edition of the European Social Survey (ESS) (2016) for each country. I use the ESS for two reasons. First, the ESS is frequently used by scholars researching political participation (e.g., Bäck and Christensen 2016; Kostelka 2014). Second, it includes numerous participatory items, representing both institutionalized and non-institutionalized forms.
From the whole battery of items, I utilize eight items measuring political participation: contacting politicians or government, working for a political party or action group, working for another organization or association, badging, signing petitions, taking part in lawful demonstrations, boycotting products, and posting or sharing things online. Each respondent was asked whether he or she had participated in each form in the past 12 months. This common reference period greatly limits the variation in reported levels of political activity across individuals that is due merely to different timespans. The ESS battery of political participation items was intended to measure a variety of political participation acts. The items were not meant to conform to any particular conceptualization of political participation and, therefore, to form any particular scale (Thomassen 2001). Descriptive statistics for the items can be found in Table 3 in the "Appendix".
For the purpose of this analysis, I exclude voting. Many studies have shown the special character of voting: it is a very frequent activity and therefore an outlier relative to the other indicators (Parry et al. 1992; Marien et al. 2010). The difference in prevalence between voting and other forms of participation often leads to problems with estimating measurement models. For these reasons, voting is often left out when testing political participation scales in practice (e.g. Hooghe and Marien 2013; Theocharis and van Deth 2018; Vráblíková 2014).
The items which measure political participation were recoded into dichotomous variables, with values 1 "participation" and 0 "no participation". Refusals, "No answer", and "Don't know" answers were recoded as missing values and were not used in the analysis. In addition, all underage interviewees were excluded from the analysis, as they could not take part in some participatory forms for legal reasons.
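A minimal sketch of this preprocessing step in R is given below; the item names follow the ESS codebook (see Table 3), while the numeric response codes (1 = Yes, 2 = No, 7/8/9 = Refusal/Don't know/No answer), the age variable agea, and the file name are assumptions based on standard ESS conventions rather than details taken from the article.

```r
# A minimal recoding sketch, assuming the standard ESS coding of these
# items (1 = Yes, 2 = No, 7 = Refusal, 8 = Don't know, 9 = No answer)
# and the ESS age variable `agea`; the file name is hypothetical.
items <- c("contplt", "wrkprty", "wrkorg", "badge",
           "sgnptit", "pbldmn", "bctprd", "pstplonl")

recode_participation <- function(x) {
  x[x %in% c(7, 8, 9)] <- NA        # refusals, DK, no answer -> missing
  ifelse(x == 1, 1, 0)              # 1 = participation, 0 = no participation
}

ess <- read.csv("ESS8.csv")
ess[items] <- lapply(ess[items], recode_participation)
ess <- ess[!is.na(ess$agea) & ess$agea >= 18, ]   # exclude minors
```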

4 Analytical Approach

4.1 Classical Methods of Establishing the Number of Dimensions

Before turning to the bi-factor approach, it is worth checking what conclusions we would draw if we relied either on the model fit comparison or on PCA with the Kaiser-1 rule. To conduct the model fit comparison, we have to estimate two factor models: a unidimensional model, where all items load on one common factor, and a bidimensional model, where the participatory items are grouped into correlated institutionalized and non-institutionalized modes. Three items (contacting politicians, working for a party, working for an organization) load on the factor "institutionalized mode". Five items (badging, signing petitions, taking part in public demonstrations, boycotting products, and sharing or posting politics-related content online) load on the factor "non-institutionalized mode" (Table 1).
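A sketch of this comparison in R with lavaan and the WLSMV estimator (both named in the note to Table 1) could look as follows; the data frame ess, the cntry country identifier, and the single-country subsetting are assumptions about how the data are organized, not the author's exact code.

```r
library(lavaan)

items <- c("contplt", "wrkprty", "wrkorg", "badge",
           "sgnptit", "pbldmn", "bctprd", "pstplonl")

# Unidimensional model: all eight items load on a single factor
m_uni <- 'pp =~ contplt + wrkprty + wrkorg + badge +
                sgnptit + pbldmn + bctprd + pstplonl'

# Bidimensional model: two correlated factors
m_bi <- 'inst    =~ contplt + wrkprty + wrkorg
         noninst =~ badge + sgnptit + pbldmn + bctprd + pstplonl'

dat <- ess[ess$cntry == "AT", items]   # one country at a time (assumed)

fit_uni <- cfa(m_uni, data = dat, ordered = items, estimator = "WLSMV")
fit_bi  <- cfa(m_bi,  data = dat, ordered = items, estimator = "WLSMV")

fitMeasures(fit_uni, c("cfi", "tli", "srmr", "rmsea"))
fitMeasures(fit_bi,  c("cfi", "tli", "srmr", "rmsea"))
```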
Table 1
Global fit indices for unidimensional and bidimensional factor models, and the number of dimensions extracted using the Kaiser-1 rule

| Country | CFI (uni) | TLI (uni) | SRMR (uni) | RMSEA (uni) | RMSEA 90% CI (uni) | CFI (bi) | TLI (bi) | SRMR (bi) | RMSEA (bi) | RMSEA 90% CI (bi) | PCA K-1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AT | .939 | .915 | .086 | .063 | .054, .071 | .988 | .982 | .048 | .029 | .019, .039 | 2 |
| BE | .921 | .889 | .085 | .060 | .051, .07 | .961 | .942 | .065 | .043 | .034, .053 | 2 |
| CH | .962 | .946 | .076 | .042 | .031, .052 | .987 | .980 | .054 | .025 | .012, .037 | 2 |
| CZ | .974 | .964 | .069 | .043 | .035, .052 | .991 | .986 | .047 | .027 | .017, .036 | 1 |
| DE | .919 | .886 | .081 | .061 | .054, .068 | .970 | .955 | .055 | .038 | .031, .046 | 2 |
| EE | .969 | .956 | .079 | .033 | .024, .042 | .987 | .981 | .064 | .021 | .01, .032 | 2 |
| ES | .983 | .977 | .052 | .046 | .037, .055 | .990 | .986 | .043 | .036 | .027, .046 | 1 |
| FI | .823 | .753 | .107 | .078 | .07, .087 | .900 | .852 | .077 | .061 | .052, .07 | 2 |
| FR | .965 | .951 | .075 | .054 | .046, .063 | .985 | .977 | .051 | .037 | .028, .046 | 2 |
| GB | .971 | .959 | .073 | .048 | .04, .058 | .983 | .975 | .058 | .038 | .028, .047 | 1 |
| HU | .973 | .962 | .103 | .030 | .02, .041 | .983 | .975 | .094 | .025 | .012, .036 | 1 |
| IE | .947 | .926 | .095 | .059 | .052, .067 | .970 | .956 | .070 | .046 | .038, .054 | 2 |
| IL | .968 | .955 | .078 | .049 | .041, .056 | .985 | .977 | .055 | .035 | .026, .043 | 2 |
| IS | .898 | .858 | .090 | .079 | .066, .092 | .969 | .954 | .058 | .045 | .03, .06 | 2 |
| IT | .982 | .975 | .060 | .039 | .032, .047 | .986 | .979 | .052 | .036 | .028, .045 | 1 |
| LT | .963 | .948 | .109 | .045 | .036, .054 | .970 | .956 | .099 | .042 | .033, .051 | 2 |
| NL | .939 | .915 | .085 | .039 | .029, .049 | .966 | .950 | .078 | .030 | .019, .041 | 2 |
| NO | .945 | .923 | .071 | .055 | .045, .065 | .991 | .986 | .038 | .023 | .01, .036 | 2 |
| PL | .967 | .954 | .100 | .052 | .042, .061 | .998 | .997 | .046 | .014 | . .027 | 2 |
| PT | .923 | .893 | .105 | .058 | .047, .069 | .962 | .944 | .086 | .042 | .03, .054 | 2 |
| RU | .980 | .972 | .071 | .033 | .024, .041 | .980 | .971 | .068 | .033 | .025, .042 | 1 |
| SE | .952 | .933 | .065 | .043 | .033, .054 | .962 | .944 | .058 | .039 | .029, .051 | 2 |
| SI | .923 | .893 | .100 | .054 | .043, .066 | .979 | .969 | .072 | .029 | .016, .042 | 2 |
CFI, Comparative Fit Index; TLI, Tucker‐Lewis Index; SRMR, Standardized Root Mean Squared Residual; RMSEA, Root Mean Squared Error of Approximation. The following cut-off values were used: CFI ≥ 0.95; TLI ≥ 0.95; SRMR < 0.08; RMSEA < 0.08. Values indicating at least acceptable fit were bolded. The PCA model was estimated using polychoric correlations. The factor models were estimated with the WLSMV estimator using the lavaan package (Rosseel, 2012) in R
AT, Austria; BE, Belgium; CH, Switzerland; CZ, Czechia; DE, Germany; EE, Estonia; ES, Spain; FI, Finland; FR, France; GB, the United Kingdom; HU, Hungary; IE, Ireland; IL, Israel; IS, Iceland; IT, Italy; LT, Lithuania; NL, the Netherlands; NO, Norway; PL, Poland; PT, Portugal; RU, Russia; SE, Sweden; SI, Slovenia
Comparing the typically used global fit indices, i.e. the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Standardized Root Mean Squared Residual (SRMR), and the Root Mean Squared Error of Approximation (RMSEA), we would conclude that in all but one case (Russia) the bidimensional model fits the data better than the unidimensional model. The Chi-Square difference test (not reported here) also indicates that the bidimensional model is to be preferred over the unidimensional one in all countries but Russia. What is more, whereas CFI, TLI, and SRMR signal a poor fit of the unidimensional model in roughly half of the cases, only one to four countries seem to have an unsatisfactory fit with the bidimensional solution. Based on this, we would probably argue that there exist two well-defined, separate dimensions of political participation and that this model is better reflected in the data than the model with one dimension.
Using the PCA with the Kaiser-1 rule would not change the picture substantially. In six out of 23 countries the number of eigenvalues greater than one equals 1. That is, in these countries we would extract just one dimension of political participation. For all the others, the model with two dimensions would be more appropriate.
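The note to Table 1 states that the PCA was estimated using polychoric correlations; a minimal sketch of the eigenvalue check, assuming the psych package and the per-country data frame dat used above, could be:

```r
library(psych)

# Polychoric correlation matrix of the eight dichotomous items
# (equivalent to tetrachoric correlations for binary items)
R <- polychoric(dat)$rho

# Kaiser-1 rule: count the eigenvalues greater than one
ev <- eigen(R)$values
sum(ev > 1)
```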
Hence, irrespective of whether we use the model fit inspection or the Kaiser-1 rule, we would end up extracting two dimensions of political participation in most of the countries.

4.2 Confirmatory Bi-factor Model

To investigate the latent structure of political participation, I use a bi-factor modeling strategy. A major advantage of the bi-factor model is that we can easily partition the variance in item responses between the general factor and the group factors (Reise 2012; Rodriguez et al. 2015). The general factor reflects what is common among all the items. Group factors “represent common factors measured by the items that potentially explain item response variance not accounted for by the general factor” (Reise et al. 2010). In this work, the bi-factor model allows us to check whether the additional dimensions of political participation explain variance in item responses beyond the common variance among all the indicators, and if so, how much.
To identify a bi-factor model, we need at least two group factors and, for each group factor, at least three items that load uniquely on that factor and on the general factor (Zinbarg et al. 2007). The model assumes orthogonal relationships between the group factors, i.e. the group factors should be independent of each other. Even though the orthogonality restriction could be relaxed, the estimation in such a case is far from easy and the interpretation of the group factors is rather difficult (Reise et al. 2018).
Figure 1 shows a bi-factor model of political participation. Three items—contacting politicians, working for a party, working for an organization—load on the group factor “institutionalized mode” and on the general factor “political participation”. Five items—badging, signing petitions, taking part in public demonstrations, boycotting products, and sharing or posting politics-related content online—load on the group factor “non-institutionalized mode” and on the general factor “political participation”. The general trait, or general factor, of political participation accounts for the common variance among the indicators, and the group factors for variance attributable to the specific modes of participation after controlling for the common variance.
The model for each country was estimated using full-information item factor analysis, where the entire item response matrix was part of the calibration. To be more precise, I employed the dimension reduction EM algorithm as implemented in the bfactor function from the mirt package (Chalmers 2012) in R.1 I used the factor-analytic metric of the estimates. Factors were scaled using the default option, i.e. standardized by imposing a unit variance identification constraint on factor variances and by fixing factor means to zero. The code can be found in the Online Resource 1.
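A minimal sketch of this estimation step is shown below; the specific-factor assignment vector mirrors Fig. 1, while the per-country data frame dat and the reliance on mirt's defaults are assumptions rather than the author's exact code (which is provided in Online Resource 1).

```r
library(mirt)

# Item-to-specific-factor assignment mirroring Fig. 1:
# items 1-3 (contacting, party work, organization work) -> factor 1
# (institutionalized mode); items 4-8 -> factor 2 (non-institutionalized
# mode). Every item additionally loads on the general factor.
spec <- c(1, 1, 1, 2, 2, 2, 2, 2)

bf <- bfactor(dat, model = spec)   # dimension reduction EM estimation

# Standardized loadings on the general and specific factors,
# the input for the ECV, H, and omega computations below
summary(bf)
```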

4.3 Indices

To answer the question about the number of dimensions of political participation, we need (1) a measure that helps us quantify the amount of common variance accounted for by the general factor and the group factors, (2) a measure of how reliable the factors are and, therefore, how replicable they are across studies, and (3) a measure that allows us to quantify the amount of reliable observed variance in a unit-weighted composite score explained uniquely by each factor. Whereas the first and second measures will drive the decisions on how to model political participation in the structural equation modeling framework, the third will instruct us how to score individuals: whether we should score them just on the whole scale (the general trait of political participation), only on the subscales (institutionalized and non-institutionalized modes), or on both the whole scale and the subscales. The third measure is particularly important in this case because unit-weighted composite scores in the form of additive indices are widely used in the research on political participation (e.g., Dubrow et al. 2008; García-Albacete 2014; Moor and Verhaegen 2020; Vráblíková 2014).
To quantify the amount of common variance which is attributable to the general factor and group factors, I use the Explained Common Variance index, which is often conceptualized as a factor strength index (Rodriguez et al. 2016). The ECV can be computed by taking the common variance explained by the general factor and dividing it by the common variance explained by the general factor and group factors (Reise et al. 2010). In the case of the bi-factor model of political participation, ECV can be calculated as:
$$ECV= \frac{\sum {\lambda }_{PP}^{2}}{\sum {\lambda }_{PP}^{2}+ \sum {\lambda }_{I}^{2}+ \sum {\lambda }_{NI}^{2}}$$
(2)
where \({\lambda }_{PP}\), \({\lambda }_{NI}\), and \({\lambda }_{I}\) are vectors of standardized factor loadings for the general trait of political participation, group factor “non-institutionalized mode”, and group factor “institutionalized mode”, respectively. It is also possible to use ECV to quantify the amount of common variance due to each specific factor. The only thing that changes then is the numerator where we would have vectors of loadings from each specific factor.
As shown in a simulation study by Bonifay et al. (2015), the critical values of the ECV depend largely on the percent of uncontaminated correlations (PUC), i.e. the ratio of the number of correlations between items from different group factors to the total number of correlations (Rodriguez et al. 2015). When PUC is low, which is the case here (54%), ECV values above 0.7 on the general factor indicate that the construct we are interested in is essentially unidimensional and that we might fit a simpler unidimensional measurement model in a structural equation modeling context without introducing much bias. In such a case, the difference in factor loadings between the general factor and a unidimensional model should be very small, and so should the bias in structural parameters. Values lower than 0.7 make fitting a unidimensional model risky because it would introduce an unacceptable amount of bias (Bonifay et al. 2015; Reise et al. 2013; Reise et al. 2013).
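Given the standardized loadings from the bi-factor model, Eq. (2) translates directly into code; a sketch with hypothetical loading vectors extracted from the estimation output above:

```r
# ECV following Eq. (2); lambda_pp, lambda_i, lambda_ni are the vectors
# of standardized loadings on the general trait and the two group factors
ecv_general <- function(lambda_pp, lambda_i, lambda_ni) {
  sum(lambda_pp^2) /
    (sum(lambda_pp^2) + sum(lambda_i^2) + sum(lambda_ni^2))
}

# ECV of a specific factor: same denominator, only that factor's
# squared loadings in the numerator
ecv_specific <- function(lambda_spec, lambda_pp, lambda_i, lambda_ni) {
  sum(lambda_spec^2) /
    (sum(lambda_pp^2) + sum(lambda_i^2) + sum(lambda_ni^2))
}
```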
To measure how reliable and replicable the factors are, I use the H index (Gagne and Hancock 2006; Hancock and Mueller 2001). H is a function of the sum, across items, of the ratio of the variance explained by a given factor to the variance left unexplained by that factor. For the general trait of political participation this would be:
$${H}_{PP}= \frac{1}{1+\left( \frac{1}{\sum \frac{{\lambda }_{PP}^{2}}{1-{\lambda }_{PP}^{2}}}\right)}$$
(3)
H values smaller than 0.7 indicate that the latent factor is poorly defined and likely to be unstable and to change across samples and studies, values from 0.7 to 0.79 are in the acceptable range, while values equal to or above 0.8 indicate a good and well-defined latent factor, i.e. one likely to show stability across samples and studies (Arias et al. 2018; Rodriguez et al. 2015). As stressed by Rodriguez et al. (2016), if the H value for a given factor is low, we cannot trust the structural coefficient between that factor and an external variable.
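Equation (3) is equally simple to compute; a sketch for an arbitrary vector of standardized loadings on one factor:

```r
# Construct replicability index H, following Eq. (3);
# lambda is the vector of standardized loadings on a single factor
h_index <- function(lambda) {
  1 / (1 + 1 / sum(lambda^2 / (1 - lambda^2)))
}

# e.g. h_index(lambda_pp) for the general trait,
#      h_index(lambda_i)  for the institutionalized mode
```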
In order to quantify the amount of reliable observed variance in unit-weighted composite scores, I make use of the omega coefficient (\(\omega\)). Unlike coefficient alpha, a much more popular alternative, the omega coefficient does not require the relations between the items and the latent variable to be essentially tau-equivalent (McDonald 1999; Zinbarg et al. 2005). In the case of a factor model, essential tau-equivalence would mean that the factor loadings are all equal, with the intercepts varying (Kline 2016), which is a highly implausible assumption. Even though omega does not differentiate between various sources of variance, i.e. it represents the proportion of variance in the observed total score attributable to all “modeled” sources of common variance (Revelle and Zinbarg 2009), there exist versions of the omega coefficient which allow us to quantify the reliable variance due to particular factors in a multidimensional solution (Reise et al. 2013; Reise et al. 2013; Zinbarg et al. 2005, 2007). This is an important feature because, as Fig. 1 shows, in the case of the bi-factor model of political participation, the variance in item responses is explained by two factors simultaneously.
Omega coefficient for the total score (the general trait of political participation) is computed as:
$$\omega = \frac{{(\sum {\lambda }_{PP})}^{2}+ {(\sum {\lambda }_{I})}^{2}+ {(\sum {\lambda }_{NI})}^{2}}{{(\sum {\lambda }_{PP})}^{2}+{(\sum {\lambda }_{NI})}^{2}+{(\sum {\lambda }_{I})}^{2}+{\left(1-h\right)}^{2}}$$
(4)
In the numerator of Eq. (4) we have all the reliable, or common, sources of unit-weighted total score variance, i.e. the squared sum of the standardized factor loadings on the general factor "political participation" plus the squared sum of the factor loadings on each group factor. In the denominator we have the reliable variance plus the items' unique variances, \({\left(1-h\right)}^{2}\). We can calculate the omega coefficient for the subscales in a similar way, by taking into account the subset of items relevant to each group factor. For the subscale "institutionalized mode", we take the top three items, as presented in Fig. 1:
$${\omega }_{I}= \frac{{({\sum }_{i =1}^{3}{\lambda }_{PP})}^{2}+ {(\sum {\lambda }_{I})}^{2}}{{({\sum }_{i =1}^{3}{\lambda }_{PP})}^{2}+{(\sum {\lambda }_{I})}^{2}+{\left(1-h\right)}^{2}}$$
(5)
Yet, as mentioned earlier, what goes into the omega coefficient is all modeled sources of variance. If the aim were to quantify the proportion of variance in total scores due to the general trait of political participation only, we would have to use omega hierarchical (\({\omega }_{H}\)), which treats the variability in scores attributable to the group factors as measurement error (Zinbarg et al. 2005). Omega hierarchical can be regarded as a measure of the extent to which the unit-weighted total scores are essentially unidimensional (Rodriguez et al. 2016).
$${\omega }_{H}= \frac{{(\sum {\lambda }_{PP})}^{2}}{{(\sum {\lambda }_{PP})}^{2}+{(\sum {\lambda }_{NI})}^{2}+{(\sum {\lambda }_{I})}^{2}+{\left(1-h\right)}^{2}}$$
(6)
Equation (6) shows that the only difference between omega and omega hierarchical is that in the numerator we have just one source of reliable variance in unit-weighted total scores, namely that due to the general factor "political participation". To improve interpretability, I follow Reise et al. (2013) and calculate the ratio of omega hierarchical to omega, which denotes the percentage of reliable variance in total scores due to the general factor.
The last problem pertains to the interpretability of subscale scores; namely, to what degree is the interpretation of subscale scores for the institutionalized and non-institutionalized modes confounded by the general trait of political participation? For this type of question, Reise et al. (2013) propose to use a version of omega, omega subscale (\({\omega }_{S}\)), which is calculated for the subset of items loading on each group factor. Similarly to omega hierarchical, I compute the ratios of omega subscale to omega calculated for the relevant subset of items. Here is an example of omega subscale for the subscale "institutionalized mode", where we take into account the top three items, as presented in Fig. 1:
$${\omega }_{S}= \frac{{(\sum {\lambda }_{I})}^{2}}{ {({\sum }_{i =1}^{3}{\lambda }_{PP})}^{2} +{(\sum {\lambda }_{I})}^{2}+{\left(1-h\right)}^{2}}$$
(7)

5 Results

Table 2 shows the indices for the bi-factor model of political participation. In 15 out of 23 countries, the ECV value for the general trait of political participation is higher than 0.7, meaning that in those countries the general factor of political participation accounts for more than 70% of the common variance. This is a strong indication that in these countries political participation is essentially unidimensional. Also, we should be able to fit a simpler unidimensional model for these countries without introducing much bias. In the remaining countries, the general factor still explains more than half of the common variance, except in Finland, where the ECV is below 0.5. The amount of common variance explained by the specific factors is small. On average, the factor representing the institutionalized mode explains 13% of the common variance and the factor representing the non-institutionalized mode 15%. Only in Finland and Iceland is the ECV for a specific mode (the non-institutionalized mode) noticeably higher, exceeding 30%.
Table 2
Indices for the bi-factor model of political participation

| Country | \({\mathrm{ECV}}_{PP}\) | \({\mathrm{ECV}}_{I}\) | \({\mathrm{ECV}}_{NI}\) | \({H}_{PP}\) | \({H}_{I}\) | \({H}_{NI}\) | \(\frac{{\omega }_{H}}{\omega }\) | \(\frac{{\omega }_{{s}_{I}}}{{\omega }_{I}}\) | \(\frac{{\omega }_{{s}_{NI}}}{{\omega }_{NI}}\) |
|---|---|---|---|---|---|---|---|---|---|
| AT | .71 | .15 | .14 | .88 | .52 | .46 | .85 | .37 | .21 |
| BE | .65 | .16 | .18 | .90 | .52 | .60 | .85 | .45 | .15 |
| CH | .63 | .12 | .25 | .83 | .53 | .60 | .78 | .16 | .43 |
| CZ | .81 | .11 | .07 | .94 | .45 | .33 | .93 | .29 | .05 |
| DE | .66 | .21 | .14 | .85 | .59 | .55 | .88 | .51 | .05 |
| EE | .73 | .06 | .21 | .91 | .26 | .57 | .86 | 0 | .38 |
| ES | .85 | .08 | .07 | .93 | .36 | .30 | .95 | .2 | .05 |
| FI | .49 | .16 | .35 | .75 | .47 | .72 | .66 | .34 | .58 |
| FR | .75 | .15 | .09 | .90 | .55 | .37 | .89 | .36 | .11 |
| GB | .80 | .10 | .10 | .90 | .40 | .44 | .92 | .24 | .09 |
| HU | .80 | .12 | .08 | .95 | .60 | .47 | .95 | .28 | .02 |
| IE | .71 | .12 | .17 | .94 | .45 | .55 | .85 | .35 | .21 |
| IL | .75 | .07 | .18 | .92 | .31 | .58 | .86 | .17 | .26 |
| IS | .60 | .06 | .34 | .90 | .22 | .67 | .71 | .03 | .66 |
| IT | .84 | .08 | .08 | .96 | .41 | .36 | .94 | .16 | .09 |
| LT | .74 | .04 | .22 | .95 | .21 | .69 | .86 | 0 | .36 |
| NL | .66 | .18 | .16 | .85 | .62 | .51 | .92 | .45 | 0 |
| NO | .71 | .20 | .09 | .84 | .54 | .30 | .89 | .57 | .02 |
| PL | .77 | .14 | .08 | .93 | .56 | .44 | .9 | .38 | .08 |
| PT | .68 | .17 | .15 | .86 | .54 | .49 | .91 | .46 | .03 |
| RU | .85 | .04 | .12 | .94 | .22 | .46 | .94 | .03 | .15 |
| SE | .76 | .13 | .11 | .82 | .37 | .30 | .93 | .35 | .03 |
| SI | .66 | .20 | .15 | .88 | .64 | .66 | .88 | .56 | .04 |
Columns two to four present ECV values for the general factor and specific factors. Columns five to seven show H for the general factor, institutionalized mode, and non-institutionalized mode. Column eight includes the ratio of omega hierarchical to omega. Columns nine and ten present the ratios of omega subscale to omega for the relevant subset of items: column nine for the institutionalized mode and column ten for the non-institutionalized mode
The H values show that the general trait of political participation is not only strong but also reliable and well-defined. In almost every country this index is higher than 0.8; only in Finland is it below that value, though still acceptable. By contrast, the H values for the specific modes of political participation are all below 0.7, with the exception of the non-institutionalized mode in Finland. This means that the modes are poorly defined by the set of indicators and that the group factors representing them are likely to lack stability across samples. Overall, the ECV and H values suggest that there exists one strong general trait of political participation and two weak and unreliable group factors, the specific modes of political participation. Based on this, we can say that the common variance among all the indicators cannot be ignored if political participation is to be modeled in the structural equation modeling framework and that we should not interpret structural coefficients between an external variable and the specific modes of participation.
As to the omega ratios, the results are rather unequivocal. In 20 out of 23 countries, the ratio of omega hierarchical to omega is above 0.8, which means that more than 80% of the reliable unit-weighted total score variance is due to the general factor. The subscales in all the countries are largely confounded by the general factor. Using the rule of thumb that at least 50% of the reliable variance in unit-weighted subscale scores is needed to score individuals on the subscales (Canivez 2015), we could do so in just five countries: in Germany, Norway, and Slovenia on the subscale "institutionalized mode", and in Finland and Iceland on the subscale "non-institutionalized mode". Still, this would be risky, and it is up to the researcher whether he or she wants to take that risk.
Hence, in the case of political participation, unit-weighted subscale scores generally do not provide meaningful and reliable information about the modes that is distinct from the general trait of political participation. Given the results, building composite scores for the subscales cannot be justified, with the five possible exceptions mentioned earlier. If the subscale scores, e.g. as composite scores, were to be correlated with an external variable, the correlation would be largely inflated if the external variable predicts the general trait of political participation and a specific mode in the same direction (i.e., enhancing conflation) or attenuated if it predicts them in opposite directions (i.e., suppressive conflation) (Wiernik et al. 2015). Inflation and attenuation would affect the correlation between the total score and an external variable too, but the impact would be significantly lower since the general trait of political participation accounts for most of the variance.
Lastly, as a robustness check, I repeated the procedure on the 9th edition of the ESS (2018). The analysis corroborates the findings based on the 8th edition. Details can be found in the Online Resource 2.

6 Discussion

The main goal of this study was to establish how we should model political participation: as an essentially unidimensional or a multidimensional construct, with institutionalized and non-institutionalized modes, both in the structural equation modeling framework and as a unit-weighted composite score. To establish this, I examined the battery of participation items in the European Social Survey using a bi-factor modeling strategy along with factor strength, internal consistency reliability, and construct replicability indices: Explained Common Variance, omega coefficients, and the H index.
Previous research relied strongly on procedures which produced a false impression that we can and should model political participation as a multidimensional construct composed of separate modes, with the common variance being ignored. Those procedures usually involved a simple inspection of the model fit (e.g. Talò and Mannarini 2015) or other shortcuts in deciding about the optimal number of dimensions, like using PCA with the Kaiser criterion (e.g. Theocharis and van Deth 2018). But as a growing number of methodologists argue (Bentler 2009; Berge and Sočan 2004; Reise et al. 2018; van der Eijk and Rose 2015), these often employed methods suffer from flaws which may lead to erroneous conclusions about the number of dimensions that we should extract. To model political participation as a multidimensional construct, the dimensions should offer some unique (unshared) contribution in explaining the variance in item responses (Gignac and Kretzschmar 2017).
If we were to decide how to build unit-weighted scores of political participation, i.e. whether to use the total score (political participation as a whole), subscale scores (institutionalized and non-institutionalized modes), or both, the results are rather unequivocal. In 20 out of 23 countries, more than 80% of the reliable total score variance was due to the general factor. The subscales in all the countries were largely confounded by the general factor, and only in five instances could we try to score individuals on the subscales. Therefore, if modeled as a unit-weighted composite score, political participation should be treated as a whole in most countries. Otherwise, that is, by using subscales for each mode, we would incorporate plenty of measurement error, and the observed score correlations would be largely inflated or attenuated as estimates of the relations between the specific modes and external variables. Total scores are affected significantly less by such attenuation and inflation.
If we were to decide how to model political participation in the structural equation modeling framework, the optimal solution is to use a bi-factor model. However, only the general trait of political participation from this model is reliable and well-defined in most of the countries. By contrast, the specific factors representing the institutionalized and non-institutionalized modes of participation are poorly defined and unreliable, and, therefore, very likely not replicable across samples. With the bi-factor model, we can regress the general factor on external variables and account for the noise caused by the specific factors, which may otherwise distort the structural coefficients with external variables. Any structural coefficient between a specific factor and an external variable would be unreliable at best and should not be interpreted.
Overall, the analysis shows that only the general trait of political participation is interpretable in most countries. The specific factors representing the institutionalized and non-institutionalized modes of participation generally do not acquire sufficient specificity or stability to represent empirically identifiable constructs, which makes interpreting the specific modes virtually impossible. The results lend strong evidence that in most countries political participation is an essentially unidimensional construct, with negligible group factors representing the specific modes. The multidimensionality of political participation cannot be backed by sufficient empirical evidence, and the multiple dimensions seem to be methodological artifacts.
Future research should concentrate on testing political participation with larger sets of items and on using other theoretical divisions of the forms of political participation. Moreover, researchers should systematically investigate the structure of political participation over time, an issue that was beyond the scope of this article.

Acknowledgements

The author thanks Joshua K. Dubrow, Natalia Letki, Artur Pokropek, and Ilona Wysmułek for helpful comments on previous versions of this work.

Compliance with Ethical Standards

Conflicts of interest

The author has no relevant financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Supplementary Information

Below is the link to the electronic supplementary material.

Appendix

See Table 3.
Table 3
Descriptive statistics of indicators of political participation for each country
Item columns show the proportion of respondents reporting participation in each form.

| Country | contplt | wrkprty | wrkorg | badge | sgnptit | pbldmn | bctprd | pstplonl | N |
|---|---|---|---|---|---|---|---|---|---|
| AT | .22 | .06 | .16 | .06 | .24 | .07 | .24 | .16 | 1959 |
| BE | .17 | .04 | .21 | .09 | .26 | .08 | .16 | .19 | 1698 |
| CH | .15 | .06 | .19 | .05 | .36 | .06 | .33 | .15 | 1445 |
| CZ | .10 | .03 | .08 | .05 | .15 | .06 | .12 | .14 | 2164 |
| DE | .18 | .05 | .31 | .06 | .39 | .12 | .34 | .22 | 2707 |
| EE | .15 | .03 | .05 | .05 | .12 | .02 | .08 | .13 | 1959 |
| ES | .17 | .09 | .23 | .09 | .31 | .19 | .18 | .22 | 1891 |
| FI | .21 | .03 | .40 | .19 | .36 | .04 | .37 | .21 | 1858 |
| FR | .13 | .03 | .15 | .10 | .31 | .14 | .33 | .19 | 2004 |
| GB | .19 | .03 | .08 | .09 | .43 | .05 | .22 | .27 | 1881 |
| HU | .07 | .01 | .04 | .01 | .04 | .02 | .02 | .04 | 1549 |
| IE | .19 | .04 | .11 | .08 | .21 | .08 | .11 | .14 | 2662 |
| IL | .14 | .04 | .04 | .05 | .13 | .09 | .09 | .13 | 2448 |
| IS | .31 | .13 | .41 | .30 | .58 | .27 | .47 | .33 | 840 |
| IT | .11 | .02 | .07 | .06 | .15 | .08 | .08 | .13 | 2439 |
| LT | .09 | .04 | .03 | .03 | .06 | .01 | .07 | .08 | 1984 |
| NL | .20 | .04 | .34 | .04 | .30 | .02 | .16 | .18 | 1631 |
| NO | .25 | .08 | .34 | .35 | .39 | .10 | .27 | .29 | 1465 |
| PL | .09 | .03 | .06 | .06 | .12 | .06 | .05 | .06 | 1618 |
| PT | .23 | .05 | .16 | .06 | .23 | .06 | .09 | .21 | 1232 |
| RU | .06 | .04 | .05 | .05 | .09 | .05 | .03 | .05 | 2350 |
| SE | .18 | .05 | .39 | .19 | .48 | .11 | .48 | .28 | 1470 |
| SI | .14 | .04 | .13 | .03 | .19 | .03 | .13 | .10 | 1226 |
N, total sample size
Footnotes
1
To check whether the results were influenced by the choice of software and the implemented algorithm, I estimated the models using the Mplus 8 package (Muthén and Muthén 1998–2017) with the MLF estimator. The differences were negligible.
 
Literature
Almond, G. A., & Verba, S. (1963). The civic culture: Political attitudes and democracy in five nations. Princeton, NJ: Princeton University Press.
Barnes, S. H., & Kaase, M. (Eds.). (1979). Political action: Mass participation in five Western democracies. Beverly Hills, London: Sage.
Canivez, G. L. (2015). Bifactor modeling in construct validation of multifactored tests: Implications for understanding multidimensional constructs and test interpretation. In K. Schweizer & C. DiStefano (Eds.), Principles and methods of test construction: Standards and recent advancements (pp. 247–271). Göttingen, Germany: Hogrefe.
Fariss, C. J., Kenwick, M. R., & Reuning, K. (2020). Measurement models. In L. Curini & R. Franzese (Eds.), The SAGE handbook of research methods in political science and international relations (pp. 353–370). London: SAGE Publications Ltd.
García-Albacete, G. M. (2014). Young people's political participation in Western Europe. London: Palgrave Macmillan UK.
Hancock, G. R., & Mueller, R. O. (2001). Rethinking construct reliability within latent variable systems. In R. Cudeck, S. Du Toit, & D. Sörbom (Eds.), Structural equation modeling: Present and future - A Festschrift in honor of Karl Jöreskog (pp. 195–216). Lincolnwood, IL: Scientific Software International.
Jackman, S. (2008). Measurement. In J. M. Box-Steffensmeier, H. E. Brady, & D. Collier (Eds.), The Oxford handbook of political methodology. Oxford: Oxford University Press.
Kline, R. B. (2016). Principles and practice of structural equation modeling (Methodology in the social sciences). New York: The Guilford Press.
McDonald, R. P. (1999). Test theory: A unified treatment. New York, NY: Routledge.
Muthén, L. K., & Muthén, B. O. (1998–2017). Mplus user's guide (8th ed.). Los Angeles, CA: Muthén & Muthén.
Norris, P. (2007). Political activism: New challenges, new opportunities. In C. Boix & S. C. Stokes (Eds.), The Oxford handbook of comparative politics. New York: Oxford University Press.
Parry, G., Moyser, G., & Day, N. (1992). Political participation and democracy in Britain. Cambridge: Cambridge University Press.
Pattie, C. J., Seyd, P., & Whiteley, P. (2004). Citizenship in Britain: Values, participation, and democracy. Cambridge, New York: Cambridge University Press.
Reise, S. P., Bonifay, W., & Haviland, M. G. (2018). Bifactor modelling and the evaluation of scale scores. In P. Irwing, T. Booth, & D. J. Hughes (Eds.), The Wiley handbook of psychometric testing: A multidisciplinary reference on survey, scale and test development (pp. 675–707). Chichester: Wiley Blackwell.
Teorell, J., Torcal, M., & Montero, J. R. (2007). Political participation: Mapping the terrain. In J. W. van Deth, J. R. Montero, & Westholm (Eds.), Citizenship and involvement in European democracies: A comparative analysis. London: Routledge.
Thomassen, J. (2001). European Social Survey core questionnaire development – Chapter 5: Opinions about political issues (European Social Survey). London.
Trochim, W. M. K., & Donnelly, J. P. (2007). The research methods knowledge base. Mason, OH: Cengage Learning.
van Deth, J. W., & Theocharis, Y. (2018). Political participation in a changing world: Conceptual and empirical challenges in the study of citizen engagement. New York, NY: Taylor & Francis Ltd.
Verba, S., & Nie, N. H. (1972). Participation in America: Political democracy and social equality. New York: Harper and Row.
Verba, S., Nie, N. H., & Kim, J.-O. (1978). Participation and political equality: A seven-nation comparison. Cambridge: Cambridge University Press.