
Choice or Circumstance? Adjusting Measures of Foreign Policy Similarity for Chance Agreement

Published online by Cambridge University Press:  04 January 2017

Frank M. Häge*
Affiliation:
Department of Politics and Public Administration, University of Limerick, Limerick, Ireland. e-mail: frank.haege@ul.ie

Abstract

The similarity of states' foreign policy positions is a standard variable in the dyadic analysis of international relations. Recent studies routinely rely on Signorino and Ritter's (1999, Tau-b or not tau-b: Measuring the similarity of foreign policy positions. International Studies Quarterly 43:115–44) S to assess the similarity of foreign policy ties. However, S neglects two fundamental characteristics of the international state system: foreign policy ties are relatively rare and individual states differ in their innate propensity to form such ties. I propose two chance-corrected agreement indices, Scott's (1955, Reliability of content analysis: The case of nominal scale coding. The Public Opinion Quarterly 19:321–5) π and Cohen's (1960, A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20:37–46) κ, as viable alternatives. Both indices adjust the dyadic similarity score for a large number of common absent ties. Cohen's κ also takes into account differences in individual dyad members' total number of ties. The resulting similarity scores have stronger face validity than S. A comparison of their empirical distributions and a replication of Gartzke's (2007, The capitalist peace. American Journal of Political Science 51:166–91) study of the ‘Capitalist Peace’ indicate that the different types of measures are not substitutable.
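For intuition, the two chance corrections described in the abstract can be sketched in a few lines of Python. This is an illustrative sketch, not the author's replication code: it takes two states' binary tie profiles toward all third states, computes raw proportional agreement, and then discounts the agreement expected by chance, using the pooled marginal for Scott's π and each state's own marginal tie rate for Cohen's κ (which is what lets κ absorb differences in the dyad members' tie propensities).

```python
def chance_corrected(a, b):
    """a, b: equal-length lists of 0/1 ties of two states to third states.

    Returns (Scott's pi, Cohen's kappa). Illustrative only; the article's
    measures are computed on full dyadic alliance/voting data.
    """
    n = len(a)
    # Raw proportional agreement: share of third states treated identically.
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    # Scott's pi: expected agreement from the pooled marginal proportion.
    p_pool = (pa + pb) / 2
    exp_pi = p_pool ** 2 + (1 - p_pool) ** 2
    pi = (observed - exp_pi) / (1 - exp_pi)
    # Cohen's kappa: expected agreement from each state's own marginals,
    # so states with different overall tie propensities are treated differently.
    exp_kappa = pa * pb + (1 - pa) * (1 - pb)
    kappa = (observed - exp_kappa) / (1 - exp_kappa)
    return pi, kappa
```

Because most third-state ties are jointly absent, raw agreement (and hence S on binary data) is inflated by common zeros; both π and κ subtract that expected agreement, and κ additionally separates the two states' marginal tie rates rather than pooling them.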

Type
Research Article
Copyright
Copyright © The Author 2011. Published by Oxford University Press on behalf of the Society for Political Methodology 


References

Altfeld, Michael F., and Bueno de Mesquita, Bruce. 1979. Choosing sides in wars. International Studies Quarterly 23: 87–112.
Bapat, Navin A. 2007. The internationalization of terrorist campaigns. Conflict Management and Peace Science 24: 265–80.
Bearce, David H., Flanagan, Kristen M., and Floros, Katharine M. 2006. Alliances, internal information, and military conflict among member-states. International Organization 60: 595–625.
Bennett, D. Scott, and Rupert, Matthew C. 2003. Comparing measures of political similarity: An empirical comparison of S versus τb in the study of international conflict. Journal of Conflict Resolution 47: 367–93.
Braumoeller, Bear F. 2008. Systemic politics and the origins of Great Power conflict. American Political Science Review 102: 77–93.
Bueno de Mesquita, Bruce. 1975. Measuring systemic polarity. Journal of Conflict Resolution 19: 187–216.
Byrt, Ted, Bishop, Janet, and Carlin, John B. 1993. Bias, prevalence and kappa. Journal of Clinical Epidemiology 46: 423–9.
Cicchetti, Domenic V., and Feinstein, Alvan R. 1990. High agreement but low kappa II: Resolving the paradoxes. Journal of Clinical Epidemiology 43: 551–8.
Cohen, Jacob. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20: 37–46.
Cohen, Jacob. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin 70: 213–20.
Correlates of War Project. 2003. Formal interstate alliance dataset, 1816-2000. Version 3.03. http://www.correlatesofwar.org/COW2%20Data/Alliances/Alliance_v3.03_dyadic.zip (accessed June 12, 2009).
Correlates of War Project. 2005. National material capabilities dataset. Version 3.02. http://www.correlatesofwar.org/COW2%20Data/Capabilities/NMC_3.02.csv (accessed June 12, 2009).
Correlates of War Project. 2005. State system membership list. Version 2004.1. http://correlatesofwar.org/COW2%20Data/SystemMembership/system2004.csv (accessed January 8, 2008).
De Vaus, David A. 2001. Research design in social research. London: Sage.
DeRouen, Karl, and Heo, Uk. 2004. Reward, punishment or inducement? US economic and military aid, 1946-1996. Defence and Peace Economics 15: 453–70.
Fay, Michael P. 2005. Random marginal agreement coefficients: Rethinking the adjustment for chance when measuring agreement. Biostatistics 6: 171–80.
Feinstein, Alvan R., and Cicchetti, Domenic V. 1990. High agreement but low kappa I: The problems of two paradoxes. Journal of Clinical Epidemiology 43: 543–9.
Gartzke, Erik. 1998. Kant we all just get along? Opportunity, willingness, and the origins of the democratic peace. American Journal of Political Science 42: 1–27.
Gartzke, Erik. 2007. The capitalist peace. American Journal of Political Science 51: 166–91.
Kastner, Scott L. 2007. When do conflicting political relations affect international trade? Journal of Conflict Resolution 51: 664–88.
Kellstedt, Paul M., and Whitten, Guy D. 2008. The fundamentals of political science research. Cambridge: Cambridge University Press.
Kendall, Maurice G. 1938. A new measure of rank correlation. Biometrika 30: 81–93.
Kirk, Jennifer. 2010. ‘rmac’: Calculate RMAC or FMAC agreement coefficients. R package, Version 0.9. http://cran.r-project.org/web/packages/rmac/ (accessed May 3, 2011).
Krippendorff, Klaus. 1970. Bivariate agreement coefficients for reliability of data. Sociological Methodology 2: 139–50.
Krippendorff, Klaus. 2004. Measuring the reliability of qualitative text analysis data. Quality and Quantity 38: 787–800.
Lantz, Charles A., and Nebenzahl, Elliott. 1996. Behavior and interpretation of the κ statistic: Resolution of the two paradoxes. Journal of Clinical Epidemiology 49: 431–4.
Lin, Lawrence I.-Kuei. 1989. A concordance correlation coefficient to evaluate reproducibility. Biometrics 45: 255–68.
Long, Andrew G., and Leeds, Brett Ashley. 2006. Trading for security: Military alliances and economic agreements. Journal of Peace Research 43: 433–51.
Morrow, James D., Siverson, Randolph M., and Tabares, Tressa E. 1998. The political determinants of international trade: The major powers, 1907-90. American Political Science Review 92: 649–61.
Neumayer, Eric. 2003. What factors determine the allocation of aid by Arab countries and multilateral agencies? Journal of Development Studies 39: 134–47.
Oneal, John R., and Russett, Bruce. 1999. Assessing the liberal peace with alternative specifications: Trade still reduces conflict. Journal of Peace Research 36: 423–42.
R Development Core Team. 2011. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. http://www.R-project.org.
Scott, John. 2000. Social network analysis: A handbook. London: Sage.
Scott, William A. 1955. Reliability of content analysis: The case of nominal scale coding. The Public Opinion Quarterly 19: 321–5.
Shankar, Viswanathan, and Bangdiwala, Shrikant I. 2008. Behavior of agreement measures in the presence of zero cells and biased marginal distributions. Journal of Applied Statistics 35: 445–64.
Signorino, Curtis S., and Ritter, Jeffrey M. 1999. Tau-b or not tau-b: Measuring the similarity of foreign policy positions. International Studies Quarterly 43: 115–44.
Sim, Julius, and Wright, Chris C. 2005. The kappa statistic in reliability studies: Use, interpretation, and sample size requirements. Physical Therapy 85: 257–68.
Stone, Randall W. 2004. The political economy of IMF lending in Africa. American Political Science Review 98: 577–91.
Sweeney, Kevin, and Keshk, Omar M. G. 2005. The similarity of states: Using S to compute dyadic interest similarity. Conflict Management and Peace Science 22: 165–87.
Vach, Werner. 2005. The dependence of Cohen's kappa on the prevalence does not matter. Journal of Clinical Epidemiology 58: 655–61.
Voeten, Eric, and Merdzanovic, Adis. 2009. United Nations General Assembly voting data. hdl:1902.1/12379UNF:3:Hpf6qOk-DdzzvXF9m66yLTg==.http://dvn.iq.harvard.edu/dvn/dv/Voeten/faces/study/StudyPage.xhtml?studyId=38311&;studyListing-Index=0_dee53f12c760141b21c251525331 (accessed June 12, 2009).
Zegers, Frits. 1986. A family of chance-corrected association coefficients for metric scales. Psychometrika 51: 559–62.
Zwick, Rebecca. 1988. Another look at interrater agreement. Psychological Bulletin 103: 374–8.
Supplementary material

Häge supplementary material: Supporting Information (PDF, 74.9 KB)