The purpose of this study was to determine the usage rates, measurement equivalence, and potential outcome differences between mobile and non-mobile device-based deliveries of an unproctored, non-cognitive assessment.
This study utilized a quasi-experimental design based on archival data obtained from applicants who completed a non-cognitive assessment on a mobile (n = 7,743; e.g., smartphones, tablet computers) or non-mobile (n = 929,341; e.g., desktop computers) device as part of an operational, high-stakes pre-employment selection process.
One percent of applicants used mobile devices to complete the assessment. Multiple-group confirmatory factor analysis indicated the assessment was equivalent across mobile and non-mobile devices at the configural, metric, scalar, and latent mean levels. Comparisons of observed score means using one-way and factorial ANOVAs showed that mobile versus non-mobile administration produced no practically significant score differences, either overall or within applicant demographic subgroups.
Industry and technological trends suggest mobile device usage will only increase. Thus, demonstrating that mobile device functionality and hardware characteristics do not change the psychometric functioning or applicant outcomes for a non-cognitive, text-based selection assessment is critical to talent assessment.
This study provides the first empirical examination of the usage of mobile devices to complete talent assessments and their impact on assessment properties and applicant outcomes, and serves as the foundation for future research and application of this growing technological trend in pre-employment assessment.
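The two decision rules underlying the analyses described above can be sketched in a few lines: nested invariance models are compared with the conventional ΔCFI ≤ .01 criterion (Cheung & Rensvold, 2002), and observed-score differences are judged against Cohen's (1988) small-effect benchmark. The fit indices and score statistics below are illustrative placeholders, not values from the study.

```python
# Minimal sketch of the two decision rules used in studies like this one.
# 1) Measurement invariance: retain a more constrained model (metric, scalar)
#    if CFI worsens by no more than .01 relative to the less constrained model.
# 2) Practical significance: treat an observed mean difference as negligible
#    if Cohen's d falls below the small-effect threshold (|d| < .20).
# All numeric inputs below are hypothetical, for illustration only.
import math

def invariance_retained(cfi_baseline: float, cfi_constrained: float,
                        threshold: float = 0.01) -> bool:
    """Retain the constrained model if CFI drops by no more than `threshold`."""
    return (cfi_baseline - cfi_constrained) <= threshold

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Sequential invariance sequence: configural -> metric -> scalar (illustrative CFIs)
fits = {"configural": 0.962, "metric": 0.960, "scalar": 0.955}
print(invariance_retained(fits["configural"], fits["metric"]))  # metric holds?
print(invariance_retained(fits["metric"], fits["scalar"]))      # scalar holds?

# Mobile vs. non-mobile observed-score comparison (illustrative statistics,
# group sizes taken from the study's sample description)
d = cohens_d(3.52, 0.61, 7743, 3.49, 0.63, 929341)
print(abs(d) < 0.20)  # below the small-effect benchmark?
```

In practice the CFI values would come from fitting the multiple-group CFA models in dedicated SEM software (the study used Mplus), and the mean comparison would accompany the ANOVA significance tests rather than replace them.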
Internet-Based, Unproctored Assessments on Mobile and Non-Mobile Devices: Usage, Measurement Equivalence, and Outcomes
A. James Illingworth
Neil A. Morelli
John C. Scott
Scott L. Boyd
Springer US