
Effects of an Online Stimulus Equivalence Teaching Procedure on Research Design Open-Ended Questions Performance of International Undergraduate Students

ORIGINAL ARTICLE
The Psychological Record

Abstract

The purpose of the present study was to assess whether training in matching-to-sample (MTS) tasks would yield not only new MTS performance but also written topography-based responses involving research design names, definitions, notations, and examples in four international undergraduates. Thirty-six experimental stimuli, composed of nine research design names and their corresponding definitions, notations, and examples, were presented in an MTS format during teaching sessions and during tests of emergent conditional relations. Each of the six topography-based response probes consisted of nine open-ended questions. Participants learned all conditional relations and showed emergence of symmetric and transitive relations, as well as of topography-based responses. The present study provides some steps toward the use of stimulus equivalence for international delivery of content.
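
For readers less familiar with the stimulus equivalence paradigm, the minimal sketch below illustrates the logic of derived relations that the study relies on: once the name-definition (A-B), name-notation (A-C), and name-example (A-D) conditional relations are taught via MTS, symmetry and transitivity imply the remaining relations among members of a class. This is an illustrative sketch only, assuming the set labels used in the article (A = names, B = definitions, C = notations, D = examples); it is not part of the study's materials or procedures.

```python
# Illustrative sketch (not from the article): computing the relations expected
# to emerge, by symmetry and transitivity, from trained conditional relations.
from itertools import product

def equivalence_closure(trained):
    """Return all sample-comparison pairs implied by symmetry and transitivity."""
    relations = set(trained)
    changed = True
    while changed:
        changed = False
        # Symmetry: if A1-B1 was trained, B1-A1 is expected to emerge.
        for (s, c) in list(relations):
            if (c, s) not in relations:
                relations.add((c, s))
                changed = True
        # Transitivity: A1-B1 and B1-C1 imply A1-C1 (reflexive pairs excluded).
        for (s1, c1), (s2, c2) in product(list(relations), repeat=2):
            if c1 == s2 and s1 != c2 and (s1, c2) not in relations:
                relations.add((s1, c2))
                changed = True
    return relations

# Trained relations for one equivalence class (class member 1): A1-B1, A1-C1, A1-D1.
trained = {("A1", "B1"), ("A1", "C1"), ("A1", "D1")}
emergent = sorted(equivalence_closure(trained) - trained)
print(emergent)
# Includes the symmetric relations (B1-A1, C1-A1, D1-A1) and the combined
# symmetry/transitivity relations (B1-C1, C1-B1, B1-D1, D1-B1, C1-D1, D1-C1).
```

With nine design names, the same logic applies to each of the nine classes: directly taught A-B, A-C, and A-D relations are expected to yield the remaining relations without direct teaching.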


Notes

  1. Even though the authors used A-B, A-C, and A-D to define the relations tested in Final Test 2, the prime symbol ( ′ ) was added to highlight that what was requested from the participants was a topography-based response.

  2. Monetary payment is not allowed in research studies in Brazil.

  3. When this study was designed, Campbell and Stanley’s (1963) book was the most cited reference in the area.

  4. The nine research design names had been previously determined as described at the beginning of the “Experimental Stimuli” section.

  5. Notations are preestablished graphic representations of research designs. Therefore, they cannot be manipulated in terms of their format and components.

  6. Part A tested relations between names and written definitions; Part B, between names and written notations; and Part C, between names and written new examples.

  7. It is important to mention that instructions on note taking were not provided.

  8. BD and DB relations were not tested because pilot data showed that participants responded at 70% correct or higher on these relations before MTS teaching was presented.

References

  • Bandini, C., & de Rose, J. (2006). Os processos [The processes]. In A abordagem behaviorista do comportamento novo [The behavioral approach to new behavior] (pp. 63–84). Santo Andre, SP: ESETec.

  • Barlow, D., Nock, M., & Hersen, M. (2009). Single case experimental designs: Strategies for studying behavior for change (3rd ed.). Boston, MA: Pearson Education.

  • Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand-McNally.

  • Cozby, P. (2008). Methods in behavioral research. New York, NY: McGraw-Hill.

  • Critchfield, T., & Fienup, D. (2010). Using stimulus equivalence technology to teach statistical inference in a group setting. Journal of Applied Behavior Analysis, 43(4), 763–768.

  • de Rose, J., de Souza, D., & Hanna, E. (1996). Teaching reading and spelling: Exclusion and stimulus equivalence. Journal of Applied Behavior Analysis, 29(4), 451–469.

  • Fields, L., Travis, R., Roy, D., Yadlovker, E., de Aguiar-Rocha, L., & Sturmey, P. (2009). Equivalence class formation: A method for teaching statistical interactions. Journal of Applied Behavior Analysis, 42(3), 575–593. doi:10.1901/jaba.2009.42-575.

  • Fienup, D., & Critchfield, T. (2010). Efficiently establishing concepts of inferential statistics and hypothesis decision making through contextually controlled equivalence classes. Journal of Applied Behavior Analysis, 43(3), 437–462. doi:10.1901/jaba.2010.43-437.

  • Fienup, D., & Critchfield, T. (2011). Transportability of equivalence-based programmed instruction: Efficacy and efficiency in a college classroom. Journal of Applied Behavior Analysis, 44(3), 435–450. doi:10.1901/jaba.2011.44-435.

  • Fienup, D., Covey, D., & Critchfield, T. (2010). Teaching brain-behavior relations economically with stimulus equivalence technology. Journal of Applied Behavior Analysis, 43(1), 19–33. doi:10.1901/jaba.2010.43-19.

  • Green, G., & Saunders, R. (1998). Stimulus equivalence. In K. Lattal & M. Perone (Eds.), Handbook of research methods in human operant behavior (pp. 229–262). New York, NY: Plenum Press.

  • Horner, R., & Baer, D. (1978). Multiple-probe technique: A variation on the multiple baseline. Journal of Applied Behavior Analysis, 11(1), 189–196.

  • Johnson, K., & Street, E. (2004). The Morningside model of generative instruction: What it means to leave no child behind. Beverly, MA: Cambridge Center for Behavioral Studies.

  • Loft, K. (2009). The virtues of venom. Retrieved from http://fcit.usf.edu/fcat10r/home/sample-tests/virtues-of-venom/index.html.

  • Lovett, S., Rehfeldt, R., Garcia, Y., & Dunning, J. (2011). Comparison of a stimulus equivalence protocol and traditional lecture for teaching single-subject designs. Journal of Applied Behavior Analysis, 44(4), 819–833. doi:10.1901/jaba.2011.44-819.

  • Matos, M., & Passos, M. (2010). Emergent verbal behavior and analogy: Skinnerian and linguistic approaches. The Behavior Analyst, 33(1), 65–81.

  • Neef, N., Perrin, C., Haberlin, A., & Rodrigues, L. (2011). Studying as fun and games: Effects on college students’ quiz performance. Journal of Applied Behavior Analysis, 44(4), 897–901. doi:10.1901/jaba.2011.44-897.

  • Ninness, C., Rumph, R., McCuller, G., Vasquez, E., Harrison, C. A. F., & Bradfield, A. (2005). A relational frame and artificial neural network approach to computer-interactive mathematics. The Psychological Record, 55(1), 135–153.

  • Ninness, C., Barnes-Holmes, D., Rumph, R., McCuller, G., Ford, A. M., Payne, R., & Elliott, M. P. (2006). Transformations of mathematical and stimulus functions. Journal of Applied Behavior Analysis, 39(3), 299–321.

  • Ninness, C., Dixon, M., Barnes-Holmes, D., Rehfeldt, R. A., Rumph, R., McCuller, G., & McGinty, J. (2009). Constructing and deriving reciprocal trigonometric relations: A functional analytic approach. Journal of Applied Behavior Analysis, 42(2), 191–208. doi:10.1901/jaba.2009.42-191.

  • Odom, S., Brantlinger, E., Gersten, R., Horner, R., Thompson, B., & Harris, K. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71(2), 137–148.

  • Sidman, M. (1971). Reading and auditory-visual equivalences. Journal of Speech and Hearing Research, 14(1), 5–13.

  • Skinner, B. F. (1953). Science and human behavior. New York, NY: The MacMillan Company.

  • Skinner, B. F. (2002). Verbal behavior. Cambridge, MA: B. F. Skinner Foundation. Original work published 1957.

  • Sota, M., Leon, M., & Layng, T. (2011). Thinking through text comprehension II: Analysis of verbal and investigative repertoires. The Behavior Analyst Today, 12(1), 12–20.

  • Tiemann, P., & Markle, S. (1990). Analyzing instructional content (4th ed.). Champaign, IL: Stipes.

  • Twyman, J., Layng, T., Stikeleather, G., & Hobbins, K. (2005). A non-linear approach to curriculum design: The role of behavior analysis in building an effective reading program. In W. L. Heward et al. (Eds.), Focus on behavior analysis in education (Vol. 3, pp. 55–68). Upper Saddle River, NJ: Merrill/Prentice Hall.

  • Walker, B., & Rehfeldt, R. (2012). An evaluation of the stimulus equivalence paradigm to teach single-subject design to distance education students via Blackboard. Journal of Applied Behavior Analysis, 45(2), 329–344. doi:10.1901/jaba.2012.45-329.

  • Walker, B., Rehfeldt, R., & Ninness, C. (2010). Using the stimulus equivalence paradigm to teach course material in an undergraduate rehabilitation course. Journal of Applied Behavior Analysis, 43(4), 615.


Acknowledgments

This research was conducted at the University of Kansas as part of the first author’s requirements for a master’s degree in applied behavioral science. The study and the preparation of this article were supported in part by the National Institute on Disability and Rehabilitation Research, U.S. Department of Education, grant H133B060018. We thank Daniel J. Schober, Melissa Gard, and Jason Hirst for their helpful comments at different stages of this manuscript.

Author information

Corresponding author

Correspondence to Ana Carolina Sella.

Appendix A

 

Each entry below lists the design name (Set A), the design explanation or definition (Set B), the notation (Set C), and an example (Set D).

1. The One-Shot Case Study

Definition (Set B): The dependent variable is measured (O) only after the independent variable (X) is introduced. The independent variable is introduced before the measure of the dependent variable. Only one group is needed and there is no control group. There is no randomization.

Notation (Set C): X O1

Example (Set D): The dependent variable is heart rate and it is measured after the independent variable is introduced. The independent variable is jogging and it is introduced before measuring the heart rates. The participants are 25 students who will not be divided into groups. There is no randomization.

2. The One-Group Pretest–Posttest Design

Definition (Set B): The dependent variable is measured before (O1) and after (O2) the independent variable (X) is introduced. The independent variable is introduced after the first measure of the dependent variable. Only one group is needed and there is no control group. There is no randomization.

Notation (Set C): O1 X O2

Example (Set D): The dependent variable is heart rate and it is measured before and after the independent variable is introduced. The independent variable is jogging and it is introduced after the first measure of heart rates. The participants are 25 students who will not be divided into groups. There is no randomization.

3. The Static Group Comparison

Definition (Set B): The dependent variable is measured (O) for both groups only after the independent variable (X) is introduced to the experimental group. The independent variable is introduced to the experimental group before the measure of the dependent variable. Two groups are needed; one is the control group. There is no randomization.

Notation (Set C):
X  O1
__ __ __ __
   O1

Example (Set D): The dependent variable is heart rate and it is measured for both groups after the independent variable is introduced to the experimental group. The independent variable is jogging and it is introduced to the experimental group before heart rates are measured in both groups. The participants are 50 students who will be divided into two groups: experimental group and control group. There is no randomization.

4. Nonequivalent Control-Group Design

Definition (Set B): The dependent variable is measured for both groups before (O1) and after (O2) the independent variable (X) is introduced to the experimental group. The independent variable is introduced to the experimental group after the first measure of the dependent variable. Two groups are needed; one is the control group. There is no randomization.

Notation (Set C):
O1 X O2
__ __ __ __
O1   O2

Example (Set D): The dependent variable is heart rate and it is measured for both groups before and after the independent variable is introduced. The independent variable is jogging and it is introduced to the experimental group after the first measure of heart rates. The participants are 50 students who will be divided into two groups: experimental group and control group. There is no randomization.

5. Counterbalanced Design

Definition (Set B): The dependent variable is measured for all groups (O) after each one of the four independent variables (X1, X2, X3, X4) is introduced for each experimental group. Each independent variable is introduced to all groups but in a different order for each group. Four groups are needed, but there is no “true control group”, since the independent variable is introduced for all groups. There is no randomization.

Notation (Set C):
         Time 1  Time 2  Time 3  Time 4
Group A  X1O     X2O     X3O     X4O
__ __ __ __ __ __ __ __ __ __ __
Group B  X2O     X4O     X1O     X3O
__ __ __ __ __ __ __ __ __ __ __
Group C  X3O     X1O     X4O     X2O
__ __ __ __ __ __ __ __ __ __ __
Group D  X4O     X3O     X2O     X1O

Example (Set D): The dependent variable is heart rate and it is measured for all groups, after each independent variable is introduced for the groups. The independent variables can be jogging (X1), swimming (X2), dancing (X3), and walking (X4), and each one of them is introduced to all groups, but in a different order for each group. The participants are one hundred students who will be divided into four groups. There is no randomization.

6. The Multiple Time Series Design

Definition (Set B): The dependent variable is measured several times (O), for both groups, before and after the independent variable (X) is introduced to the experimental group. The independent variable is introduced to the experimental group after several measures of the dependent variable. Two groups are needed; one is the control group. There is no randomization.

Notation (Set C):
O O O O X O O O O
__ __ __ __ __ __ __ __ __
O O O O   O O O O

Example (Set D): The dependent variable is heart rate and it is measured several times, for both groups, before and after the independent variable is introduced. The independent variable is jogging and it is introduced to the experimental group after several measures of heart rates. The participants are 50 students who will be divided into two groups: experimental group and control group. There is no randomization.

7. The Pretest-Posttest Control Group Design

Definition (Set B): The dependent variable is measured for both groups before (O1) and after (O2) the independent variable (X) is introduced to the experimental group. The independent variable is introduced to the experimental group after the first measure of the dependent variable. Two groups are needed; one is the control group. There is randomization.

Notation (Set C):
R O1 X O2
R O1   O2

Example (Set D): The dependent variable is heart rate and it is measured for both groups before and after the independent variable is introduced. The independent variable is jogging and it is introduced to the experimental group after the first measure of heart rates. The participants are 50 students who will be randomly assigned to either one of two groups: experimental group and control group. There is randomization.

8. The Solomon Four-Group Design

Definition (Set B): The dependent variable is measured for two groups before (O1 and O3) and after (O2 and O4) the independent variable (X) is introduced to the experimental groups. For the other two groups, the dependent variable is measured only after (O5 and O6) the independent variable is introduced to the experimental groups. The independent variable is introduced to the two experimental groups. For the first experimental group (Group A), the independent variable is introduced after the first measure of the dependent variable; for the other experimental group (Group C), it is introduced before. Four groups are needed; two are control groups. There is randomization.

Notation (Set C):
Group A  R O1 X O2
Group B  R O3   O4
Group C  R    X O5
Group D  R      O6

Example (Set D): The dependent variable is heart rate; it is measured before and after the independent variable is introduced for two of the four groups, and it is measured only after in the other two groups. The independent variable is jogging and it is introduced to the experimental groups differently: for Group A, it is introduced after the dependent variable is measured; for Group C, it is presented before the dependent variable is measured. The participants are 100 students who will be randomly assigned to one of four groups: experimental group A or C, or control group B or D. There is randomization.

9. The Posttest-Only Control Group Design

Definition (Set B): The dependent variable is measured for both groups only after (O) the independent variable (X) is introduced to the experimental group. The independent variable is introduced to the experimental group before the measure of the dependent variable. Two groups are needed; one is the control group. There is randomization.

Notation (Set C):
R X O1
R   O1

Example (Set D): The dependent variable is heart rate and it is measured for both groups after the independent variable is introduced to the experimental group. The independent variable is jogging and it is introduced to the experimental group before heart rates (dependent variable) are measured in both groups. The participants are 50 students who will be randomly assigned to either one of two groups: experimental group and control group. There is randomization.
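
A reader-oriented sketch follows (not part of the study materials): it encodes a few of the Appendix A notations as plain data and reads off basic properties of each design, using the standard conventions R = random assignment, X = introduction of the independent variable, O = measurement of the dependent variable, with one notation row per group.

```python
# Illustrative sketch only: a handful of the Appendix A notations as data,
# with a helper that summarizes each design from its notation rows.
NOTATIONS = {
    "The One-Shot Case Study": ["X O1"],
    "The One-Group Pretest-Posttest Design": ["O1 X O2"],
    "The Pretest-Posttest Control Group Design": ["R O1 X O2",
                                                  "R O1   O2"],
    "The Posttest-Only Control Group Design": ["R X O1",
                                               "R   O1"],
}

def describe(rows):
    """Summarize a design: number of groups, randomization, control group."""
    return {
        "groups": len(rows),                                        # one row per group
        "randomization": all(r.lstrip().startswith("R") for r in rows),
        "control group": any("X" not in r for r in rows),           # a group never exposed to X
    }

for name, rows in NOTATIONS.items():
    print(name, describe(rows))
# e.g., The Pretest-Posttest Control Group Design ->
#       {'groups': 2, 'randomization': True, 'control group': True}
```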

Cite this article

Sella, A.C., Mendonça Ribeiro, D. & White, G.W. Effects of an Online Stimulus Equivalence Teaching Procedure on Research Design Open-Ended Questions Performance of International Undergraduate Students. Psychol Rec 64, 89–103 (2014). https://doi.org/10.1007/s40732-014-0007-1
