A frequently asked question in mathematics education research is how to design didactical situations that best support students in developing their mathematical knowledge. One suggestion for enhancing learning involves the use of technology, in this case dynamic software. Several studies have compared the learning outcomes of students using dynamic software with those of students using pen and paper, and they report that students using dynamic software outperform their pen-and-paper counterparts, as the technology offers an interactive environment in which to explore and engage in reasoning and creative problem solving (Chan and Leung 2014; Diković 2009). Another question concerns task design: do students learn better solving guided tasks, where they are given instructions on how to solve (parts of) the task, or is it more beneficial to provide them with problems, that is, unguided non-routine tasks that they solve by constructing at least part of the methods themselves? Studies comparing learning outcomes from these two designs favor the unguided approach. Didactical situations designed to invite students to learn mathematics by engaging in mathematical reasoning and problem solving, constructing their own methods, are reported as beneficial for learning (Jonsson et al. 2014; Norqvist 2017). For teachers, these findings raise questions about task design when introducing dynamic software into classrooms with a view to improving students’ learning. That is, do students’ learning outcomes differ depending on whether they are required to construct their own methods for solving a task or whether, instead, they are given instructions for solving it?
The present study will investigate whether the reported learning benefits of using dynamic software are realized regardless of whether students are provided with guidance. This will be done by examining the practice performance and learning outcomes of students who, supported by GeoGebra, solve guided non-routine tasks, compared with those of students who solve unguided non-routine tasks. The hypotheses examined in this study will be presented later.
1.1 Dynamic Software, Problem Solving and Learning Outcomes
The number of studies looking into learning outcomes when dynamic software is used in problem solving has increased in the last decade. In a review, Chan and Leung (2014) examined nine quasi-experimental studies, comprising a total of 587 participants, that compared groups of students using software such as Cabri Geometry and Geometer’s Sketchpad with students working with pen and paper. They found a distinct effect size in favor of the use of dynamic software. Several similar studies, in which students used GeoGebra for problem solving, show its use to have positive effects on students’ learning outcomes. In posttests, students working with GeoGebra have outperformed students using pen and paper. These studies show significant positive effects on, for example, students’ development of conceptual and procedural knowledge of functions (Diković 2009; Zulnaidi and Zakaria 2012; Kepceoğlu 2016), as well as geometry (Bhagat and Chang 2015; Dogan and İçel 2011; Saha et al. 2010; Shadaan and Eu 2013; Zengin et al. 2012).
Numerous characteristics of dynamic software have been identified and suggested as explanations for these positive effects. Dynamic software such as GeoGebra allows students to interact with geometric and algebraic objects. For instance, functions can be defined algebraically and then changed dynamically (Hohenwarter and Jones 2007). That is, students may adjust the algebraic representation of a function and observe the change in the graph, or they can drag the graph and observe the way the formula changes in response. If anything is added or changed in one representation of a function, the others are automatically altered. This dynamic linking of the representations of a function, formula, graph, and table, is seen as beneficial in helping students develop an understanding of functions (Coleman et al. 2015; Ferrara et al. 2006; Pierce et al. 2011). Furthermore, when students interact with the software they receive immediate visual responses to their actions. This kind of instant feedback, visualizing students’ ideas and confirming or falsifying their assumptions, has been shown to make problem solving more efficient (Arcavi and Hadas 2000; Marrades and Gutiérrez 2000). Moreover, since the software takes care of time-consuming constructions, with tools such as drag and drop and sliders, students can construct multiple numerical variations of a mathematical object. These variations can be used to explore, contrast, and generalize concepts of, for example, functions (Leung 2008). Thus, dynamic software’s ability to visualize relations between representations, offer feedback on students’ actions, and provide multiple variations is outlined as beneficial for learning mathematics. That is, there are studies showing that dynamic software may support students’ understanding of mathematics (Diković 2009; Leung 2008), enhance their reasoning (Natsheh and Karsenty 2014; Granberg and Olsson 2015), and encourage them to explore, that is, to try out multiple ideas during problem solving (Fahlberg-Stojanovska and Stojanovski 2009; Hohenwarter and Jones 2007; Hähkiöniemi and Leppäaho 2012). These dynamic features of software such as GeoGebra are described as difficult to reproduce when working with pen and paper, and are therefore used to explain the positive effects of using such software.
These studies focus on differences in learning outcomes by comparing students using dynamic software with students using pen and paper. However, merely adding software to support students does not guarantee enhanced learning (Lou et al. 2001; Mullins et al. 2011). Other aspects, such as the way a given task is designed, may play an important role, and there is a fair amount of research looking into questions about task design and learning outcomes.
1.2 Task Design and Learning Outcomes
A common question in educational research is whether students learn best from less guided approaches to mathematical content or whether they need detailed instructions on how to approach specific tasks. Among the researchers advocating direct instruction, Kirschner et al. (2006) claim that approaches offering no or minimal guidance are likely to be ineffective. One of their arguments is that this kind of problem-based learning makes heavy demands on working memory, which in turn prevents students from accumulating knowledge in long-term memory. Mayer (2004) presents similar conclusions in his review of three decades of studies, arguing that research since the 1960s has shown that guided methods of instruction are more effective for learning than pure discovery.
Hiebert and Grouws (2007), on the other hand, together with researchers such as Brousseau (1997) and Schoenfeld (1985), represent another approach. They claim that students develop a deeper understanding of mathematics when they are required to struggle, in a positive sense, with important mathematical concepts. According to Hiebert and Grouws, the struggle is initiated when a student’s prior knowledge is insufficient to solve a given task and no solution method is provided. A productive struggle involves retrieval, reconstruction, and perhaps correction of prior knowledge, along with interpretation of the task at hand and construction of new knowledge in relation to what is already known (Hiebert and Grouws 2007).
In their review, Lee and Anderson (2013) compared didactical designs based on direct instruction with designs based on learning through discovery with no or limited guidance. Both designs were shown to have benefits; however, although direct instruction provides correct solutions, is time efficient, and reduces demands on working memory, it was shown to lead to superficial, rote learning methods that are poorly remembered. Lithner (2008) describes how rote learning relates to students’ line of thinking, or reasoning, which in turn relates to the design of the given task. When students solve routine tasks they recall methods learned earlier, or they make use of procedures provided in instructions for solving the task. These students engage in what Lithner defines as algorithmic reasoning (AR); that is, they follow procedures constructed to yield fast and accurate answers but offering no broader context or meaning. Solving non-routine tasks, on the other hand, requires students to explore mathematical concepts, construct their own methods, and justify those methods using arguments anchored in intrinsic mathematics, a line of thinking Lithner describes as creative mathematical reasoning (CMR).
This kind of approach, engaging students in constructing (parts of) their methods, has implications for task design. To enhance learning and avoid rote learning methods, students need to struggle with mathematical problems; in other words, they must try to solve unguided non-routine tasks without the benefit of detailed instructions or memorized procedures. Jonsson et al. (2014) compared the learning outcomes of two groups of students: one that practiced on a series of guided tasks promoting algorithmic reasoning, and another that practiced on the same tasks with less guidance, so that they were encouraged to explore and engage in creative reasoning. In this study, the students in the former group were provided with formulas for solving the tasks, while the students in the latter (intervention) group were required to construct the formulas themselves. The students working with the guided tasks were more successful in correctly solving the tasks during practice; however, the students in the intervention group outperformed the others on the posttest. Norqvist (2017) replicated the study by Jonsson et al. (2014), but added explanations to the guided tasks as to why the provided methods would work. The study showed the same results: the guided students were more successful during practice, but the unguided students outperformed them on posttests. These findings support the idea that merely providing students with direct instructions may promote “unproductive success,” which results from “conditions that maximize performance in the initial learning but may not maximize learning in the longer term” (Kapur 2016, p. 1).
Although it is not the focus of the present study, it is worth noting that there is a fair amount of research combining knowledge construction with instruction. For example, Kapur (2010, 2011) compared the learning outcomes of students who were given instructions from the teacher either before or after solving a given task. The intervention group was given an unguided task involving a concept they had not yet learned and for which they needed to construct a method. After the construction phase, the students were given a consolidation lesson in which they received instruction on the concept. The didactical design for the control group was reversed: first they were given direct instruction, and then they worked on the same tasks as the intervention groups. The intervention groups scored higher on the posttests, indicating that the process of construction has positive effects on students’ learning. Similar findings contrasting a didactical design of construction-before-instruction with a traditional design of instruction-before-practice are reported in other studies, for example, Hiebert and Stigler (2004), Schwartz and Martin (2004), and Schwartz et al. (2011). This way of combining construction and guidance is also advocated by Hmelo-Silver et al. (2007). They emphasize the importance of promoting competencies such as reasoning and problem solving, and argue that not all constructivist pedagogical approaches can be grouped together as unproductive, unguided discovery learning. The cognitive load of working with unguided tasks may be reduced by providing students with supporting tools such as software, framing the activity with clear goals and rules, and scaffolding students by, for instance, posing questions that challenge them to explain.
This question about task design, that is, whether or not students should be provided with methods for solving a task, has long been in focus, and it arises again when designing didactical situations that include the use of dynamic software.
1.3 Task Design and Dynamic Software
The features of dynamic software such as GeoGebra that are reported as beneficial for learning, such as dynamic visualization of multiple representations and instant feedback, are available to students regardless of task design. Students, however, usually engage solely in the activities needed to solve a given problem (Joubert 2013). This is in line with studies showing that the task design itself may influence how students use the software (Fahlberg-Stojanovska and Stojanovski 2009; Doorman et al. 2007; Hitt and Kieran 2009; Laborde 2001). Leung (2011), for example, points out that for students to benefit from the features of dynamic software, the given task should invite them to explore, reconstruct, and explain mathematical concepts and relations. This brings us to the question of whether and how differences in task design (guided or unguided) will affect learning outcomes when students are supported by dynamic software.