Introduction

There is a growing demand for technological proficiency in today’s science and technology workforce. According to a report by Stine and Matthews (2009) produced for the Congressional Research Service, demand for workers in the fields of science, technology, engineering and mathematics (STEM) is growing rapidly. However, the number of American students studying in these sectors is insufficient to meet this need. It thus becomes important, even urgent, to attract young people to these fields of study.

Life is also being transformed by technological advancement in Canada, where the country faces ‘a continuously evolving digital revolution’, a perception of transformative change noted by Barrell (2001, p. 18). A decade later, in 2010, the federal government launched an initiative to reinforce the digital economy by strengthening digital competence, which it identified as a crucial factor of success, warning of ‘a risk that the gap in skills could grow as the process of learning is increasingly connected to digital competence’ (Government of Canada 2010).

To this end, the use of robotics materials developed by LEGO (Mindstorms) is a rapidly emerging phenomenon at both the primary and secondary school levels. These materials allow the construction of a robot using a variety of pieces, sensors and motors. They also come with software to program actions such as moving in different directions, making turns, and even more complex tasks such as throwing a ball. We are interested in studying the learning opportunities that emerge from the use of robotics in the classroom and beyond, based on students’ and teachers’ perceptions, as well as in the issue of assessing learning outcomes in general, particularly those connected to the mathematics curriculum.

In 2007, the New Brunswick provincial government issued a call to teachers to apply for funding to support innovative teaching approaches in schools with the goal of enhancing classroom teaching through more inclusive approaches in order to meet the interests of all students while providing them with better opportunities to learn mathematics, science and literacy (Porter and AuCoin 2012). One such initiative (called RoboMaTIC) gave the teachers of one rural elementary (K–8) school the opportunity to use LEGO Mindstorms robots and individual laptops with their students. Additional time was allocated for teacher professional development, in order to help them learn how to teach in such an augmented environment and to elaborate and implement robotics-based learning scenarios. The CAMI (Communauté d’apprentissages multidisciplinaires interactifs) research team from the Université de Moncton was invited to collaborate with teachers during the implementation of several robotics-based learning scenarios with two groups of students from Grades 5 and 6.

Blanchard (2009) analyzed the literature describing 21st-century learners, the so-called Net Generation. When addressing the diversity of needs of these learners, teachers aim to create a healthy and creative inclusive environment where students can reach their individual potential. Several pedagogical principles should be respected when building such environments: provide students freedom of choice, allow them to express their personality using a variety of digital communication tools, create opportunities for them to develop critical judgment and cyber-ethics, and foster integrity and openness. Such environments should also be interactive and motivating, prompt collaboration and socialization, and provide rapid information and feedback (Petre and Price 2004; Prensky 2001; Tapscott 2008). While reviewing the literature, Freiman et al. (2010) found that robotics-based learning may create an environment that respects the above-mentioned principles. While moving towards the issue of assessment, the multiple connections of robotics to (meta-)cognitive, problem-based and socio-affective aspects of learning, as presented in the model shown in Fig. 1, should be made explicit.

Fig. 1 A robotics-based learning perspective (Freiman et al. 2010)

In our previous publications, we used this model to study participants’ perceptions of their experience with robotics-based learning tasks and to observe their problem-solving strategies. We noticed that having generally positive perceptions about working with robots does not automatically lead students to elaborate on effective problem-solving strategies beyond the trial-and-error method (Freiman et al. 2010; Blanchard et al. 2010). We also noticed several mathematical connections reported by students during the interviews. At the same time, we found a need to enrich our theoretical framework with different perspectives in order to allow us to conduct a deeper investigation of the mathematical skills involved in the problem-solving process. In this article, we thus used a sociocultural perspective, focusing on the emerging patterns of mathematical reasoning to gain a better understanding of how to assess students’ learning while performing a complex robotics-based task.

Literature Review

In different parts of the world, a view of mathematics teaching and learning in the 21st century goes beyond traditionally taught techniques and procedures by leaning towards the development of a more complex set of abilities that can help young students solve real-life related problem situations (Holtman et al. 2011). These situations allow students to employ diverse strategies, develop more sophisticated mathematical reasoning and communication skills, and make intra-, inter- and trans-disciplinary connections (NCTM 2000). The use of robotics-based tasks was seen by Papert (1980) and Rogers and Portsmore (2004) as an excellent source for interdisciplinary connections, including ones involving mathematics. Thus, it seems that the manipulable nature of these tools affords opportunities for problem solving and reasoning in addition to enabling mathematical thinking (Goodwin and Highfield 2013). Among others, Chambers and Carbonaro (2003) suggested that robotics-based learning tasks might support students in developing their mathematical knowledge (while providing them with a different way to learn mathematics).

The use of dynamic representation enables students to engage in mathematics learning and gives them opportunities to explore concepts and processes of mathematics (Savard and Highfield 2015). More specifically, Clements and Meredith (1993) and Yelland (1994) have argued that programming robots provides opportunities for students to explore spatial, measurement and geometry concepts, to develop problem-solving skills, and to engage with meta-cognitive processes. As we have pointed out, this meta-cognitive aspect of learning is also associated with socio-affective aspects. For instance, Adolphson (2005) conducted a naturalistic phenomenological study of the mathematical understanding of middle school students engaged in open-ended robotics tasks. He found that students who worked with robots understood mathematics differently from those who studied mathematics through a traditional curriculum. Robotics tasks exemplified the richness, recursion, relatedness and rigour of mathematics. The author suggested that the use of robotics tasks could help to address equity issues such as gender, minority status and learning disabilities. However, it remains unclear how this goal might be accomplished.

An experience with robotics is a complex venture that requires the mobilization of several intra-, inter- and trans-disciplinary competences explicitly connected to STEM fields. While emphasizing the role of robotics as a way to do authentic mathematics, Gura (2007) argues that students often see little relationship between the mathematics curriculum and the world in which they live. This ‘misunderstanding’ of mathematics as an academic arena disconnected from real life may have a negative impact on students’ achievement. The author claims that robotics involves such mathematical skills as measuring, counting, calculating and estimating, as well as elements of content knowledge such as algebra and geometry. Moreover, the very nature of a robotics task helps present these concepts not in an isolated way but as authentically integrated in a problem-solving process.

Another recent contribution in reviewing the impact of robotics on STEM learning was made by Karim et al. (2015), who point to a fundamental question that remains unanswered: in what way does robotics reshape education and foster learning? The authors point to the need to update the definition of educational robotics by incorporating recent pedagogical theories such as constructionism, learning by design, active learning and socio-constructivism into the building and programming of these robots. This update would enlarge the field of influence of robotics in both disciplinary (mathematics, music, physics, etc.) and trans-disciplinary tasks (robots socially assisting in the intellectual and cognitive development of children). It is noteworthy that the same review, while analysing studies related to mathematics learning, argues that the overall positive impact on learning mathematical concepts (such as fractions and ratios), problem-solving skills, as well as motivation and attitude, is based only on data related to high-achieving students.

As pointed out in the introductory section, our preliminary findings about the overall positive feedback received from students participating in the RoboMaTIC project do not seem to correlate with data on students’ performance on the robotics-based tasks (Blanchard et al. 2010). Moreover, a review of research on robotics-based learning conducted by Benitti (2012) reveals that only a few studies offer empirical data about the impact of robotics on students’ learning and that little is known about the learning of robotics with 11–12 year olds. This is precisely the age of most of our participants (during the first year of the project). While we found an insufficient number of studies clarifying the role of mathematics in carrying out robotics tasks and the role of robotics in learning mathematics (most of the literature focuses on technology and science skills), we agree with Benitti that assessment of the use of robotics in the development of critical thinking, problem-solving and teamwork skills is an important issue to be addressed. In order to extend and deepen our understanding of how perceptions interact with performance, we decided to pursue our analysis using a socio-cultural perspective of learning.

In her recent dissertation, Venturini (2015) studied teachers’ use of digital technologies in student assessment in a DGE (Dynamic Geometry Environment), and how to evaluate the learning developed through the use of technology in mathematics. The problem is that mathematics teachers do not assess students’ digital technology competence, even if they use it in their teaching. As a result, students do not learn explicitly what they can do in mathematics with digital technologies. Venturini was also interested in the kinds of technology-based assessment tasks teachers value, including the type of feedback given to students and, finally, she studied how to take advantage of the use of digital technology while students were taking a test as a way to allow them to learn. Her findings show that teachers are not very open to the use of digital technologies in formal student assessment in mathematics (that is, assessment leading to a student grade, also known as summative assessment), because this practice is not aligned with their assessment goals.

Teachers in her study valued tasks where technology played a fundamental role in the solution process by modifying strategies or by offering stronger meaning for the students. They were aware that they needed to assess students on the competences they acquired while using digital technologies in mathematics – otherwise it would not be possible to assess them at all. But it seemed difficult to assess students in mathematics while using digital technologies because of the prescribed curriculum and because of the challenges they may encounter through the process of assessment. Another finding showed that the use of the technology depends on the aim of the assessment. Venturini recommends that teachers should propose exploratory problems in student assessment, because doing so might create an opportunity for students to learn through the assessment. Even though her work focused on task design with a DGE, we consider her findings relevant for our work, because students interacted with digital technologies in the context of assessment.

Assessing students might be seen as making a professional judgement on the learning and understanding of some specific knowledge, concept(s) or process(es). This professional judgement may be expressed in words as feedback or as a grade or mark in relation to learning goals (Taras 2010). Assessing a complex task is about assessing the mobilization of students’ resources in order to assess their competence (Perrenoud 2002). Through performing a digital task, students interact with the milieu, which contains students, teachers, tasks and technologies. The feedback received from the milieu, including feedback provided through the use of different digital technologies, can also contribute to student learning (Olive and Makar 2010).

According to Taras (2005), in order to be understood and used by students, assessment and feedback must be negotiated. Thus, feedback provided only by digital technologies is considered neutral, because the interpretation depends only on the student, without any intervention coming from the teacher (Mackrell 2015). The interpretation of this feedback might lead students to use higher-level cognitive processes (Olive and Makar 2010), because they have to make sense of the process by performing the task themselves rather than by passively receiving comments. Thus, the feedback provided by digital technologies should support students without “telling too much”.

Theoretical Framework

In this study, we examine students’ actions when performing a robotics-based task. We are particularly interested in looking at the complexity of assessing such tasks. The task of programming a robot might require many steps and may also require the mobilization of an important body of knowledge (Perrenoud 2002). For instance, completing such a task is an ideal context for students to use mathematical modeling while learning underlying technological and engineering concepts. It might be possible to break down the task into parts, but this does not necessarily help to assess the cognitive and meta-cognitive processes used by the students. In order to highlight the important features involved in these processes, it becomes necessary to situate the learning as being achieved or still in progress. Identification of contexts that influence student activity in the classroom is therefore needed. For the purpose of our study, we focus our attention on the contexts identified first by Mukhopadhyay and Greer (2001), and further developed by Savard (2008), as mathematical, socio-cultural or socio-political. According to the first authors, a mathematical context is where people use quantitative information from a socio-cultural context, in order to interpret this context using mathematical tools such as models.

Savard recognizes that mathematics is more than quantitative information; this is why a mathematical context is more than arithmetic. Mathematics is thus seen as an analytical tool to understand the socio-cultural and political worlds. A socio-cultural context is one where the culture is taken from daily life experiences and a new insight developed during an investigation might help to empower people to act as well-informed citizens. This is why we used a citizenship context instead of a socio-political one, as proposed by Mukhopadhyay and Greer (2001). The use of sociocultural, mathematical and citizenship contexts leads us to refer to this as an ethnomathematical model, because the mathematical knowledge developed is contextualized in culture (Savard and Highfield 2015). More recently, Goos et al. (2014) have presented a model for numeracy in the twenty-first century where digital tools are used to develop mathematical thinking. Their model suggests that contexts in which to use mathematical and digital knowledge exist in school settings and beyond, especially in the work-place.

A critical orientation towards numeracy is necessary for all numerate citizens, because mathematical information aids people in making decisions by providing challenge or support to arguments and positions (ten Dam and Volman 2004). Critical thinking is seen as a civic responsibility for active participation in democracy and, within students’ citizenship context, as a cognitive activity that can help them make judgments while taking into account the particular context and making self-corrections accordingly (Lipman 2003). It is also seen as evaluative thinking (Paul and Elder 2001), because this thinking raises and clearly formulates questions or concerns, gathers and assesses relevant information, and interprets it. This process might be seen as meta-cognitive because it requires monitoring before, during and after actions (Swartz and Perkins 1990). Emotions linked to the task are also to be taken into consideration (Guilbert 1999).

Meta-cognition involves monitoring one’s process of thinking (Flavell 1987). For example, building a robot and having it move on a particular path involves using artefacts from a digital context. But there are also opportunities to think critically about the use of robots in our society in a sociocultural context. Programming a robot requires the use of mathematical concepts and processes; this constitutes a mathematical context. When estimating values to be used, critical thinking may support the development of mathematical reasoning (Krulik and Rudnick 1999) and mathematics may support the development of critical thinking by providing rigorous data or quantitative information.

An area that straddles socio-cultural and mathematical contexts can be seen as a techno-pedagogical space in which technological artefacts or digital tools are considered both as tools for learning and as learning objects. Thus, programming a robot might support mathematical understanding, and mathematics can be used to understand how to code the robot. Programming or coding the robot is thus a learning object to be considered. The new digital context should be addressed explicitly in order to expand learning opportunities.

To this end, the work done by Venturini (2015) on the feedback given and received while students performed an assessment task can help to clarify its roles, which are to guide students to achieve the task and to provide information about their performance and learning. She identified two kinds of feedback: Activity Feedback and Student Feedback. The role of the first type of feedback is to help students to solve a task and to bring students’ attention to their activity:

I will call Activity Feedback the feedback teachers or digital technologies give to students during mathematical activities in class. This kind of feedback is about the activity that students are doing, like prompts given by the teacher to make them reasoning, or technology responses to students’ actions, and it is used by the students to carry on the activity they are undertaking. This kind of feedback should encourage students to focus on the goal of the task, and on the strategies to attain that goal. (p. 26)

The role of the second type of feedback is to orient students to think meta-cognitively when performing a task and the feedback brings students’ attention towards possible generalizations:

I will call Student Feedback the feedback teachers or digital technologies give to students after or during the assessment (formal or informal). This kind of feedback is about students’ performance and learning. It is used to make students aware of their level of competencies compared to the required achievement, and to give students advice on what they have to do to improve their work. It includes the capability to self-assess, and it is a fundamental part of formative assessment. Student feedback becomes formative when students use it to enhance their learning (p. 27)

This article addresses the complexity of assessing mathematics learning by looking at students’ actions when working at an assessment task using robotics. More specifically, we ask: How are we to address the complexity of assessing learning that arises from robotics-based tasks? By investigating this question, we take a closer look at elementary students’ interactions with educational robots, focusing on specific tasks, as suggested by Holmquist (2014), who also pointed out the need for authentic instructional strategies in integrated contexts to characterize how students mobilize STEM concepts across disciplines. More specifically, we studied the ways students interpreted the feedback received when performing an assessment task.

A Discussion of Method

In this section, we first provide an overall portrait of the project and then explain our data sources, before analysing our data using the constructs described in our theoretical framework. As part of the teaching program called Innovative Learning Agenda (MÉNB 2007), this robotics project used LEGO Mindstorms and took place over two school years. During the first year (which, due to the time needed to purchase and deliver the kits and the laptops, began only in the last trimester of the school year), students from two classes (20 students from Grade 5 and 25 from Grade 6, labelled Group 1 and Group 2) participated in robotics tasks organised within the mathematics and science curricula.

Several tasks, contextualized in a socio-cultural context, and teaching scenarios were designed and implemented by two classroom teachers (teaching all subjects in Grades 5 and 6 respectively) accompanied by an Information and Communication Technology (ICT) mentor (note that in this French province, teachers teach several subjects through different elementary grades, K–8). The tasks (the same ones for both groups) were implemented simultaneously with both groups (same period of the day); for this period, a wall separating two classrooms was removed, thus transforming the whole space into a single classroom. This allowed explanations to be given to both groups simultaneously. However, students were placed physically according to their grade and their home teacher supervised their work. The ICT mentor had the freedom to move around and help whenever needed.

The first scenario helped students, working in teams of four, to become familiar with the different parts of the robots included in the kit, by carrying out classification tasks. The second scenario required students to build their robot. The third scenario, which concluded the first year, asked students to program their robot to move one metre forward. Each scenario was built around a real-life problem situation that students had to investigate, while working in small groups. For example, the third scenario involved a merchandise train being derailed with some of its wagons containing hazardous substances. Using a robot was therefore meaningful in order to approach the area where the incident occurred safely. All students were involved in the robotics project by playing different roles. For example, students could play the role of supervisor, construction worker, tester or programmer, depending on the nature of the task.

During Year 2, the program continued as part of the regular curriculum for Group 1 only (the former Grade 5 class, which became the subsequent Grade 6 class). Students were asked to undertake new tasks elaborated by their teachers. For example, one of these tasks was to program the robot to reach three targets consecutively placed at the vertices of an equilateral triangle while moving along the three edges of this triangle. This work was conducted as part of a regular classroom activity during a series of mathematics and science lessons. At the same time, some of the participating students from Group 2 (former Grade 6 students who were now in Grade 7) could continue their tasks with robots in the context of an optional course (part of a special in-school program called arts – sports – studies) that took place once a week during free time (lunch hours) with little supervision from teachers. Hence, students from this group could undertake the same scenarios as Group 1, but they also had the option of modifying and adding tasks according to their interests.

Throughout the duration of this project, a team of university researchers conducted an evaluative study using questionnaires, classroom observations and interviews with teachers and students. We administered a survey at the beginning of the project which asked students questions about their perceptions of the school, their use of technology in school and at home, as well as their perceptions of robots and people working with robots. During each scenario, some of the research team members made in-class observations. Texts of scenarios, students’ and teachers’ observations and reflections, and certain research results were shared via a school blog and at a provincial forum of the Best Educational Practices (in March, 2009). To assess students’ learning at the end of the project, the research team designed a complex task.

Two teams of four students were selected to complete this assessment task (one from each school level). The researcher explained the task to the students and gave them a handout containing a written explanation. The four students looked briefly at the paper and, in teams, immediately started to search for the programming block they could use. Without spending much time on the analysis of the situation or the task as such, the students rapidly engaged in the programming task by changing the parameters of the robot using the Mindstorms computer software, thereby controlling the two main movements: moving forward by any straight-line distance and rotating by 90°.

Students’ work was video-recorded by the researchers and then analyzed using an interpretative method (Savoie-Zajac 2004) to examine the emergence of mathematical reasoning among these students (mathematical context). This was done within the socio-cultural context of a complex problem-solving process, created by means of robotics programming (digital context). It is necessary to mention that this article was written using data already collected, so it was impossible for us to achieve a better alignment with our specific goals and questions, which is an important methodological limitation of our findings.

The research team designed a complex task for assessing students’ learning called Alert at the subway station. The task was to program a robot that would perform an inspection of one suspicious package found at a subway entrance:

We recently found a suspicious box in one of the main control rooms of the metro station. We need your help! Using your programming abilities, we ask your team of experts to verify the safety of the suspicious box. As we don’t want to harm any humans in this task, it is very important that the robot goes to the box without exterior help.

The robot should thus move from a starting point to its final destination along three sides of a rectangular fence. It should then enter the surrounded area in order to inspect the suspicious object. Therefore, students had to make the robot go inside the entrance in order to reach its target. The robot was required to complete its course in less than 45 s. The whole task had to be completed in the shortest time possible (the limit was set at 45 min). The choice of the task was based on the nature of the tasks that students had completed during the project, namely incorporating the programming of two basic robot movements that should have been learned by the students during their classroom experience (advancing the robot a particular distance in one direction and making a 90° turn). However, this task did not take into account tasks completed by Group 2 (now in Grade 7) when working on their own outside of the regular lessons. The time limit of 45 min was set because the robotics tasks had to be implemented during specific school hours.

This task required the use of mathematical and scientific concepts to make the robot move in a straight line and turn to get from a starting point to an end point. When making the robot move in a straight line, students had to control the number of wheel rotations. In order to do that, they would need to determine the velocity at which the robot would need to move by choosing among the following parameters: time (in seconds), the number of rotations, and the number of degrees of rotation. In order to make the robot change direction, students had the option of blocking one or more wheels, which would be expressed as a percentage. It is also possible to make the robot turn using the two wheels simultaneously. In this case, the robot turns on itself. The scientific concepts and processes required for programming the robot include motion, velocity, distance, design of an experiment (the steps involved in completing the task) and scientific investigation (inquiry). The mathematical concepts and processes required in geometry are: direction, translation and rotation; in measurement: circumference, time, distance and degrees; in arithmetic: percent, decimals and the four operations.
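The arithmetic underlying the straight-line movement can be sketched as follows. The wheel diameter used here is an illustrative assumption (the article does not report the robot’s dimensions), and the function names are ours, not part of the Mindstorms software:

```python
import math

def rotations_for_distance(distance_cm, wheel_diameter_cm):
    """Number of full wheel rotations needed to travel a straight-line distance."""
    circumference = math.pi * wheel_diameter_cm  # distance covered per rotation
    return distance_cm / circumference

def motor_degrees_for_distance(distance_cm, wheel_diameter_cm):
    """The same quantity expressed in degrees of wheel rotation
    (the third parameter choice offered by the software)."""
    return 360 * rotations_for_distance(distance_cm, wheel_diameter_cm)

# Illustration with an assumed 5.6 cm wheel (a common LEGO wheel size):
# moving 100 cm requires about 5.7 rotations, i.e. roughly 2046 degrees.
print(round(rotations_for_distance(100, 5.6), 1))   # 5.7
print(round(motor_degrees_for_distance(100, 5.6)))  # 2046
```

Students who avoided this computation, as Team 1 did, could instead estimate a time or rotation value and refine it through trial and error.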

Table 1 presents a possible path that the robot could follow. The first cell in Table 1 shows this trip, while the second cell shows the same path recreated on the floor of the classroom, allowing students to test their program using the robot. The third cell presents the path followed by the robot for both teams. Both teams decided to split the path into five segments: part a (in the third cell of Table 1) corresponds to the two squares labeled 1 in the first cell, part b to the two squares labeled 2, part c to the four squares labeled 3, part d to the two squares labeled 4, and part e to the two squares labeled 5. A star on the floor represents point 2.

Table 1 The task to be performed by the robot

The decisions students had to make (based on either calculation or estimation) had the potential of leading them into both digital and mathematical contexts as described in our theoretical framework.
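The five-segment decomposition in Table 1 can be represented as a simple command list. The segment lengths below are placeholders chosen only to respect the 2:2:4:2:2 proportion of the labeled squares; the actual floor-plan measurements are not reported in the article:

```python
# Hypothetical command sequence mirroring the five segments (a-e) of Table 1.
# Each square is assumed to be 30 cm wide, so 2 squares = 60 cm, 4 squares = 120 cm.
path = [
    ("forward", 60), ("turn", 90),   # segment a (2 squares), then a corner
    ("forward", 60), ("turn", 90),   # segment b (2 squares)
    ("forward", 120), ("turn", 90),  # segment c (4 squares, the long side)
    ("forward", 60), ("turn", 90),   # segment d (2 squares)
    ("forward", 60),                 # segment e (2 squares) into the enclosure
]
total_distance = sum(d for cmd, d in path if cmd == "forward")
print(total_distance)  # 360
```

Each `("forward", d)` entry would be translated by the students into a time, rotation or degree parameter, and each `("turn", 90)` into a wheel-blocking percentage, which is where the mathematical context enters.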

Student Performance on the Task

In this section, we present the findings that emerged from analysis of the mathematical reasoning employed by the two groups of students solving the Alert at the subway station assessment task. At first, both teams were situated in a socio-cultural context where they were immersed in a real-life situation. All students left the elements of this context very quickly and focused only on programming the robot using its parameters in order to make the robot move in a straight line and rotate. The students did not take any time to discuss either the event itself or the use of robots in our daily lives. Students started by discussing which programming block they should use in order to make the robot move along the first segment of its journey (digital context) (Fig. 2).

Fig. 2 Students use software to program their robot

Team 1

Team 1 first discussed how to program the robot using the given software. They wanted the robot to move straight along the first segment (segment a) and then along the second segment (segment b). The mathematical model discussed related to the circumference of the wheel and, consequently, to the wheel’s rotation time (mathematical context). Team 1 selected the latter model for making the robot move straight, and chose degrees to make the robot turn, because of the time constraint: “We have 40 min” (Ella). The computation of the circumference of the wheel seemed more challenging and would have required more planning time. This feedback given by the students might be seen as Student Feedback, because it sheds light on their capacity to perform the task using self-assessment. In this case, they knew that they had not mastered the skills required to compute the circumference of the wheel, so they decided to use a different parameter that was more accessible to them.

The students decided to estimate time and angle, so they tried some values. Then, moving into the digital context, they identified the starting point by putting tape on the floor. Next, they activated the robot, gave a title to the program, and saved it. They launched the program and the robot moved in a half circle instead of moving straight on segment a, then it turned 90° towards the right. The feedback received was Student Feedback, because it gave the students suggestions on what not to do in order to improve their work. Thus, students decided to block a wheel in order to make the robot turn. When they tested this program in the digital context, the robot moved straight on segment a, and then made a small turn. The feedback received was Student Feedback, because it made them reflect on their mathematical and digital knowledge. Thus, they discussed the value of the angle by referring to what they had previously learned in class: "Madame Belinda did explain to us that if you want to turn 90°, you need to double it" (Paul). "It is not all the time like that, like stairs [L'escalier, another task done in class]" (Ella).

By moving back and forth between mathematical and digital contexts the students were able to make the robot turn 90°. The feedback received confirmed what they had previously learned and thus can be considered as Student Feedback. Next, students programmed the robot and tested its trajectory on segments a, b, c and d. The difficulty for them was to make the robot move and turn without touching the "walls of the metro station", i.e. the border of the yellow tape fastened to the floor. In fact, the socio-cultural context was present both when the students mentioned the walls and when they were concerned that the suspicious package might blow up if the robot were to cross a wall and touch it. The feedback received then might be seen as Activity Feedback, because the robot's responses to students' actions made them focus on the goal of the task, not on their performance or on their learning.

They returned to the mathematical context, where they estimated the values given in seconds and in degrees. But they left this context when positioning the robot because the robot was not connected. Moving back and forth once more between the mathematical and digital contexts, the students were able to figure out which values made the robot move and turn properly. Throughout this process, they received feedback when the robot did not move. The feedback received might be seen as Activity Feedback, because it informed students about their non-actions: a program was not saved because a student did not know how to save it, because they did not transfer the data to the robot or because the robot was not activated.

In one case, it was a student who gave feedback to her teammate: she told him which buttons to push on the computer keyboard. The robot or the program on the screen gave a second type of feedback. One student came back into the socio-cultural context: he was concerned by the amount of time left to accomplish the task. After realizing they had about 20 min left, and that it might take 16 min to accomplish their mission, Ella decided to program the robot herself. The feedback given by the student about the amount of time left might be seen as Student Feedback, because monitoring time is related to the ability to self-assess performance.

However, when the robot once again hit a "wall of the metro station", students interpreted this feedback as a problem coming from the program and from the way the robot was positioned. In this case, the feedback might be seen as Student Feedback, because it indicated what needed to be done in order to improve their work. It also brought students into the socio-cultural context, because hitting the wall might lead the suspicious package to blow up. Students were wondering what was in the package. A student explained that it is a box with something inside it, like the postman brings home. Her feedback was considered as Activity Feedback, because it helped them to understand the socio-cultural context of the task.

When the students realized there were only 5 min left, they rushed to complete the task. Once again, they tried to make the robot turn from segment c to segment d by navigating between the mathematical and the digital contexts. When the robot touched a yellow line again, one student noticed that the robot was not well positioned. The feedback received might be considered as Activity Feedback, because the robot informed students about their actions of programming. Positioning the robot correctly allowed them to observe that the robot needed to move straight for a longer period of time before making a rotation.

After another trial, the robot turned properly then moved too far away. The feedback received might be considered as Student Feedback, because students looked at the program and changed certain values. Taking care to put the robot in the same place, the students transferred the data and the robot turned too much. Again, the feedback received might be considered as Student Feedback, because it gave some indication of what to do next. At the final trial, the robot was able to move from point 1 to point 2 successfully.

Team 2

Team 2 programmed the robot using the software: "We took a program here and we put it here. This is to make the robot move" (Peter). The members of Team 2 wanted the robot to move straight along the first segment (segment a). The mathematical model discussed was in relation to the circumference of the wheel and they then discussed rotation time (mathematical context). They selected the latter model. To confirm the parameters, the students proceeded by trial and error. They co-operated by adopting different roles at different times; this allowed them to work efficiently.

The students first estimated the time needed for the robot to move straight on the first segment (mathematical context). They then experimentally validated their estimation with the robot within the digital context and made subsequent adjustments (mathematical context). They then returned to the digital context, making sure to connect the robot to the program properly prior to use. When the students pushed the 'play' button, they realised the robot was not activated. The feedback received in this context was Activity Feedback, because it is a technology response to students' actions (play/not activated). They activated the robot and tested 3 s, observing the distance it moved in this time (digital context). The feedback received in this context was again Activity Feedback, because it can be considered as a technology response to students' actions (tried time/observed what happened).

From their experiments, they concluded that the robot moved roughly one tile per second over the classroom floor: "A second is about a tile" (Peter). The feedback received was Activity Feedback, because it was a technological response to student actions (tried time/observed what happened). But this feedback allowed them to relate time and distance, constructing a mathematical model they could use and generalize. Thus, the feedback received within the digital context allowed them to go into the mathematical context of the task. From this information, they were able to determine the time required depending on the distance. Thus, the robot was able to move along the first segment (segment a). The next step was then to have the robot rotate to move along the second segment (segment b).
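Peter's observation ("a second is about a tile") amounts to a linear time-distance model. A minimal sketch of that reasoning, with the one-tile-per-second rate taken from the students' own estimate and all other values purely illustrative:

```python
def seconds_for_tiles(num_tiles, seconds_per_tile=1.0):
    """Linear model from the students' observation: the robot covers
    roughly one floor tile per second, so required motor time scales
    directly with the number of tiles in a segment."""
    return num_tiles * seconds_per_tile

# e.g. a segment four tiles long would need about 4 s of straight motion
print(seconds_for_tiles(4))  # → 4.0
```

The model is only as good as the empirical rate behind it, which is one reason the team's later results depended so heavily on a consistent starting point.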

To rotate the robot, students discussed and agreed on the need to program a rotation of 90° (mathematical context). To do this, they moved into the digital context when they chose the programming block that caused the robot to rotate. They decided to test and check each time. In order not to overwhelm the memory card of the robot, they chose to delete each program tested. One student showed the others how to do that on the robot. Again, in the digital context, the strategy used by the students was to get feedback from the tests, which can be seen as Activity Feedback, because this strategy focused on attaining the task's goal, rather than on their performance or their learning (Fig. 3).

Fig. 3
figure 3

Programming block 'Move'

Continuing with guesswork using different values and experimenting with the robot, the students concluded that a 90° rotation is accomplished in 0.8 s. Again, this Activity Feedback allowed them to construct a mathematical model they could use and generalize – in this case, the time and amount of rotation. Using the information they had generated about time, distance and rotation, students were able to move the robot along the path (segments b and c): "You'll do your ninety degrees and then after that, one, two, two and … [the student counts the distance between the robot and the point of arrival in floor tiles]" (Henry). Thus, students used the knowledge emanating from their mathematical modeling context to continue the task.
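The rotation model the students generalized from their trials can be sketched in the same way; the 0.8 s figure for a 90° turn is their empirical result, and the linear scaling to other angles is the generalization their model implies:

```python
def rotation_seconds(angle_degrees, seconds_per_90=0.8):
    """Scale the students' empirical finding (a 90-degree rotation
    takes 0.8 s) linearly to other angles."""
    return angle_degrees / 90 * seconds_per_90

print(rotation_seconds(90))   # the measured case → 0.8
print(rotation_seconds(180))  # a half-turn, by linear scaling → 1.6
```

Such a model holds only while the physical conditions (wheel grip, battery, starting position) stay constant, which is precisely what broke down for Team 2 when a wheel proved defective.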

However, the students faced a major problem: the robot deviated from the desired path. Concerned about the fact that the goal might not be reached, students then hypothesized that either the wheels or the robot could be defective, thus returning to the digital context. The feedback received was about the robot deviating from the straight line when it started to move awry. One student noticed that a wheel was not working properly. Again, the feedback received was Activity Feedback, because it was a technological response to students' actions (tried something/observed what happened). So, instead of trying to validate their mathematical reasoning, the students were placed in the digital context of identifying the source of their problems:

Julia: Do you want me to take another complete robot?

Henry: No, just a wheel. We just changed a wheel, because that wheel here went left.

Peter: That is changing the degrees all the time.

Henry: This is why it was never the same degrees.

After changing the wheel, the students found that it was not well affixed. So they chose a new robot. With the new robot, students returned to the mathematical context and realized their estimations were imprecise. Again, the Activity Feedback they received from testing the robot was a technological response to their program. Still in the digital context, they realized they needed to take into account the exact starting point of the path as a parameter for their trials: the need to put the robot at the same place every time they tested their programming. When searching for ways of explaining the robot’s deviation from its trajectory, critical thinking was applied, thus allowing the students to enter into a mathematical context. However, the students never rigorously identified the robot’s starting point, since they positioned it at an approximate starting point.

The students then abandoned rigorous mathematical reasoning and relied instead on an approximate way of placing the robot at the starting point. They continued to use approximate values when verifying the robot's trajectory. They then proceeded by trial and error to get the robot to follow its target trajectory, rather than by using mathematical modeling. In fact, they relied on Activity Feedback to attain the goal of the task. The students did not use Student Feedback to make themselves aware of what they had to do to improve their work. As a result, students failed to use the concept of measurement properly: they reasoned with the surface of the tiles rather than calculating the length of the path, which would differ depending on the point on the tile from which the robot started. Since the parameters of the program depended on the starting point, the problem was only partially solved by the students, since they arrived at their result by using a trial and error method and were unable to execute the task fully. In this case, the Activity Feedback they paid attention to was not enough to perform the task successfully.

The lack of rigour when they used the strategy of trial and error sheds light on their methodology. But this might be also associated with the fact that the students did not take the space on the floor and its effect on programming the robot’s movements into consideration. Thus, it was necessary for the students to test different values using the same conditions, with the robot always having the same starting point. However, in actuality, their testing conditions were not always the same, because they erroneously considered the entire tile as a starting point instead of considering an exact point of departure. Here, their reasoning was associated with the surface area instead of the perimeter. It is interesting to note that this type of reasoning was mobilized when testing the code and not through the actual process of coding. This situation resulted in an issue that students were not able to solve. Team 2 succeeded in moving the robot from the starting point to the end of segment d. However, they failed to move the robot along segment e to get to point 2.

Mathematical, Digital and Socio-Cultural Contexts

Analysis of students' actions reveals both knowledge and processes. We further identified and classified these actions according to the context to which they belonged. Table 2 presents the knowledge and the processes used by students when performing the Alert in the subway station task.

Table 2 Knowledge and processes used in the ‘Alert in the subway station’ task

Within the task designed to assess students' learning in a robotics-based environment, the students were placed in a situation where several simultaneous (but sometimes sequential) actions needed to be co-ordinated. One action (originating within the socio-cultural and citizenship context) was employed to handle a real-life problem situation where robots can help people to reach certain social goals (here, inspect a suspicious object while keeping people safe). The second action was to program the robot to execute the investigation and then to test how it works (digital context). The third action was to mobilize (mathematical) knowledge (such as measurement) to build a model that allowed the students to plan and control the robot's operations (mathematical context). For each action, students demonstrated an impressive set of skills and abilities, thus ensuring a progression through the problem-solving process. In the case of Team 2, some important connections of a meta-cognitive nature were missing, which prevented the students from employing a back-and-forth monitoring that may have enabled them to complete the task successfully.

Discussion and Conclusion

Without making conclusive generalizations, we believe that our findings bring particular insight into several issues identified in the literature review, especially in relation to the complexity of learning, which is not easy to define or to measure. For instance, our work addressed the assessment of digital and mathematical learning, as opposed to the use of digital technologies when assessing mathematical learning. As pointed out by Venturini (2015), teachers too often do not assess technological competence. Our work suggests a way to do this. Knowing that students successfully performed the task is not enough: knowing which concepts and processes they used gives more information to position them within their learning process. Thus, the use of an ethnomathematical model (Savard 2008) allowed us to identify which concepts and processes students used (or did not use) when programming their robot. It also allowed us to associate which context (mathematical, digital, socio-cultural) each concept and process belonged to.

The use of contexts in our analysis sheds light on other aspects of learning, namely how students used the milieu in each context to support other contexts. In our study, the students moved back and forth between mathematical and digital contexts. At first, they did not design a strong mathematical model and then test it. Instead, they chose parameters and plugged in numbers. The trial-and-error strategy employed by both teams allowed them to pay attention to the feedback given by the digital technology.

In our study, students performed the task without any teacher in the room. In contrast with Venturini’s work, none of the feedback given to students was initiated by a teacher. Thus, the feedback received by students came only from digital technologies, with two exceptions in Team 1, where a student gave feedback to her team-mates (peer feedback). Venturini’s teachers were not convinced that digital technologies could give constructive Activity Feedback during assessment. One of the major reasons for this is that her teachers were concerned that students would rely on trial-and-error strategies to solve the task by using feedback provided by the digital technology. This would prevent them from thinking more deeply about the problem.

Our results show that while this might be the case, especially with Team 2, it is also possible that the feedback provided by the digital technology could lead students to understand the socio-cultural context of the task, as with Team 1. In the context of solving a complex problem, based on Dörner’s (1986) concept of operative intelligence, Fischer et al. (2012) revealed the importance of more formative aspects of problem solving, which include the ability to organize cognitive operations such as knowing when to use trial-and-error and when to analyze systematically the situation at hand. To this end, our results show that Team 1 briefly considered these two alternatives when planning which methods they would use to make the robot move.

We also analysed ways in which students interpreted feedback given by digital technologies, by identifying the role and the context of each category of feedback. We studied which context the students used as a response to the feedback given by the digital technologies. Our analysis only presents responses coming from students who were part of this program, while our study also showed that students might give feedback during or after the assessment (formal and informal). Assessing students’ learning might also be done when a task is performed as a team. In this case, the collective assessment might be considered as a different way of gathering information that can be analysed by teachers in order to help them make a more balanced overall judgement.

Team 1 referred to the metro station and the suspicious package, whereas Team 2 referred to yellow lines on the floor and the star (to illustrate point 2 on the third cell in Table 1). Why was that? Students from Team 1 were 1 year younger than the students from Team 2. Despite the fact that Team 1 referred to the socio-cultural context more often during their performance, no student in either team took the time to discuss it in terms of artefacts and the implications of the new knowledge developed in this situation. Even though the task was contextualized, the students immediately jumped into the digital context.

Also, each time students in Team 1 moved from the digital to the socio-cultural context, it was because of Activity Feedback. In fact, Team 1, unlike Team 2, interpreted feedback more as Student Feedback than as Activity Feedback. This makes us wonder if the fact that Student Feedback focuses more on meta-cognitive skills was what allowed students to perform the task successfully. Thus, our study sheds some light not only on the cognitive aspects of solving a complex task, but also on meta-cognitive aspects of this process, which is part of the model in every context. In fact, Team 2 did not use measurement as a rigorous method to validate their mathematical model. At that moment, the use of critical thinking might support the entire process and make it rigorous.

As Lipman (2003) pointed out, critical thinking implies using criteria and self-correction of the thinking process, which implies thinking back-and-forth meta-cognitively. This did not happen with our students, since there was no teacher there to guide them. Team 1 used a piece of tape to identify the starting point. They knew that they had to consistently use the same starting place. Team 2 reflected on this at the end of the task. In this case, first planning a method and then validating it might lead students to undergo a more robust STEM experience. In this case, it might be a good illustration of critical orientation as stated by Goos et al. (2014).

This raises important questions in terms of implementing complex STEM tasks aimed at developing students’ competence and the mobilization of this new competence outside schools. It also raises questions about developing critical thinking and meta-cognitive skills when implementing complex STEM tasks. As an extension of our previous studies cited above, note that apart from the phenomenon of joy and usefulness of this type of work (Moundridou and Kalinoglou 2008), the effects of robot programming on students’ cognitive development are still difficult to measure (Matson et al. 2004). How can we make students pay more attention to cultural aspects and not only the mathematical ones? Which cognitive structures guide students in their choice? What is the effect of negotiation within the group during decision-making (which parameters change while performing the task)?

In our study, students were able to make connections between data entries in the program (time and distance, time and rotation) and how successfully the robot followed the path. A limitation of this project is that we were not able to make claims about the conceptualization of knowledge required for programming tasks or about the generalized relationships made by students. We also note that the trial-and-error strategy used by students allowed some of them to program the robot, but it did not occasion any meta-cognitive review of why the code did or did not work. Furthermore, trial and error did not allow the students to detect a source of error and was therefore a lost opportunity in terms of gaining a greater conceptual understanding of mathematics.

Nevertheless, our findings seem to support the richness of the vision of learning with technology as advocated by Howland et al. (2012) regarding the development of cognitive strategies, critical thinking, communication of meaning and the interaction between the learner and technology when the two are conceptually and intellectually engaged as learning partners. Although trial and error appeared to be the most frequently used strategy, students may eventually develop more effective cognitive and meta-cognitive tools when they find themselves in a more supportive, risk-free learning environment (Apiola and Tedre 2013). How to guide students towards the emergence of higher-order thinking still remains an unsolved pedagogical task.

Finally, a detailed micro-analysis of the performance of two groups of students in a robot programming task highlighted the task's complexity for both the students and the teacher. This is also reflected in the findings of other authors, which connect robotics tasks to the development of critical thinking and the ability to solve problems in situ, based on the construction of meaning (Ricca et al. 2006). How to assess student work? How to guide students through the process of building knowledge? At a time when robotics captures the attention of STEM educators as a potentially rich and authentic context for introducing integrated curriculum in schools, especially at the elementary level, the issue of assessment remains unclear. Hopefully, the growing literature on assessment in technology-based environments will support teachers.

Our analysis points to possible limitations of a trial-and-error strategy, which can be seen as helping to make a task accessible to young learners; but this strategy can also be seen as an obstacle, especially when it comes to the meta-cognitive capacity of using trial outcomes to fix errors. The role of teachers, as well as their techno–pedagogical and didactical knowledge (used in assessing students’ learning and in guiding students through the process of executing a complex task), can be seen as an important aspect when initiating future studies. This is especially true in a context of an integrated STEM curriculum in which this complexity becomes an everyday reality within today’s increasingly inclusive classrooms. Another limitation of this study is that the assessment task was not a formal assessment task given by a teacher. Thus, students were not graded. We wonder about the impact of the role of informal vs formal assessment on students’ interpretation of feedback.

Our study focused on analysing students’ attempts to investigate a robotics-based task with a particular emphasis on the mathematics they employed. This complex problem-solving activity forced students to navigate among digital (programming the robot), mathematical (mobilizing reasoning and knowledge), and socio-cultural (engaging in solving an authentic task such as using a robot to reach a suspicious object) contexts.