Quality of task sets

An instrument for analysing science tasks with different functions along the learning process

Authors

  • Sebastian Stuppan
  • Markus Wilhelm
  • Katrin Bölsterli Bardy

DOI:

https://doi.org/10.25321/prise.2022.1330

Abstract

Background: In competency-oriented education, tasks in science subjects have increased in importance in recent years. Moreover, it is suggested that developing a competency requires a set of tasks with different functional embeddings. One option for arranging tasks is described in the Learning Process Model: it starts with a so-called “confrontation” task at the beginning of a new topic, challenging learners with a new problem. Next, learners build up the required concepts and skills with the help of development tasks so that they can solve the initial problem, followed by exercises to train and expand the competency. Finally, learners solve the initial problem in a synthesis task and may then apply the competency in a transfer task. Each task type (e.g., the confrontation task) is described by the weighting of the following nine scales: 1) chart of competencies, 2) relationship to daily life, 3) learners’ conceptions, 4) knowledge, 5) knowledge activities, 6) forms of representation, 7) task openness, 8) learning supports, and 9) learning paths. Each scale contains between one and four subscales that describe the task types by their weighting. This description in the form of scale values is then examined empirically.

Purpose: This study combines existing task scales from research with the different functions of tasks along the Learning Process Model. An expert panel proposed scale values in a general-theoretical manner, and trained lecturers rated tasks with the Instrument to Analyse Tasks (IAT).

Sample/setting, Design and Methods: In our study, we used the existing scales of the IAT. To develop the experts’ proposed scale values, we consulted four experts. As a quality-control measure for the experts’ level of agreement, we calculated the ADM index (average absolute deviation from the mean). Trained lecturers (N = 2) then rated the scale values of 25 of the 146 science education tasks selected from the project MINT unterwegs (“STEM on the move”). For the comparison, we calculated the score differences between the experts’ proposed scale values and the rated tasks.
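
As a minimal sketch of the agreement measure (assuming the common definition of the average absolute deviation from the mean, with k = 4 experts and x_j denoting expert j’s proposed value for a given subscale; these symbols are illustrative, not taken from the study):

\[ \mathrm{ADM} \;=\; \frac{1}{k} \sum_{j=1}^{k} \left| x_j - \bar{x} \right|, \qquad \bar{x} \;=\; \frac{1}{k} \sum_{j=1}^{k} x_j \]

An ADM of 0 indicates that all experts proposed the same scale value. Under the same reading, the score difference used for the comparison would simply be the experts’ consensus value minus the lecturers’ rating, computed per subscale.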

Results: The results show that it is possible to describe different task types of a task set with the IAT when the scale values are obtained from expert proposals. Moreover, the IAT scale values obtained from the expert proposals are quite similar to those from task ratings by trained lecturers.

Conclusions: This study indicates that experts can distinguish and characterize the Learning Process Model’s various task types through the weighting of their proposed IAT scale values. Furthermore, it has been shown that tasks can be analysed with the IAT. When the results of the task analysis by trained lecturers are compared to the experts’ proposals, the tasks can be reasonably revised and optimized for the learning process.

Keywords: competency, science, task sets, learning, model, instrument, analysis, task quality

Published

2022-06-09

Section

Research-Based Report of Practice