Abstract
The evaluation of an introductory programming course can be challenging. Developing metrics that measure whether students have met the learning objectives requires considering both didactic and programming aspects. From a programming perspective, manually assessing programming submissions is impractical, so an automated process is required. From a didactic perspective, the formulation of a task plays a significant role and must itself be evaluated. This thesis proposes a concept to address this problem. The concept suggests several metrics for measuring whether the learning objectives are satisfied; these metrics are derived from potential learning objectives of an introductory programming course using the Goal Question Metric approach. To automate the evaluation process, the thesis suggests summative assessment approaches based on static and dynamic analysis that check whether given requirements are satisfied. To further support automation, it also proposes an auxiliary framework for evaluating the metrics. On the didactic side, an iterative improvement process based on the Plan-Do-Check-Act cycle is suggested to refine a task's formulation. The proposed concept is evaluated in a case study with the introductory programming course at RWTH. The course incorporates a programming game called Obstacle Run, which challenges students to develop a strategy for moving through a field of obstacles. The case study was conducted with the Obstacle Run game and student submissions from 2017.
Project information
Status: Finished
Type: Bachelor thesis
Author: Marc Luqué
ID: 2020-021