On the suitability of programming tasks for automated evaluation

From IOI Wiki
  • Article: On the suitability of programming tasks for automated evaluation
  • Author(s): Michal Forišek
  • Journal: Informatics in Education 5 (2006), no. 1, 63-76
  • Conference: Perspectives on computer science competitions for (high school) students

Abstract: For many programming tasks we would be glad to have some kind of automatic evaluation process. As an example, most programming contests use automatic evaluation of the contestants' submissions. While this approach is clearly highly efficient, it also has some drawbacks. Often the test inputs are not able to "break" all flawed submissions. In this article we show that the situation is not pleasant at all – for some programming tasks it is impossible to design good test inputs. Moreover, we discuss ways to recognize such tasks, and discuss other possibilities for doing the evaluation. The discussion is focused on programming contests, but the results can be applied to any programming tasks, e.g., assignments in school.

Keywords: programming contests, programming tasks, IOI, automated testing, black-box testing, task analysis

Download: http://www.mii.lt/informatics_in_education/htm/INFE076.htm (free)