Abstract
In computer science education, varying the application contexts of modelling and programming tasks enhances the development of problem-solving skills. This creates a demand for a large training and testing corpus of open-ended questions with varying domain models that can be used in online and offline assessments. This paper proposes a two-step workflow for the automatic generation of items with varied domain models. In addition, experiences are reported on using the generated test corpora in formative assessments in higher-education computer science courses, especially for problem-based questions.