Automated Scoring of Scientific Creativity in German

Goecke, B., DiStefano, P. V., Aschauer, W., Haim, K., Beaty, R. E., & Forthmann, B.

Research article (journal) | Peer reviewed

Abstract

Automated scoring is a current hot topic in creativity research. However, most research has focused on the English language and popular verbal creative thinking tasks, such as the alternate uses task. Therefore, in this study, we present a large language model approach for automated scoring of a scientific creative thinking task that assesses divergent ideation in experimental tasks in the German language. Participants are required to generate alternative explanations for an empirical observation. This work analyzed a total of 13,423 unique responses. To predict human ratings of originality, we used XLM-RoBERTa (Cross-lingual Language Model-RoBERTa), a large, multilingual model. The prediction model was trained on 9,400 responses. Results showed a strong correlation between model predictions and human ratings in a held-out test set (n = 2,682; r = 0.80; 95% CI [0.79, 0.81]). These promising findings underscore the potential of large language models for automated scoring of scientific creative thinking in the German language. We encourage researchers to further investigate automated scoring of other domain-specific creative thinking tasks.
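The abstract does not state how the confidence interval for the test-set correlation was computed, but the reported interval is consistent with the standard Fisher z-transform approach. A minimal sketch (the function name `pearson_ci` is our own; the numbers r = 0.80 and n = 2,682 are taken from the abstract):

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson correlation via the Fisher z-transform."""
    z = math.atanh(r)                # transform r to the z scale
    se = 1.0 / math.sqrt(n - 3)      # standard error of z
    lo = z - z_crit * se
    hi = z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

# Reported result: r = 0.80 on the held-out test set (n = 2,682)
lo, hi = pearson_ci(0.80, 2682)
print(round(lo, 2), round(hi, 2))  # → 0.79 0.81
```

Rounded to two decimals, this reproduces the interval [0.79, 0.81] reported in the abstract.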

Publication details

Journal: Journal of Creative Behavior
Volume: 58
Issue: 3
Pages: 321-327
Status: Published
Year of publication: 2024
DOI: 10.1002/jocb.658
Full-text link: https://doi.org/10.1002/jocb.658
Keywords: creativity, automated scoring, scientific creativity, large language models

Authors from the University of Münster

Forthmann, Boris
Professorship of Statistics and Research Methods in Psychology (Prof. Nestler)