Abstract
Massive open online courses (MOOCs) offer rich opportunities to understand learners’ learning experiences by examining the course evaluation content they generate. This study investigated the effectiveness of fine-tuned BERT models for the automated classification of topics in online course reviews and explored how these topics varied across disciplines and course rating groups. Based on 364,660 course review sentences spanning 13 disciplines from Class Central, 10 topic categories were identified automatically by a BERT-BiLSTM-Attention model, highlighting the potential of fine-tuned BERT models for analysing large-scale MOOC reviews. Topic distribution analyses across disciplines showed that learners in technical fields were particularly engaged with assessment-related issues. Significant differences in topic frequencies between high- and low-star-rated courses indicated the critical role of course quality and instructor support in shaping learner satisfaction. This study also provided implications for improving learner satisfaction through interventions in course design and implementation that effectively monitor learners’ evolving needs.
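
The abstract names a BERT-BiLSTM-Attention model for classifying review sentences into 10 topic categories. The sketch below illustrates one common way such an architecture is assembled; the encoder checkpoint, hidden size, and additive attention pooling are assumptions for illustration, not details reported in the abstract.

```python
# Minimal sketch of a BERT-BiLSTM-Attention sentence classifier (illustrative only;
# hyperparameters and encoder are assumptions, not the authors' reported configuration).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMAttention(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", hidden_size=256, num_classes=10):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)   # fine-tuned end to end
        self.bilstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )
        self.attn = nn.Linear(2 * hidden_size, 1)                # additive attention scores
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        token_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                                      # (batch, seq, 768)
        lstm_out, _ = self.bilstm(token_states)                  # (batch, seq, 2*hidden)
        scores = self.attn(lstm_out).squeeze(-1)                 # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, -1e9)   # ignore padding tokens
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * lstm_out).sum(dim=1)                 # attention-weighted sum
        return self.classifier(pooled)                           # topic logits

# Usage: classify one review sentence (category labels here are hypothetical).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertBiLSTMAttention()
batch = tokenizer(["The instructor explained the assignments clearly."],
                  padding=True, truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.argmax(dim=-1))  # predicted topic index (untrained model, so output is arbitrary)
```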
Original language | English |
---|---|
Pages (from-to) | 57-79 |
Number of pages | 23 |
Journal | International Review of Research in Open and Distributed Learning |
Volume | 26 |
Issue number | 1 |
DOIs | |
Publication status | Published - Mar 2025 |
Bibliographical note
Publisher Copyright: © 2025, Athabasca University. All rights reserved.
Funding
This work was supported by the National Natural Science Foundation of China (No. 62307010) and the Philosophy and Social Science Planning Project of Guangdong Province of China (Grant No. GD24XJY17).
Keywords
- automatic classification
- BERTs
- course evaluation
- fine-tuned
- learner-generated content