Last fall Michael Ekstrand and I co-taught An Introduction to Recommender Systems on Coursera (if you search for the course, you can find the lectures open as part of the course preview). In offering the course we had three goals:
- to make a high-quality introductory recommender systems course available to the world
- to actually experience the MOOC-teaching process, including exploring how elements of the MOOC could be useful in on-campus teaching
- to study the effectiveness of the MOOC in student learning
To accomplish these goals, we had extensive support from not only videographers and course support staff, but also from learning technology and evaluation experts. Our first published result of this work is the paper:
Teaching recommender systems at large scale: Evaluation and lessons learned from a hybrid MOOC (Proceedings of the First ACM Conference on Learning @ Scale) http://doi.acm.org/10.
I wanted to share a few key findings and experiences from the paper, but first I should probably say a few things about the MOOC itself. Teaching the course was a wonderful experience, but it was also an incredible amount of work. We decided to offer a full course, taught simultaneously as a 3-credit graduate course and a free Coursera course. The course had 14 weeks of content covering recommender systems design, algorithms, and evaluation. We divided the course into two “tracks”: a comprehensive “programming” track, and a “concepts” track that included everything except the six programming assignments. Most programming assignments were designed to be completed using our open-source LensKit toolkit.
The course attracted a large number of students (over 28,000 of them), but like most large-scale free courses, there were many students who registered and never returned, or who visited for a while and then left. At the end of the course there were about 2200 students still active.
We measured everything we could, including knowledge gain (tested using a knowledge test administered before and after the course). We wanted to answer four key questions:
- What factors predict student retention?
- Do students really learn from these courses?
- What factors predict student learning?
- How are results different for on-campus enrolled students vs. online students?
Here’s what we found:
- It is very hard to predict student retention. Students who intend to complete the course, know more about the topic at the start, and have taken MOOCs before are more likely to finish. Students taking lots of other MOOCs at the same time are less likely to finish. But all of these factors together explain only 6% of the variation in retention. More interestingly, age, sex, language proficiency, and country are not significant predictors of student retention.
- Student learning is much easier to measure, though only for the subset of students who completed the course and finished the pre-course and post-course knowledge tests. Among those 262 students, average scores on our assessment increased from 25% to 70%. Gains were consistent across high-knowledge and low-knowledge students, and between the concepts and programming tracks. The only positive predictor of learning was effort, measured by the number of written assignments submitted. Again, age, sex, country, and English proficiency were not significant predictors.
- We didn’t have enough on-campus students to get statistically significant results, but we did find much higher knowledge gains for on-campus students (67% vs. 58%, normalized knowledge gain). We also found from student feedback that on-campus students strongly preferred the online course format (they only came to the classroom for help and discussion). Some students just liked having the freedom to take the course at home on their own schedule, but many cited the benefits of being able to control lecture speed — speeding up when they understood, or slowing down when concepts or vocabulary required them to take time to understand what was said.
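The paper's exact formula isn't reproduced here, but "normalized knowledge gain" conventionally means the Hake gain: the improvement achieved as a fraction of the maximum possible improvement, which lets you compare students who started at different knowledge levels. A minimal sketch (the function name and 0–100 score scale are my own, for illustration):

```python
def normalized_gain(pre: float, post: float) -> float:
    """Hake-style normalized gain: (post - pre) / (max - pre).

    `pre` and `post` are test scores as percentages (0-100).
    Returns the fraction of the possible improvement that was realized.
    """
    if pre >= 100:
        raise ValueError("pre-test score is already at ceiling")
    return (post - pre) / (100 - pre)

# Using the course-wide averages reported above (25% pre, 70% post):
print(normalized_gain(25, 70))  # 0.6
```

A student starting at 25% and ending at 70% thus realized 60% of their possible gain, which is why the metric can report similar numbers for high- and low-knowledge starters.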
We don’t know what will happen with this course next, but stay tuned!