Related Talk Titles
8. Estimating Individual Treatment Effect from Educational Studies with Residual Counterfactual Networks
In this post I have brought together the conference topics of Collaborative Learning and Pedagogical Policies because they mainly have implications for learners in structured learning settings. Whether on a massive open online course (MOOC) or in a classroom, the research in 1-7 seeks to improve the learning experience by adapting teaching practices based on collected data and its interpretation. The research in 7 and 8 collected data from ASSISTments, an online learning platform used in schools to teach math. These two studies described innovative methods for conducting educational research.
An exception to the focus on structured learning settings was the research in Predicting Prospective Peer Helpers to Provide Just-In-Time Help to Users in Question and Answer Forums, which gathered data from Stack Overflow, an informal online forum specializing in distributing programming knowledge; the wider forum network also has sections for almost any subject. The problem identified by this study was that questions which have received no answers or very little response can "pile up," making the forums less useful.
This study looked at five parameters related to user behavior on the forum. These parameters (frequency, knowledgeability, eagerness, willingness, and recency) were based on the response times, "up-votes," and subject preferences that users displayed. Using these parameters, the authors built a model for predicting which user would be the first answerer, the best answerer, and the answerer with the highest score for a given question. Interestingly, the study found that "willingness," "a combination of how active and eager the user has been in answering questions related to the question tag in the past," was the best predictor of the success and timeliness of a user's response. As such, by using willingness to identify the most willing users on a particular topic, the study proposes a possible recommendation system that connects these users with unanswered questions.
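As a rough illustration (not the paper's actual model), a willingness-style score could combine a user's past activity on a question tag with an eagerness measure, and prospective helpers could then be ranked by that score. Every field name and weight below is an invented placeholder:

```python
# A minimal sketch of ranking prospective helpers by a "willingness"-style
# score. The data layout and the 0.5 weights are illustrative assumptions,
# not taken from the study.

def willingness(user, tag):
    """Combine past activity and eagerness for a given question tag.

    `answers_on_tag` counts the user's past answers with this tag
    (activity), and `eagerness` is an assumed 0-1 score derived from
    response times. Both weights are arbitrary placeholders.
    """
    activity = user["answers_on_tag"].get(tag, 0)
    return 0.5 * activity + 0.5 * user["eagerness"] * activity

def rank_helpers(users, tag):
    """Return users sorted by descending willingness for `tag`."""
    return sorted(users, key=lambda u: willingness(u, tag), reverse=True)

users = [
    {"name": "alice", "answers_on_tag": {"python": 40}, "eagerness": 0.9},
    {"name": "bob", "answers_on_tag": {"python": 10}, "eagerness": 0.4},
]
best = rank_helpers(users, "python")[0]["name"]  # "alice"
```

A recommendation system like the one the study proposes could then surface unanswered questions tagged "python" to the top-ranked users.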
However, the main takeaway from this study is perhaps the importance of selecting features that describe user behavior. In the authors' view, the process of selecting relevant features may be more important than selecting the best algorithm to fit a statistical model. This study is nonetheless still linked to research done in classroom and work settings to identify "the various strategies that could be used in predicting the prospective helpers within the classroom and workplace learning environments."
A General Model and Algorithm for Grouping Students for Maximizing Learning from Peers deals with the question of how to split students up for group work. Educators are currently divided on this question: one side proposes groups that are diverse in performance, while another suggests stratified, ability-based groups. The authors recast the problem as a computational maximization question. The study assumes that low-performing students who are grouped with high-performing students will see improvements in their learning gains, along with further assumptions about who can improve and by how much. The research found an O(N log N) algorithm for partitioning N students into groups such that the improvement in learning gains is maximized. This study was particularly interesting to me because of its approach of framing a pedagogical problem as a computational one.
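To make the idea concrete, one simple way to get an O(N log N) grouping, sketched below purely for illustration and not taken from the paper, is to sort students by score (the O(N log N) step) and then pair the strongest with the weakest:

```python
# An illustrative O(N log N) grouping sketch, NOT the paper's actual
# algorithm: sort students by score, then pair the highest performer
# with the lowest so each group mixes ability levels.

def pair_high_low(scores):
    """Pair the highest-scoring student with the lowest, and so on.

    `scores` maps student name -> score. Returns a list of 2-person
    groups. Assumes an even number of students for simplicity.
    """
    ordered = sorted(scores, key=scores.get)  # ascending by score
    n = len(ordered)
    return [(ordered[i], ordered[n - 1 - i]) for i in range(n // 2)]

groups = pair_high_low({"a": 55, "b": 90, "c": 70, "d": 40})
# lowest scorer "d" pairs with highest "b"; "a" pairs with "c"
```

The paper's model additionally weights who can improve and by how much; this sketch only shows why sorting makes the partitioning cheap.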
It would be interesting to see whether, in the future, educators make use of such algorithms to split up students for seating assignments or group work. Another study in the trend of "algorithmic" teaching policies was Towards Closing the Loop: Bridging Machine-induced Pedagogical Policies to Learning Theories. This work examined intelligent tutoring systems, aiming to adapt the questions asked of a user, and the order in which subjects were introduced or changed, based on the user data collected. A policy, in this case, is the decision as to which "pedagogical strategy" or action should be taken when faced with alternatives. The bridge this study sought to build was between impenetrable machine learning results and overly general existing cognitive or learning theories. The beginning of that link was their successful results, which showed that students subjected to the "machine-induced" policies performed better than students given randomized policies.
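A policy in this sense can be pictured as a mapping from an observed student state to a pedagogical action. The states, thresholds, and actions below are invented for illustration and are not drawn from the study:

```python
# A toy sketch of a pedagogical policy: a function from a coarse
# student state to an action. All state fields, thresholds, and action
# names here are hypothetical.

def policy(state):
    """Choose a pedagogical action given an observed student state."""
    if state["recent_errors"] > 2:
        return "show_worked_example"   # struggling: demonstrate a solution
    if state["time_on_task"] < 30:
        return "ask_problem"           # fresh: let the student try first
    return "give_hint"                 # otherwise: nudge with a hint

action = policy({"recent_errors": 3, "time_on_task": 120})
# a struggling student gets "show_worked_example"
```

A machine-induced policy would learn such a mapping from collected student data rather than hand-coding the thresholds, which is what makes its decisions hard to interpret against existing learning theories.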
These were very interesting research topics, and I believe there are many connections to our work here at EdLab, particularly with regard to collaborative learning strategies. Given that most of our EdLab suite platforms are community- and interaction-oriented, there are many opportunities to think about how we could connect users working on similar things to build successful learning communities. Machine-induced policies may also be useful for our platforms when we think about exploration capacities: letting a user intelligently "explore" content on a platform could make for a very engaging experience. Additionally, RiPLE: Recommendation in Peer-Learning Environments Based on Knowledge Gaps and Interests, a study conducted on MOOCs, may also be adapted for our purposes. That work sought to generate suggestions that students review questions or topics, or advance to a new question, during a MOOC session.
Additionally, as we think about recommendation systems, studies like Adaptive Sequential Recommendation for Discussion Forums on MOOCs using Context Trees are very insightful. In this study, the authors adapted a recommendation algorithm traditionally used for news websites to an online forum in order to address "drifting user interests and preferences." Having intelligent recommendation systems, exploration capacities, and even some amount of feedback could greatly benefit the user experience of our EdLab suite.
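As a minimal stand-in for the drifting-interests idea (a simple recency weighting, not the paper's context-tree algorithm), older activity can be exponentially down-weighted so that a user's recent topics dominate their profile:

```python
# A minimal sketch of handling "drifting user interests" by exponentially
# down-weighting older clicks. This is an illustrative simplification,
# not the context-tree method from the study.

def drifting_interest_scores(history, decay=0.5):
    """Score topics from a user's click history (oldest first).

    Each step back in time multiplies a click's weight by `decay`,
    so recent interests dominate. Returns a topic -> weight mapping.
    """
    scores = {}
    weight = 1.0
    for topic in reversed(history):  # walk from most recent to oldest
        scores[topic] = scores.get(topic, 0.0) + weight
        weight *= decay
    return scores

s = drifting_interest_scores(["math", "math", "stats"])
# "stats" is most recent (weight 1.0); the two "math" clicks contribute
# 0.5 + 0.25 = 0.75, so "stats" outranks "math" despite fewer clicks
```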
What do you think about algorithmic, machine-induced education?