2. Identifying relationships between students' questions type and their behavior (Best Poster award)
During the conference there were two poster sessions with about 20 posters each. EdLab members presented their works, Cluster Analysis of Real Time Location Data - An Application of Gaussian Mixture Models and A Topic Model and Social Network Analysis of a School Blogging Platform, during the first session. During the second session we encountered some interesting work, particularly work focused on learner behaviors. Two of these works (1, 3) used graph techniques for visualization and for understanding social networks, and two (2, 3) used clustering techniques to make sense of a large set of data.
In Using Graph-based Modelling to explore changes in students' affective states during exploratory learning tasks, the authors looked at a group of students' interactions with an online exploratory learning environment (ELE). The ELE requires students to talk out loud about their reasoning process as they work. From this data, along with interaction data, the system is able to automatically predict each user's affective state. The researchers then looked more closely at every single student action or click and built a graph visualization in which each node represented an event or action taken by the student and each edge linked successive events. They color-coded each node to reflect the affective state, creating an easy-to-understand visualization of the changes in affective states over a student's ELE session. With this visualization they could easily see which sections caused students the most stress or confusion, as well as what led students there and how they got out.
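The event-to-graph construction described above could be sketched as follows. This is a minimal illustration, not the authors' implementation; the event log, action names, and affective-state labels are hypothetical.

```python
# Sketch: build an event-sequence graph for one student's ELE session.
# Nodes are logged actions, edges link successive events, and each
# node carries an affective-state label used for color coding.

def build_session_graph(events):
    """events: list of (action, affective_state) tuples in time order.
    Returns (edges, state_by_node): successive-event edges plus a
    node -> affective state map that drives the node colors."""
    edges = []
    state_by_node = {}
    for i, (action, state) in enumerate(events):
        node = f"{i}:{action}"           # one node per logged event
        state_by_node[node] = state      # state determines node color
        if i > 0:
            prev_action, _ = events[i - 1]
            edges.append((f"{i - 1}:{prev_action}", node))
    return edges, state_by_node

# Hypothetical session log
session = [
    ("open_task", "engaged"),
    ("run_model", "engaged"),
    ("view_error", "confused"),
    ("ask_help", "confused"),
    ("retry_model", "engaged"),
]
edges, states = build_session_graph(session)
```

Scanning the `states` map for runs of "confused" nodes, and the edges entering and leaving those runs, mirrors how the visualization reveals what led students into a stressful section and how they got out.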
If we at the EdLab solicited a group of volunteers to use our platforms, would tracking their affective states while they used our tools help us understand the strengths and weaknesses of each platform?
Looking only at forum data, Identifying relationships between students' questions type and their behavior set out to automate the classification of about 600 questions asked in two courses. Given a manually created classification taxonomy, their method of pulling keywords to automatically classify questions proved accurate when compared against a manual classification of the questions. They then looked at student grades and at behaviors such as attendance, the number of questions asked, and the popularity (number of up-votes) of those questions. Using K-means, they clustered students based on these behaviors and observed the distribution of question types in each cluster. Interestingly, they found that the cluster containing mostly low-performing students tended to have lower attendance and to ask fewer questions, but the questions they did ask were more popular, and they asked a high number of "how-to" questions compared to other students. High-performing students, on the other hand, asked more questions that were less popular, and mostly asked "detailed" questions. The researchers' hypothesis was that these questions were less popular because answering them required a deeper understanding of the course material.
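A keyword-based classifier of the kind described could be sketched like this. The categories and keyword lists below are illustrative stand-ins, not the paper's actual taxonomy.

```python
# Sketch of keyword-based question classification against a manually
# built taxonomy. Categories and keywords here are hypothetical.

TAXONOMY = {
    "how-to":   ["how do i", "how to", "how can"],
    "detailed": ["why does", "what is the difference", "under what"],
    "logistics": ["deadline", "due date", "exam"],
}

def classify_question(text):
    """Return the first category whose keywords appear in the
    question, falling back to 'other' when nothing matches."""
    lowered = text.lower()
    for category, keywords in TAXONOMY.items():
        if any(kw in lowered for kw in keywords):
            return category
    return "other"

print(classify_question("How do I submit assignment 2?"))
```

Comparing this rule-based labeling against a hand-labeled sample is how one would check its accuracy, as the authors did against their manual classification.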
The method of clustering students by behavior could be very useful to us at the EdLab. By observing interactions such as comments, posts, and content on our platforms, we could perform a similar analysis to better understand our users. What sorts of behaviors, on which platforms, would be useful to look at?
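Clustering users by behavior vectors, as in the study, could be sketched with a plain K-means. The features (attendance rate, questions asked, mean up-votes), the sample data, and the choice of k=2 are all illustrative assumptions, not the paper's setup.

```python
# Minimal K-means sketch for clustering students by behavior vectors.
# Features and data below are hypothetical.

import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    labels = []
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        # recompute each centroid as the mean of its assigned points
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return centroids, labels

# hypothetical (attendance, questions asked, mean up-votes) per student
students = [[0.9, 8, 1.0], [0.85, 7, 0.8], [0.4, 2, 3.0], [0.35, 3, 2.5]]
centroids, labels = kmeans(students, k=2)
```

Once students are labeled with a cluster, tallying question types per cluster (as the authors did) is a simple group-by over the labels. In practice one would scale the features first, since K-means is sensitive to units.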
Lastly, Identifying student communities in blended courses also looked at forum data from a class, but focused on who replied to whom. With this information the authors generated graphs in which each node represented a student and each edge an interaction between students on the forum. They clustered the resulting graph using Girvan-Newman clustering, color-coded each node with the grade the student received, and found significant relationships between grade performance and cluster membership. On reflection these results are interesting, since interactions on an online forum do not necessarily reflect interactions during outside group work or study sessions. They may, however, correspond to the findings of the previous study: for example, low-performing students may tend to interact more around the same types of questions ("how-to").
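The reply-graph construction that feeds the clustering step could be sketched as below. The reply log and student names are hypothetical; the community-detection step itself (Girvan-Newman) is omitted here.

```python
# Sketch: build an undirected student interaction graph from forum
# replies, one node per student and one edge per replier/poster pair.
# The reply log is hypothetical.

from collections import defaultdict

def build_reply_graph(replies):
    """replies: list of (replier, original_poster) pairs.
    Returns an undirected adjacency map: student -> set of students."""
    graph = defaultdict(set)
    for replier, poster in replies:
        if replier == poster:
            continue  # ignore students replying to themselves
        graph[replier].add(poster)
        graph[poster].add(replier)
    return graph

log = [("ana", "ben"), ("ben", "ana"), ("cho", "ana"),
       ("dee", "eli"), ("eli", "dee"), ("dee", "dee")]
graph = build_reply_graph(log)
```

From a graph like this, a library such as networkx can run the same pipeline directly (`networkx.Graph` plus `networkx.algorithms.community.girvan_newman`), which repeatedly removes the highest-betweenness edge to split the graph into communities.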
A Topic Model and Social Network Analysis of a School Blogging Platform performed a similar graph analysis of the groups and content within the EdLab platform Pressible. Could we learn more by looking at interactions on our other platforms and form hypotheses about what types of users are using our tools? Could we gain insight into the goals our different users have for using the platforms?