
Jul 02 2013 - 02:16pm
Active Reading and Vialogues
A couple of weeks ago at the EdLab Seminar, Craig Tashman gave a nice talk about LiquidText, an iPad application that supports people's reading experience. The idea originated from two papers (click here) published at CHI, a leading conference in the field of human-computer interaction. Both papers discuss the concept of active reading, which involves annotating, note taking, commenting, and highlighting, among other activities, while people read. This is not a new direction in academia: researchers have been discussing how to support active reading on computer-based devices and in online environments since the 1990s. Example works include Morris et al., Schilit et al., and Wilcox et al. However, most of that research only supports active reading for books. With the growing popularity of video, audio, and other multimedia materials, supporting active reading for video and audio remains a big challenge. Vialogues provides such support by letting people comment on, discuss, and annotate videos (perhaps "active watching" rather than active reading), but some challenges remain.

Most current research on supporting active reading is at the individual level; supporting collaborative active reading is still an open problem. Two recent papers from Pearson et al., Investigating collaborative annotation on slate PCs and Co-reading: investigating collaborative group reading, may provide some general principles for designing such a system. In these papers, mutual navigation is one important function, as it helps users become aware of what others are doing. For collaborative active reading, Vialogues provides a discussion function, which works nicely for both synchronous and asynchronous learning, but support for mutual navigation might be missing. If you look into the details of the annotations for each video, you may find some repeated annotations.
Vialogues supports not only synchronous learning but also aims at asynchronous learning. The challenge is how to make use of the existing annotations and comments. The time-coded statistics bar is one excellent feature, but what else can we use? The annotations from any user might benefit others, so how can we extract the most useful threads of discussion and highlights for users? Although the video is still not uploaded, I still put a link here.