
Apr 14 2014 - 11:16pm
What Can We Do with Google Glass?
Tomorrow at 9am, the purchase window for Google Glass will reopen. Google Glass is a new type of information access terminal, distinct from computers, smartphones, and tablets. Given its portability, it may well become the most popular terminal in the long run, and its unique software and control system means applications that run on it can offer a great deal of new functionality. I was excited to hear from Hui Soo that EdLab has already bought a pair of Google Glass. It would be great if EdLabbers could think of new products or features that make use of it.

The first thing I thought about is how we could bring Google Glass into our existing products to extend their services and functionality, and there are actually many such cases. For example, we could make Vialogues support Glass-based video playback, real-time video streaming, and even voice-based commenting. The swipe pad on the side could be used to fast-forward and rewind a video, the first-person camera could record real-time video, and the microphone could capture spoken comments. A Vialogues application for Google Glass would convert the voice data into text and post it to our server (a rough sketch of that piece appears at the end of this post).

Google Glass could also serve as a new user interface for mSchool, which could greatly increase mSchool's accessibility and usability. Here is how Google Glass displays content: everything shown on Glass is organized around a "timeline". At different points on the timeline sit card bundles, some for the future, some for the past, and some for the present. Each card stands for a unit of content and can be one of four types: text, image, video, or HTML. This maps well onto the structure of mSchool. A course can be decomposed into a number of bundles, each representing a class, and each class contains a number of elements, each represented by a card (see the second sketch below). An mSchool learner wearing Glass could swipe on the side to choose the class they want to take, and the eye-gesture detector would let them carry out all kinds of operations on the class content, such as flipping between pages, highlighting passages, or taking a quiz. The camera and microphone could even be used for taking photo or voice notes.

There are many more ways Google Glass could extend our existing products and services, but it could also enhance the user experience of another EdLab project that is still under design and has attracted a lot of our attention: the Learning Theater on the fourth floor. An immediate thought about using Glass in the Theater is to replace fixed displays, and that is true; since Glass takes up far less space, it could let us fit a larger audience into the Learning Theater. But remember that Glass is not wired to the wall, and people wearing it can walk around the Theater. That opens up a very interesting way to interact with the audience. Glass can display essentially whatever we want, so we could load a hierarchical guide onto it and trigger different items as the wearer moves into a particular section of the Theater (a rough sketch of this idea closes the post).

This is just one example. Do you have better ideas for using Google Glass at EdLab? Please share them with us.
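To make the Vialogues idea a bit more concrete, here is a minimal server-side sketch. It assumes (this is my assumption, not something we have built) that the Glassware relies on the Mirror API's REPLY menu action, so Glass itself handles the speech-to-text and delivers the transcription as a new timeline item; the Vialogues comment endpoint shown is hypothetical.

```python
# Sketch: turn a Glass voice reply into a time-coded Vialogues comment.
# Assumptions: Mirror API REPLY flow provides the transcription; the
# Vialogues API URL below is hypothetical, for illustration only.

import requests

MIRROR_TIMELINE = "https://www.googleapis.com/mirror/v1/timeline"
VIALOGUES_COMMENTS = "https://vialogues.com/api/comments"  # hypothetical endpoint

def handle_reply_notification(notification, access_token, vialogue_id, video_time):
    """Called when the Mirror API notifies us that a user replied to a video card."""
    item_id = notification["itemId"]

    # Fetch the timeline item Glass created; its "text" field holds the
    # speech-to-text transcription of the spoken comment.
    item = requests.get(
        f"{MIRROR_TIMELINE}/{item_id}",
        headers={"Authorization": f"Bearer {access_token}"},
    ).json()
    transcript = item.get("text", "")

    # Post the transcription to Vialogues as a comment at the current video time.
    requests.post(
        VIALOGUES_COMMENTS,
        json={"vialogue": vialogue_id, "time": video_time, "body": transcript},
    )
```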
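The second sketch shows how one mSchool class might be pushed to Glass as a timeline bundle, using the Mirror API's bundleId and isBundleCover fields to group cards. The class content and function names are made up for illustration, and authentication is reduced to a bare OAuth bearer token.

```python
# Sketch: one mSchool class as a Glass timeline bundle (cover card + element cards).
# The class title, elements, and helper name are invented for illustration.

import requests

MIRROR_TIMELINE = "https://www.googleapis.com/mirror/v1/timeline"

def push_class_bundle(access_token, class_title, elements):
    """Push one class as a bundle: a cover card plus one card per element."""
    headers = {"Authorization": f"Bearer {access_token}"}
    bundle_id = f"mschool-{class_title.lower().replace(' ', '-')}"

    # Cover card: what the learner sees on the main timeline before tapping in.
    requests.post(MIRROR_TIMELINE, headers=headers, json={
        "bundleId": bundle_id,
        "isBundleCover": True,
        "text": class_title,
    })

    # One card per class element; HTML cards can carry formatted text or images.
    for element in elements:
        requests.post(MIRROR_TIMELINE, headers=headers, json={
            "bundleId": bundle_id,
            "html": f"<article><h1>{class_title}</h1><p>{element}</p></article>",
            "menuItems": [{"action": "READ_ALOUD"}, {"action": "TOGGLE_PINNED"}],
        })

# Example:
# push_class_bundle(token, "Fractions 101",
#                   ["What is a fraction?", "Comparing fractions", "Quick quiz"])
```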
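Finally, a very rough sketch of the location-triggered guide for the Learning Theater. It assumes some external indoor-positioning source (beacons, Wi-Fi, or a check-in kiosk, none of which exist yet) tells our server which section a visitor is standing in; Glass is only the display, again via Mirror API cards. All section names and guide text are invented.

```python
# Sketch: push the guide card for whichever Theater section a visitor walks into.
# The positioning source, section names, and guide text are all hypothetical.

import requests

MIRROR_TIMELINE = "https://www.googleapis.com/mirror/v1/timeline"

# Hierarchical guide: each section of the Theater maps to a card to display.
THEATER_GUIDE = {
    "stage": "Welcome to the stage area. Tonight: student design showcase.",
    "gallery": "Gallery wall: prototypes from this semester's EdLab studios.",
    "lounge": "Lounge: grab a seat; the next talk starts in 10 minutes.",
}

def on_visitor_enters_section(access_token, section):
    """Push the guide card for the section the visitor just entered."""
    text = THEATER_GUIDE.get(section)
    if text is None:
        return
    requests.post(
        MIRROR_TIMELINE,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"text": text, "notification": {"level": "DEFAULT"}},
    )
```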
Posted in: Project Idea, Design, Event | By: Yudan Li