Chris Harrison of CMU Visits DGP

Prof. Chris Harrison of CMU visited the DGP to talk about his work in mobile sensing.

Dr. Harrison's Lecture at DGP

Chris is an Assistant Professor of Human-Computer Interaction at Carnegie Mellon University. He broadly investigates novel sensing technologies and interaction techniques, especially those that empower people to interact with small devices in big ways. Harrison has been named a top 30 scientist under 30 by Forbes, a top 35 innovator under 35 by MIT Technology Review, a Young Scientist by the World Economic Forum, and one of six innovators to watch by Smithsonian. He has been awarded fellowships by Google, Qualcomm, Microsoft Research, and the Packard Foundation. He is also the CTO of Qeexo, a touchscreen technology startup. When not in the lab, Chris can be found welding sculptures, visiting remote corners of the globe, and restoring his old house.


Interacting with Small Devices in Big Ways.


Eight years ago, multi-touch devices went mainstream and changed our field, the industry, and our lives. In that time, mobile devices have gotten much more capable, yet the core user experience has evolved little. Contemporary touch gestures rely on poking screens with different numbers of fingers: one-finger tap, two-finger pinch, three-finger swipe and so on. We often label these as “natural” interactions, yet the only place I perform these “gestures” is on my touchscreen device. We are also too quick to blame the “fat finger” problem for much of our touch interface woes – if a zipper or pen were too small to use, we would simply call that “bad design”. Fortunately, our fingers and hands are amazing, and with good technology and design, we can elevate touch interaction to new heights. I believe the era of multi-touch is coming to a close, and that we are on the eve of an exciting new age of “rich-touch” devices and experiences.

CHI Paper Honorable Mention: Jonathan Deber, Ricardo Jota, Clifton Forlines, and Daniel Wigdor

How Much Faster is Fast Enough? User Perception of Latency & Latency Improvements in Direct and Indirect Touch

Dr. Ricardo Jota Awarded Mitacs Postdoctoral Award

Ricardo Jota was awarded the Postdoctoral Award for Outstanding Innovation for his research with Tactual Labs.

Congratulations to Jota for the well-deserved recognition!

Read more about the Mitacs Awards Reception and Jota’s work in this press release.

Photo Courtesy of Mitacs. Pictured Above: Ted Mao (Trojan Technologies), Rafael Falcon (University of Ottawa), Linda Gowman (Trojan Technologies), Daniela Tuchel (Royal Roads University), Minister Chris Alexander (Minister of Citizenship and Immigration), Minister Ed Holder (Minister of State, Science and Technology), Stephen Dugdale (Université INRS), Minister Kerry-Lynne Findlay (Minister of National Revenue), Dr. Rob Annan (interim Chief Executive Officer, Mitacs), Professor Alan Fung (Ryerson University) and Ricardo Jota (University of Toronto).

Dr. Ali Mazalek Presents a Talk at DGP

Dr. Ali Mazalek, Associate Professor at Ryerson University and Georgia Tech, will be visiting our lab on Thursday, December 4th. She will present a talk from 11:00 am until 12:30 pm.

Welcome to DGP, Dr. Mazalek!

“Mind, material, and movement: embodying creativity in the digital era”


We are increasingly tethered to a range of pixelated boxes of varying shapes and sizes. These devices are ever present in our lives, transporting us daily into vast information and computational realms. And while our interactions with digital devices are arguably becoming more fluid and “natural”, they still make only limited use of our motor system and largely isolate us from our immediate physical surroundings. Yet a gradual shift in the cognitive sciences toward embodied paradigms of human cognition can inspire us to think about why and how computational media should engage our bodies and minds together. What is the role of physical movements and materials in the way we engage with and construct knowledge in the world? This talk will provide some perspectives on this question, highlighting research from the Synaesthetic Media Lab that supports creativity and expression across the physical and digital worlds.

Dr. Ali Mazalek has spent over 15 years trying to get digital technologies to fit better into her physical world and life, rather than letting them drag her into the pixelated depths of her computer screens. At the same time, she has a deep interest in how computational media can support and enhance creative practices and processes, supporting new forms of expression and new ways of thinking and learning. She is a Canada Research Chair in Digital Media and Innovation and Associate Professor in the RTA School of Media at Ryerson University, as well as Associate Professor of Digital Media at Georgia Tech. Her Synaesthetic Media Lab is a playground where physical materials, analog sensors, and digital media happily co-exist and come together in novel ways to support creativity and expression across both science and art disciplines.

Professor Michael Terry Speaking at DGP

Professor Michael Terry will be presenting a talk on Thursday, November 20th from 11:00 am until 12:30 pm in room BA5187.

Please join us in welcoming Professor Terry!

“Interactive Systems Need to Know How to Read the Web and Watch YouTube”

In this talk, Professor Michael Terry will argue that there is great value in interactive systems that can learn how to accomplish tasks by “reading” web-based tutorials and “watching” how-to videos. He will focus primarily on text-based documents and search queries, and show how techniques from the fields of machine learning and information retrieval can be leveraged to extract streams of “how-to” information from web-based resources and instrumentation logs. These information sources enable a new class of interactive system that is more aware of the tasks it can perform, as well as how to accomplish these tasks. Importantly, this awareness continually evolves and tracks how the user community actually uses the system.

Michael Terry is an associate professor in the Cheriton School of Computer Science at the University of Waterloo, where he co-directs the HCI Lab. His research lies at the intersection of HCI, machine learning, and information retrieval. His current projects include machine understanding of instructional materials, task-centric user interfaces, and interactive machine learning systems designed to assist the digitization and cataloging of millions of biological specimens in London’s Natural History Museum.

Karen Myers of SRI International

As part of the Distinguished Lecture Series, Dr. Karen Myers is presenting a talk at the Bahen Centre this Tuesday, November 18th. The lecture is hosted in BA1170 at 11:00 am.

Learning from Demonstration Technology: A Tale of Two Applications

Learning from demonstration technology has seen increased focus in recent years as a means to endow computers with capabilities that might otherwise be difficult or time-consuming for a user to program. This talk describes two efforts that employ learning from demonstration technology to quite distinct ends. The first is to provide a capability that supports users with no programming experience in the creation of procedures that automate repetitive or time-consuming tasks. This capability has been operationally deployed within a collaborative planning environment that is used widely by the U.S. Army. The second is to support automated performance evaluation of students as they seek to acquire complex procedural skills through training in virtual environments. In this second case, instructional content developers employ learning from demonstration technology to create solution models for training exercises. An automated assessment capability employs soft graph matching to align a trace of a student’s response to an exercise with the solution models for that exercise, providing a flexible basis for evaluating student performance. In contrast to intelligent tutoring systems that force students to follow a pre-specified solution trajectory, our approach enables meaningful feedback in domains where solutions can have significant variability.
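As a toy illustration of the assessment idea (not SRI's actual system, which uses soft graph matching over solution models), the sketch below scores a student's recorded action trace against a solution sequence with a plain edit-distance alignment standing in for the matcher; all action names are invented:

```python
# Toy stand-in for soft matching: align a student's action trace against a
# solution model using edit distance, then report a similarity score in [0, 1].
# Action names and sequences are invented for illustration.

def edit_distance(a, b):
    """Classic Levenshtein distance between two action sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # student skipped a step
                          d[i][j - 1] + 1,         # student added a step
                          d[i - 1][j - 1] + cost)  # match or substitution
    return d[m][n]

def similarity(trace, model):
    """1.0 means a perfect match; lower values mean more deviation."""
    return 1.0 - edit_distance(trace, model) / max(len(trace), len(model))

solution = ["check_radio", "report_position", "apply_tourniquet", "call_medevac"]
student  = ["check_radio", "apply_tourniquet", "report_position", "call_medevac"]

print(similarity(student, solution))  # prints 0.5 (two steps out of order)
```

Unlike a rigid step-by-step comparison, an alignment like this tolerates reordering and extra steps, which is the flexibility the soft-matching approach is after.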

Karen Myers is a Principal Scientist within the Artificial Intelligence Center at SRI International, where she leads a team focused on developing intelligent systems that facilitate man-machine collaboration. Myers has led the development of several AI technologies that have been successfully transitioned into operational use in areas that span collaborative systems, task management, and learning from demonstration. Her research interests include autonomy, multi-agent systems, automated planning, personalization, and mixed-initiative problem solving.

Dr. Beverly Harrison of Yahoo! Labs

Dr. Beverly Harrison is presenting a talk on Thursday, October 30 at 11:00 am in room BA5187.

Please welcome Dr. Harrison back to DGP!

“Yahoo Labs – Mobile Research Group”

In this talk, Dr. Beverly Harrison will highlight strategic research areas and directions for Yahoo Labs overall, and then describe key areas the Mobile Research team is actively working on (and hiring for!). Several recent research projects will be presented, including a study of teens’ use of smartphones and mobile apps, a study about people’s understanding of what “personalized ads” means, a social TV prototype app, and some highlights of wearables and hardware prototyping efforts.

Dr. Beverly Harrison is currently the Senior Director of Mobile Research at Yahoo Labs. For the last 20 years, her expertise and passion have been creating, building, and evaluating innovative mobile user interface technologies and inferring user behaviour patterns from various types of sensor data. She has previously worked at Xerox PARC, IBM Research, Intel Research, and Amazon/Lab126, as well as doing startups. Beverly has 80+ publications, holds over 50 patents, and has held three affiliate faculty positions in CSE, the iSchool, and Design at the University of Washington. She has a B.Math (Waterloo) and an M.Sc. and PhD in Human Factors Engineering (Toronto), where she was also an active member of the dgp Lab.

Professor Roel Vertegaal at DGP

Professor Roel Vertegaal is presenting a talk this Friday, October 24th at 11:00 am in the Bahen Centre, Room BA1210.

Designing everyday computational things

In his book The Psychology of Everyday Things, Donald Norman outlined a world of things around us that are poorly designed because their designers did not apply psychology to the design process. The idea that psychologists can answer questions about design, through a user-centered design process, is a thesis that has guided our field for several decades. However, if we examine what the world’s top industrial designers, such as Yves Béhar, Jonathan Ive, Karim Rashid, and Philippe Starck, actually do, it becomes clear that they work quite differently. To them, thinking about function is like thinking intuitively about three-dimensional shapes. Interaction design is at the dawn of a new age: Flexible Organic Light Emitting Diodes (FOLEDs) and Flexible Electrophoretic Ink (E Ink) present a third revolution in display technologies that will greatly alter the way computer interfaces are designed. Instead of being constrained to flat surfaces, we will have the ability to shrink-wrap displays around any three-dimensional object, and thus, potentially, every everyday thing. You will order your morning coffee through a display on the skin of your beverage container and your newspaper will be displayed on a flexible paper computer that can be folded into your pocket. Each “thing” will ease mental load by serving only one – physical – function. As opposed to most software, computational things live in real reality. This means they will have to be designed by industrial designers that can intuit how physical shape and materiality trigger deeply haptic, emotive and immersive connections between real-world objects of use and our bodies, souls and minds.

Roel Vertegaal is a Dutch-Canadian interaction designer, scientist, musician and entrepreneur working in the area of Human-Computer Interaction. He is the director of the Human Media Lab and Professor at Queen’s University’s School of Computing. He is best known for his pioneering work on flexible and paper computers, with systems such as PaperWindows (2004), PaperPhone (2010) and PaperTab (2012). He is known for inventing ubiquitous eye input, such as Samsung’s Smart Pause and Smart Scroll technologies. He is also a co-founder of Mark One, and co-inventor of Vessyl, the smart beverage container.

Dr. David Flatla Presents a Talk

Dr. David Flatla is presenting a talk at DGP on October 23rd, 2014 at 11:30 am. The talk is being hosted in the DGP Seminar room, BA5187.

Title: Colour Identification through Sensory and Sub-Sensory Substitution

Abstract: Colour vision is one of those fundamental elements of day-to-day life; it helps us coordinate our clothing, prepare food, read charts, decorate our homes, keep safe, and enjoy nature and the arts. However, people with impaired colour vision (ICV) often cannot discriminate between colours that everyone else can, making these day-to-day activities difficult. In an attempt to help people with ICV, recolouring (or Daltonization) tools have been developed that remap problem colours to more distinguishable ones for people with ICV, thereby enhancing colour differentiability.

However, in spite of almost 20 years of recolouring research, empirical results showing that recolouring actually helps people with ICV are very rare. One potential reason for this is that recolouring often destroys the subtle colour cues that people with ICV rely on. A second (and indirect) reason is that recolouring is a captivating challenge for computing – the problem (dimensionality reduction) is accessible, solutions are easy to build but optimality is elusive, and the algorithms have a number of challenging user-satisfaction constraints (e.g., speed, temporal invariance).
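To make the recolouring idea concrete, here is a minimal Daltonization-style sketch (a simplified illustration of the general approach, not any published algorithm): for a red-green deficiency, the red-green contrast that would be lost is pushed into the blue channel, where it remains distinguishable.

```python
# Minimal Daltonization-style sketch for a red-green deficiency: transfer part
# of the red-green contrast into the blue channel so remapped colours stay
# distinguishable. A simplified illustration, not a published algorithm.

def clamp(x):
    """Keep a channel value in the valid 0-255 range."""
    return max(0, min(255, int(round(x))))

def remap_for_red_green(rgb, strength=0.7):
    """Push red-green contrast into the blue channel.

    rgb: (r, g, b) tuple with 0-255 components.
    strength: fraction of the red-green difference to transfer.
    """
    r, g, b = rgb
    rg_contrast = r - g  # information a red-green deficient viewer loses
    return (r, g, clamp(b + strength * rg_contrast))

red   = (200, 40, 40)
green = (40, 200, 40)

print(remap_for_red_green(red))    # reddish colour gains blue -> (200, 40, 152)
print(remap_for_red_green(green))  # greenish colour loses blue -> (40, 200, 0)
```

Even this toy version shows the trade-off discussed above: the two colours become more separable, but the original subtle colour relationships are altered in the process.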

“In this talk, I will present my recent work on the next generation of tools for helping people with ICV that preserve the subtle colour cues relied on by people with ICV, and (hopefully) represent a new captivating computing challenge. These tools look to address the fundamental problem of ICV – reduced colour perception – by enabling users to correctly identify colours in their environment by mapping colour information to other aspects of vision (sub-sensory substitution) or to hearing (sensory substitution). I will describe four prototype tools, present early user study results, and discuss future directions for this work.”

– Dr. David Flatla
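The sensory-substitution direction can be sketched very simply: map each colour's hue to an audible pitch, so colours that are hard to tell apart visually become easy to tell apart by ear. The one-octave mapping below is an illustrative choice of my own, not one of Dr. Flatla's prototypes.

```python
# Sketch of sensory substitution: map a colour's hue onto an audible pitch, so
# colours that look alike to a viewer with ICV still sound different.
# The one-octave frequency mapping is an illustrative choice.
import colorsys

def hue_to_frequency(rgb, base_hz=220.0):
    """Map hue (0-1 around the colour wheel) onto one octave above base_hz."""
    r, g, b = (c / 255.0 for c in rgb)
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return base_hz * 2 ** hue  # 220 Hz at hue 0, rising toward 440 Hz

swatches = [("red", (255, 0, 0)),     # hue 0.0   -> 220 Hz
            ("green", (0, 255, 0)),   # hue ~0.33 -> ~277 Hz
            ("blue", (0, 0, 255))]    # hue ~0.67 -> ~349 Hz

for name, rgb in swatches:
    print(f"{name}: {hue_to_frequency(rgb):.0f} Hz")
```

A sub-sensory variant would instead map the same hue value onto another aspect of vision, such as a pattern or texture overlay, rather than onto sound.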

Dr. David Flatla is a Lecturer and Dundee Fellow in the School of Computing at the University of Dundee. He received his PhD in Computer Science from the University of Saskatchewan (Canada), where he was supervised by Carl Gutwin. David’s research explores the intersection of human colour perception and digital interfaces, where he models the unique abilities of individual users and adapts interfaces accordingly. His current research focusses on re-evaluating previous assistive technologies designed to support people with impaired colour vision to identify gaps where more effective assistance can be provided. David’s previous work has received Best Paper awards from CHI and ASSETS, and he received a Canadian Governor General’s Gold Medal for his PhD work.