Portrait of Nikola Banovic

Nikola Banovic

Ph.D. Student in Human-Computer Interaction
Human-Computer Interaction Institute
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213, USA
nbanovic (at) cs.cmu.edu

Summary

I am a Ph.D. student at the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University working with Prof. Anind Dey and Prof. Jennifer Mankoff.

Before joining HCII, I received my M.Sc. and B.Sc. degrees from the University of Toronto, where I worked with Prof. Khai Truong at the Dynamic Graphics Project (DGP) lab and the Toronto Ubicomp Research Group.

My research interests lie in the fields of Human-Computer Interaction (HCI) and Ubiquitous Computing (UbiComp). My other academic interests include Cognitive Science and practical applications of Machine Learning.

More information about me can be found in my curriculum vitae.

Publications

Journal Papers (Peer reviewed)

[J.1]
Nikola Banovic, Koji Yatani, and Khai N. Truong. 2013. Escape-Keyboard: A Sight-free One-handed Text Entry Method for Mobile Touch-screen Devices. International Journal of Mobile Human Computer Interaction (IJMHCI), Volume 5, Issue 3 (July-September 2013). [Publisher] [Abstract]
Mobile text entry methods traditionally have been designed with the assumption that users can devote full visual and mental attention to the device, though this is not always possible. We present our iterative design and evaluation of Escape-Keyboard, a sight-free text entry method for mobile touch-screen devices. Escape-Keyboard allows the user to type letters with one hand by pressing the thumb on different areas of the screen and performing a flick gesture. We then examine the performance of Escape-Keyboard in a study that included 16 sessions in which participants typed in sighted and sight-free conditions. Qualitative results from this study highlight the importance of reducing the mental load associated with using Escape-Keyboard to improve user performance over time. We thus also explore features to mitigate this learnability issue. Finally, we investigate the upper bound on sight-free performance with Escape-Keyboard through a theoretical analysis of expert peak performance.
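As a rough illustration of the core interaction, the Python sketch below maps a thumb press region plus a flick direction to a letter. The region layout and letter assignments are hypothetical, not the published Escape-Keyboard layout.

    # Illustrative sketch only: a (press region, flick direction) pair picks a letter.
    # The layout below is hypothetical, not the Escape-Keyboard design.
    import math

    REGIONS = {
        0: {"up": "a", "right": "b", "down": "c", "left": "d"},
        1: {"up": "e", "right": "f", "down": "g", "left": "h"},
        2: {"up": "i", "right": "j", "down": "k", "left": "l"},
    }

    def flick_direction(dx, dy):
        """Classify a flick vector into one of four directions."""
        angle = math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward
        if 45 <= angle < 135:
            return "up"
        if 135 <= angle < 225:
            return "left"
        if 225 <= angle < 315:
            return "down"
        return "right"

    def letter_for(region, dx, dy):
        """Resolve a press region and flick gesture to a letter."""
        return REGIONS[region][flick_direction(dx, dy)]

    print(letter_for(1, dx=0.0, dy=-20.0))  # flick up in region 1 -> 'e'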

Refereed Conference Papers (These papers appeared in the conference’s main proceedings)

[C.8]
Nikola Banovic, Christina Brant, Jennifer Mankoff, and Anind Dey. 2014. ProactiveTasks: the short of mobile device use sessions. In Proceedings of the 16th international conference on Human-computer interaction with mobile devices & services (MobileHCI '14). ACM, New York, NY, USA, 243-252. [Publisher] [Abstract] Best Paper Award
Mobile devices have become powerful ultra-portable personal computers, supporting not only communication but also a variety of complex, interactive applications. Because of the unique characteristics of mobile interaction, a better understanding of the duration and context of mobile device uses could help to improve and streamline the user experience. In this paper, we first explore the anatomy of mobile device use and propose a classification of use based on duration and interaction type: glance, review, and engage. We then focus our investigation on short review interactions and identify opportunities for streamlining these mobile device uses by proactively suggesting short tasks to the user that go beyond simple application notifications. We evaluate the concept through a user evaluation of an interactive lock screen prototype, called ProactiveTasks. We use the findings from our study to create and explore the design space for proactively presenting tasks to users. Our findings underline the need for a more nuanced set of interactions that support short mobile device uses, in particular review sessions.
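A minimal Python sketch of the glance/review/engage taxonomy, classifying a use session by its duration and whether the user interacted; the duration thresholds here are illustrative assumptions, not the boundaries reported in the paper.

    # Toy classifier for the paper's session taxonomy. Thresholds are
    # illustrative assumptions, not the paper's reported boundaries.
    from dataclasses import dataclass

    @dataclass
    class Session:
        duration_s: float      # length of the device use session
        had_interaction: bool  # whether the user interacted beyond viewing

    def classify(session: Session) -> str:
        if not session.had_interaction and session.duration_s < 10:
            return "glance"    # screen checked without interaction
        if session.duration_s < 60:
            return "review"    # short, focused interaction (e.g., read a message)
        return "engage"        # longer, open-ended use

    print(classify(Session(duration_s=5, had_interaction=False)))   # glance
    print(classify(Session(duration_s=30, had_interaction=True)))   # review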
[C.7]
Christian Koehler, Nikola Banovic, Ian Oakley, Jennifer Mankoff, and Anind K. Dey. 2014. Indoor-ALPS: an adaptive indoor location prediction system. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '14). ACM, New York, NY, USA, 171-181. [Publisher] [Abstract]
Location prediction enables us to use a person's mobility history to realize various applications such as efficient temperature control, opportunistic meeting support, and automated receptionists. Indoor location prediction is a challenging problem, particularly due to a high density of possible locations and short transition distances between these locations. In this paper we present Indoor-ALPS, an Adaptive Indoor Location Prediction System that uses temporal-spatial features to create individual daily models for the prediction of when a user will leave their current location (transition time) and the next location they will transition to. We tested Indoor-ALPS on the Augsburg Indoor Location Tracking Benchmark and compared our approach to the best performing temporal-spatial mobility prediction algorithm, Prediction by Partial Match (PPM). Our results show that Indoor-ALPS improves temporal-spatial prediction accuracy over PPM by 6.2% for look-aheads of up to 90 minutes, and by 10.7% for look-aheads of up to 30 minutes. These results demonstrate that Indoor-ALPS can be used to support a wide variety of indoor mobility prediction-based applications.
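To make the prediction task concrete, here is a toy first-order next-location model in Python; it is far simpler than either Indoor-ALPS or PPM, which additionally exploit temporal features and longer location histories.

    # Toy next-location predictor: first-order transition frequencies.
    # Included only to illustrate the task; not the paper's method.
    from collections import Counter, defaultdict

    class NextLocationModel:
        def __init__(self):
            self.transitions = defaultdict(Counter)

        def train(self, location_trace):
            # Count observed transitions between consecutive locations.
            for here, nxt in zip(location_trace, location_trace[1:]):
                self.transitions[here][nxt] += 1

        def predict(self, current_location):
            # Predict the most frequently observed successor, if any.
            counts = self.transitions[current_location]
            return counts.most_common(1)[0][0] if counts else None

    model = NextLocationModel()
    model.train(["office", "kitchen", "office", "meeting", "office", "kitchen"])
    print(model.predict("office"))  # -> "kitchen"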
[C.6]
Nikola Banovic, Rachel L. Franz, Khai N. Truong, Jennifer Mankoff, and Anind K. Dey. 2013. Uncovering Information Needs for Independent Spatial Learning for Users who are Visually Impaired. In Proceedings of the 15th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '13). ACM, New York, NY, USA, Article 24, 8 pages. [Publisher] [Abstract]
Sighted individuals often develop significant knowledge about their environment through what they can visually observe. In contrast, individuals who are visually impaired mostly acquire such knowledge about their environment through information that is explicitly relayed to them. This paper examines the practices that visually impaired individuals use to learn about their environments and the associated challenges. In the first of our two studies, we uncover four types of information needed to master and navigate the environment. We detail how individuals' context impacts their ability to learn this information, and outline requirements for independent spatial learning. In a second study, we explore how individuals learn about places and activities in their environment. Our findings show that users not only learn information to satisfy their immediate needs, but also to enable future opportunities, something existing technologies do not fully support. From these findings, we discuss future research and design opportunities to assist the visually impaired in independent spatial learning.
[C.5]
Nikola Banovic, Tovi Grossman, and George Fitzmaurice. 2013. The Effect of Time-based Cost of Error in Target-directed Pointing Tasks. In Proceedings of the 2013 ACM annual conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 1373-1382. [Publisher] [Abstract]
One of the fundamental operations in today's user interfaces is pointing to targets, such as menus, buttons, and text. Making an error when selecting those targets in real-life user interfaces often results in some cost to the user. However, existing target-directed pointing models do not consider the cost of error when predicting task completion time. In this paper, we present a model based on expected value theory that predicts the impact of error cost on the user's completion time for target-directed pointing tasks. We then present a target-directed pointing user study, whose results show that time-based costs of error significantly impact the user's performance. Our results also show that users perform according to an expected completion time utility function and that optimal performance computed using our model gives a good prediction of the observed task completion times.
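A small Python sketch of the expected-value idea: a user choosing how carefully to aim trades movement time (via Fitts' law) against the probability of an error and its time penalty. The functional forms and constants are illustrative assumptions, not the paper's fitted model.

    # Expected-value view of pointing with error costs. The error-probability
    # mapping and constants are assumptions for illustration only.
    import math

    def expected_completion_time(mt, p_error, error_cost):
        # Expected time if an error forces one retry of duration mt + error_cost.
        return mt + p_error * (error_cost + mt)

    def best_speed_accuracy_tradeoff(distance, width, error_cost, a=0.1, b=0.15):
        """Search over effective target widths (a proxy for how carefully
        the user aims) for the one minimizing expected completion time."""
        best = None
        for w_eff in [width * s for s in (0.5, 0.75, 1.0, 1.5, 2.0)]:
            mt = a + b * math.log2(distance / w_eff + 1)   # Fitts' law
            # Aiming loosely (larger effective width) is faster but riskier.
            p_error = max(0.0, 1.0 - width / w_eff) * 0.5  # toy mapping
            ect = expected_completion_time(mt, p_error, error_cost)
            if best is None or ect < best[0]:
                best = (ect, w_eff)
        return best

    # With a 2 s error penalty, careful aiming at the true width wins here.
    print(best_speed_accuracy_tradeoff(distance=512, width=32, error_cost=2.0))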
[C.4]
Nikola Banovic, Tovi Grossman, Justin Matejka, and George Fitzmaurice. 2012. Waken: reverse engineering usage information and interface structure from software videos. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). ACM, New York, NY, USA, 83-92. [Publisher] [Abstract]
We present Waken, an application-independent system that recognizes UI components and activities from screen-captured videos, without any prior knowledge of that application. Waken can identify the cursors, icons, menus, and tooltips that an application contains, and when those items are used. Waken uses frame differencing to identify occurrences of behaviors that are common across graphical user interfaces. Candidate templates are built, and then other occurrences of those templates are identified using a multi-phase algorithm. An evaluation demonstrates that the system can successfully reconstruct many aspects of a UI without any prior application-dependent knowledge. To showcase the design opportunities that are introduced by having this additional metadata, we present the Waken Video Player, which allows users to directly interact with UI components that are displayed in the video.
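The sketch below shows the frame-differencing step at the heart of this kind of pipeline, in Python with NumPy: pixels that change between consecutive frames hint at UI activity, and the changed region can be cropped as a candidate template. Waken's full multi-phase matching is considerably more involved.

    # Frame differencing between consecutive grayscale frames; the changed
    # region is a candidate template. A sketch of the idea, not Waken itself.
    import numpy as np

    def changed_region(prev_frame, next_frame, threshold=25):
        """Return the bounding box (top, left, bottom, right) of pixels that
        changed between two grayscale frames, or None if nothing changed."""
        diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
        ys, xs = np.nonzero(diff > threshold)
        if ys.size == 0:
            return None
        return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

    prev = np.zeros((480, 640), dtype=np.uint8)
    nxt = prev.copy()
    nxt[100:116, 200:216] = 255  # e.g., a tooltip appears
    print(changed_region(prev, nxt))  # (100, 200, 115, 215)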
[C.3]
Nikola Banovic, Fanny Chevalier, Tovi Grossman, and George Fitzmaurice. 2012. Triggering triggers and burying barriers to customizing software. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 2717-2726. [Publisher] [Abstract]
General-purpose software applications are usually not tailored for a specific user with specific tasks, strategies, or preferences. To achieve optimal performance with such applications, users typically need to transition to an alternative, more efficient behavior. Often, features of such alternative behaviors are not initially accessible and first need to be customized. However, little research has formally studied and empirically measured what drives a user to customize. In this paper, we describe the challenges involved in empirically studying customization behaviors, and propose a methodology for formally measuring the impact of potential customization factors. We then demonstrate this methodology by studying the impact of different customization factors on customization behaviors. Our results show that increasing exposure to and awareness of customization features, and adding social influence, can significantly affect the user's customization behavior.
[C.2]
Koji Yatani, Nikola Banovic, and Khai Truong. 2012. SpaceSense: representing geographical information to visually impaired people using spatial tactile feedback. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 415-424. [Publisher] [Abstract]
Learning an environment can be challenging for people with visual impairments. Braille maps allow their users to understand the spatial relationships between a set of places. However, physical Braille maps are often costly, may not always cover an area of interest with sufficient detail, and might not present up-to-date information. We built a handheld system for representing geographical information called SpaceSense, which includes custom spatial tactile feedback hardware: multiple vibration motors attached to different locations on a mobile touch-screen device. It offers high-level information about the distance and direction towards a destination and bookmarked places through vibrotactile feedback to help the user maintain the spatial relationships between these points. SpaceSense also adapts a summarization technique for online user reviews of public and commercial venues. Our user study shows that participants could build and maintain the spatial relationships between places on a map more accurately with SpaceSense compared to a system without spatial tactile feedback. They pointed specifically to having spatial tactile feedback as the contributing factor in successfully building and maintaining their mental map.
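As a sketch of how spatial tactile feedback can encode direction, the Python snippet below picks which of several vibration motors to pulse from the bearing toward a destination; the four-motor layout is an assumption for illustration, not SpaceSense's actual hardware.

    # Map the bearing toward a destination to one of several vibration
    # motors. The four-motor layout is a hypothetical illustration.
    import math

    MOTORS = ["top", "right", "bottom", "left"]  # assumed placement

    def motor_for_bearing(user_lat, user_lon, dest_lat, dest_lon):
        """Pick the motor whose position best matches the destination bearing."""
        # Equirectangular approximation; adequate at neighborhood scale.
        dx = (dest_lon - user_lon) * math.cos(math.radians(user_lat))
        dy = dest_lat - user_lat
        bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 = north
        index = round(bearing / 90) % 4  # snap to the nearest of 4 motors
        return MOTORS[index]

    # Destination slightly north-east of the user -> pulse the top motor.
    print(motor_for_bearing(43.6629, -79.3957, 43.6677, -79.3948))  # "top"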
[C.1]
Nikola Banovic, Frank Chun Yat Li, David Dearman, Koji Yatani, and Khai N. Truong. 2011. Design of unimanual multi-finger pie menu interaction. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS '11). ACM, New York, NY, USA, 120-129. [Publisher] [Abstract]
Context menus, most commonly the right-click menu, are a traditional method of interaction when using a keyboard and mouse. Context menus make a subset of an application's commands quickly available to the user. However, on tabletop touchscreen computers, context menus have all but disappeared. In this paper, we investigate how to design context menus for efficient unimanual multi-touch use. We investigate the limitations of the arm, wrist, and fingers and how they relate to human performance in multi-target selection tasks on multi-touch surfaces. We show that selecting targets with multiple fingers simultaneously improves the performance of target selection compared to traditional single-finger selection, but also increases errors. Informed by these results, we present our own context menu design for horizontal tabletop surfaces.
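For context, a minimal Python sketch of pie-menu hit-testing, where the angle from the menu center to a touch point selects a slice; the eight-slice layout is an assumption, and the paper's multi-finger design goes well beyond this.

    # Pie-menu hit-testing: the touch angle selects a slice. Eight slices
    # assumed for illustration; not the paper's menu design.
    import math

    def pie_slice(cx, cy, tx, ty, n_slices=8):
        """Return the index of the pie slice under touch point (tx, ty), with
        slice 0 centered straight up from the menu center (cx, cy)."""
        angle = math.degrees(math.atan2(tx - cx, cy - ty)) % 360  # 0 = up, clockwise
        return int(((angle + 180 / n_slices) % 360) // (360 / n_slices))

    print(pie_slice(0, 0, 10, -10))  # 45 degrees -> slice 1 of 8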