Publications

Google Scholar

Route Tapestries: Navigating 360° Virtual Tour Videos Using Slit-Scan Visualizations

Jiannan Li, Jiahe Lyu, Maurício Sousa, Ravin Balakrishnan, Anthony Tang, and Tovi Grossman
In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (UIST '21), 2021

abstract |  paper  |  30s preview  |  video

An increasingly popular way of experiencing remote places is by viewing 360° virtual tour videos, which show the surrounding view while traveling through an environment. However, finding particular locations in these videos can be difficult because current interfaces rely on distorted frame previews for navigation. To alleviate this usability issue, we propose Route Tapestries, continuous orthographic-perspective projection of scenes along camera routes. We first introduce an algorithm for automatically constructing Route Tapestries from a 360° video, inspired by the slit-scan photography technique. We then present a desktop video player interface using a Route Tapestry timeline for navigation. An online evaluation using a target-seeking task showed that Route Tapestries allowed users to locate targets 22% faster than with YouTube-style equirectangular previews and reduced the failure rate by 75% compared to a more conventional row-of-thumbnail strip preview. Our results highlight the value of reducing visual distortion and providing continuous visual contexts in previews for navigating 360° virtual tour videos.
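
The slit-scan construction can be illustrated with a minimal sketch (an approximation for intuition only, not the paper's algorithm; the frame format, the fixed yaw_fraction, and slit_width are assumptions): each incoming 360° frame contributes one narrow vertical slit sampled at a fixed viewing direction, and the slits are concatenated along the route to form the tapestry.

# Minimal slit-scan sketch (illustrative, not the Route Tapestries implementation):
# stack one pixel column per equirectangular frame, sampled at a fixed yaw.
import numpy as np

def route_tapestry(frames, yaw_fraction=0.25, slit_width=1):
    """frames: iterable of equirectangular images as H x W x 3 numpy arrays.
    yaw_fraction: horizontal slit position in [0, 1); 0.25 assumes a view
    roughly perpendicular to the direction of travel (an assumption)."""
    slits = []
    for frame in frames:
        w = frame.shape[1]
        x = int(yaw_fraction * w)
        slits.append(frame[:, x:x + slit_width])  # vertical slit from this frame
    return np.concatenate(slits, axis=1)          # slits laid out along the route/time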

HoloBoard: a Large-format Immersive Teaching Board based on Pseudo HoloGraphics

Jiangtao Gong, Teng Han, Siling Guo, Jiannan Li, Siyu Zha, Liuxin Zhang, Feng Tian, Qianying Wang, and Yong Rui
In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (UIST '21), 2021

abstract

In this paper, we present HoloBoard, an interactive large-format pseudo-holographic display system for lecture-based classes. Building on its unique properties of immersive visual display and a transparent screen, we designed and implemented a rich set of novel interaction techniques, such as immersive presentation, role-play, and lecturing behind the scene, that are potentially valuable for lecturing in class. We conducted a controlled experimental study comparing a HoloBoard class with a normal class by measuring students’ learning outcomes and three dimensions of engagement (i.e., behavioral, emotional, and cognitive engagement). We used pre-/post-knowledge tests and multimodal learning analytics to measure students’ learning outcomes and learning experiences. Results indicated that the lecture-based class using HoloBoard led to slightly better learning outcomes and a significantly higher level of student engagement. Given these results, we discuss the impact of HoloBoard as an immersive medium in the classroom setting and suggest several design implications for deploying HoloBoard in immersive teaching practices.

Thinking Outside the Lab: VR Size & Depth Perception in the Wild

Rahul Arora, Jiannan Li, Gongyi Shi, and Karan Singh
In submission to ACM Transactions on Applied Perception (TAP), 2021

abstract |  arXiv  |  video  |  supplemental

Size and distance perception in Virtual Reality (VR) have been widely studied, albeit in a controlled laboratory setting with a small number of participants. We describe a fully remote perceptual study with a gamified protocol to encourage participant engagement, which allowed us to quickly collect high-quality data from a large, diverse participant pool (N=60). Our study aims to understand medium-field size and egocentric distance perception in real-world usage of consumer VR devices. We utilized two perceptual matching tasks -- distance bisection and size matching -- at the same target distances of 1--9 metres. While the bisection protocol indicated a near-universal trend of nonlinear distance compression, the size matching estimates were more equivocal. Varying eye-height from the floor plane showed no significant effect on the judgements. We also discuss the pros and cons of a fully remote perceptual study in VR, the impact of hardware variation, and measures needed to ensure high-quality data.

More Kawaii than a Real-Person Live Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers

Zhicong Lu, Chenxinran Shen, Jiannan Li, Hong Shen, and Daniel Wigdor
In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21), 2021

abstract |  bibtex |  doi  |  paper

Live streaming has become increasingly popular, with most streamers presenting their real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars that are voiced by humans, are emerging as live streamers and attracting a growing viewership in East Asia. Although prior research has found that many viewers seek real-life interpersonal interactions with real-person streamers, it is currently unknown what makes VTuber live streams engaging or how they are perceived differently than real-person streamers. We conducted an interview study to understand how viewers engage with VTubers and perceive the identities of the voice actors behind the avatars (i.e., Nakanohito). The data revealed that virtual avatars bring unique performative opportunities which result in different viewer expectations and interpretations of VTuber behavior. Viewers intentionally upheld the disembodiment of VTuber avatars from their voice actors. We uncover the nuances in viewer perceptions and attitudes and further discuss the implications of VTuber practices to the understanding of live streaming in general.

@inproceedings{10.1145/3411764.3445660,
author = {Lu, Zhicong and Shen, Chenxinran and Li, Jiannan and Shen, Hong and Wigdor, Daniel},
title = {More Kawaii than a Real-Person Live Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers},
year = {2021},
isbn = {9781450380966},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411764.3445660},
abstract = { Live streaming has become increasingly popular, with most streamers presenting their
real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars
that are voiced by humans, are emerging as live streamers and attracting a growing
viewership in East Asia. Although prior research has found that many viewers seek
real-life interpersonal interactions with real-person streamers, it is currently unknown
what makes VTuber live streams engaging or how they are perceived differently than
real-person streamers. We conducted an interview study to understand how viewers engage
with VTubers and perceive the identities of the voice actors behind the avatars (i.e.,
Nakanohito). The data revealed that Virtual avatars bring unique performative opportunities
which result in different viewer expectations and interpretations of VTuber behavior.
Viewers intentionally upheld the disembodiment of VTuber avatars from their voice
actors. We uncover the nuances in viewer perceptions and attitudes and further discuss
the implications of VTuber practices to the understanding of live streaming in general.
},
booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
articleno = {137},
numpages = {14}
}

StarHopper: A Touch Interface for Remote Object-Centric Drone Navigation

Jiannan Li, Ravin Balakrishnan, and Tovi Grossman
In Proceedings of Graphics Interface 2020 (GI '20), 2020

abstract |  bibtex |  doi  |  paper  |  video  |  conference presentation

Camera drones, a rapidly emerging technology, offer people the ability to remotely inspect an environment with a high degree of mobility and agility. However, manual remote piloting of a drone is prone to errors. In contrast, autopilot systems can require a significant degree of environmental knowledge and are not necessarily designed to support flexible visual inspections. Inspired by camera manipulation techniques in interactive graphics, we designed StarHopper, a novel touch screen interface for efficient object-centric camera drone navigation, in which a user directly specifies the navigation of a drone camera relative to a specified object of interest. The system relies on minimal environmental information and combines both manual and automated control mechanisms to give users the freedom to remotely explore an environment with efficiency and accuracy. A lab study shows that StarHopper offers an efficiency gain of 35.4% over manual piloting, complemented by an overall user preference towards our object-centric navigation system.
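
The object-centric control idea can be sketched as follows (a hypothetical illustration, not StarHopper's code; the orbit parameterization and function names are assumptions): touch input adjusts an azimuth, elevation, and radius around the selected object, and the drone is sent to the resulting position while yawing to face the object.

import math

# Hypothetical sketch of object-centric navigation: compute a drone goal pose
# on an orbit around an object of interest, always looking at the object.
def orbit_goal(obj_pos, azimuth, elevation, radius):
    """obj_pos: (x, y, z) of the object; angles in radians; returns (x, y, z, yaw)."""
    ox, oy, oz = obj_pos
    x = ox + radius * math.cos(elevation) * math.cos(azimuth)
    y = oy + radius * math.cos(elevation) * math.sin(azimuth)
    z = oz + radius * math.sin(elevation)
    yaw = math.atan2(oy - y, ox - x)  # heading that points the camera at the object
    return x, y, z, yaw

# Example: a horizontal drag could advance the azimuth to orbit around the object.
goal = orbit_goal((2.0, 1.0, 0.5), math.radians(30), math.radians(20), 3.0)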

@inproceedings{Li:2020:10.20380/GI2020.32,
 author = {Li, Jiannan and Balakrishnan, Ravin and Grossman, Tovi},
 title = {StarHopper: A Touch Interface for Remote Object-Centric Drone Navigation},
 booktitle = {Proceedings of Graphics Interface 2020},
 series = {GI 2020},
 year = {2020},
 isbn = {978-0-9947868-5-2},
 location = {University of Toronto},
 pages = {317 -- 326},
 numpages = {10},
 doi = {10.20380/GI2020.32},
 publisher = {Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine},
}

PinchList: Leveraging Pinch Gestures for Hierarchical List Navigation on Smartphones

Teng Han, Jie Liu, Khalad Hasan, Mingming Fan, Junhyeok Kim, Jiannan Li, Xiangmin Fan, Feng Tian, Edward Lank, and Pourang Irani
In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), 2019

abstract |  bibtex |  doi  |  paper  |  video  |  conference presentation

Intensive exploration and navigation of hierarchical lists on smartphones can be tedious and time-consuming as it often requires users to frequently switch between multiple views. To overcome this limitation, we present PinchList, a novel interaction design that leverages pinch gestures to support seamless exploration of multi-level list items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out gesture whereas a pinch-in gesture navigates back to the previous level. Additionally, pinch and flick gestures are used to navigate lists consisting of more than two levels. We conduct a user study to refine the design parameters of PinchList, such as a suitable item size, and quantitatively evaluate the target acquisition performance using pinch-in/out gestures in both scrolling and non-scrolling conditions. In a second study, we compare the performance of PinchList in a hierarchical navigation task with two commonly used touch interfaces for list browsing: pagination and expand-and-collapse interfaces. The results reveal that PinchList is significantly faster than the other two interfaces in accessing items located in hierarchical list views. Finally, we demonstrate that PinchList enables a host of novel applications in list-based interaction.
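
The gesture-to-navigation mapping can be captured in a small sketch (illustrative only; the data structure and names are assumptions, not the study software): pinch-out on an item descends into its sub-list, and pinch-in pops back to the parent level.

# Illustrative model of PinchList-style navigation over a nested list.
class HierarchicalList:
    def __init__(self, root):
        self.path = [root]           # stack of visible levels; root is level 0

    def current(self):
        return self.path[-1]

    def pinch_out(self, item):
        """Descend into the sub-list of `item`, if it has one."""
        children = item.get("children")
        if children:
            self.path.append(children)

    def pinch_in(self):
        """Navigate back to the previous (parent) level."""
        if len(self.path) > 1:
            self.path.pop()

# Example: a two-level list.
root = [{"label": "Inbox", "children": [{"label": "Message 1"}, {"label": "Message 2"}]}]
nav = HierarchicalList(root)
nav.pinch_out(root[0])   # now browsing the messages inside "Inbox"
nav.pinch_in()           # back to the top level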

@inproceedings{10.1145/3290605.3300731,
author = {Han, Teng and Liu, Jie and Hasan, Khalad and Fan, Mingming and Kim, Junhyeok and Li, Jiannan and Fan, Xiangmin and Tian, Feng and Lank, Edward and Irani, Pourang},
title = {PinchList: Leveraging Pinch Gestures for Hierarchical List Navigation on Smartphones},
year = {2019},
isbn = {9781450359702},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3290605.3300731},
abstract = {Intensive exploration and navigation of hierarchical lists on smartphones can be tedious
and time-consuming as it often requires users to frequently switch between multiple
views. To overcome this limitation, we present PinchList, a novel interaction design
that leverages pinch gestures to support seamless exploration of multi-level list
items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out
gesture whereas a pinch-in gesture navigates back to the previous level. Additionally,
pinch and flick gestures are used to navigate lists consisting of more than two levels.
We conduct a user study to refine the design parameters of PinchList such as a suitable
item size, and quantitatively evaluate the target acquisition performance using pinch-in/out
gestures in both scrolling and non-scrolling conditions. In a second study, we compare
the performance of PinchList in a hierarchal navigation task with two commonly used
touch interfaces for list browsing: pagination and expand-and-collapse interfaces.
The results reveal that PinchList is significantly faster than other two interfaces
in accessing items located in hierarchical list views. Finally, we demonstrate that
PinchList enables a host of novel applications in list-based interaction.},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
pages = {1–13},
numpages = {13}
}

PageFlip: Leveraging Page-Flipping Gestures for Efficient Command and Value Selection on Smartwatches

Teng Han, Jiannan Li, Khalad Hasan, Keisuke Nakamura, Randy Gomez, Ravin Balakrishnan, and Pourang Irani
In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 2018

abstract |  bibtex |  doi  |  paper  |  video

Selecting an item of interest on smartwatches can be tedious and time-consuming as it involves a series of swipe and tap actions. We present PageFlip, a novel method that combines multiple touch operations, such as command invocation and value selection, into a single action for efficient interaction on smartwatches. PageFlip operates with a page flip gesture that starts by dragging the UI from a corner of the device. We first design PageFlip by examining its key design factors such as corners, drag directions and drag distances. We next compare PageFlip to a functionally equivalent radial menu and a standard swipe and tap method. Results reveal that PageFlip improves efficiency for both discrete and continuous selection tasks. Finally, we demonstrate novel smartwatch interaction opportunities and a set of applications that can benefit from PageFlip.
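
The combined selection can be sketched as a single mapping from one drag to a (command, value) pair (a hypothetical illustration; the corner-to-command assignment and max_drag threshold are assumptions, not values from the paper): the starting corner chooses the command, and the drag distance sets the value.

import math

# Hypothetical PageFlip-style mapping: one corner drag yields a command and a value.
COMMANDS = {"top-left": "brightness", "top-right": "volume",
            "bottom-left": "zoom", "bottom-right": "speed"}

def page_flip(corner, drag_start, drag_end, max_drag=200.0):
    """corner: key into COMMANDS; drag_start/drag_end: (x, y) touch points in pixels."""
    command = COMMANDS[corner]
    dx = drag_end[0] - drag_start[0]
    dy = drag_end[1] - drag_start[1]
    value = min(math.hypot(dx, dy) / max_drag, 1.0)  # drag distance -> value in [0, 1]
    return command, value

print(page_flip("top-right", (0, 0), (90, 120)))     # ('volume', 0.75)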

@inproceedings{10.1145/3173574.3174103,
author = {Han, Teng and Li, Jiannan and Hasan, Khalad and Nakamura, Keisuke and Gomez, Randy and Balakrishnan, Ravin and Irani, Pourang},
title = {PageFlip: Leveraging Page-Flipping Gestures for Efficient Command and Value Selection on Smartwatches},
year = {2018},
isbn = {9781450356206},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3173574.3174103},
abstract = {Selecting an item of interest on smartwatches can be tedious and time-consuming as
it involves a series of swipe and tap actions. We present PageFlip, a novel method
that combines into a single action multiple touch operations such as command invocation
and value selection for efficient interaction on smartwatches. PageFlip operates with
a page flip gesture that starts by dragging the UI from a corner of the device. We
first design PageFlip by examining its key design factors such as corners, drag directions
and drag distances. We next compare PageFlip to a functionally equivalent radial menu
and a standard swipe and tap method. Results reveal that PageFlip improves efficiency
for both discrete and continuous selection tasks. Finally, we demonstrate novel smartwatch
interaction opportunities and a set of applications that can benefit from PageFlip.},
booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
pages = {1–12},
numpages = {12}
}

A Two-Sided Collaborative Transparent Display Supporting Workspace Awareness

Jiannan Li, Saul Greenberg, and Ehud Sharlin
In International Journal of Human-Computer Studies, Volume 101, May 2017, Pages 23-44, 2017

abstract |  bibtex |  doi  |  paper

Transparent displays naturally support workspace awareness during face-to-face interactions. Viewers see another person’s actions through the display: gestures, gaze, body movements, and what one is manipulating on the display. Yet we can design even better collaborative transparent displays. First, collaborators on either side should be able to directly interact with workspace objects. Second, and more controversially, both sides should be capable of presenting different content. This affords: reversal of images/text in place (so that people on both sides see objects correctly); personal and private territories aligned atop each other; and GUI objects that provide different visuals for feedthrough vs. feedback. Third, the display should visually enhance the gestural actions of the person on the other side to better support workspace awareness. We show how our FacingBoard-2 design supports these collaborative requirements, and confirm via a controlled study that visually enhancing gestures is effective under a range of deteriorating transparency conditions.
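
The second requirement (selectively different content per side) can be illustrated with a small rendering sketch (hypothetical, not FacingBoard-2 code; the widget fields and style names are assumptions): the same widget is mirrored in place for the far side, and it is drawn with a feedback visual for the person acting on it but a feedthrough visual for the person watching through the display.

# Hypothetical per-side rendering of one widget on a two-sided transparent display.
def render_widget(widget, side, acting_side):
    view = dict(widget)
    view["mirrored"] = (side == "back")  # reverse text/images in place for the far side
    view["style"] = "feedback" if side == acting_side else "feedthrough"
    return view

button = {"label": "Save", "x": 120, "y": 80}
front_view = render_widget(button, side="front", acting_side="front")  # crisp feedback
back_view = render_widget(button, side="back", acting_side="front")    # subdued feedthrough, mirrored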

@article{LI201723,
title = {A two-sided collaborative transparent display supporting workspace awareness},
journal = {International Journal of Human-Computer Studies},
volume = {101},
pages = {23-44},
year = {2017},
issn = {1071-5819},
doi = {https://doi.org/10.1016/j.ijhcs.2017.01.003},
url = {https://www.sciencedirect.com/science/article/pii/S1071581917300034},
author = {Jiannan Li and Saul Greenberg and Ehud Sharlin},
keywords = {Transparent displays, Workspace awareness, Collaborative systems},
abstract = {Transparent displays naturally support workspace awareness during face-to-face interactions. Viewers see another person’s actions through the display: gestures, gaze, body movements, and what one is manipulating on the display. Yet we can design even better collaborative transparent displays. First, collaborators on either side should be able to directly interact with workspace objects. Second, and more controversially, both sides should be capable of presenting different content. This affords: reversal of images/text in place (so that people on both sides see objects correctly); personal and private territories aligned atop each other; and GUI objects that provide different visuals for feedthrough vs. feedback. Third, the display should visually enhance the gestural actions of the person on the other side to better support workspace awareness. We show how our FacingBoard-2 design supports these collaborative requirements, and confirm via a controlled study that visually enhancing gestures is effective under a range of deteriorating transparency conditions.}
}

Interactive Two-Sided Transparent Displays: Designing for Collaboration

Jiannan Li, Saul Greenberg, Ehud Sharlin, and Joaquim Jorge
In Proceedings of the 2014 conference on Designing interactive systems (DIS '14), 2014

abstract |  bibtex |  doi  |  paper  |  video

Transparent displays can serve as an important collaborative medium supporting face-to-face interactions over a shared visual work surface. Such displays enhance workspace awareness: when a person is working on one side of a transparent display, the person on the other side can see the other's body, hand gestures, gaze and what he or she is actually manipulating on the shared screen. Even so, we argue that designing such transparent displays must go beyond current offerings if it is to support collaboration. First, both sides of the display must accept interactive input, preferably by at least touch and / or pen, as that affords the ability for either person to directly interact with the workspace items. Second, and more controversially, both sides of the display must be able to present different content, albeit selectively. Third (and related to the second point), because screen contents and lighting can partially obscure what can be seen through the surface, the display should visually enhance the actions of the person on the other side to better support workspace awareness. We describe our prototype FACINGBOARD-2 system, where we concentrate on how its design supports these three collaborative requirements.

@inproceedings{10.1145/2598510.2598518,
author = {Li, Jiannan and Greenberg, Saul and Sharlin, Ehud and Jorge, Joaquim},
title = {Interactive Two-Sided Transparent Displays: Designing for Collaboration},
year = {2014},
isbn = {9781450329026},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2598510.2598518},
doi = {10.1145/2598510.2598518},
abstract = {Transparent displays can serve as an important collaborative medium supporting face-to-face
interactions over a shared visual work surface. Such displays enhance workspace awareness:
when a person is working on one side of a transparent display, the person on the other
side can see the other's body, hand gestures, gaze and what he or she is actually
manipulating on the shared screen. Even so, we argue that designing such transparent
displays must go beyond current offerings if it is to support collaboration. First,
both sides of the display must accept interactive input, preferably by at least touch
and / or pen, as that affords the ability for either person to directly interact with
the workspace items. Second, and more controversially, both sides of the display must
be able to present different content, albeit selectively. Third (and related to the
second point), because screen contents and lighting can partially obscure what can
be seen through the surface, the display should visually enhance the actions of the
person on the other side to better support workspace awareness. We describe our prototype
FACINGBOARD-2 system, where we concentrate on how its design supports these three
collaborative requirements.},
booktitle = {Proceedings of the 2014 Conference on Designing Interactive Systems},
pages = {395–404},
numpages = {10},
keywords = {workspace awareness, two-sided transparent displays, collaborative systems},
location = {Vancouver, BC, Canada},
series = {DIS '14}
}

Designing the car iWindow: exploring interaction through vehicle side windows

Jiannan Li, Ehud Sharlin, Saul Greenberg, and Michael Rounding
In CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13), 2013

abstract |  bibtex |  doi  |  paper  |  video

Interactive vehicle windows can enrich the commuting experience by being informative and engaging, strengthening the connection between passengers and the outside world. We propose a preliminary interaction paradigm that allows a rich and undistracting interaction experience on vehicle side windows. Following this paradigm, we present a prototype, the Car iWindow, and discuss our preliminary design critique of the interaction, based on installing the iWindow in a car and interacting with it while commuting around our campus.

@inproceedings{10.1145/2468356.2468654,
author = {Li, Jiannan and Sharlin, Ehud and Greenberg, Saul and Rounding, Michael},
title = {Designing the Car IWindow: Exploring Interaction through Vehicle Side Windows},
year = {2013},
isbn = {9781450319522},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2468356.2468654},
doi = {10.1145/2468356.2468654},
abstract = {Interactive vehicle windows can enrich the commuting experience by being informative
and engaging, strengthening the connection between passengers and the outside world.
We propose a preliminary interaction paradigm to allow rich and un-distracting interaction
experience on vehicle side windows. Following this paradigm we present a prototype,
the Car iWindow, and discuss our preliminary design critique of the interaction, based
on the installation of the iWindow in a car and interaction with it while commuting
around our campus.},
booktitle = {CHI '13 Extended Abstracts on Human Factors in Computing Systems},
pages = {1665–1670},
numpages = {6},
keywords = {transparent display, side window, vehicle},
location = {Paris, France},
series = {CHI EA '13}
}