Publications

Google Scholar

Full Conference and Journal Papers

Stargazer: An Interactive Camera Robot for Capturing How-To Videos Based on Subtle Instructor Cues

Jiannan Li, Maurício Sousa, Karthik Mahadevan, Bryan Wang, Paula Akemi Aoyagui, Nicole Yu, Angela Yang, Ravin Balakrishnan, Anthony Tang, and Tovi Grossman
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), 2023

abstract |  paper  |  doi  |  website  |  preview  |  video  |  presentation  |  UofT news

Live and pre-recorded video tutorials are an effective means for teaching physical skills such as cooking or prototyping electronics. A dedicated cameraperson following an instructor’s activities can improve production quality. However, instructors who do not have access to a cameraperson’s help often have to work within the constraints of static cameras. We present Stargazer, a novel approach for assisting with tutorial content creation with a camera robot that autonomously tracks regions of interest based on instructor actions to capture dynamic shots. Instructors can adjust the camera behaviors of Stargazer with subtle cues, including gestures and speech, allowing them to fluidly integrate camera control commands into instructional activities. Our user study with six instructors, each teaching a distinct skill, showed that participants could create dynamic tutorial videos with a diverse range of subjects, camera framing, and camera angle combinations using Stargazer.

Investigating Guardian Awareness Techniques to Promote Safety in Virtual Reality

Sixuan Wu, Jiannan Li, Maurício Sousa, and Tovi Grossman
In Proceedings of IEEE Virtual Reality 2023 (VR '23), 2023

abstract |  paper  |  doi  |  preview

Virtual Reality (VR) can completely immerse users in a virtual world and provide little awareness of bystanders in the surrounding physical environment. Current technologies use predefined guardian area visualizations to set safety boundaries for VR interactions. However, bystanders cannot perceive these boundaries and may collide with VR users if they accidentally enter guardian areas. In this paper, we investigate four awareness techniques on mobile phones and smartwatches to help bystanders avoid breaching guardian areas. These techniques include augmented reality boundary overlays and visual, auditory, and haptic alerts indicating bystanders' distance from guardians. Our findings suggest that the proposed techniques effectively keep participants clear of the safety boundaries. More specifically, participants using the augmented reality overlays could avoid guardians in less time, and haptic alerts caused fewer distractions.

Tourgether360: Collaborative Exploration of 360° Videos Using Pseudo-Spatial Navigation

Kartikaeya Kumar, Lev Poretski, Jiannan Li, and Anthony Tang
In Proceedings of the ACM on Human-Computer Interaction (CSCW '22), 2022

abstract |  doi  |  video  |  presentation

Collaborative exploration of 360° videos with contemporary interfaces is challenging because collaborators do not have awareness of one another's viewing activities. Tourgether360 enhances social exploration of 360° tour videos using a pseudo-spatial navigation technique that provides both an overhead "context" view of the environment as a minimap and a shared pseudo-3D environment for exploring the video. Collaborators are embodied as avatars along a track depending on their position in the video timeline and can point and synchronize their playback. We evaluated the Tourgether360 concept through two studies: first, a comparative study of a simplified version of Tourgether360 with collaborator embodiments and a minimap versus a conventional interface; second, an exploratory study of how collaborators used Tourgether360 to navigate and explore 360° environments together. We found that participants adopted the Tourgether360 approach with ease and enjoyed the shared social aspects of the experience. Participants reported finding the experience similar to an interactive social video game.

ASTEROIDS: Exploring Swarms of Mini-Telepresence Robots for Physical Skill Demonstration

Jiannan Li, Maurício Sousa, Chu Li, Jessie Liu, Yan Chen, Ravin Balakrishnan, and Tovi Grossman
In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22), 2022

abstract |  paper  |  preview  |  video  |  presentation

Online synchronous tutoring allows for immediate engagement between instructors and audiences over distance. However, tutoring physical skills remains challenging because current telepresence approaches may not give the audience adequate spatial awareness and viewpoint control over demonstration activities scattered across an entire work area, nor give the instructor sufficient awareness of the audience. We present Asteroids, a novel design space for tangible robotic telepresence, to enable workbench-scale physical embodiments of remote people and tangible interactions by the instructor. With Asteroids, the audience can actively control a swarm of mini-telepresence robots, change camera positions, and switch to other robots' viewpoints. Demonstrators can perceive the audiences' physical presence while using tangible manipulations to control the audiences' viewpoints and presentation flow. We conducted an exploratory evaluation of Asteroids with 12 remote participants in a model-making tutorial scenario with an architectural expert demonstrator. Results suggest our unique features benefited participant engagement, sense of presence, and understanding.

immersivePOV: Filming How-To Videos with a Head-Mounted 360° Action Camera (Honorable Mention Award)

Kevin Huang, Jiannan Li, Maurício Sousa, and Tovi Grossman
In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22), 2022

abstract |  doi  |  preview  |  video  |  presentation

How-to videos are often shot using camera angles that may not be optimal for learning motor tasks, with a prevalent use of third-person perspective. We present immersivePOV, an approach to film how-to videos from an immersive first-person perspective using a head-mounted 360° action camera. immersivePOV how-to videos can be viewed in a Virtual Reality headset, giving the viewer an eye-level viewpoint with three Degrees of Freedom. We evaluated our approach with two everyday motor tasks against a baseline first-person perspective and a third-person perspective. In a between-subjects study, participants were assigned to watch the task videos and then replicate the tasks. Results suggest that immersivePOV reduced perceived cognitive load and facilitated task learning. We discuss how immersivePOV can also streamline the video production process for content creators. Altogether, we conclude that immersivePOV is an effective approach to film how-to videos for learners and content creators alike.

Route Tapestries: Navigating 360° Virtual Tour Videos Using Slit-Scan Visualizations

Jiannan Li, Jiahe Lyu, Maurício Sousa, Ravin Balakrishnan, Anthony Tang, and Tovi Grossman
In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (UIST '21), 2021

abstract |  bibtex |  paper  |  doi  |  30s preview  |  video  |  conference presentation

An increasingly popular way of experiencing remote places is by viewing 360° virtual tour videos, which show the surrounding view while traveling through an environment. However, finding particular locations in these videos can be difficult because current interfaces rely on distorted frame previews for navigation. To alleviate this usability issue, we propose Route Tapestries, continuous orthographic-perspective projection of scenes along camera routes. We first introduce an algorithm for automatically constructing Route Tapestries from a 360° video, inspired by the slit-scan photography technique. We then present a desktop video player interface using a Route Tapestry timeline for navigation. An online evaluation using a target-seeking task showed that Route Tapestries allowed users to locate targets 22% faster than with YouTube-style equirectangular previews and reduced the failure rate by 75% compared to a more conventional row-of-thumbnail strip preview. Our results highlight the value of reducing visual distortion and providing continuous visual contexts in previews for navigating 360° virtual tour videos.
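
A minimal sketch of the slit-scan idea the abstract alludes to, not the paper's actual Route Tapestries construction: grab a narrow vertical slit from each frame of an equirectangular 360° video at a fixed heading and concatenate the slits over time. The input file name, heading fraction, and slit width below are illustrative assumptions.

# Illustrative slit-scan sketch (assumed parameters; not the published algorithm).
import cv2
import numpy as np

def slit_scan(video_path, heading_frac=0.25, slit_width=4):
    """Stack one narrow vertical slit per frame into a single wide strip."""
    cap = cv2.VideoCapture(video_path)
    slits = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = int(heading_frac * frame.shape[1])  # column for one fixed viewing direction
        slits.append(frame[:, x:x + slit_width])
    cap.release()
    if not slits:
        return np.empty((0, 0, 3), dtype=np.uint8)
    return np.hstack(slits)  # concatenate slits left-to-right over time

if __name__ == "__main__":
    tapestry = slit_scan("tour_360.mp4")  # hypothetical input video
    cv2.imwrite("tapestry.png", tapestry)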

@inproceedings{10.1145/3472749.3474746,
author = {Li, Jiannan and Lyu, Jiahe and Sousa, Mauricio and Balakrishnan, Ravin and Tang, Anthony and Grossman, Tovi},
title = {Route Tapestries: Navigating 360° Virtual Tour Videos Using Slit-Scan Visualizations},
year = {2021},
isbn = {9781450386357},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3472749.3474746},
doi = {10.1145/3472749.3474746},
abstract = { An increasingly popular way of experiencing remote places is by viewing 360° virtual tour videos, which show the surrounding view while traveling through an environment. However, finding particular locations in these videos can be difficult because current interfaces rely on distorted frame previews for navigation. To alleviate this usability issue, we propose Route Tapestries, continuous orthographic-perspective projection of scenes along camera routes. We first introduce an algorithm for automatically constructing Route Tapestries from a 360° video, inspired by the slit-scan photography technique. We then present a desktop video player interface using a Route Tapestry timeline for navigation. An online evaluation using a target-seeking task showed that Route Tapestries allowed users to locate targets 22% faster than with YouTube-style equirectangular previews and reduced the failure rate by 75% compared to a more conventional row-of-thumbnail strip preview. Our results highlight the value of reducing visual distortion and providing continuous visual contexts in previews for navigating 360°virtual tour videos.},
booktitle = {The 34th Annual ACM Symposium on User Interface Software and Technology},
pages = {223–238},
numpages = {16},
keywords = {360° Video, Virtual Tour, Navigation},
location = {Virtual Event, USA},
series = {UIST '21}
}

HoloBoard: a Large-format Immersive Teaching Board based on Pseudo HoloGraphics

Jiangtao Gong, Teng Han, Siling Guo, Jiannan Li, Siyu Zha, Liuxin Zhang, Feng Tian, Qianying Wang, and Yong Rui
In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (UIST '21), 2021

abstract |  paper  |  doi  |  preview  |  video  |  conference presentation

In this paper, we present HoloBoard, an interactive large-format pseudo-holographic display system for lecture-based classes. With its unique properties of immersive visual display and transparent screen, we designed and implemented a rich set of novel interaction techniques like immersive presentation, role-play, and lecturing behind the scene that are potentially valuable for lecturing in class. We conducted a controlled experimental study to compare a HoloBoard class with a normal class by measuring students’ learning outcomes and three dimensions of engagement (i.e., behavioral, emotional, and cognitive engagement). We used pre-/post-knowledge tests and multimodal learning analytics to measure students’ learning outcomes and learning experiences. Results indicated that the lecture-based class utilizing HoloBoard led to slightly better learning outcomes and a significantly higher level of student engagement. Given the results, we discuss the impact of HoloBoard as an immersive medium in the classroom setting and suggest several design implications for deploying HoloBoard in immersive teaching practices.

More Kawaii than a Real-Person Live Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers

Zhicong Lu, Chenxinran Shen, Jiannan Li, Hong Shen, and Daniel Wigdor
In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21), 2021

abstract |  bibtex |  doi  |  paper

Live streaming has become increasingly popular, with most streamers presenting their real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars that are voiced by humans, are emerging as live streamers and attracting a growing viewership in East Asia. Although prior research has found that many viewers seek real-life interpersonal interactions with real-person streamers, it is currently unknown what makes VTuber live streams engaging or how they are perceived differently than real-person streamers. We conducted an interview study to understand how viewers engage with VTubers and perceive the identities of the voice actors behind the avatars (i.e., Nakanohito). The data revealed that virtual avatars bring unique performative opportunities which result in different viewer expectations and interpretations of VTuber behavior. Viewers intentionally upheld the disembodiment of VTuber avatars from their voice actors. We uncover the nuances in viewer perceptions and attitudes and further discuss the implications of VTuber practices to the understanding of live streaming in general.

@inproceedings{10.1145/3411764.3445660,
author = {Lu, Zhicong and Shen, Chenxinran and Li, Jiannan and Shen, Hong and Wigdor, Daniel},
title = {More Kawaii than a Real-Person Live Streamer: Understanding How the Otaku Community Engages with and Perceives Virtual YouTubers},
year = {2021},
isbn = {9781450380966},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411764.3445660},
abstract = { Live streaming has become increasingly popular, with most streamers presenting their
real-life appearance. However, Virtual YouTubers (VTubers), virtual 2D or 3D avatars
that are voiced by humans, are emerging as live streamers and attracting a growing
viewership in East Asia. Although prior research has found that many viewers seek
real-life interpersonal interactions with real-person streamers, it is currently unknown
what makes VTuber live streams engaging or how they are perceived differently than
real-person streamers. We conducted an interview study to understand how viewers engage
with VTubers and perceive the identities of the voice actors behind the avatars (i.e.,
Nakanohito). The data revealed that Virtual avatars bring unique performative opportunities
which result in different viewer expectations and interpretations of VTuber behavior.
Viewers intentionally upheld the disembodiment of VTuber avatars from their voice
actors. We uncover the nuances in viewer perceptions and attitudes and further discuss
the implications of VTuber practices to the understanding of live streaming in general.
},
booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
articleno = {137},
numpages = {14}
}

StarHopper: A Touch Interface for Remote Object-Centric Drone Navigation

Jiannan Li, Ravin Balakrishnan, and Tovi Grossman
In Proceedings of Graphics Interface 2020 (GI '20), 2020

abstract |  bibtex |  doi  |  paper  |  video  |  conference presentation

Camera drones, a rapidly emerging technology, offer people the ability to remotely inspect an environment with a high degree of mobility and agility. However, manual remote piloting of a drone is prone to errors. In contrast, autopilot systems can require a significant degree of environmental knowledge and are not necessarily designed to support flexible visual inspections. Inspired by camera manipulation techniques in interactive graphics, we designed StarHopper, a novel touch screen interface for efficient object-centric camera drone navigation, in which a user directly specifies the navigation of a drone camera relative to a specified object of interest. The system relies on minimal environmental information and combines both manual and automated control mechanisms to give users the freedom to remotely explore an environment with efficiency and accuracy. A lab study shows that StarHopper offers an efficiency gain of 35.4% over manual piloting, complemented by an overall user preference towards our object-centric navigation system.

@inproceedings{Li:2020:10.20380/GI2020.32,
 author = {Li, Jiannan and Balakrishnan, Ravin and Grossman, Tovi},
 title = {StarHopper: A Touch Interface for Remote Object-Centric Drone Navigation},
 booktitle = {Proceedings of Graphics Interface 2020},
 series = {GI 2020},
 year = {2020},
 isbn = {978-0-9947868-5-2},
 location = {University of Toronto},
 pages = {317 -- 326},
 numpages = {10},
 doi = {10.20380/GI2020.32},
 publisher = {Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine},
}

PinchList: Leveraging Pinch Gestures for Hierarchical List Navigation on Smartphones

Teng Han, Jie Liu, Khalad Hasan, Mingming Fan, Junhyeok Kim, Jiannan Li, Xiangmin Fan, Feng Tian, Edward Lank, and Pourang Irani
In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), 2019

abstract |  bibtex |  doi  |  paper  |  video  |  conference presentation

Intensive exploration and navigation of hierarchical lists on smartphones can be tedious and time-consuming as it often requires users to frequently switch between multiple views. To overcome this limitation, we present PinchList, a novel interaction design that leverages pinch gestures to support seamless exploration of multi-level list items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out gesture whereas a pinch-in gesture navigates back to the previous level. Additionally, pinch and flick gestures are used to navigate lists consisting of more than two levels. We conduct a user study to refine the design parameters of PinchList, such as a suitable item size, and quantitatively evaluate the target acquisition performance using pinch-in/out gestures in both scrolling and non-scrolling conditions. In a second study, we compare the performance of PinchList in a hierarchical navigation task with two commonly used touch interfaces for list browsing: pagination and expand-and-collapse interfaces. The results reveal that PinchList is significantly faster than the other two interfaces in accessing items located in hierarchical list views. Finally, we demonstrate that PinchList enables a host of novel applications in list-based interaction.

@inbook{10.1145/3290605.3300731,
author = {Han, Teng and Liu, Jie and Hasan, Khalad and Fan, Mingming and Kim, Junhyeok and Li, Jiannan and Fan, Xiangmin and Tian, Feng and Lank, Edward and Irani, Pourang},
title = {PinchList: Leveraging Pinch Gestures for Hierarchical List Navigation on Smartphones},
year = {2019},
isbn = {9781450359702},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3290605.3300731},
abstract = {Intensive exploration and navigation of hierarchical lists on smartphones can be tedious
and time-consuming as it often requires users to frequently switch between multiple
views. To overcome this limitation, we present PinchList, a novel interaction design
that leverages pinch gestures to support seamless exploration of multi-level list
items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out
gesture whereas a pinch-in gesture navigates back to the previous level. Additionally,
pinch and flick gestures are used to navigate lists consisting of more than two levels.
We conduct a user study to refine the design parameters of PinchList such as a suitable
item size, and quantitatively evaluate the target acquisition performance using pinch-in/out
gestures in both scrolling and non-scrolling conditions. In a second study, we compare
the performance of PinchList in a hierarchal navigation task with two commonly used
touch interfaces for list browsing: pagination and expand-and-collapse interfaces.
The results reveal that PinchList is significantly faster than other two interfaces
in accessing items located in hierarchical list views. Finally, we demonstrate that
PinchList enables a host of novel applications in list-based interaction?},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
pages = {1–13},
numpages = {13}
}

PageFlip: Leveraging Page-Flipping Gestures for Efficient Command and Value Selection on Smartwatches

Teng Han, Jiannan Li, Khalad Hasan, Keisuke Nakamura, Randy Gomez, Ravin Balakrishnan, and Pourang Irani
In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 2018

abstract |  bibtex |  doi  |  paper  |  video

Selecting an item of interest on smartwatches can be tedious and time-consuming as it involves a series of swipe and tap actions. We present PageFlip, a novel method that combines multiple touch operations, such as command invocation and value selection, into a single action for efficient interaction on smartwatches. PageFlip operates with a page flip gesture that starts by dragging the UI from a corner of the device. We first design PageFlip by examining its key design factors such as corners, drag directions, and drag distances. We next compare PageFlip to a functionally equivalent radial menu and a standard swipe-and-tap method. Results reveal that PageFlip improves efficiency for both discrete and continuous selection tasks. Finally, we demonstrate novel smartwatch interaction opportunities and a set of applications that can benefit from PageFlip.

@inbook{10.1145/3173574.3174103,
author = {Han, Teng and Li, Jiannan and Hasan, Khalad and Nakamura, Keisuke and Gomez, Randy and Balakrishnan, Ravin and Irani, Pourang},
title = {PageFlip: Leveraging Page-Flipping Gestures for Efficient Command and Value Selection on Smartwatches},
year = {2018},
isbn = {9781450356206},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3173574.3174103},
abstract = {Selecting an item of interest on smartwatches can be tedious and time-consuming as
it involves a series of swipe and tap actions. We present PageFlip, a novel method
that combines into a single action multiple touch operations such as command invocation
and value selection for efficient interaction on smartwatches. PageFlip operates with
a page flip gesture that starts by dragging the UI from a corner of the device. We
first design PageFlip by examining its key design factors such as corners, drag directions
and drag distances. We next compare PageFlip to a functionally equivalent radial menu
and a standard swipe and tap method. Results reveal that PageFlip improves efficiency
for both discrete and continuous selection tasks. Finally, we demonstrate novel smartwatch
interaction opportunities and a set of applications that can benefit from PageFlip.},
booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
pages = {1–12},
numpages = {12}
}

A Two-Sided Collaborative Transparent Display Supporting Workspace Awareness

Jiannan Li, Saul Greenberg, and Ehud Sharlin
In International Journal of Human-Computer Studies, Volume 101, Pages 23–44, 2017

abstract |  bibtex |  doi  |  paper

Transparent displays naturally support workspace awareness during face-to-face interactions. Viewers see another person’s actions through the display: gestures, gaze, body movements, and what one is manipulating on the display. Yet we can design even better collaborative transparent displays. First, collaborators on either side should be able to directly interact with workspace objects. Second, and more controversially, both sides should be capable of presenting different content. This affords: reversal of images/text in place (so that people on both sides see objects correctly); personal and private territories aligned atop each other; and GUI objects that provide different visuals for feedthrough vs. feedback. Third, the display should visually enhance the gestural actions of the person on the other side to better support workspace awareness. We show how our FacingBoard-2 design supports these collaborative requirements, and confirm via a controlled study that visually enhancing gestures is effective under a range of deteriorating transparency conditions.

@article{LI201723,
title = {A two-sided collaborative transparent display supporting workspace awareness},
journal = {International Journal of Human-Computer Studies},
volume = {101},
pages = {23-44},
year = {2017},
issn = {1071-5819},
doi = {https://doi.org/10.1016/j.ijhcs.2017.01.003},
url = {https://www.sciencedirect.com/science/article/pii/S1071581917300034},
author = {Jiannan Li and Saul Greenberg and Ehud Sharlin},
keywords = {Transparent displays, Workspace awareness, Collaborative systems},
abstract = {Transparent displays naturally support workspace awareness during face-to-face interactions. Viewers see another person’s actions through the display: gestures, gaze, body movements, and what one is manipulating on the display. Yet we can design even better collaborative transparent displays. First, collaborators on either side should be able to directly interact with workspace objects. Second, and more controversially, both sides should be capable of presenting different content. This affords: reversal of images/text in place (so that people on both sides see objects correctly); personal and private territories aligned atop each other; and GUI objects that provide different visuals for feedthrough vs. feedback. Third, the display should visually enhance the gestural actions of the person on the other side to better support workspace awareness. We show how our FacingBoard-2 design supports these collaborative requirements, and confirm via a controlled study that visually enhancing gestures is effective under a range of deteriorating transparency conditions.}
}

Interactive Two-Sided Transparent Displays: Designing for Collaboration

Jiannan Li, Saul Greenberg, Ehud Sharlin, and Joaquim Jorge
In Proceedings of the 2014 conference on Designing interactive systems (DIS '14), 2014

abstract |  bibtex |  doi  |  paper  |  video

Transparent displays can serve as an important collaborative medium supporting face-to-face interactions over a shared visual work surface. Such displays enhance workspace awareness: when a person is working on one side of a transparent display, the person on the other side can see the other's body, hand gestures, gaze and what he or she is actually manipulating on the shared screen. Even so, we argue that designing such transparent displays must go beyond current offerings if it is to support collaboration. First, both sides of the display must accept interactive input, preferably by at least touch and / or pen, as that affords the ability for either person to directly interact with the workspace items. Second, and more controversially, both sides of the display must be able to present different content, albeit selectively. Third (and related to the second point), because screen contents and lighting can partially obscure what can be seen through the surface, the display should visually enhance the actions of the person on the other side to better support workspace awareness. We describe our prototype FACINGBOARD-2 system, where we concentrate on how its design supports these three collaborative requirements.

@inproceedings{10.1145/2598510.2598518,
author = {Li, Jiannan and Greenberg, Saul and Sharlin, Ehud and Jorge, Joaquim},
title = {Interactive Two-Sided Transparent Displays: Designing for Collaboration},
year = {2014},
isbn = {9781450329026},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2598510.2598518},
doi = {10.1145/2598510.2598518},
abstract = {Transparent displays can serve as an important collaborative medium supporting face-to-face
interactions over a shared visual work surface. Such displays enhance workspace awareness:
when a person is working on one side of a transparent display, the person on the other
side can see the other's body, hand gestures, gaze and what he or she is actually
manipulating on the shared screen. Even so, we argue that designing such transparent
displays must go beyond current offerings if it is to support collaboration. First,
both sides of the display must accept interactive input, preferably by at least touch
and / or pen, as that affords the ability for either person to directly interact with
the workspace items. Second, and more controversially, both sides of the display must
be able to present different content, albeit selectively. Third (and related to the
second point), because screen contents and lighting can partially obscure what can
be seen through the surface, the display should visually enhance the actions of the
person on the other side to better support workspace awareness. We describe our prototype
FACINGBOARD-2 system, where we concentrate on how its design supports these three
collaborative requirements.},
booktitle = {Proceedings of the 2014 Conference on Designing Interactive Systems},
pages = {395–404},
numpages = {10},
keywords = {workspace awareness, two-sided transparent displays, collaborative systems},
location = {Vancouver, BC, Canada},
series = {DIS '14}
}

Posters, Workshops, and Preprints

Thinking Outside the Lab: VR Size & Depth Perception in the Wild

Rahul Arora, Jiannan Li, Gongyi Shi, and Karan Singh
arXiv:2105.00584, 2021

abstract |  arXiv  |  video  |  supplemental

Size and distance perception in Virtual Reality (VR) have been widely studied, albeit in a controlled laboratory setting with a small number of participants. We describe a fully remote perceptual study with a gamified protocol to encourage participant engagement, which allowed us to quickly collect high-quality data from a large, diverse participant pool (N=60). Our study aims to understand medium-field size and egocentric distance perception in real-world usage of consumer VR devices. We utilized two perceptual matching tasks, distance bisection and size matching, at the same target distances of 1–9 metres. While the bisection protocol indicated a near-universal trend of nonlinear distance compression, the size matching estimates were more equivocal. Varying eye-height from the floor plane showed no significant effect on the judgements. We also discuss the pros and cons of a fully remote perceptual study in VR, the impact of hardware variation, and measures needed to ensure high-quality data.

Designing the car iWindow: exploring interaction through vehicle side windows

Jiannan Li, Ehud Sharlin, Saul Greenberg, and Michael Rounding
In CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13), 2013

abstract |  bibtex |  doi  |  paper  |  video

Interactive vehicle windows can enrich the commuting experience by being informative and engaging, strengthening the connection between passengers and the outside world. We propose a preliminary interaction paradigm to allow a rich and undistracting interaction experience on vehicle side windows. Following this paradigm, we present a prototype, the Car iWindow, and discuss our preliminary design critique of the interaction, based on installing the iWindow in a car and interacting with it while commuting around our campus.

@inproceedings{10.1145/2468356.2468654,
author = {Li, Jiannan and Sharlin, Ehud and Greenberg, Saul and Rounding, Michael},
title = {Designing the Car IWindow: Exploring Interaction through Vehicle Side Windows},
year = {2013},
isbn = {9781450319522},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2468356.2468654},
doi = {10.1145/2468356.2468654},
abstract = {Interactive vehicle windows can enrich the commuting experience by being informative
and engaging, strengthening the connection between passengers and the outside world.
We propose a preliminary interaction paradigm to allow rich and un-distracting interaction
experience on vehicle side windows. Following this paradigm we present a prototype,
the Car iWindow, and discuss our preliminary design critique of the interaction, based
on the installation of the iWindow in a car and interaction with it while commuting
around our campus.},
booktitle = {CHI '13 Extended Abstracts on Human Factors in Computing Systems},
pages = {1665–1670},
numpages = {6},
keywords = {transparent display, side window, vehicle},
location = {Paris, France},
series = {CHI EA '13}
}