Publications

My research has been published in top HCI and accessibility venues, including ACM CHI, ASSETS, CSCW, and DIS, with six first-author papers at CHI.


Suhyeon Yoo is shown in bold and underlined · * denotes co-first authorship

Conference Proceedings (10)

CHI 2026 · ⭐ Selected for Adobe MAX Sneaks 2025
SoundStager: Interactive Design of Story-Driven GenAI Soundscapes for Video

Suhyeon Yoo, Adolfo H. Santisteban, Prem Seetharaman, Justin Salamon, Oriol Nieto, Anh Truong

Sound effects (SFX) are critical to video storytelling, immersing viewers, directing attention, and shaping emotion. However, crafting an effective soundscape is difficult: creators must decide how to source, place, layer, and mix sounds to support the narrative. Generative text-to-SFX tools enable users to create custom sounds, but creators often struggle to describe sounds in words and lack control over individual stems in premixed outputs. We propose SoundStager, an AI-assisted tool for designing generative soundscapes for video. SoundStager analyzes the video narrative to create layered audio scenes (of keynote, signal, soundmark, and archetypal sounds) and supports iterative refinement through a combination of conversational and analog controls. SoundStager's design was informed by formative studies with six professional sound designers and six video creators, as well as insights from the sound design literature. Our user evaluation with twelve video creators shows that SoundStager enables users to quickly create satisfactory soundscapes while retaining creative control.
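To make the layered-scene idea concrete, here is a minimal sketch of how such a scene might be represented, assuming a simple stem-per-layer model; the class names, fields, and example prompts are hypothetical and are not SoundStager's actual data model.

```python
# Hypothetical sketch of a layered audio scene, loosely following the
# keynote / signal / soundmark / archetypal layering described above.
# Not SoundStager's actual data model; all names here are invented.
from dataclasses import dataclass, field

@dataclass
class SoundLayer:
    role: str             # "keynote", "signal", "soundmark", or "archetypal"
    prompt: str           # text-to-SFX prompt used to generate this stem
    start_s: float        # placement on the video timeline, in seconds
    duration_s: float
    gain_db: float = 0.0  # per-stem mix level, adjustable independently

@dataclass
class AudioScene:
    segment: tuple[float, float]  # (start, end) of the narrative beat
    layers: list[SoundLayer] = field(default_factory=list)

# One scene for a rainy street shot: a steady keynote bed plus a signal cue.
scene = AudioScene(
    segment=(12.0, 24.5),
    layers=[
        SoundLayer("keynote", "steady rain on pavement, distant traffic", 12.0, 12.5, -6.0),
        SoundLayer("signal", "single car horn, mid-distance", 18.2, 1.0, -3.0),
    ],
)
```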

CHI 2026
FAME: Exploring Expressive Facial Avatars for Lyrical and Non-Lyrical Music Visualization for d/Deaf individuals

Suhyeon Yoo, Yifang Pan, Ashish Ajin Thomas, Karan Singh, Khai N Truong

d/Deaf and Hard of Hearing (DHH) individuals often engage with music multimodally, drawing on visual channels rather than relying on sound alone. While tools like captions and visualizers offer partial support, they often fail to capture the emotional depth and structural nuances of music. To explore new possibilities, we adopted an iterative, probe-based approach. Through a formative study with 9 DHH participants, we identified key design requirements for visualizing rhythm, emotion, and lyrics. We developed FAME (Facial Avatar for Musical Expression), a design probe that conveys music through expressive facial animation, instrument highlights, and synchronized captions, lip-syncing to lyrics or scat-singing to melodies. Through a two-phase exploratory study with 12 DHH users, we examined FAME's efficacy, applicability, and requirements for representing musical elements. Our findings refine design requirements for avatar-based systems and highlight the potential of avatars as expressive and socially meaningful tools for music accessibility.

CHI 2026
Disclosure Matters: How Self-Disclosure Statements in Song Signing Videos Shape d/Deaf Audiences' Acceptance of Culturally Sensitive Content

Suhyeon Yoo, Somang Nam, Mark Chignell, Khai N Truong

Song signing videos have grown in number on YouTube, with much of the content created by amateur non-d/Deaf signers. However, the Deaf community has voiced concerns over misrepresentation and cultural appropriation in these performances. We explore self-disclosure as a way for performers to clarify their motivations and foster greater acceptance among viewers. We interviewed 11 song signers and surveyed 50 viewers to understand important elements that should be included in self-disclosure statements (SDS). A follow-up study with 24 d/Deaf participants assessed the impact of SDS, finding that they generally led to a more positive reception. Participants rated song signing style, relationship to the Deaf community, and sign language as the most important elements to include in SDS. We discuss actionable recommendations for culturally responsive self-disclosures by setting personal boundaries, constructing structured narratives, and presenting SDS without distracting from the performance.

ASSETS 2025 · co-first author
CuCap: Comparative Analysis of Customized Captioning between North American and South Korean d/Deaf and Hard-of-Hearing Users

Caluã de Lacerda Pataca, SooYeon Ahn*, Suhyeon Yoo*, JooYeong Kim, Khai N Truong, Jin-Hyuk Hong, Roshan L Peiris, Matt Huenerfauth (*co-first)

Affective and prosodic captions convey not only what a speaker says, but also how they say it: louder words may appear thicker, quieter ones thinner; angry in red, calm in blue. These captions can improve access, satisfaction, and engagement for d/Deaf and Hard-of-Hearing (DHH) users. While prior work has explored their design space, it has focused largely on DHH participants in North America, limiting generalizability beyond English and Latin-based scripts. To uncover the role of culture and language, we ran an exploratory study with 49 DHH participants from North America and South Korea using CuCap, a tool that allowed them to personalize which speech features were displayed, and how. While emotion visualization was a universally favored choice, confirming prior findings, prosody preferences varied across cultures, reflecting linguistic and hearing factors. These findings point to the need for flexible captioning systems that account for cultural, linguistic, and individual differences.
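As a rough illustration of the feature-to-style mapping described above (louder words thicker, angry in red, calm in blue), here is a minimal sketch; the thresholds, colors, and function name are invented for illustration and are not taken from CuCap.

```python
# Illustrative sketch of per-word caption styling driven by speech features,
# following the mapping described above (louder -> thicker, quieter -> thinner,
# angry -> red, calm -> blue). Thresholds and names are hypothetical,
# not taken from CuCap.
def word_style(loudness_db: float, emotion: str) -> dict:
    # Map loudness to font weight: quieter words thinner, louder words thicker.
    if loudness_db > -10:
        weight = 700   # bold
    elif loudness_db < -30:
        weight = 300   # light
    else:
        weight = 400   # regular
    # Map emotion categories to color; default to plain black text.
    color = {"angry": "#d32f2f", "calm": "#1976d2"}.get(emotion, "#000000")
    return {"font-weight": weight, "color": color}

print(word_style(-5.0, "angry"))   # {'font-weight': 700, 'color': '#d32f2f'}
```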

CSCW 2025
Large Language Model Agents for Improving Engagement with Behavior Change Interventions: Application to Digital Mindfulness

Harsh Kumar, Suhyeon Yoo, Angela Zavaleta Bernuy, Jiakai Shi, Huayin Luo, Joseph Jay Williams, Anastasia Kuzminykh, Ashton Anderson, Rachel Kornfield

Although engagement in self-directed wellness exercises typically declines over time, integrating social support such as coaching can sustain it. However, traditional forms of support are often inaccessible due to high costs and complex coordination. Large Language Models (LLMs) show promise in providing human-like dialogues that could emulate social support. Yet, in-depth, in situ investigations of LLMs to support behavior change remain underexplored. We conducted two randomized experiments to assess the impact of LLM agents on user engagement with mindfulness exercises. The first, a single-session study, involved 502 crowdworkers; the second, a three-week study, included 54 participants. We explored two types of LLM agents: one providing information and another facilitating self-reflection. Both agents enhanced users' intentions to practice mindfulness. However, only the information-providing LLM agent, featuring a friendly persona, significantly improved engagement with the exercises. Our findings suggest that specific LLM agents may bridge the social support gap in digital health interventions.
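To illustrate the two agent types compared in the study, here is a minimal sketch using an OpenAI-style chat client; the persona prompts, model name, and helper function are assumptions for illustration, not the study's implementation.

```python
# Hedged sketch of the two agent types compared above: one provides
# information, the other facilitates self-reflection. The persona prompts,
# model name, and OpenAI-style client call are illustrative assumptions,
# not the study's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = {
    "information": (
        "You are a friendly mindfulness coach. Answer the user's questions "
        "about the exercise with clear, encouraging explanations."
    ),
    "reflection": (
        "You are a mindfulness facilitator. Do not give answers; instead, ask "
        "one open-ended question that helps the user reflect on their practice."
    ),
}

def agent_reply(agent_type: str, user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": PERSONAS[agent_type]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```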

DIS 2025
Toward More Inclusive Music Experience: Understanding Deaf and Hard-of-hearing Individuals' Everyday Music Activities

HyeonBeom Yi, Dasom Choi, Suhyeon Yoo, Youngmi Song, JunWoo Lee, ChiYoon Jeong, Sungyong Shin

Music can play an important role in the lives of some Deaf and Hard-of-Hearing (DHH) individuals, facilitating emotional expression, storytelling, and social interaction despite differences in hearing ability and identity. While prior human-computer interaction (HCI) research has introduced various functional advancements to enhance their music experiences, a deeper exploration of broader user experiences and inclusive design strategies remains necessary. To address the real-life challenges DHH individuals face in everyday music activities, we conducted focus group interviews in South Korea with 39 DHH individuals and 9 music experts. Our analysis identified six dimensions of everyday music activities organized by engagement type and social level, highlighting the distinct challenges and preferences DHH individuals encounter in musical contexts. Based on these insights, we propose design implications for fostering more inclusive music experiences, extending beyond individual engagement to include community and mixed-group interactions. This work provides a comprehensive framework to inform future HCI research and guide the development of inclusive technologies that better support DHH individuals' diverse musical experiences.

CHI 2025
ELMI: Interactive and Intelligent Sign Language Translation of Lyrics for Song Signing

Suhyeon Yoo, Khai N Truong, Young-Ho Kim

d/Deaf and hearing song-signers have become prevalent across video-sharing platforms, but translating songs into sign language remains cumbersome and inaccessible. Our formative study revealed the challenges song-signers face, including semantic, syntactic, expressive, and rhythmic considerations in translations. We present ELMI, an accessible song-signing tool that assists in translating lyrics into sign language. ELMI enables users to edit glosses line-by-line, with real-time synced lyric and music video snippets. Users can also chat with a large language model-driven AI to discuss meaning, glossing, emoting, and timing. Through an exploratory study with 13 song-signers, we examined how ELMI facilitates their workflows and how song-signers leverage and receive an LLM-driven chat for translation. Participants successfully adopted ELMI for song signing, engaging in active discussion throughout. They also reported improved confidence and independence in their translations, finding ELMI encouraging, constructive, and informative. We discuss research and design implications for accessible and culturally sensitive song-signing translation tools.

CHI 2024
Behind the Pup-ularity Curtain: Understanding the Motivations, Challenges, and Work Performed in Creating and Managing Pet Influencer Accounts

Suhyeon Yoo, Kevin Pu, Khai N Truong

Creating dedicated accounts to post pet content is a growing trend on Instagram. While these account owners derive joy from this pursuit, they may also struggle with criticisms and challenges. Yet, there remains a knowledge gap on how pet account owners manage their pets' online presence and navigate these obstacles successfully. Drawing from interviews with 21 Instagram pet account owners, we uncover the motivations behind pet account creation, spanning personal, altruistic, and commercial goals. We learn about the strategies employed for crafting their pets' online identities and personas, as well as the challenges faced by both owners and their pets in navigating the complexities of digital identity management. We discuss the evolving dynamics between humans and their pets, positioning pet identity cultivation as a form of collaborative work, akin to the "third shift", highlighting the need to design interfaces that support this unique identity management process.

CHI 2023
Understanding Tensions in Music Accessibility through Song Signing for and with d/Deaf and Non-d/Deaf Persons

Suhyeon Yoo, Georgianna Lin, Hyeon Jeong Byeon, Amy S Hwang, Khai Nhut Truong

Song signing is a method practiced by both d/Deaf and non-d/Deaf individuals to visually represent music and make it accessible through sign language and body movements. Although there is growing interest in song signing, there is a lack of understanding of what d/Deaf people value about song signing and how to make song signing productions that they would consider acceptable. We conducted semi-structured interviews with 12 d/Deaf participants to gain a deeper understanding of what they value in music and song signing. We then interviewed 14 song signers to understand their experiences and processes in creating song signing performances. From this study, we identify three complex, interrelated layers of the song signing creation process and discuss how they can be supported to potentially bridge the cultural divide between d/Deaf and non-d/Deaf audiences and guide more culturally responsive creation of music.

W4A 2021
AccessComics: An Accessible Digital Comic Book Reader for People with Visual Impairments

Yunjung Lee, Hwayeon Joh, Suhyeon Yoo, Uran Oh

A number of studies have been conducted to improve the accessibility of various types of images on the web (e.g., photos and artworks) for people with visual impairments. However, little has been studied on making comics accessible. As a formative study, we first conducted an online survey with 68 participants who are blind or have low vision. Based on their prior experiences with audiobooks and eBooks, we propose AccessComics, an accessible digital comic book reader for people with visual impairments. An interview study and prototype evaluation with eight participants with visual impairments revealed implications that can further improve the accessibility of comic books for people with visual impairments. The results also showed which features of the prototype, audiobooks, and eBooks the participants preferred.

Journal Articles (4)

IJHCS 2025
Enhancing Collaborative Signing Songwriting Experience of the d/Deaf Individuals

Youjin Choi, ChungHa Lee, Songmin Chung, Eunhye Cho, Suhyeon Yoo, Jin-Hyuk Hong

Songwriting can be an important means of developing the personal and social skills of d/Deaf individuals, but there is a lack of research on understanding and supporting their songwriting. We aimed to understand d/Deaf people's songwriting experience in the song signing genre, which visually represents music with sign language and body movement. Through two workshops in which mixed-hearing individuals collaborated on songwriting activities, we identified the potential and challenges of the songwriting experience and developed a music-sensory substitution system that presents music multimodally through sound as well as visual and vibrotactile feedback. The proposed system enables mixed-hearing partners to have better collaborative interaction and signing songwriting experiences. Consequently, we found that the process of signing songwriting is valued by d/Deaf individuals as a means of musical self-expression and social connection, and our system increased their musical engagement while encouraging them to express themselves more through music and sign language.

TACCESS 2023
AccessComics2: Understanding the User Experience of an Accessible Comic Book Reader for Blind People with Textual Sound Effects

Yun Jung Lee, Hwayeon Joh, Suhyeon Yoo, Uran Oh

For people with visual impairments, many studies have been conducted to improve the accessibility of various types of images on the web. However, the majority of the work focused on photos or graphs. In this study, we propose AccessComics, an accessible digital comic book reader for people with visual impairments. To understand the accessibility of existing platforms, we first conducted a formative online survey with 68 participants who are blind or have low vision asking about their prior experiences with audiobooks and eBooks. Then, to learn the implications of designing an accessible comic book reader for people with visual impairments, we conducted an interview study with eight participants and collected feedback about our system. Considering our findings that a brief description of the scene and sound effects are desired when listening to comic books, we conducted a follow-up study with 16 participants (8 blind, 8 sighted) to explore how to effectively provide scene descriptions and sound effects, generated based on the onomatopoeia and mimetic words that appear in comics. We then assessed the impact on the overall reading experience and whether it differed across user groups. The results show that the presence of scene descriptions was perceived to be useful for concentration and understanding the situation, while the sound effects were perceived to make the book-reading experience more immersive and realistic. Based on the findings, we suggest design implications specifying features that future accessible comic book readers should support.

TII 2021
Integrated Scheduling of Real-Time and Interactive Tasks for Configurable Industrial Systems

Suhyeon Yoo, Yewon Jo, Hyokyung Bahn

With the recent advances in Internet of Things and cyber-physical systems technologies, smart industrial systems support configurable processes consisting of human interactions as well as hard real-time functions. This implies that irregularly arriving interactive tasks and traditional hard real-time tasks coexist. As the characteristics of these tasks are heterogeneous, scheduling them all at once is not easy. To cope with this situation, this article presents a new task scheduling policy that uses the notion of a "virtual real-time task" and two-phase scheduling. As hard real-time tasks must keep their deadlines, we perform offline scheduling based on genetic algorithms beforehand. This determines the processor's voltage level and memory location for each task and also reserves the virtual real-time tasks for interactive tasks. When interactive tasks arrive during execution, online scheduling is performed on the time slots of the virtual real-time tasks. As interactive workloads evolve over time, we monitor them and periodically update the offline scheduling. Experimental results show that the proposed policy reduces energy consumption by 66.8% on average without deadline misses and keeps the waiting time of interactive tasks under 3 seconds.
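A toy sketch of the two-phase idea, assuming reserved slots are already known: the offline phase (stubbed out here, normally a genetic-algorithm search over voltage levels and memory placement) reserves virtual real-time slots, and the online phase drops arriving interactive tasks into them. All structures and numbers are illustrative, not the paper's implementation.

```python
# Toy sketch of the two-phase policy described above. The offline phase
# (normally a genetic-algorithm search over voltage levels and memory
# placement) is stubbed out as a fixed set of reserved slots; the online
# phase places arriving interactive tasks into those slots.
from dataclasses import dataclass

@dataclass
class Slot:
    start_ms: int
    length_ms: int
    free: bool = True

# Pretend the offline phase reserved these virtual real-time slots.
reserved_slots = [Slot(0, 50), Slot(200, 50), Slot(400, 100)]

def schedule_interactive(arrival_ms: int, runtime_ms: int) -> Slot | None:
    """Online phase: place an interactive task in the earliest reserved
    slot that starts after its arrival and is long enough to hold it."""
    for slot in reserved_slots:
        if slot.free and slot.start_ms >= arrival_ms and slot.length_ms >= runtime_ms:
            slot.free = False
            return slot
    return None  # no slot fits; the task waits for the periodic offline update

print(schedule_interactive(arrival_ms=120, runtime_ms=40))  # Slot(start_ms=200, ...)
```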

IJIBC 2020
Real-Time Power Saving Scheduling Based on Genetic Algorithms in Multi-core Hybrid Memory Environments

Suhyeon Yoo, Yewon Jo, Kyung-Woon Cho, Hyokyung Bahn

Recently, due to the rapid diffusion of intelligent systems and IoT technologies, power-saving techniques for real-time embedded systems have become important. In this paper, we propose P-GA (Parallel Genetic Algorithm), a scheduling algorithm that aims to reduce the power consumption of real-time systems in multi-core hybrid memory environments. P-GA improves on the Proportional-Fairness (PF) algorithm devised for multi-core environments by combining dynamic voltage/frequency scaling of the processor with nonvolatile memory technologies. Specifically, P-GA applies genetic algorithms to optimize the voltage and frequency modes of processors and the memory types, thereby minimizing the power consumption of the task set. Simulation experiments show that P-GA reduces power consumption by a factor of 2.85 compared to conventional schemes.
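A minimal sketch of the encoding this suggests, one gene per task pairing a voltage/frequency mode with a memory type, evolved here with a simple (1+1)-style loop; the modes, costs, and power model are placeholders, not P-GA itself.

```python
# Minimal sketch of the per-task encoding suggested above: each gene pairs a
# processor voltage/frequency mode with a memory type, and a simple
# (1+1)-style evolutionary loop searches for a low-power assignment.
# The modes, costs, and power model are placeholders, not P-GA itself.
import random

VF_MODES = [(0.9, 0.5), (1.1, 1.0), (1.3, 1.5)]   # (volts, GHz), hypothetical
MEM_TYPES = ["DRAM", "NVM"]                        # hybrid memory choices

def random_gene():
    return (random.randrange(len(VF_MODES)), random.choice(MEM_TYPES))

def power(chromosome) -> float:
    # Placeholder model: dynamic power ~ V^2 * f, plus a flat per-task
    # memory cost that is lower for nonvolatile memory.
    total = 0.0
    for vf_idx, mem in chromosome:
        v, f = VF_MODES[vf_idx]
        total += v * v * f + (0.2 if mem == "NVM" else 0.5)
    return total

best = [random_gene() for _ in range(8)]           # one gene per task
for _ in range(200):                               # generations
    child = [g if random.random() > 0.1 else random_gene() for g in best]
    if power(child) < power(best):
        best = child                               # keep the lower-power plan
```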

Short Papers (4)

ASSETS 2025 Doctoral Consortium
Enhancing Music Accessibility through AI Systems for and with d/Deaf Individuals

Suhyeon Yoo

Researchers have investigated visual and vibrotactile approaches to making music more accessible to d/Deaf individuals, focusing on music appreciation. However, these approaches often fail to help d/Deaf users fully understand and engage with the various musical elements of a song. My research addresses this gap through a series of design and evaluation studies with d/Deaf and non-d/Deaf participants. It begins with a formative study that identifies key attributes of song signing valued by the Deaf community. Building on this, a controlled study explores the use of disclosure statements to mitigate cultural misrepresentation. Finally, a systems study leverages Large Language Models (LLMs) to support the translation of lyrics to sign language. My next steps involve developing collaborative tools for song signers and facilitating culturally sensitive music experiences. These projects collectively bridge the gap between d/Deaf and non-d/Deaf communities, promoting intercultural understanding and expanding musical inclusivity.