Projects

Understanding the context of substance use among college students through smartwatches

Adolescents and young adults are increasingly consuming substances at amounts and frequencies that have serious consequences for their mental health and physical wellbeing. The behavioral and contextual factors surrounding this use are poorly understood, as prior studies in this domain have relied on diaries, cross-sectional surveys, and similar retrospective self-reports. To gain a better understanding of these behaviors in the wild, we propose using Apple Watches to collect Ecological Momentary Assessment (EMA) and sensor data. Using this information, we intend to build personalized models of substance use that can later be used to deploy interventions that help individuals disengage from substance abuse.

This work was awarded a seed grant from the College of IST and was featured in Penn State News. A description of the proposed system was published at the 5th International Mental Health: Sensing and Intervention Workshop in 2020, and the study is currently underway.

Investigating Users’ Perceptions of Light Behaviors in Smart-Speakers

Light is an important communication modality. Smart speakers leverage light behaviors with different colors and shapes to deliver a wide range of information. However, there has not been sufficient work assessing the effectiveness of these light behaviors: can users correctly interpret the light behaviors of smart speakers, and how effective are these behaviors across different cultures? I employed a mixed-methods approach to investigate how users perceive the light apparatus on Amazon Echo and Google Home devices. I conducted online surveys with 1,006 users, along with expert interviews, to analyze the perceived utility of these light apparatus and to provide design recommendations that enhance their understandability and usability. My primary research goal for this study was to explore a new modality (the light apparatus) within smart speakers to ultimately enhance the experience for populations such as the deaf and hard-of-hearing community, who cannot rely solely on voice interactions with smart speakers. This work was accepted as a full paper at the Conversational User Interfaces conference in 2021.