We are GPS Lab at Penn State University.
Our work lies at the intersection of Security, AI, and HCI, addressing the question: How can we design secure systems that improve the quality of human interaction and human data authenticity in digital ecosystems?
Research Directions in Our Lab
We study media provenance (e.g., AI-generated content) and design tools to verify digital content authenticity. (Preprint)
We analyze risks related to bots, fraud, and personhood, and design user-centered tooling. (Talk), (Preprint)
We build tools leveraging computer vision methods to empower blind users to manage private image/video content. BivPriv Dataset, Benchmark, Privacy Tool
We investigate data and model governance to facilitate coordination and decision making. Data Minimization, Decentralized Governance, DAO, Democratic AI
This generous support will drive GPS Lab's research on "Designing Accessible Tools for Blind and Low-Vision People to Navigate Deepfake Media."
We are excited to welcome our new PhD students, Farhad and Aljawharah!
Signals of Provenance: Practices & Challenges of Navigating Indicators in AI-Generated Media for Sighted and Blind Individuals
Preprint
AI-generated content has become widespread through easy-to-use tools and powerful generative models. However, provenance indicators are often missed, especially when relying on visual cues alone. This study explores how blind and sighted individuals perceive and interpret these signals, uncovering four mental models that shape their judgments about authenticity and authorship in digital media.
Personhood Credentials: Human-Centered Design Recommendations Balancing Security, Usability, and Trust
USENIX PEPR 2025
Personhood credentials (PHCs) enable individuals to verify that they are human without revealing unnecessary personal data. This study investigates user perceptions of PHCs compared with traditional identity verification. Participants favored features such as periodic biometric checks, time-bound credentials, interactive human checks, and government involvement. We present actionable design recommendations rooted in user expectations.
Design of Provenance and Verification for Voice Origin
Preprint TBD
Voice plays a critical role in sectors such as finance and government, but it is vulnerable to spoofing via synthetic audio. This project explores systems that verify a voice originates from a real human. It combines technical development with user-centered evaluation and advocates for greater awareness and education around voice authentication.
Visual Privacy Management with Generative AI
Accepted at ACM ASSETS
Through interviews with 21 blind or low-vision participants, this study examines how people use GenAI tools for self-presentation, navigation, and professional tasks. Findings reveal user preferences for privacy-aware design features including on-device processing, redaction tools, and multimodal feedback. We offer guidelines to ensure privacy and empowerment in GenAI systems.
Tanusree Sharma
Principal Investigator
Ayae Ide
Ph.D. Student
Yihao Zhou
Masters Student
Farhad Hossain
Incoming Ph.D. Student
Aljawharah M. Alzahrani
Incoming Ph.D. Student
Ryan John Oommen
Researcher, IUG Student
You could be our next member!
Title / Role
We are grateful for the support from: