About

I study secure and trustworthy machine learning, with broader interests in the security, privacy, and fairness of AI systems. My current research focuses on model provenance for generative image models: I develop and rigorously evaluate model fingerprinting and watermarking techniques under realistic adversarial conditions. More broadly, I am interested in how technical mechanisms for provenance and verification can support the trustworthy deployment of generative AI, and in how they inform digital forensics, regulatory compliance, intellectual property protection, and content authenticity in practice. I welcome collaborations on research, interdisciplinary projects, and real-world applications in these areas. My PhD advisor is Dr. Marc Juarez.

  • AI Security
  • Trustworthy AI
  • Generative AI
  • Computer Vision
  • Digital Media

News

  • Mar 2026
Smudged Fingerprints was covered by Herald Scotland, DIGIT, and University of Edinburgh News; the coverage highlights the broader relevance of this work on robust AI image provenance.
  • Mar 2026
    Gave a guest lecture on Image Provenance in the AI Era for the graduate course Privacy and Security with Machine Learning at the University of Edinburgh. The slides are available here.
  • Dec 2025
Smudged Fingerprints accepted to the 4th IEEE Conference on Secure and Trustworthy Machine Learning (SaTML 2026). The paper introduces the first systematic adversarial attack framework against model fingerprints in AI-generated images. A preprint is available on arXiv.
  • Sep 2025
    Released AuthPrint, a black-box ML approach for detecting silent model swapping by malicious image-generation API providers. This is the first work to formally address this threat model.
  • Apr 2025
    Presented SoK: What Makes Private Learning Unfair? at SaTML 2025 (Copenhagen).
  • Dec 2024
SoK: What Makes Private Learning Unfair? accepted to the 3rd IEEE Conference on Secure and Trustworthy Machine Learning (SaTML 2025). The paper provides the first causal analysis of the fairness degradation induced by differential privacy in machine learning. A preprint is available on arXiv.

Publications

View full publication record

Note: * denotes equal contribution (co-first author).

Teaching

  • Lecture
    Guest lecture on Image Provenance in the AI Era for the graduate course Privacy and Security with Machine Learning — University of Edinburgh. The slides are available here.
  • Teaching Assistant
    Privacy and Security with Machine Learning — University of Edinburgh (2023–2025)
  • Teaching Assistant
    Mathematical Image Analysis — Johns Hopkins University (2019–2020)
  • Research Tutoring
    Felipe Takaesu (JHU), Eliana Crentsil (JHU), Lucia Sablich (JHU), Shannon Flanary (JHU), Chunhan Fang (UoE).

Education

University of Edinburgh 🇬🇧
Ph.D. in Cyber Security, Privacy, and Trust
Johns Hopkins University 🇺🇸
M.Sc. in Mechanical Engineering
Fudan University 🇨🇳
B.Sc. in Theoretical & Applied Mechanics