Passionate researcher dedicated to responsible AI in healthcare, with expertise in clinical informatics, natural language processing, and human-AI collaboration. Currently advancing ethical AI systems and clinical NLP at the University of Colorado Anschutz Medical Campus.
I am currently a Postdoctoral Fellow in the Department of Biomedical Informatics at the University of Colorado School of Medicine, where I focus on developing ethical AI systems and advancing clinical natural language processing.
I completed my PhD in Information Technology at the University of Texas at San Antonio in 2025 under the guidance of Dr. Anthony Rios. My research brings together Natural Language Processing (NLP), Large Language Models (LLMs), and Human-Computer Interaction (HCI) to address real-world challenges in healthcare and society, with a focus on fairness, bias detection, and promoting equity in AI for healthcare.
My research journey began with a strong foundation in mathematics and statistics: I earned my MS from the University of Colorado Denver and my BS in Statistics from Minzu University of China in Beijing. I have published in prestigious venues across NLP, biomedical informatics, and computational social science, including NAACL, COLING, ICWSM, BioNLP, Clinical NLP, and JAMIA.
Throughout my academic career, I have been recognized for excellence in research and teaching, receiving multiple awards, including the PhD Student of the Year Award from the UTSA Carlos Alvarez College of Business and selection as one of 30 national Future Research Leaders in Artificial Intelligence by the University of Michigan.
Responsible AI ensures that algorithms operate fairly and transparently across all stages, from data collection to model inference to deployment, by actively detecting and reducing ethical harms such as bias, privacy issues, and unintended consequences.
In healthcare, even minor biases can lead to misdiagnoses, unequal treatment recommendations, and exacerbated health disparities. By rigorously measuring bias in data and models, and by speculating about future bias and its consequences, we can reduce risks before deployment. This means examining how AI performs across different patient contexts, especially edge cases where misalignment with users' needs can lead to unintended harm.
My main contribution is developing a comprehensive framework to identify, measure, and mitigate bias in biomedical and social applications through three main areas:
We develop methods to extract Social Determinants of Health (SDoH), such as income, housing, and employment, from unstructured EHR notes and turn them into structured data. Real-world bias goes beyond race, age, and gender, and capturing these social factors helps us better understand long-term impacts and improve clinical decisions.
We develop novel prompting strategies to understand how the public perceives news headlines and how hidden bias in content, such as implicit framing or unbalanced reporting, can shape public opinion and diverge from real-world data. We also develop benchmarks to evaluate when, how, and why biomedical AI systems fail, especially across different identities such as race or gender.
Inspired by "Black Mirror," we develop a multi-agent system that uses LLMs to simulate virtual environments and character interactions. It generates user stories that reflect potential harms and benefits, helping people speculate about future bias and its consequences before AI models are developed and deployed.
Explore my research across different domains
Developing responsible AI systems that prioritize ethical considerations, patient safety, and community well-being in healthcare applications.
Developed a marker-based neural network system to extract Social Determinants of Health (SDoH) from clinical notes, supporting context-aware and personalized patient care.
Advancing natural language processing techniques for clinical applications, focusing on information extraction, text classification, and medical report analysis.
Evaluating instruction-tuned language models for temporal relation extraction in clinical timelines, improving understanding of medical event sequences.
Developing multi-modal retrieval-based systems for chest X-ray report summarization and improving expert radiology reports through layperson summary prompting.
Designing conversational interfaces, multi-agent dialogue systems, and human-centered AI interactions that enhance rather than replace human capabilities.
Developing virtual environments using large language models to simulate character interactions and generate user stories that help identify potential harms and benefits of AI systems before deployment.
Creating intuitive dialogue systems that facilitate natural human-AI interaction, focusing on understanding user intent and providing contextually appropriate responses.
Investigating the internal mechanisms of AI models to understand their decision-making processes, ensure transparency, and develop safer, more reliable AI systems.
Research into understanding how large language models and neural networks make decisions, focusing on identifying interpretable features and causal mechanisms within model architectures.
Developing methods to probe and understand AI model behavior to identify potential risks, biases, and failure modes before deployment in critical applications like healthcare.
Carlos Alvarez College of Business
Advisor: Dr. Anthony Rios
Dissertation: Responsible AI for Healthcare and Community Well-being: From Ethical Design to Practical Deployment
Advisor: Dr. Stephanie Santorico
Class Size: 95 students
Evaluations: Instructor Rating 4.4/5.0 • Course Rating 4.5/5.0
Class Size: 43 students
Evaluations: Instructor Rating 3.9/5.0 • Course Rating 4.0/5.0
Class Size: 75 students
Evaluations: Instructor Rating 4.6/5.0 • Course Rating 4.7/5.0