Design GenAI as Mental Health Primary Care


Recent years have seen a wealth of AI therapy chatbots and journaling tools, driven by advances in Large Language Models (LLMs). These non-clinical tools hold exciting potential to alleviate the shortage of mental healthcare providers. However, they also pose safety and efficacy risks, may unintentionally weaken human connection, and could delay users from seeking necessary clinical care.

This project explores how to responsibly design LLM tools for mental well-being, fulfilling LLMs’ promise of expanding access to mental health care while minimizing unintended consequences. The HAI design research questions are: What does it mean to design LLM-based mental well-being tools responsibly, given that the safety and efficacy of any mental health intervention are inherently difficult to evaluate? How can we design such tools to minimize the risk of replacing human connection while seamlessly bridging AI-guided self-care and clinical care?

This project is ongoing. Below are some initial publications:

This project is supported by Weill Cornell Medicine Intercampus research grants. Tony Wong and [Yuewen Yang](https://www.yuewen.me/) are student leads. Alumnus Angel Hwang was the previous student lead.