

AI in Psychiatry: Research, Digital Therapeutics, and Responsible Innovation

By Ryan S. Sultan, MD
Assistant Professor of Clinical Psychiatry, Columbia University Irving Medical Center
March 28, 2026

Dr. Ryan Sultan leads research at the intersection of artificial intelligence and psychiatry at Columbia University. His work spans JAMA Psychiatry publications on telehealth oversight, the NIH-funded PAWS digital therapeutic for cannabis use disorder, natural language processing in electronic health records, and responsible AI frameworks for mental health care. His guiding principle: AI should augment clinical decision-making, never replace the psychiatrist.


On This Page: This page covers four pillars of my AI in psychiatry research: (1) telehealth oversight and the dangers of unregulated digital platforms, (2) the PAWS AI-powered digital therapeutic for cannabis use disorder, (3) natural language processing in mental health, and (4) responsible AI frameworks for clinical psychiatry. Each section links to deeper pages with full details.


The Problem That Started This Work

In 2020, the pandemic forced psychiatry online almost overnight. Telehealth visits surged from under 1% of psychiatric encounters to over 50% in a matter of weeks. For many patients, this was a genuine improvement -- better access, no commute, continuity of care during lockdowns. I supported that shift.

What I did not support was what happened next.

A wave of venture-capital-backed digital psychiatry platforms emerged, positioning themselves as disruptors of mental health care. Companies like Cerebral, Done, and others scaled rapidly by offering what amounted to prescription mills with a tech veneer. Evaluations lasting under 10 minutes. Controlled substances -- stimulants, benzodiazepines -- prescribed without adequate assessment. No follow-up. No monitoring. No therapeutic relationship.

This was not innovation. This was clinical negligence at scale, and it was being celebrated in the press as the future of mental health care.

My colleague Manpreet K. Singh at Stanford University (now at UC Davis) and I decided to document what was happening and propose a framework for responsible integration of telehealth into psychiatry. That work, published in JAMA Psychiatry, became the foundation for my broader research program on AI and digital technology in mental health.


Pillar 1: Telehealth Oversight -- The JAMA Psychiatry Publications

My viewpoint in JAMA Psychiatry, co-authored with Manpreet K. Singh and Alice W. Zhang, laid out the case that the rapid expansion of telepsychiatry had outpaced every regulatory mechanism designed to protect patients.

What We Found

The paper documented specific failures:

  • Controlled substances prescribed after evaluations lasting under 10 minutes
  • Regulatory oversight that had not kept pace with the expansion of telehealth
  • Patient safety compromised by business models that prioritize volume over quality
  • Inadequate monitoring for side effects, drug interactions, and treatment outcomes

The Framework We Proposed

Our JAMA Psychiatry paper did not argue against telehealth. I use telehealth in my own practice. The argument was that telehealth requires the same clinical standards as in-person care -- standards that many platforms were systematically circumventing. We proposed:

  • Minimum evaluation standards before prescribing, particularly for controlled substances
  • Mandatory monitoring of treatment outcomes for patients seen through digital platforms
  • Transparent reporting of prescribing patterns and algorithmic decisions
  • Maintaining the psychiatrist-patient relationship as the foundation of care

The full analysis of my telehealth research, including the specific findings and policy recommendations, is covered in depth on the Telehealth in Psychiatry page.


Pillar 2: The PAWS Digital Therapeutic -- AI for Cannabis Use Disorder

If the telehealth research represents the cautionary side of my AI work, PAWS represents the constructive side: using artificial intelligence to solve a real clinical problem that traditional approaches alone cannot address.

The Clinical Problem

Cannabis use disorder among young people is a growing public health challenge that the mental health system is poorly equipped to handle.

The traditional treatment model -- weekly in-person therapy sessions with a trained clinician -- works when patients can access it. But most cannot. We need scalable solutions that meet young people where they are, which is on their phones.

What PAWS Is

PAWS -- Preventing Adverse outcomes With Screening -- is an NIH-funded AI-powered digital therapeutic I am developing in collaboration with Xuhai "Orson" Xu at Columbia's Department of Biomedical Informatics (DBMI). Orson is also visiting faculty at Google Research, bringing deep expertise in human-AI interaction and conversational AI systems.

The core of PAWS is a large language model-powered conversational agent designed specifically for youth with cannabis use disorder. This is not a chatbot that dispenses generic advice. It is a clinically informed AI companion that provides personalized, real-time support to young people struggling with cannabis use.

The project is funded through an NIH UG3/UH3 mechanism totaling $335,500, reflecting NIH's recognition that digital therapeutics for substance use disorders represent a high-priority area of innovation.
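The "clinician in the loop" requirement can be made concrete. The sketch below is a hypothetical illustration -- not the actual PAWS architecture, and the risk-term list is invented -- of one way a conversational agent can route high-risk disclosures to a human clinician instead of answering them itself:

```python
# Hypothetical sketch -- NOT the actual PAWS design. It illustrates routing:
# messages matching risk criteria go to an on-call clinician for review
# rather than being handled by the language model alone.

RISK_TERMS = {"suicide", "kill myself", "overdose", "can't go on"}  # invented list

def route_message(message: str) -> str:
    """Return 'escalate' for messages needing clinician review, else 'agent'."""
    text = message.lower()
    if any(term in text for term in RISK_TERMS):
        return "escalate"   # hand off to a human clinician
    return "agent"          # safe for the conversational agent to handle

# A high-risk disclosure is escalated; routine content stays with the agent.
assert route_message("I've been thinking about suicide") == "escalate"
assert route_message("I smoked again last night and feel guilty") == "agent"
```

A production system would use far more sophisticated risk detection than keyword matching; the point of the sketch is only the routing decision itself, which keeps a qualified human responsible for the highest-stakes moments.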

The Advisory Board

PAWS is guided by an advisory board that represents the highest level of expertise in addiction neuroscience, substance use treatment, and clinical research:

  • Eric Nestler, MD, PhD -- Mount Sinai (NAS and NAM member, h-index 216) -- addiction neuroscience, epigenetics of substance use
  • John Krystal, MD -- Yale University (NAM member; Editor-in-Chief, Biological Psychiatry) -- neuropsychopharmacology, substance use neurobiology
  • Ned Kalin, MD -- University of Wisconsin (NAM member; Editor-in-Chief, American Journal of Psychiatry) -- anxiety, stress neurobiology, developmental psychiatry
  • John Walkup, MD -- Northwestern University -- child and adolescent psychiatry, clinical trials

Clinical collaborators include Yasmin Hurd (Mount Sinai, NAS + NAM member) in addiction neuroscience, Kevin Gray (MUSC) in adolescent substance use treatment, Sharon Levy (Harvard/Boston Children's Hospital) in pediatric addiction medicine, and Melanie Wall (Columbia) in biostatistics and clinical trial design.

The full details of the PAWS project -- technical architecture, clinical validation plan, and development timeline -- are on the PAWS Digital Therapeutic page.


Pillar 3: Natural Language Processing in Mental Health

The third pillar of my AI research applies natural language processing -- the branch of AI that enables computers to understand, interpret, and generate human language -- to psychiatric clinical data.

Training Under Carol Friedman

My NLP work is grounded in training I received from Carol Friedman, PhD, a member of the National Academy of Medicine and the creator of MedLEE (Medical Language Extraction and Encoding System) at NewYork-Presbyterian Hospital. MedLEE was one of the first NLP systems capable of extracting structured clinical information from unstructured medical text -- a foundational technology that influenced the entire field of clinical NLP.

Working with Carol Friedman gave me a deep appreciation for the challenges of extracting reliable clinical information from the messy, ambiguous, abbreviated language of real medical documentation. Psychiatry presents particular challenges for NLP: our notes are narrative-heavy, subjective, context-dependent, and filled with nuances that even experienced clinicians can interpret differently.
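The negation problem is the classic illustration of why clinical language is hard for machines. As a toy sketch -- not MedLEE, and far simpler than any production system -- consider how even a naive keyword match must at minimum account for negation cues before it can emit a structured finding:

```python
import re

# Toy illustration of the core clinical-NLP task: turning unstructured note
# text into structured assertions. Real systems also handle abbreviations,
# section context, and uncertainty; this sketch shows only negation, the
# reason naive keyword matching fails on clinical text.

NEGATION_CUES = re.compile(r"\b(denies|no|without|negative for)\b", re.IGNORECASE)

def extract_concept(sentence: str, concept: str):
    """Return a structured assertion for `concept` if mentioned, with polarity."""
    if concept.lower() not in sentence.lower():
        return None
    negated = bool(NEGATION_CUES.search(sentence))
    return {"concept": concept, "present": not negated}

# "Patient denies cannabis use" must NOT be read as a positive finding.
assert extract_concept("Patient denies cannabis use.", "cannabis use") == \
    {"concept": "cannabis use", "present": False}
assert extract_concept("Daily cannabis use reported.", "cannabis use") == \
    {"concept": "cannabis use", "present": True}
```

Scope is the hard part a sketch like this cannot show: a negation cue early in a sentence does not negate every concept that follows it, which is exactly the kind of ambiguity that makes psychiatric notes especially challenging.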

Research Advisors and Collaborators

My NLP research is supported by a network of computational experts.

Applications in Psychiatry

The clinical applications of NLP in psychiatry are substantial and largely untapped: detecting substance use patterns in clinical notes, predicting treatment outcomes, and identifying at-risk patient populations.

My Mental Health Informatics Lab at Columbia applies these methods to real clinical data from one of the largest academic medical centers in the country. The full scope of this work is detailed on the NLP in Mental Health page.


Pillar 4: Responsible AI Frameworks for Mental Health

The through-line connecting all of my AI research is a commitment to responsible innovation. Technology in psychiatry presents unique risks that other medical specialties do not face to the same degree.

Why Psychiatry Is Different

Psychiatry involves the most sensitive aspects of human experience -- thoughts, emotions, trauma, relationships, identity. The data psychiatrists work with is inherently more personal than a blood pressure reading or an imaging study. This creates distinct challenges for AI.

The Columbia Data Science Institute

I serve as a discussant for AI in mental health at the Columbia Data Science Institute, where I bring the clinical perspective to conversations that are often dominated by engineers and data scientists. This role reflects my conviction that psychiatrists must be at the table when AI tools for mental health are designed, tested, and deployed -- not brought in as an afterthought to rubber-stamp products that have already been built.

The Data Science Institute at Columbia convenes researchers across departments -- computer science, statistics, engineering, medicine -- to address the most pressing questions in data-driven research. Mental health is increasingly recognized as an area where data science can make transformative contributions, but only if the clinical context is understood. Too often, technically elegant AI systems fail in clinical practice because they were built without adequate clinical input. My role is to ensure that does not happen at Columbia -- to be the person in the room who asks "but what does this mean for the patient sitting across from me?"

The cross-disciplinary conversations at DSI have also shaped my thinking about the training pipeline. The next generation of psychiatrists needs computational literacy -- not to become data scientists, but to be informed consumers and collaborators of AI tools. Similarly, computer scientists working on mental health applications need genuine exposure to clinical practice, not just clinical datasets. Building these bridges is part of what the Data Science Institute does, and it is part of what I see as my responsibility as a researcher who works in both worlds.

The discussions at the Data Science Institute have reinforced several principles that guide my research:

Core Principles for AI in Psychiatry:

  • Clinical oversight is non-negotiable. Every AI-assisted decision in psychiatry must have a qualified clinician in the loop.
  • Evidence before deployment. AI tools in mental health must demonstrate clinical validity through rigorous trials before being marketed to patients.
  • Bias auditing is mandatory. AI systems trained on clinical data will reflect the biases in that data -- racial disparities in diagnosis, gender differences in symptom presentation, socioeconomic factors in treatment access. These biases must be identified and mitigated before deployment.
  • Transparency is required. Patients and clinicians must understand what AI systems are doing and why. Black-box algorithms making psychiatric recommendations are not acceptable.
  • The relationship comes first. AI should enhance the psychiatrist-patient relationship, not replace it. Any technology that substitutes for human connection in psychiatry has failed the fundamental test.
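The bias-auditing principle above can be made concrete: before deployment, compare a model's error rates across demographic groups, since an overall accuracy number can hide a subgroup the system systematically misses. A minimal sketch, using invented group labels and predictions:

```python
from collections import defaultdict

# Minimal bias-audit sketch: compute the false negative rate (missed true
# cases) per demographic group. Records and group labels are invented.

def false_negative_rates(records):
    """records: iterable of (group, true_label, predicted_label), labels 0/1."""
    misses = defaultdict(int)     # true cases the model failed to flag
    positives = defaultdict(int)  # all true cases, per group
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = false_negative_rates(data)
# Group B's miss rate is double group A's -- a disparity to investigate.
assert abs(rates["A"] - 1/3) < 1e-9 and abs(rates["B"] - 2/3) < 1e-9
```

The same comparison applies to any metric that matters clinically (false positives, calibration); the essential step is disaggregating by group before, not after, deployment.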

The Risks of Getting AI in Psychiatry Wrong

I spend a significant portion of my time thinking about what happens when technology in psychiatry is deployed irresponsibly, because we already have examples.

The Digital Prescription Mill Problem

The digital psychiatry platforms that emerged during and after the pandemic represent a cautionary tale. Companies that described themselves as "disrupting" mental health care were, in many cases, simply lowering the standard of care to maximize throughput. The results were predictable: stimulant overprescribing, adverse events, and an erosion of public trust in telehealth.

The Chatbot Problem

A parallel risk has emerged with AI chatbots marketed as mental health tools. Some of these products -- deployed without clinical validation, staffed without licensed clinicians, and marketed directly to vulnerable populations -- represent the worst of Silicon Valley's "move fast and break things" philosophy applied to psychiatric care.

The risks are real.

This is precisely why my PAWS project is being developed within an academic medical center, with clinical oversight, ethical review, and rigorous testing -- not in a startup garage optimizing for user engagement metrics.


How AI Can Actually Help in Psychiatry

Despite the risks, I am genuinely optimistic about the potential of AI in psychiatry -- when it is done right. The opportunities are substantial:

Clinical Decision Support

Psychiatry involves integrating vast amounts of information -- genetic factors, medication histories, comorbidities, psychosocial stressors, developmental history, family dynamics -- into treatment decisions. AI-powered clinical decision support systems can help clinicians process this information more effectively.

Scalable Screening and Monitoring

The mental health workforce shortage is real. There are not enough psychiatrists, psychologists, and therapists to meet the current demand for mental health services, let alone the growing demand driven by increased awareness and reduced stigma. AI can help address this gap -- not by replacing clinicians, but by extending their reach.
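A minimal sketch of what "extending reach" can mean in practice: automated screening that tiers follow-up while keeping every elevated result in front of a clinician. The score ranges below are invented for illustration, not drawn from any validated instrument:

```python
# Hypothetical triage sketch: an automated screen sorts results into
# follow-up tiers, and every elevated score still routes to a human
# clinician. The 0-32 scale and cutoffs are invented for illustration.

def triage(screen_score: int) -> str:
    """Map a screening score (0-32, hypothetical scale) to a follow-up tier."""
    if screen_score >= 20:
        return "urgent clinician review"
    if screen_score >= 8:
        return "routine clinician follow-up"
    return "self-guided resources + periodic re-screen"

assert triage(25) == "urgent clinician review"
assert triage(10) == "routine clinician follow-up"
assert triage(3) == "self-guided resources + periodic re-screen"
```

The design choice worth noticing is that the automation decides only *how soon* a clinician sees the patient, never *whether* one does for any elevated score.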

Research Acceleration

AI is accelerating psychiatric research in ways that were impossible five years ago. NLP enables analysis of clinical text at scale. Machine learning identifies patterns in large datasets that human researchers could never detect. Digital phenotyping creates new data streams for understanding psychiatric conditions in real-world settings.

My own research program -- spanning the Sultan Lab -- uses these tools daily. The PAWS project, the NLP work, the large-scale epidemiological studies using IQVIA and MarketScan data -- all of these rely on computational methods that would have been impractical a decade ago.


The Integrative Psych Approach to Technology

At Integrative Psych, my clinical practice in New York City, we implement technology in ways that reflect the principles from my research: clinician oversight, evidence before deployment, and the primacy of the therapeutic relationship.


Where This Work Is Going

The intersection of AI and psychiatry is evolving rapidly.

My role in this ongoing evolution is to ensure that the clinical perspective -- the patient's perspective -- remains central. Technology in psychiatry should serve patients, not investors. It should augment clinicians, not replace them. And it should be held to the same rigorous standards we apply to any other treatment in medicine.

That is what responsible innovation in psychiatry looks like.


Explore My AI in Psychiatry Research

Research Pages

Telehealth in Psychiatry
JAMA Psychiatry publications, risks of unregulated platforms, policy recommendations for responsible telehealth

PAWS Digital Therapeutic
NIH-funded AI companion app for cannabis use disorder, LLM-powered conversational agent, advisory board

NLP in Mental Health
Natural language processing applied to clinical psychiatry, training under Carol Friedman, Mental Health Informatics Lab

The Sultan Lab
Full research program: ADHD, cannabis, computational psychiatry, digital therapeutics


Frequently Asked Questions

What is Dr. Sultan's research on AI in psychiatry?

My AI in psychiatry research spans four areas: JAMA Psychiatry publications on telehealth oversight documenting the risks of unregulated digital psychiatry platforms, the PAWS AI-powered digital therapeutic for cannabis use disorder (NIH-funded, co-developed with Columbia DBMI), natural language processing applied to electronic health records for mental health phenotyping, and responsible AI frameworks for clinical psychiatry. I also serve as a discussant on AI in mental health at the Columbia Data Science Institute.

What is the PAWS digital therapeutic?

PAWS (Preventing Adverse outcomes With Screening) is an NIH-funded AI-powered digital therapeutic for youth with cannabis use disorder. Co-developed with Xuhai "Orson" Xu at Columbia DBMI, it uses a large language model-powered conversational agent to provide personalized, real-time support to young people struggling with cannabis use. The project is funded through an NIH UG3/UH3 mechanism totaling $335,500, with an advisory board that includes members of the National Academy of Sciences and National Academy of Medicine.

What are the risks of unregulated digital psychiatry platforms?

My JAMA Psychiatry research documented several specific risks: some platforms prescribe controlled substances after evaluations lasting under 10 minutes, regulatory oversight has not kept pace with telehealth expansion, patient safety is compromised when business models prioritize volume over quality, and many platforms lack adequate monitoring for side effects, drug interactions, and treatment outcomes. The result has been stimulant overprescribing, adverse events, and erosion of public trust in telehealth.

How does Dr. Sultan use NLP in mental health research?

I apply natural language processing to electronic health records to identify patterns in psychiatric documentation that would be impossible to detect manually. Trained under Carol Friedman -- creator of the MedLEE NLP system at NewYork-Presbyterian -- I use computational methods to detect substance use patterns in clinical notes, predict treatment outcomes, and identify at-risk patient populations through my Mental Health Informatics Lab at Columbia.

Can AI replace psychiatrists?

No. My research framework is built on the principle that AI should augment clinical decision-making, not replace it. Psychiatry requires therapeutic alliance, nuanced clinical judgment, ethical reasoning, and the ability to integrate biological, psychological, and social factors in ways current AI cannot replicate. The goal is tools -- digital therapeutics, NLP-based screening, clinical decision support -- that make psychiatrists more effective while keeping the human relationship at the center.

What is responsible AI in mental health?

Responsible AI in mental health means deploying technology with clinical oversight, evidence-based validation, regulatory accountability, and patient safety as non-negotiable priorities. My framework includes minimum evaluation standards before AI-assisted prescribing, mandatory outcome monitoring, transparent reporting of algorithmic decisions, bias auditing across demographic groups, and maintaining the psychiatrist-patient relationship as the foundation of care.

