AI in Psychiatry: Research, Digital Therapeutics, and Responsible Innovation
By Ryan S. Sultan, MD
Assistant Professor of Clinical Psychiatry, Columbia University Irving Medical Center
March 28, 2026
Dr. Ryan Sultan leads research at the intersection of artificial intelligence and psychiatry at Columbia University. His work spans JAMA Psychiatry publications on telehealth oversight, the NIH-funded PAWS digital therapeutic for cannabis use disorder, natural language processing in electronic health records, and responsible AI frameworks for mental health care. His guiding principle: AI should augment clinical decision-making, never replace the psychiatrist.
On This Page: This page covers four pillars of my AI in psychiatry research: (1) telehealth oversight and the dangers of unregulated digital platforms, (2) the PAWS AI-powered digital therapeutic for cannabis use disorder, (3) natural language processing in mental health, and (4) responsible AI frameworks for clinical psychiatry. Each section links to deeper pages with full details.
The Problem That Started This Work
In 2020, the pandemic forced psychiatry online almost overnight. Telehealth visits surged from under 1% of psychiatric encounters to over 50% in a matter of weeks. For many patients, this was a genuine improvement -- better access, no commute, continuity of care during lockdowns. I supported that shift.
What I did not support was what happened next.
A wave of venture-capital-backed digital psychiatry platforms emerged, positioning themselves as disruptors of mental health care. Companies like Cerebral, Done, and others scaled rapidly by offering what amounted to prescription mills with a tech veneer. Evaluations lasting under 10 minutes. Controlled substances -- stimulants, benzodiazepines -- prescribed without adequate assessment. No follow-up. No monitoring. No therapeutic relationship.
This was not innovation. This was clinical negligence at scale, and it was being celebrated in the press as the future of mental health care.
My colleague Manpreet K. Singh at Stanford University (now at UC Davis) and I decided to document what was happening and propose a framework for responsible integration of telehealth into psychiatry. That work, published in JAMA Psychiatry, became the foundation for my broader research program on AI and digital technology in mental health.
Pillar 1: Telehealth Oversight -- The JAMA Psychiatry Publications
My viewpoint in JAMA Psychiatry, co-authored with Manpreet K. Singh and Alice W. Zhang, laid out the case that the rapid expansion of telepsychiatry had outpaced every regulatory mechanism designed to protect patients. The paper documented specific failures:
What We Found
- Evaluation times below clinical standards: Some platforms were completing psychiatric evaluations -- including for ADHD, which requires comprehensive assessment -- in under 10 minutes. A proper ADHD evaluation involves detailed developmental history, collateral information, standardized rating scales, and differential diagnosis. Ten minutes is enough to check a box, not to make a diagnosis.
- Controlled substance prescribing without safeguards: Stimulant prescriptions were being written after single, brief encounters with no requirement for follow-up, no monitoring of vital signs, and no assessment for substance use risk -- the very factors that make stimulant prescribing require clinical judgment.
- Regulatory gaps: State medical boards, the DEA, and CMS had not updated their oversight frameworks for a world where a physician in one state could prescribe controlled substances to a patient in another state through a platform headquartered in a third state. Accountability was diffuse to the point of being nonexistent.
- Misaligned incentives: Volume-based business models rewarded speed over quality. When a company's revenue depends on maximizing the number of prescriptions per provider per hour, patient safety becomes an obstacle to growth.
The Framework We Proposed
Our JAMA Psychiatry paper did not argue against telehealth. I use telehealth in my own practice. The argument was that telehealth requires the same clinical standards as in-person care -- standards that many platforms were systematically circumventing. We proposed:
- Minimum evaluation standards for telehealth psychiatric encounters, particularly before prescribing controlled substances
- Mandatory outcome monitoring tied to prescribing privileges on digital platforms
- Regulatory requirements specific to digital health platforms treating psychiatric conditions
- Quality metrics that prioritize patient outcomes over visit volume
- Transparent reporting of prescribing practices and clinical outcomes by platform
The full analysis of my telehealth research, including the specific findings and policy recommendations, is covered in depth on the Telehealth in Psychiatry page.
Pillar 2: The PAWS Digital Therapeutic -- AI for Cannabis Use Disorder
If the telehealth research represents the cautionary side of my AI work, PAWS represents the constructive side: using artificial intelligence to solve a real clinical problem that traditional approaches alone cannot address.
The Clinical Problem
Cannabis use disorder among young people is a growing public health challenge that the mental health system is poorly equipped to handle. The numbers are stark:
- THC potency has increased dramatically -- from approximately 4% in 1995 to 15-25% in today's products, with concentrates reaching 80-90% THC
- Daily high-potency cannabis use increases psychosis risk approximately 4-fold according to the landmark Lancet Psychiatry study by Di Forti and colleagues
- Most youth with cannabis use disorder never receive treatment -- less than 10% of adolescents and young adults with CUD access specialty care
- Stigma, access barriers, and workforce shortages mean that even when young people want help, they often cannot find it
The traditional treatment model -- weekly in-person therapy sessions with a trained clinician -- works when patients can access it. But most cannot. We need scalable solutions that meet young people where they are, which is on their phones.
What PAWS Is
PAWS -- Preventing Adverse outcomes With Screening -- is an NIH-funded AI-powered digital therapeutic I am developing in collaboration with Xuhai "Orson" Xu at Columbia's Department of Biomedical Informatics (DBMI). Orson is also visiting faculty at Google Research, bringing deep expertise in human-AI interaction and conversational AI systems.
The core of PAWS is a large language model-powered conversational agent designed specifically for youth with cannabis use disorder. This is not a chatbot that dispenses generic advice. It is a clinically informed AI companion that:
- Provides personalized, evidence-based support between clinical encounters
- Uses motivational interviewing techniques adapted for digital delivery
- Monitors patterns in real time -- use frequency, triggers, mood states, risk factors
- Escalates to human clinicians when clinical thresholds are crossed (a simplified sketch of this escalation logic appears after this list)
- Adapts its approach based on each user's engagement patterns and treatment stage
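To make the escalation behavior concrete, the sketch below shows the general shape of threshold-based escalation to a human clinician. Every name, field, and cutoff here is hypothetical -- it illustrates the pattern, not the actual PAWS implementation, which is described on the PAWS Digital Therapeutic page.

```python
# Hypothetical illustration only: the class, fields, and thresholds are invented
# for this sketch and are not the actual PAWS implementation.
from dataclasses import dataclass


@dataclass
class CheckIn:
    """One self-reported check-in from a PAWS-style conversational session."""
    craving_score: int              # 0-10 self-rated craving
    used_today: bool                # any cannabis use reported today
    mood_score: int                 # 0-10, lower = worse mood
    mentions_safety_concern: bool   # flagged language, e.g., self-harm or psychosis risk


def needs_clinician_escalation(recent: list[CheckIn]) -> bool:
    """Return True when pre-specified thresholds suggest a human clinician
    should review the case rather than the agent continuing alone."""
    if not recent:
        return False
    latest = recent[-1]
    # Any safety-relevant disclosure escalates immediately.
    if latest.mentions_safety_concern:
        return True
    # Sustained heavy craving plus daily use over the last week.
    last_week = recent[-7:]
    avg_craving = sum(c.craving_score for c in last_week) / len(last_week)
    if all(c.used_today for c in last_week) and avg_craving >= 8:
        return True
    # Sharp mood deterioration.
    if latest.mood_score <= 2:
        return True
    return False


if __name__ == "__main__":
    week = [CheckIn(craving_score=9, used_today=True, mood_score=5,
                    mentions_safety_concern=False)] * 7
    print(needs_clinician_escalation(week))  # True: sustained heavy craving + daily use
```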
The project is funded through an NIH UG3/UH3 mechanism totaling $335,500, reflecting NIH's recognition that digital therapeutics for substance use disorders represent a high-priority area of innovation.
The Advisory Board
PAWS is guided by an advisory board that represents the highest level of expertise in addiction neuroscience, substance use treatment, and clinical research:
| Advisor | Affiliation | Expertise |
| --- | --- | --- |
| Eric Nestler, MD, PhD | Mount Sinai (NAS and NAM member, h-index 216) | Addiction neuroscience, epigenetics of substance use |
| John Krystal, MD | Yale University (NAM member; Editor-in-Chief, Biological Psychiatry) | Neuropsychopharmacology, substance use neurobiology |
| Ned Kalin, MD | University of Wisconsin (NAM member; Editor-in-Chief, American Journal of Psychiatry) | Anxiety, stress neurobiology, developmental psychiatry |
| John Walkup, MD | Northwestern University | Child and adolescent psychiatry, clinical trials |
Clinical collaborators include Yasmin Hurd (Mount Sinai, NAS + NAM member) in addiction neuroscience, Kevin Gray (MUSC) in adolescent substance use treatment, Sharon Levy (Harvard/Boston Children's Hospital) in pediatric addiction medicine, and Melanie Wall (Columbia) in biostatistics and clinical trial design.
The full details of the PAWS project -- technical architecture, clinical validation plan, and development timeline -- are on the PAWS Digital Therapeutic page.
Pillar 3: Natural Language Processing in Mental Health
The third pillar of my AI research applies natural language processing -- the branch of AI that enables computers to understand, interpret, and generate human language -- to psychiatric clinical data.
Training Under Carol Friedman
My NLP work is grounded in training I received from Carol Friedman, PhD, a member of the National Academy of Medicine and the creator of MedLEE (Medical Language Extraction and Encoding System) at NewYork-Presbyterian Hospital. MedLEE was one of the first NLP systems capable of extracting structured clinical information from unstructured medical text -- a foundational technology that influenced the entire field of clinical NLP.
Working with Carol Friedman gave me a deep appreciation for the challenges of extracting reliable clinical information from the messy, ambiguous, abbreviated language of real medical documentation. Psychiatry presents particular challenges for NLP: our notes are narrative-heavy, subjective, context-dependent, and filled with nuances that even experienced clinicians can interpret differently.
Research Advisors and Collaborators
My NLP research is supported by a network of computational experts:
- Adler Perotte, MD, MA -- research advisor at Columbia DBMI specializing in machine learning, clinical phenotyping, and EHR analysis. His work on computational phenotyping from clinical data directly informs my approach to extracting psychiatric patterns from clinical notes.
- Thomas McCoy, MD -- grant advisor from Massachusetts General Hospital and Harvard, specializing in computational psychiatry and NLP phenotyping. His pioneering work using NLP to identify psychiatric phenotypes from EHR data has been instrumental in shaping my research methodology.
- Noemie Elhadad, PhD -- chair of Columbia DBMI, with deep expertise in NLP and clinical informatics. As department chair, she has built an environment at Columbia where clinical researchers and computer scientists can collaborate on problems that neither could solve alone.
Applications in Psychiatry
The clinical applications of NLP in psychiatry are substantial and largely untapped:
- Detecting substance use patterns: Clinical notes contain rich information about substance use that is rarely captured in structured data fields. NLP can identify mentions of cannabis, alcohol, and other substance use -- including frequency, quantity, context, and consequences -- across millions of clinical encounters, enabling population-level surveillance that structured data alone cannot provide. A toy example of this kind of extraction appears after this list.
- Predicting treatment outcomes: The language clinicians use in progress notes contains implicit signals about treatment trajectory. NLP can identify documentation patterns associated with treatment response, relapse risk, and clinical deterioration before these outcomes become evident through traditional monitoring.
- Identifying at-risk patients: Suicidal ideation, psychotic symptoms, and other high-risk presentations are often documented in narrative notes but missed by structured screening tools. NLP-based surveillance can flag patients who need immediate clinical attention.
- Research phenotyping: For my cannabis and ADHD research, NLP enables identification of clinical cohorts from EHR data with far greater precision than billing codes alone. This is essential for the large-scale observational studies that form a core part of my research program.
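As a toy illustration of the substance-use detection task, the sketch below uses simple keyword and negation rules to flag cannabis mentions in a clinical note. Real clinical NLP pipelines -- MedLEE-style systems or modern transformer models -- are far more sophisticated, with section detection, uncertainty handling, and validation against chart review; every pattern and term list here is invented purely for illustration.

```python
# Toy, rule-based sketch only -- not MedLEE and not my lab's production pipeline.
# Real clinical NLP needs robust negation/uncertainty handling (e.g., NegEx-style
# rules), note-section detection, and validation against manual chart review.
import re

CANNABIS_TERMS = re.compile(
    r"\b(cannabis|marijuana|thc|edibles?|vap(?:e|ing) (?:thc|cannabis))\b", re.I
)
NEGATION_CUES = re.compile(
    r"\b(denies|no history of|negative for|does not use)\b", re.I
)


def flag_cannabis_use(note: str) -> list[str]:
    """Return sentences that mention cannabis use and are not plainly negated."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", note):
        if CANNABIS_TERMS.search(sentence) and not NEGATION_CUES.search(sentence):
            flagged.append(sentence.strip())
    return flagged


if __name__ == "__main__":
    note = ("Patient denies alcohol use. Reports daily cannabis use, mostly vaping THC "
            "concentrates in the evening. No history of opioid use.")
    print(flag_cannabis_use(note))
    # ['Reports daily cannabis use, mostly vaping THC concentrates in the evening.']
```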
My Mental Health Informatics Lab at Columbia applies these methods to real clinical data from one of the largest academic medical centers in the country. The full scope of this work is detailed on the NLP in Mental Health page.
Pillar 4: Responsible AI Frameworks for Mental Health
The through-line connecting all of my AI research is a commitment to responsible innovation. Technology in psychiatry presents unique risks that other medical specialties do not face to the same degree.
Why Psychiatry Is Different
Psychiatry involves the most sensitive aspects of human experience -- thoughts, emotions, trauma, relationships, identity. The data psychiatrists work with is inherently more personal than a blood pressure reading or an imaging study. This creates distinct challenges for AI:
- Therapeutic alliance matters: In psychiatry, the relationship between clinician and patient is itself a treatment tool. AI systems that bypass or undermine this relationship are not just inefficient -- they may be actively harmful.
- Diagnosis is contextual: The same symptom can mean very different things depending on a patient's developmental stage, cultural background, trauma history, and social context. Current AI systems handle this kind of contextual reasoning poorly.
- Stigma amplifies risk: Psychiatric data, if mishandled, can lead to discrimination in employment, insurance, custody, and social relationships. The stakes of a data breach in psychiatry are qualitatively different from those in dermatology.
- Vulnerable populations: Many psychiatric patients -- including those with psychosis, severe depression, or cognitive impairment -- may not be able to fully consent to or understand how AI systems are using their data.
The Columbia Data Science Institute
I serve as a discussant for AI in mental health at the Columbia Data Science Institute, where I bring the clinical perspective to conversations that are often dominated by engineers and data scientists. This role reflects my conviction that psychiatrists must be at the table when AI tools for mental health are designed, tested, and deployed -- not brought in as an afterthought to rubber-stamp products that have already been built.
The Data Science Institute at Columbia convenes researchers across departments -- computer science, statistics, engineering, medicine -- to address the most pressing questions in data-driven research. Mental health is increasingly recognized as an area where data science can make transformative contributions, but only if the clinical context is understood. Too often, technically elegant AI systems fail in clinical practice because they were built without adequate clinical input. My role is to ensure that does not happen at Columbia -- to be the person in the room who asks "but what does this mean for the patient sitting across from me?"
The cross-disciplinary conversations at DSI have also shaped my thinking about the training pipeline. The next generation of psychiatrists needs computational literacy -- not to become data scientists, but to be informed consumers of AI tools and effective collaborators in their development. Similarly, computer scientists working on mental health applications need genuine exposure to clinical practice, not just clinical datasets. Building these bridges is part of what the Data Science Institute does, and it is part of what I see as my responsibility as a researcher who works in both worlds.
The discussions at the Data Science Institute have reinforced several principles that guide my research:
Core Principles for AI in Psychiatry:
- AI augments clinical decision-making; it never replaces the psychiatrist
- Clinical oversight and evidence-based validation before deployment
- Mandatory outcome monitoring and transparent reporting of algorithmic decisions
- Bias auditing across demographic groups
- The psychiatrist-patient relationship remains the foundation of care
The Risks of Getting AI in Psychiatry Wrong
I spend a significant portion of my time thinking about what happens when technology in psychiatry is deployed irresponsibly, because we already have examples.
The Digital Prescription Mill Problem
The digital psychiatry platforms that emerged during and after the pandemic represent a cautionary tale. Companies that described themselves as "disrupting" mental health care were, in many cases, simply lowering the standard of care to maximize throughput. The results were predictable:
- Stimulant overprescribing: Some platforms saw stimulant prescription rates far above clinical norms, suggesting that ADHD was being diagnosed -- and stimulants prescribed -- without adequate evaluation.
- Adverse events: When you prescribe controlled substances without proper monitoring, adverse events are inevitable. Reports of cardiovascular side effects, psychiatric destabilization, and substance misuse followed.
- Regulatory action: The DEA and state medical boards eventually intervened in several cases, but the damage to patients had already been done.
- Erosion of trust: Perhaps most damaging, these platforms eroded public trust in telehealth itself -- a modality that, done properly, genuinely improves access to psychiatric care.
The Chatbot Problem
A parallel risk has emerged with AI chatbots marketed as mental health tools. Some of these products -- deployed without clinical validation, staffed without licensed clinicians, and marketed directly to vulnerable populations -- represent the worst of Silicon Valley's "move fast and break things" philosophy applied to psychiatric care.
The risks are real:
- False reassurance: A chatbot that tells a suicidal patient everything will be okay may delay life-saving intervention.
- Inappropriate advice: Without clinical training, AI systems can and do provide recommendations that are clinically inappropriate for specific patients.
- Data exploitation: Some mental health apps harvest deeply personal information with inadequate privacy protections, effectively monetizing patients' psychiatric distress.
- Therapeutic illusion: Patients who believe they are receiving treatment from a chatbot may not seek the professional help they actually need.
This is precisely why my PAWS project is being developed within an academic medical center, with clinical oversight, ethical review, and rigorous testing -- not in a startup garage optimizing for user engagement metrics.
How AI Can Actually Help in Psychiatry
Despite the risks, I am genuinely optimistic about the potential of AI in psychiatry -- when it is done right. The opportunities are substantial:
Clinical Decision Support
Psychiatry involves integrating vast amounts of information -- genetic factors, medication histories, comorbidities, psychosocial stressors, developmental history, family dynamics -- into treatment decisions. AI-powered clinical decision support systems can help clinicians process this information more effectively:
- Drug interaction checking that goes beyond simple databases to consider pharmacogenomic factors and complex polypharmacy scenarios
- Treatment selection guidance based on population-level outcome data matched to individual patient characteristics
- Risk stratification that identifies patients at elevated risk for adverse outcomes and prioritizes clinical attention accordingly
- Documentation assistance that reduces the administrative burden on clinicians, freeing more time for direct patient care
Scalable Screening and Monitoring
The mental health workforce shortage is real. There are not enough psychiatrists, psychologists, and therapists to meet the current demand for mental health services, let alone the growing demand driven by increased awareness and reduced stigma. AI can help address this gap -- not by replacing clinicians, but by extending their reach:
- Automated screening in primary care and emergency settings to identify patients who need psychiatric referral
- Continuous monitoring through digital biomarkers -- sleep patterns, activity levels, communication patterns -- that provide early warning of clinical deterioration between visits
- Measurement-based care with automated outcome tracking that ensures treatment is working and flags when it is not (a simplified sketch appears after this list)
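As an illustration of automated outcome tracking, the sketch below flags PHQ-9 trajectories using common measurement-based care conventions: a score below 5 is in the remission range, and at least a 50% drop from baseline counts as response. The cutoffs are widely used conventions, but the function itself is a simplified example, not a clinical protocol from my practice or research.

```python
# Illustrative sketch only: the cutoffs are common PHQ-9 conventions
# (score < 5 = remission range, >=50% reduction = response), but the
# worsening rule and overall logic are simplified for this example.
def flag_measurement_based_care(phq9_scores: list[int]) -> str:
    """Given serial PHQ-9 scores (baseline first), return a simple status flag."""
    if len(phq9_scores) < 2:
        return "insufficient data"
    baseline, latest = phq9_scores[0], phq9_scores[-1]
    if latest >= baseline + 5:
        return "worsening -- prioritize clinical review"
    if latest < 5:
        return "remission-range score"
    if baseline > 0 and (baseline - latest) / baseline >= 0.5:
        return "responding (>=50% reduction from baseline)"
    return "inadequate response -- consider treatment adjustment"


if __name__ == "__main__":
    print(flag_measurement_based_care([18, 16, 15, 14]))  # inadequate response
    print(flag_measurement_based_care([18, 12, 9, 8]))    # responding
```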
Research Acceleration
AI is accelerating psychiatric research in ways that were impossible five years ago. NLP enables analysis of clinical text at scale. Machine learning identifies patterns in large datasets that human researchers could never detect. Digital phenotyping creates new data streams for understanding psychiatric conditions in real-world settings.
My own research program, based in the Sultan Lab, uses these tools daily. The PAWS project, the NLP work, the large-scale epidemiological studies using IQVIA and MarketScan data -- all of these rely on computational methods that would have been impractical a decade ago.
The Integrative Psych Approach to Technology
At Integrative Psych, my clinical practice in New York City, we implement technology in a way that reflects the principles from my research:
- Telehealth done right: We offer telehealth visits for appropriate patients, but every evaluation meets the same clinical standards as an in-person visit. Initial evaluations are comprehensive. Follow-up visits include adequate time for clinical assessment. Controlled substances are prescribed only after thorough evaluation and with ongoing monitoring.
- Technology as augmentation: We use digital tools for scheduling, measurement-based care tracking, and communication -- but every clinical decision is made by a licensed clinician with adequate information and time.
- No shortcuts: We do not offer 10-minute medication checks. We do not prescribe controlled substances without proper evaluation. We do not prioritize volume over quality. These are not complicated principles, but they are apparently controversial in the era of digital health startups.
Where This Work Is Going
The intersection of AI and psychiatry is evolving rapidly. Over the next several years, I expect to see:
- LLM-powered clinical tools becoming standard in psychiatric practice, with appropriate guardrails and clinical oversight
- Digital therapeutics like PAWS gaining FDA clearance and insurance coverage for substance use disorders and other psychiatric conditions
- NLP-based surveillance systems operating in real time within health systems to identify patients at risk for adverse outcomes
- Regulatory frameworks catching up to the technology, with specific standards for AI in mental health care
- Training programs that prepare the next generation of psychiatrists to work effectively with AI tools
My role in this ongoing evolution is to ensure that the clinical perspective -- the patient's perspective -- remains central. Technology in psychiatry should serve patients, not investors. It should augment clinicians, not replace them. And it should be held to the same rigorous standards we apply to any other treatment in medicine.
That is what responsible innovation in psychiatry looks like.
Explore My AI in Psychiatry Research
- Telehealth in Psychiatry
- PAWS Digital Therapeutic
- NLP in Mental Health
- The Sultan Lab
Frequently Asked Questions
What is Dr. Sultan's research on AI in psychiatry?
My AI in psychiatry research spans four areas: JAMA Psychiatry publications on telehealth oversight documenting the risks of unregulated digital psychiatry platforms, the PAWS AI-powered digital therapeutic for cannabis use disorder (NIH-funded, co-developed with Columbia DBMI), natural language processing applied to electronic health records for mental health phenotyping, and responsible AI frameworks for clinical psychiatry. I also serve as a discussant on AI in mental health at the Columbia Data Science Institute.
What is the PAWS digital therapeutic?
PAWS (Preventing Adverse outcomes With Screening) is an NIH-funded AI-powered digital therapeutic for youth with cannabis use disorder. Co-developed with Xuhai "Orson" Xu at Columbia DBMI, it uses a large language model-powered conversational agent to provide personalized, real-time support to young people struggling with cannabis use. The project is funded through an NIH UG3/UH3 mechanism totaling $335,500, with an advisory board that includes members of the National Academy of Sciences and National Academy of Medicine.
What are the risks of unregulated digital psychiatry platforms?
My JAMA Psychiatry research documented several specific risks: some platforms prescribe controlled substances after evaluations lasting under 10 minutes, regulatory oversight has not kept pace with telehealth expansion, patient safety is compromised when business models prioritize volume over quality, and many platforms lack adequate monitoring for side effects, drug interactions, and treatment outcomes. The result has been stimulant overprescribing, adverse events, and erosion of public trust in telehealth.
How does Dr. Sultan use NLP in mental health research?
I apply natural language processing to electronic health records to identify patterns in psychiatric documentation that would be impossible to detect manually. Trained under Carol Friedman -- creator of the MedLEE NLP system at NewYork-Presbyterian -- I use computational methods to detect substance use patterns in clinical notes, predict treatment outcomes, and identify at-risk patient populations through my Mental Health Informatics Lab at Columbia.
Can AI replace psychiatrists?
No. My research framework is built on the principle that AI should augment clinical decision-making, not replace it. Psychiatry requires therapeutic alliance, nuanced clinical judgment, ethical reasoning, and the ability to integrate biological, psychological, and social factors in ways current AI cannot replicate. The goal is tools -- digital therapeutics, NLP-based screening, clinical decision support -- that make psychiatrists more effective while keeping the human relationship at the center.
What is responsible AI in mental health?
Responsible AI in mental health means deploying technology with clinical oversight, evidence-based validation, regulatory accountability, and patient safety as non-negotiable priorities. My framework includes minimum evaluation standards before AI-assisted prescribing, mandatory outcome monitoring, transparent reporting of algorithmic decisions, bias auditing across demographic groups, and maintaining the psychiatrist-patient relationship as the foundation of care.
Further Reading
- Telehealth in Psychiatry: Benefits, Risks & Oversight
- PAWS: AI-Powered Digital Therapeutic for Cannabis Use Disorder
- Natural Language Processing in Mental Health Research
- The Sultan Lab at Columbia University
- All Research Areas
- Publications