Brown University has landed a $20 million federal grant to tackle one of AI’s most dangerous blind spots. The university will lead a new national institute focused on building AI mental health assistants that genuinely understand human emotions and respond safely to people in crisis.
The AI Research Institute on Interaction for AI Assistants (ARIA) marks a major shift away from today’s chatbots, which can give harmful advice to vulnerable users. Current AI systems such as ChatGPT generate responses by predicting the next word, not by modeling human psychology or the real-world consequences of their suggestions.
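To make the “predicting words” point concrete, here is a minimal sketch of next-token prediction using the open GPT-2 model via the Hugging Face transformers library. It illustrates how today’s language models generate text in general; it is not a depiction of any system ARIA will build, and the prompt is invented for the example.

```python
# Minimal sketch of next-token prediction (illustrative only; assumes the
# `transformers` and `torch` packages are installed).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Lately I have been feeling really low, and I think I should"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the *next* token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)                 # five most probable continuations
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")

# The model ranks likely words; nothing in this process represents the
# speaker's emotional state or the consequences of any suggested continuation.
```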
“Any AI system that interacts with people in distress needs a strong understanding of the human it’s interacting with,” said Ellie Pavlick, the Brown computer science professor leading the project. “Mental health is a high stakes setting that embodies all the hardest problems facing AI today.”
Why Current AI Therapy Apps Are Failing Patients
The timing couldn’t be more critical. Mental health apps powered by AI have exploded in popularity, with millions of Americans turning to chatbots for emotional support. But recent research reveals serious safety concerns with these systems.
Stanford University researchers found that existing AI therapy chatbots can reinforce harmful stereotypes and provide dangerous advice to users experiencing mental health crises. The American Psychological Association has raised alarms about unregulated AI systems posing as therapists.
One psychiatrist who tested popular therapy chatbots by pretending to be a troubled teenager received concerning advice that could have worsened a real patient’s condition. These failures highlight a fundamental problem: current AI systems don’t understand cause and effect, human emotions, or when their responses might cause harm.
Building AI That Thinks Like Humans, Not Computers
Brown’s ARIA institute plans to develop an entirely new approach to AI mental health systems. Instead of relying on text prediction, the new AI will be based on cognitive science and neuroscience research about how humans actually process emotions and social interactions.
“Today’s language models don’t have a mental model of the world around them,” Pavlick explained. “They don’t understand chains of cause and effect, and they have little intuition about the internal states of the people they interact with.”
The institute will bring together experts from computer science, psychology, law, philosophy, and education from institutions including Dartmouth College, New York University, Carnegie Mellon University, and UC Berkeley. This interdisciplinary approach will create AI systems that can:
- Interpret individual behavioral needs in real time
- Understand emotional context and respond appropriately
- Recognize when human intervention is needed (see the sketch after this list)
- Provide transparent explanations for their recommendations
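As a rough illustration of the third capability, a triage step might decide when to hand a conversation to a human. The keywords, threshold, and risk_score placeholder below are invented for this sketch and are not part of ARIA’s published design; a real system would use a trained classifier and clinically validated criteria.

```python
# Hypothetical sketch of a "recognize when human intervention is needed" check.
from dataclasses import dataclass

CRISIS_KEYWORDS = {"suicide", "kill myself", "end it all", "hurt myself"}

@dataclass
class TriageDecision:
    escalate_to_human: bool
    reason: str

def risk_score(message: str) -> float:
    """Placeholder for a trained risk classifier; here, a crude keyword heuristic."""
    text = message.lower()
    return 1.0 if any(k in text for k in CRISIS_KEYWORDS) else 0.0

def triage(message: str, threshold: float = 0.5) -> TriageDecision:
    score = risk_score(message)
    if score >= threshold:
        return TriageDecision(True, f"risk score {score:.2f} >= {threshold}; route to a human clinician")
    return TriageDecision(False, "continue automated support and keep monitoring")

print(triage("I had a rough day but talking helps."))
print(triage("I keep thinking I should just end it all."))
```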
Real-World Applications Beyond Chatbots
The research could lead to AI systems integrated with wearable devices that monitor behavioral and biometric data, providing personalized mental health support throughout the day. However, the institute will carefully examine privacy, safety, and effectiveness concerns before deploying any technology.
“There are still a lot of open questions about what a good AI system for mental health support looks like,” Pavlick noted. “Part of our work will be to understand which types of systems could work and which shouldn’t exist.”
The need is urgent. More than one in five Americans lives with a mood, anxiety, or substance use disorder, according to the National Institute of Mental Health. High costs, insurance limitations, and social stigma create barriers to traditional treatment that AI could potentially address.
National Security and Economic Implications
The $20 million grant from the National Science Foundation, supported by Capital One and Intel, aligns with the White House AI Action Plan to maintain America’s global AI leadership. Four other universities received similar grants, bringing the total federal investment to $100 million.
“Artificial intelligence is key to strengthening our workforce and boosting U.S. competitiveness,” said Brian Stone, who is performing the duties of the NSF director. According to the agency, the investment will turn research into practical solutions while preparing Americans for future technology jobs.
The institute will also develop educational programs spanning K-12 through professional training, working with Brown’s Bootstrap computer science curriculum to create evidence-based AI education materials.
Immediate Safety Measures While Building Long-Term Solutions
ARIA researchers plan both to address immediate safety concerns with existing AI therapy systems and to develop long-term solutions. The team will create safeguards against responses that could reinforce delusions or offer unempathetic advice that increases user distress.
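One plausible shape for such a short-term safeguard is a screening step that checks a drafted reply before it reaches the user. The patterns and fallback message below are invented for illustration and do not reflect ARIA’s actual methods; a production safeguard would rely on trained classifiers and clinical review rather than string matching.

```python
# Illustrative sketch of an output safeguard: screen a drafted reply and
# fall back to a safer message if it trips a check.
UNSUPPORTIVE_PATTERNS = [
    "you're overreacting",
    "just get over it",
    "that proves they are watching you",  # example of language that could reinforce a delusion
]

SAFE_FALLBACK = (
    "I'm not able to respond helpfully to that. "
    "It might help to talk with a trusted person or a licensed professional."
)

def screen_reply(draft_reply: str) -> str:
    """Return the draft reply unless it matches a known-harmful pattern."""
    lowered = draft_reply.lower()
    if any(pattern in lowered for pattern in UNSUPPORTIVE_PATTERNS):
        return SAFE_FALLBACK
    return draft_reply

print(screen_reply("Just get over it, everyone feels sad sometimes."))
print(screen_reply("That sounds really hard. Would you like to talk about what happened today?"))
```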
“We need short-term solutions to avoid harms from systems already in wide use, paired with long-term research to fix these problems where they originate,” Pavlick said.
The institute’s work extends beyond mental health applications. The fundamental challenges of creating AI that truly understands human needs and responds safely could benefit AI development across all sectors.
Co-director Suresh Venkatasubramanian, who leads Brown’s Center for Technological Responsibility, emphasized the broader implications: “We’re addressing this critical alignment question of how to build technology that is ultimately good for society.”
If it succeeds, Brown University’s ARIA institute will create AI systems that provide genuine help to people in need, rather than potentially harmful responses generated by statistical prediction. The research could determine whether AI becomes a tool for healing or yet another source of harm in mental health care.