
OpenAI updated its usage policies this week to sharply limit ChatGPT's ability to offer the kind of tailored advice that normally requires a professional license, most notably in medicine, law, and finance, and to reposition the assistant more clearly as an educational tool. At the same time, the company published a detailed account of model safety work that it says has measurably improved the system's responses in sensitive mental-health conversations.
Under the policy changes rolled out in late October and reinforced this week, ChatGPT is barred from giving “tailored advice that requires a license” unless a licensed professional is meaningfully involved. The company says the move responds to regulatory pressure, liability concerns, and the growing number of people relying on AI for decisions that traditionally require professional training and legal protections.
Practically, the update means ChatGPT will still explain general concepts, such as how a medication class works, what a typical contract clause contains, or what an ETF is. However, it will not provide individualized dosages, bespoke legal pleadings, or explicit buy/sell recommendations. Attempts to bypass the guardrails, such as framing requests as hypotheticals, are now more likely to be blocked by safety filters, sources said.
“We build AI products that maximize helpfulness and freedom, while ensuring safety,” OpenAI said in a statement, framing the shift as a safety and clarity move.
The new rules sit alongside broad bans on misuse, including weapons design, illicit transactions, harassment, and actions endangering minors, and spell out special protections for children and politically sensitive, high-stakes use cases.
Complementing the policy tightening, OpenAI published technical findings describing work to make ChatGPT better at recognizing and responding to mental-health crises and other sensitive signals. The company says the updated GPT-5 model and its safety interventions reduce rates of undesired behavior in high-risk conversations by substantial margins, citing estimated improvements of 65–80% on some mental-health taxonomies and significant gains on metrics for psychosis, self-harm, and “emotional reliance” on the model.
OpenAI says it worked with more than 170 clinicians and a Global Physician Network to shape the taxonomy, evaluate model outputs, and validate improvements. New capabilities include routing certain sensitive conversations to safer models, expanding crisis hotline access, adding reminders to take breaks during long sessions, and reinforcing the model’s tendency to encourage users to seek human help.
The twin moves, a narrower policy on professional advice and deeper safety engineering for crises, reflect OpenAI's attempt to balance two competing pressures: widespread consumer demand for AI help, and regulatory and societal concerns about harm and liability. Industry observers say the approach acknowledges the unique legal exposures posed by algorithmic advice while preserving the models' utility for education, summarization, and general guidance.