Health AI and the law: Could your chatbot doc testify against you?
Last July, OpenAI CEO Sam Altman told viral podcaster Theo Von that it's "screwed up" that conversations with an AI helper aren't afforded the same legal protections as conversations with a human advocate.
"imo talking to an AI should be like talking to a lawyer or a doctor. i hope society will figure this out soon," Altman posted to X.
The CEO has repeatedly advocated for stronger privacy protections for his chatbot's conversations with users, even as states have cracked down on AI bots advertised as therapeutic or legal experts.
But user privacy is not the sole reason why people like Altman are pushing for a tougher shield between chatbot conversations and the court, legal experts tell Mashable; there's also a self-serving motivation. If LLMs remain untouchable by courts, it insulates not just AI users, but the companies, too. In fact, Altman's comments to Von may have been prompted by OpenAI's own legal troubles: Courts were demanding the AI giant preserve and eventually hand over user chat logs in legal discovery, an action that could be blocked if courts viewed AI the same way they view a therapist, doctor, or attorney.
What's one way to accomplish that? Push for a cultural shift that treats AI guidance with the same reverence as human professionals, starting with our health.
What exactly is "AI privilege"?
"Privilege has a certain meaning to lawyers and in the legal context," explained Melodi Dinçer, senior staff attorney for the Tech Justice Law Project. There's the standard attorney-client privilege, for example, as well as psychiatrist-client privilege and spousal privilege. Communications to clergymen, political votes, and trade or state secrets are also recognized by courts. In all these instances, communications between the two parties are confidential and not admissible in court proceedings.
States have their own privilege rules as well, covered under state law for cases heard in state courts. Some states, Dinçer said, extend privilege to conversations between you and your general practitioner, in addition to your psychiatrist. But many states don't. This is all governed by Rule 501 of the Federal Rules of Evidence, Dinçer explained, which allows federal courts to recognize privileges broadly, including those that state courts already acknowledge.
If you are being sued, for example, the other side of the lawsuit cannot introduce your therapist's session notes as evidence, nor confidential conversations between you and your lawyer or your spouse.
"The entire purpose of [client privilege] is to be able to have frank and open discussions with these providers in order for them to provide the best advice to you," Lily Li, a data privacy and AI risk management attorney and founder of Metaverse Law, told Mashable. "And from a societal perspective, we want individuals to be frank and open and honest with their attorney, physicians, and psychologists."
But these are conditions placed on human relationships, not digital ones. If you believe an AI chatbot is as effective as a human therapist or a legal consultant, should those communications be protected, too? Some AI developers, like Altman, say yes.
AI chatbots: Tools or people?
"The Open AI copyright lawsuit brought this into sharp focus," said Li. She is referring to a series of recently consolidated copyright cases, 16 in total, opened against OpenAI from publishers, artists, and writers over the last few years. The issues at hand — which include questions of fair use and how to handle the data used to train LLMs — are a kind of temperature gauge for assessing AI's perception in the eyes of the court.
Because of this, legal experts have been closely monitoring how courts categorize AI developers, their products, and the user data contained within them. More specifically, they're tracking how the law treats LLMs, including their training data and chat logs, in evidence and discovery.
In February, a federal judge decided that legal strategy documents generated by Anthropic's Claude chatbot, and then sent by a client to their lawyer, were not covered by attorney-client privilege. The decision made headlines. The judge in the case relied in part on Anthropic's own privacy policy to determine whether the chats were protected. Because Anthropic's rules don't promise full privacy when using its public product, and because the communications weren't made to a licensed attorney with the understanding that they would remain confidential, the privilege didn't apply. The documents were fair game.
But that same month, a different judge in a different, albeit similar, case ruled the opposite. In this instance, attorney-client privilege applied to AI-generated work because the output became an "attorney-client work product," according to the judge. The chatbot wasn't a "person" in this use case, but a tool used by counsel and client. That's an important distinction, because if the chatbot had been seen as a third-party entity, the client would have been voluntarily giving confidential information to it in a manner that could waive the recognition of privilege.
These are just a few early federal district court cases, involving what are referred to as matters of first impression. Basically, no one's ever asked these questions, and we are only in the beginning stages of figuring them out.
Meanwhile, the copyright cases involving OpenAI have engendered more questions about discovery and data. Not long before the two aforementioned rulings, OpenAI successfully appealed a ruling that the company had waived its attorney-client privilege, a decision that would have opened up access to previously privileged data. The company had been ordered to hand over millions of anonymized ChatGPT conversation logs, as well as internal communications.
Companies like OpenAI have pushed back against such discovery, arguing that the data should remain confidential. Judges ruling in favor of admitting data have reasoned that removing personally identifiable information, narrowing the scope of the logs, and barring external disclosure make the digital troves admissible in court. The legal landscape is riddled with questions like these.
Across the board, AI developers are pushing to keep their internal data out of discovery. And while user privacy is one of the most pressing issues in the age of AI, enumerating AI privileges in a legal context poses a conundrum. How do we protect users' private data, without making it impossible to hold AI's makers accountable?
"We don't want a situation where there's just a pure liability shield," Li said.
Health AI is big business
Earlier this year, OpenAI launched ChatGPT Health, a new consumer-facing "mode" for its tentpole chatbot that aims to turn the AI into a personal health guru. The company encourages users to upload their medical histories to better personalize the experience. That data is not currently protected under the Health Insurance Portability and Accountability Act (HIPAA), the nation's dominant health privacy regulation.
Other companies followed OpenAI's lead, with Anthropic, Microsoft, and Amazon releasing their own health-oriented chatbot companions — some HIPAA compliant and some not — in the months since. OpenAI competitor Google has long been investing in AI for medical use cases, mainly for clinicians and researchers. Fitbit, owned by Google, offers personal health coaching using an integrated Gemini assistant. The company is also building a "conversational diagnostic AI agent," referred to as an Articulate Medical Intelligence Explorer (or AMIE).
Altman and his competitors are flocking to the profit potential of the healthcare industry, even if an AI privilege rule isn't yet on the horizon. In January, OpenAI acquired the health startup Torch, and the Altman-backed MergeLabs, a biotech company interested in brain-computer interfaces (BCIs), reached an $850 million valuation.
According to a recent report by Menlo Ventures, $1.4 billion went toward healthcare-specific generative AI solutions in 2025, with the vast majority flowing to AI startups. And those figures encompass only clinical-grade products, tools made by companies like OpenEvidence and Hippocratic AI for medical professionals, not spending on consumer products such as ChatGPT Health.
A world with human-chatbot privilege?
Among the non-clinical-grade products, wellness devices, and non-HIPAA-compliant chatbots, the lack of regulation and legal clarity alarms many privacy experts. Some posit that the uncertain policy landscape could be a boon for AI developers, who can launch their health AI products into a regulatory miasma in a strategic move that advances both their profits and their legal position.
As chatbots accumulate more "confidential" conversations, more privileges under Rule 501 may be implicated. In states that shield communications with your physician, would AI "doctors" count, too? Or consider a less obvious example posed by Dinçer: If a user asks a chatbot how they contracted a sexually transmitted infection despite their spouse testing negative, could the prompt and response be presented as evidence, or would they trigger another form of protection, like spousal privilege?
In a hypothetical world with sweeping AI privileges, or even one in which chatbots are looped into existing privilege rules, AI companies could try to shield blatant evidence of malfeasance from discovery. For example, if an AI company were sued for misleading individuals about their health, plaintiffs couldn't use internal records or chat analytics containing people's health information.
Perhaps, Dinçer suggests, if more users input their personal medical records, X-rays, or other sensitive information into consumer-facing products, and if more and more AI companies become entangled in a web of personally identifiable information and health tech, courts will be more inclined to entertain the idea of privilege extending to AI.
This may be part of the reason — besides revenue — companies try to engender the same kind of trust in AI assistants as we have in human professionals. With so many already consulting AI for their health needs, and companies like OpenAI already facing heaps of litigation, it's no mystery why executives like Altman want to keep chatbot conversations away from the prying eyes of lawyers and judges.
The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have about a medical condition or health objectives.
Disclosure: Ziff Davis, Mashable’s parent company, previously filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
