Interest in generative AI is at an all-time high, following the November 2022 release of OpenAI’s artificial intelligence-enabled chatbot ChatGPT.

Healthcare organizations ranging from vendors to health systems are beginning to use generative AI to solve some of their most fundamental challenges. Researchers are using large datasets to draw more sophisticated conclusions. 

Here’s what you need to know about generative AI in healthcare.  

Related: Epic, Microsoft bring GPT-4 to EHRs

What is generative AI?  

Generative AI is the capability of algorithms to automatically generate content such as text, video and images in response to user queries.  

ChatGPT is a public-facing generative AI text application from OpenAI. OpenAI has developed other generative AI applications available for paying customers and is working with Microsoft, which made a reported $10 billion investment in the company. Other big tech firms such as Google and Meta have launched their own generative AI tools.  

How does it work?

Generative AI works by learning from raw datasets and developing statistically probable outputs. While these models have been used with numerical data for years, they have only been applied to text, image and speech in the last 10 years. Large language models like OpenAI’s ChatGPT can converse with humans, summarize articles and write copy.  

Why is healthcare so excited about it? 

The rollout of ChatGPT has the healthcare world abuzz. Dr. John Brownstein, chief innovation officer at Boston Children’s, said he recognized immediately the tech would have a major impact on medicine.  

“Right out of the box, I don’t think I’ve seen anything as transformational since the iPhone or Google,” Brownstein said. 

Related: Why Boston Children’s wants to hire a ChatGPT expert

Experts say the most immediate areas it can help in healthcare are with administrative tasks that require clinician or human oversight like billing, post-appointment clinical notes or communicating with patients.  

Michael Hasselberg, chief digital health officer at University of Rochester Medical Center in New York, said he is a believer in its power. The large language AI models from ChatGPT are “light years ahead” of what Hasselberg said he’s seen in the marketplace from various startups automating healthcare administrative and revenue cycle processes. 

Why are some afraid of it?

There are two schools of thought on the potential of ChatGPT in healthcare, which were summed up by Micky Tripathi, head of the Office of the National Coordinator for Health Information Technology, at ViVE 2023 in March.  

“I think all of us feel tremendous excitement and you also want to feel tremendous fear,” Tripathi said.  

Why fear? Tripathi said that when inappropriately used, algorithms can perpetuate health equity and quality issues. Another concern about ChatGPT in healthcare is the accuracy of the medical information it generates.  

Dr. Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School, recently co-authored a book, “The AI Revolution in Medicine: GPT-4 and Beyond.” He wrote that ChatGPT still has a tendency to make up facts. A March study from researchers at Stanford Medicine found 6% of the medical papers ChatGPT referenced when answering medical questions were fabricated, which potentially could be harmful in clinical care.   

Is ChatGPT protected by HIPAA?

There is also concern over ChatGPT exposing sensitive patient information. The public version of ChatGPT is not protected by the Health Insurance Portability and Accountability Act of 1996. To protect patient privacy, organizations like Cleveland Clinic and Baptist Health have tested ChatGPT in secured and actively monitored private data repositories. Rigorous testing is needed to implement generative AI, experts say.  

Related: What’s ahead for ChatGPT in healthcare

Which vendors are cashing in on the hype?

Even before ChatGPT fever, investments in healthcare AI totaled $4.4 billion in 2022, according to data from Rock Health, a research and digital health venture capital firm. Funding levels generally have gone down in a tough economy, but experts say interest in AI investments will persist. Vendors are rushing to sell their solutions and capture market share.

Microsoft is bringing OpenAI’s GPT-4 model to Epic’s electronic health record. The initial use cases will focus on patient communication and data visualization. 

Microsoft subsidiary Nuance Communications, a clinical documentation software company, is separately integrating GPT-4 capabilities into the EHR. In this case, it will be used to summarize and enter conversations between clinicians and patients directly into the EHR. 

Who else?

Google announced it is training its generative AI solution to summarize insights from a variety of dense medical texts. Innovaccer, a digital health unicorn, is adding a conversational AI tool to its provider platform. Abridge, a medical AI company, uses generative AI to summarize clinical conversations from recorded audio during patient visits. 

Which health systems are using generative AI? 

Epic has already found three partners for its GPT-4 integration. Madison, Wisconsin-based UW Health and UC San Diego Health have signed on, while Stanford Health Care in California is expected to add the functionality soon. 

Baptist Health in Jacksonville, Florida and Cleveland Clinic have begun experimenting with generative AI. They are collaborating with Microsoft on proofs of concept for using ChatGPT in clinical and administrative functions. That includes summarizing data for quality registry review, providing relevant diagnosis information and more.  

Both systems hope to implement ChatGPT into clinical workflow later this year.  

How else are health systems using it? 

Brownstein at Boston Children’s Hospital is such an advocate of ChatGPT that he’s hiring an “AI prompt engineer” to work with the generative AI application. The person will design and develop AI prompts using large language models like ChatGPT. 

“The skill set of the next decade is going to be someone with the skills of a prompt engineer. Someone who knows how to interface with large language models,” Brownstein said.  

What are some areas of medicine where it could be used?

Researchers and experts have begun to explore the possibilities of where it could affect patient care. Patient communication seems to be one of the more prominent areas where it could assist. A recent study led by researchers at University of California San Diego found patients preferred ChatGPT’s answers to medical questions over physicians’ answers more often than not.   

Related: Unpacking ChatGPT’s early uses in healthcare

Researchers are also looking at how generative AI can improve cancer care and reduce the $200 billion America spends treating the disease. Researchers at Cedars-Sinai hospital in Los Angeles found ChatGPT could improve health outcomes for patients with cirrhosis and liver cancer by providing easy-to-understand information about the disease. The study found that the generative AI solution can diagnose rare, one-in-a-million diseases with high accuracy.  

ChatGPT also made waves in March when it passed the United States Medical Licensing Exam, performing at or near the passing threshold of 60% accuracy. Researchers led by Dr. Tiffany Kung, a researcher at virtual pulmonary rehab treatment center AnsibleHealth, say the technology has a lot of potential in medical education.   

Does the hype match up to the reality? 

The development of AI in healthcare and other fields has greatly outpaced fledgling government efforts to control it. The recent interest in generative AI solutions will only accelerate this gap. Some leading health systems are putting their own guardrails in place but finding the tech talent needed to oversee an expansive, self-regulated AI division has proven challenging for smaller systems. 

Despite the excitement, there are not many people in medicine who will say generative AI is here to replace humans in the near future. And in generative AI’s relative infancy, experts say humans are very much needed to intervene when these solutions are embedded into clinical care.  

“I think that for some time forward, we’re going to continue to need to have humans in the loop because the AI is far from perfect,” said Erik Brynjolfsson, director of the Digital Economy Lab at Stanford University’s Institute for Human-Centered AI. “It can’t do a lot of things.” 
