
CWRU bioethics researchers share predictions about the future of AI in medicine

In recognition of World Bioethics Day, celebrated on Oct. 19, we asked bioethics researchers about their predictions for the future of AI in medicine.

Health + Wellness | October 17, 2025 | Story by: Meg Herrel

The origin of medical ethics often boils down to the now-common expression “do no harm.” While an updated version of the Hippocratic Oath is still administered to many physicians as they begin their training, modern medical practitioners grapple with far different medical advancements, including artificial intelligence.

As machine learning spreads to the medical field, bioethics researchers seek to understand the impact of artificial intelligence on the future of medical practice. We asked researchers in Case Western Reserve University's Department of Bioethics to answer the question: “Do you have any predictions about the long-term applications and implications of AI in medicine, both for patients and providers?”

Here is what they shared.

Answers have been lightly edited.

Lynette Hammond Gerido


Lynette Hammond Gerido is an assistant professor in the Department of Bioethics. Her research examines the bioethics of big datasets, natural language processing, artificial intelligence and large language models. She is dedicated to embedding ethical frameworks and responsible best practices into research and the clinical application of these emerging technologies.

“I believe AI is fundamentally changing the way we interact and communicate with one another in healthcare. Many people are using AI to handle clinical documentation tasks such as summarizing communications, setting reminders and managing administrative work. The vision behind this is to enrich our conversations by freeing clinicians from note-taking and allowing them to focus more deeply on the patient.

"But I prefer to look more closely at how patients are using AI for healthcare. I have learned patients use AI in various ways: to prepare for doctor's visits, to get a “second opinion” on what's being presented to them, and to have on-demand conversations about health information. I'm hearing that patients sometimes use AI to pose questions about their health, and, though I don't condone this, some are uploading their clinical notes or lab results and then asking AI to summarize those documents into plain language and to suggest questions they might pose to their healthcare providers. Also, as more people use AI-enabled wearables, such as earbuds for real-time translation, we could see a significant shift in the use of these tools in clinical settings if the technology becomes fine-tuned for medical conversations.

"This raises important questions about clinical authority, responsible use of AI, autonomy and privacy. Who's responsible for the real-time translation happening through a patient's earbuds? How do we ensure that patients understand the risks involved and that they become more AI literate? Do patients need to obtain consent from their healthcare providers to use such tools? Do patient portals provide patients with guidance on the use of the data and results shared with them? We'll need to rethink our ideas around responsibility, communication standards and expectations for patient-provider interactions.

"I am cautiously optimistic about the way these technologies can reduce barriers to care and improve access. My research partners with patients, families and communities in exploring how to increase the benefits from AI and improve health outcomes for the most vulnerable communities. The best way to do this is to make sure that as we innovate, we remember that patients are end users and stakeholders too.”

Marsha Michie


Marsha Michie is the associate director of the Bioethics Center for Community Health ANd Genomic Equity (CHANGE) and co-director of the PhD in Bioethics program at the School of Medicine. Her research investigates social and ethical issues around biomedical research, translation and practice with an emphasis on reproduction, disability and health equity. 

“AI and machine learning are already here in medicine and medical research, and their biggest impacts are largely out of the public’s eye. In many areas of medical research, AI is being trained to model biological processes, such as the way DNA folds to regulate our genes. Natural language processing is helping researchers find patterns in electronic medical records that can shed light on health trends. And medical wearables are increasingly using AI to identify and predict risk for patients.

“Training AI to find patterns—and divergence from the expected patterns—in large datasets will undoubtedly revolutionize many areas of research and medicine. But many of the important issues that bioethicists are addressing in these areas are the ones that come from the assumptions and biases we unintentionally embed in these models, because the datasets that AI models are trained on will always be incomplete, and the humans making the models will always have blind spots.

“It’s really important to make sure that the voices of many different people and communities are brought in from the very beginning when we are imagining AI applications, all the way through the pipeline of development and application.”

Mark Aulisio


Mark Aulisio is the Susan E. Watson Professor and chair of the Department of Bioethics. Aulisio has authored more than 85 articles and book chapters on clinical bioethics, ethics consultation, organ donation and transplant, double effect and related areas, and is often invited to speak at national and international venues. He has given multiple presentations about the future of AI in medicine.

“While there are really concerning existential issues with AI, in medicine we have a chance for AI to be dramatically transformative. Over time, AI will make possible an environment in which health professionals and clinicians can actually move more toward the human side of medicine and spend more time with patients. I think the role of clinicians will change over time to be more about empowering patients, helping to decipher medical information and navigating patient values as they relate to treatment.

"Just recently, I had an annual checkup with my primary care physician, and she used an AI tool to take notes and enter them into the chart. She told me that she’s able to see two additional patients per day, and actually spends more time with each of her patients as a result of using the tool, because she spends less time charting.

"If the electronic medical record work can be done by AI, it saves a lot of time. If we do this right, I see the future of medical care reverting back to the supportive, compassionate role that has fallen away in the last several decades. The highly technical, skilled side of being a physician will be less necessary, because machines can do that part of the job, and the human side will prevail. Humans are intensely social and ultimately, people want human engagement. Patients want a doctor to engage with them, not spend an entire appointment typing into a chart. 

"On the research side, the knowledge explosion is amazing. We’re getting close to being able to knock out things such as the common cold and vaccine research that is incredible. AI tools can speed the research process along by processing the data much faster than a human can, which can revolutionize the medical practice.”
