Artificial intelligence (AI) is everywhere you look these days, and healthcare is no exception. Just ask Pooyan Kazemian, the John R. Mannix Medical Mutual of Ohio Endowed Professor of Healthcare Management and an assistant professor in the Department of Operations Management at Case Western Reserve University.
Kazemian, a former faculty member at Harvard Medical School, has conducted extensive research on advancing healthcare operations through verifiable artificial intelligence. His work lies at the intersection of machine learning, AI, and data-driven optimization to improve healthcare management.
Currently, more than 700 medical AI models have been approved by the FDA, with most offered as machine learning as a service (MLaaS). This means that instead of navigating complex infrastructure and developing their own models, healthcare organizations can use advanced AI tools provided by external companies.
These AI tools are accessible through an online interface, allowing healthcare providers to submit data such as medical images and receive fast, personalized diagnoses. MLaaS has been used to detect early signs of illness and certain chronic conditions, including skin cancer, cardiovascular problems, diabetes, respiratory illnesses and glaucoma.
It can also reduce wait times for surgeries, accurately predict blood levels and proper medication dosages, and design individualized long-term treatment plans.
“Healthcare isn’t one-size-fits-all,” Kazemian said. “This is personalized medicine. The idea is to not overtreat or undertreat each patient and AI helps us do that.”
The technology also eliminates the need for trust between AI vendors and healthcare providers. Because the process runs on a blockchain-based system, it is automated and verifiable by a decentralized network, meaning neither party must fully rely on the other.
This not only improves the security and reliability of AI tools but also helps build trust in the use of AI in critical healthcare decisions, potentially leading to faster and more widespread adoption of AI technologies in the medical field.
Kazemian said the research he conducted with Manoj Malhotra, a former Weatherhead dean and current dean at Lehigh Business, and Hank Korth, a professor of computer science at Lehigh University, addresses these trust issues by introducing an AI verification method built on advanced cryptography. It offers a trustless, secure and automated way for healthcare providers to confirm they are receiving authentic model outputs without needing to trust the AI provider.
“The implications of our trustless AI verification framework are that healthcare providers can use medical AI models offered as MLaaS with confidence, knowing they are receiving authentic model outputs without needing to trust the provider,” he said.
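To make the commit-and-verify pattern described above concrete, here is a minimal Python sketch of the workflow: the vendor posts a commitment to its model on a shared ledger, serves predictions with an attached proof, and the healthcare provider accepts a result only if the proof checks out against that commitment. This is an illustration under assumptions, not the researchers' actual framework: the zk_prove and zk_verify functions are placeholders standing in for real zero-knowledge proof machinery, the LEDGER dictionary stands in for a blockchain, the inference step is stubbed, and all names and values are hypothetical.

```python
import hashlib
import json

LEDGER = {}  # stand-in for a blockchain: vendor_id -> model commitment


def publish_commitment(vendor_id: str, model_weights: bytes) -> None:
    """Vendor: post a hash commitment to the model before offering MLaaS."""
    LEDGER[vendor_id] = hashlib.sha256(model_weights).hexdigest()


def zk_prove(model_weights: bytes, patient_input: dict, output: dict) -> dict:
    """Placeholder for proof generation: a real system would emit a
    zero-knowledge proof that `output` was computed by the committed model."""
    transcript = json.dumps({"input": patient_input, "output": output}, sort_keys=True)
    return {
        "commitment": hashlib.sha256(model_weights).hexdigest(),
        "transcript_hash": hashlib.sha256(transcript.encode()).hexdigest(),
    }


def zk_verify(commitment: str, patient_input: dict, output: dict, proof: dict) -> bool:
    """Placeholder for proof verification: a real verifier would check a
    cryptographic proof without ever seeing the model's weights."""
    transcript = json.dumps({"input": patient_input, "output": output}, sort_keys=True)
    return (proof["commitment"] == commitment
            and proof["transcript_hash"] == hashlib.sha256(transcript.encode()).hexdigest())


def vendor_serve(model_weights: bytes, patient_input: dict):
    """Vendor: run inference (stubbed here) and attach a proof to the result."""
    output = {"risk_score": 0.87}  # placeholder for the model's actual prediction
    return output, zk_prove(model_weights, patient_input, output)


def provider_accepts(vendor_id: str, patient_input: dict, output: dict, proof: dict) -> bool:
    """Provider: accept the diagnosis only if the proof matches the on-ledger commitment."""
    return zk_verify(LEDGER[vendor_id], patient_input, output, proof)


if __name__ == "__main__":
    weights = b"serialized-model-weights"  # hypothetical model bytes
    publish_commitment("acme-medical-ai", weights)
    scan = {"patient_id": "P-001", "image_hash": "abc123"}
    result, proof = vendor_serve(weights, scan)
    print("accepted:", provider_accepts("acme-medical-ai", scan, result, proof))
```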
Kazemian plans to continue the research on the size of AI models that can be verified using zero-knowledge proofs and is currently working to improve efficiency so that larger AI models can be proved more quickly.
“There is still a lot of work to be done, but we’re making healthy strides,” he said.