The Elephant in the AI Hospital: Why Culture, Not Computers, Is Holding Back the AI Healthcare Revolution

Ravi Komatireddy
3 min read · Apr 30, 2024


AI in healthcare: possibly the only thing that’s got more hype than the next Taylor Swift album. It seems like every time you turn on the news or open a tech magazine, someone’s banging on about how AI is going to revolutionize healthcare. But as someone who’s been around the block a few times and even helped create an early diagnostic system using machine learning, let me tell you, it’s not that simple.

The promise of AI in healthcare is undeniably exciting, and it’s been a long time coming. However, the biggest barrier to adoption isn’t the technology itself but rather a cultural problem that no one seems to be talking about:

How much error are we, as a society, willing to accept, especially from machines?

Think about self-driving cars. We know they’re not perfect, and they never will be. Sometimes, they’ll get into accidents. But we accept those mistakes because, let’s face it, human drivers are about as reliable as a chocolate frying pan. In fact, according to the National Highway Traffic Safety Administration, human error contributes to a staggering 94% of car crashes, resulting in around 38,000 deaths per year in the United States alone. Cars work pretty well. It’s the mushy bags of meat inside that are prone to errors.

Now, apply that same logic to healthcare. AI will make mistakes. It'll misread a certain percentage of radiology images and get some diagnoses wrong, and the robotic surgeons of the future will botch a procedure now and then. When human doctors mess up, we know who to blame. We understand human error, even if we don't always accept it. Shockingly, a Johns Hopkins study suggests that medical errors are the third-leading cause of death in the U.S., contributing to more than 250,000 deaths annually. That estimate has been strongly contested, and there are different ways of measuring it, but the fact remains: people make mistakes. So what happens when an AI doctor makes a mistake that ends up hurting someone?

This is where the real challenge lies. It's not about computational power or the next version of ChatGPT. It's about human psychology, behavior, and perception. AI will make mistakes, even if its error rate is lower than that of human doctors. So how many errors, or what degree of error, are we willing to accept from our artificial medical professionals? And who do we point the finger at when things go wrong?

If AI is going to work wonders in healthcare, we need to have a serious conversation about our expectations and our willingness to accept the inevitability of machine error. It's a cultural issue that we, as a society, must grapple with before we can fully embrace the AI revolution in medicine. What are YOU, as a patient, willing to accept? Until we collectively agree on an acceptable margin of error for AI in healthcare, the technology will remain stuck in a cultural traffic jam, and the elephant in the AI hospital will continue to block the path to progress.

--

Ravi Komatireddy

Physician, CEO of Daytona Health Inc., Digital Health Entrepreneur. I believe in the power of people and technology working together.