Recently, our CEO published a piece critiquing the near-term impact of generative AI on patients: “Generative AI in Hospitals: A Promising Future, a Disappointing Present.”
Imagine yourself as the leader of a wonderful health system, constantly plagued by the number of administrators you need to ensure your patients receive the care they need and that you are compensated for all the complex patients you care for. Generative AI can reduce your administrative burden and exponentially increase the rate and depth of your prior authorization and patient coding.
Across the street sits the leader of a just insurer, a champion against unnecessary procedures and tests that prey on the fears of patients and drive up healthcare costs. Generative AI can automate the evaluation of and response to frivolous tests and procedures, the use of expensive drugs, and abuse of the rules by the less principled.
You can see where this goes – everyone is the hero of their own story. The AI bots of providers will fight the AI bots of insurers, and we will end up with a different version of the same conflicts and imbalances we have today, in a system fraught with obscure rules, loopholes, and asymmetric information.
The Prisoner’s Dilemma is one of the earliest and stickiest lessons of economics. The asymmetry of outcomes leads isolated prisoners to betray one another, even though cooperation would yield a much better outcome for both. Despite the flowery language from thought leaders, investors, and companies about the power of an AI future for patients, we are about to see this dilemma play out across the largest industry in the world.
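The dynamic can be made concrete with a minimal sketch of the game. The payoff values below are illustrative assumptions (the classic textbook numbers), not figures from the post; the point is only the structure: betrayal is each party’s best response no matter what the other does, yet mutual cooperation beats mutual betrayal for both.

```python
# Minimal sketch of the Prisoner's Dilemma described above.
# Payoff values are illustrative assumptions, not from the original post.
# Each entry maps (my action, opponent's action) -> (my payoff, opponent's payoff);
# higher is better.

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both cooperate: good for both
    ("cooperate", "betray"):    (0, 5),  # the lone betrayer does best
    ("betray",    "cooperate"): (5, 0),
    ("betray",    "betray"):    (1, 1),  # mutual betrayal: worst collectively
}

def best_response(opponent_action):
    """Return the action that maximizes my payoff, given the opponent's move."""
    return max(("cooperate", "betray"),
               key=lambda mine: PAYOFFS[(mine, opponent_action)][0])

# Betrayal dominates: it is the best response to either opponent action...
assert best_response("cooperate") == "betray"
assert best_response("betray") == "betray"

# ...even though mutual cooperation pays both players more than mutual betrayal.
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("betray", "betray")][0]
```

Swap “cooperate/betray” for “share information/deploy adversarial bots” and the same logic predicts providers and insurers each arming up, landing in the mutually worse equilibrium the post describes.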
Read the full post here.
We gave ChatGPT a chance to respond! Here was its rebuttal:
The article posits that generative AI will intensify conflict between healthcare providers and insurers, drawing a parallel to the prisoner’s dilemma, in which both parties, acting in self-interest, produce a net negative outcome. However, this view overlooks the potential of AI to streamline operations, improve patient care, and reduce administrative burdens. Generative AI, when thoughtfully implemented, has the potential to facilitate better communication, reduce errors, and enhance decision-making in healthcare.
Firstly, the assertion that AI will exacerbate conflicts between healthcare providers and insurers simplifies the complex interactions in healthcare. Generative AI can automate routine tasks, analyze vast datasets for insights, and assist in making more accurate decisions, benefiting both insurers and providers. This technology can lead to a more collaborative environment where the focus shifts from administrative disputes to patient care.
Secondly, concerns about generative AI leading to an increase in human adjudicators to resolve disputes overlook the technology’s capacity for creating more precise and equitable systems. By automating and improving accuracy in billing and authorization processes, AI can reduce the need for dispute resolution in the first place.
Regarding patient care, the potential of AI to save lives by identifying overlooked diagnoses or errors is significant. While the article acknowledges this, it suggests that focusing on billing and administrative AI applications might distract from patient care advancements. However, improvements in administrative efficiency free up healthcare professionals to focus more on patient care, not less. Streamlining administrative tasks can also speed up patient access to necessary treatments, indirectly improving care outcomes.
The claim that AI technologies are less beneficial inside hospitals compared to outside settings underestimates the adaptability and potential impact of these technologies across different healthcare environments. Generative AI tools can be customized and developed to address the unique needs of in-hospital care, enhancing the efficiency and effectiveness of healthcare delivery.
While caution is warranted in implementing any new technology, especially in sensitive areas like healthcare, the benefits of generative AI can far outweigh the potential drawbacks if deployed responsibly. It is crucial to approach these advancements with a balanced perspective, recognizing both the challenges and the transformative potential of AI in improving healthcare systems for patients, providers, and insurers alike.
However, just like the pending AI bot wars, it’s hard to get in the last word. Here is ChatGPT’s response to its own response.
The rebuttal emphasizes the potential benefits of generative AI in healthcare, suggesting it could lead to improved operations, better patient care, and reduced administrative burdens. However, it may overlook the real-world complexities and unintended consequences that the original article underscores.
Firstly, while the idea of generative AI fostering collaboration and reducing errors is appealing, the current healthcare infrastructure, marked by competitive interests and regulatory constraints, might not easily accommodate such an ideal transition. The friction between healthcare providers and insurers, deeply embedded in the system’s fabric, cannot be simply automated away. The dynamics of self-interest and competition, akin to the prisoner’s dilemma, could indeed intensify with the adoption of AI, leading to scenarios where the technology’s application exacerbates existing conflicts rather than resolves them.
Secondly, the rebuttal’s argument that AI could reduce the need for human adjudicators by improving billing and authorization accuracy seems optimistic. The complexity and variability of healthcare cases mean that AI systems could generate new types of disputes and errors, necessitating human oversight and intervention more than ever. The efficiency gains in one area might be offset by the increased complexity and workload in another, especially in the initial phases of AI adoption.
Regarding patient care, the rebuttal rightly points out AI’s potential to identify diagnoses and errors that might be overlooked by human healthcare providers. However, this potential comes with significant risks, including overreliance on AI recommendations and the potential for ‘AI hallucinations’ or errors, which could have dire consequences in healthcare settings. The critical nature of healthcare decisions means that even small error rates can be unacceptable, requiring rigorous validation and oversight mechanisms that are currently underdeveloped.
The assertion that AI technologies can be equally beneficial inside hospitals as outside overlooks the specialized nature of in-hospital care, which involves acute conditions, emergency interventions, and complex interprofessional collaboration. The current state of AI technology, primarily developed and tested in less complex environments, might not translate seamlessly to the high-stakes, high-variability context of hospital care.
In summary, while the rebuttal offers a hopeful view of generative AI’s role in healthcare, it may underestimate the challenges and risks involved in integrating such technologies into a complex, high-stakes environment. The optimistic scenarios outlined depend on significant advancements in AI technology and healthcare systems’ ability to adapt and integrate these tools in ways that truly benefit patients without introducing new risks or exacerbating existing inequalities.