7 Comments

Regarding the poll, I felt undecided was the only correct answer right now, since AI has the potential to do both great good and great harm. But I think the issue won't be only or even mainly technical: it will be who uses it, and with what motivations. Will it mainly be used by bean counters to deny human care? Will it mainly be used by well-intentioned but unrealistic dreamers who disregard unintended consequences? Or will it be used by those with both a moral compass and the understanding that EVERY major new technology has unintended effects?

Of course all these sorts of people will use AI. But whose uses will have the greatest impact? That is the question to which I don't know the answer.


I was in the hospital this week for an Eliquis-related brain bleed. The human touch aspect of my care was especially important in countering the odd and frightening sense of isolation that such a stay evokes. I worry AI will be used to reduce staff in a way that limits human interaction even more. No!!! That would be so deleterious to healing. So I’m undecided about its use. I do love that it will make charting easier. But there’s no place for it in situations where a person is key. My neurologist was charting close by when I woke from my burr hole procedure. He came over, took my hand, and said, “Welcome back.” There’s no replacement for that!


Great question, and I look forward to reading others' answers! As a health care researcher, I've been focused on this topic a lot lately. I think GenAI has huge opportunities to enhance providers' abilities to practice effectively and to eliminate busy work (as you noted in a recent newsletter, you are already finding this is the case!). Many solutions exist today to reduce the amount of charting, note-taking, and similar work providers must do, and this seems like a net positive for providers, patients, and labor costs. Ultimately, doctors who leverage AI to support their practice will replace doctors who don't.

However, I'm also concerned about the downsides. A JAMA Pediatrics article from January (https://jamanetwork.com/journals/jamapediatrics/article-abstract/2813283) revealed that ChatGPT's diagnosis was wrong 83% of the time for pediatric cases. Yes, it will get better over time, but it's not good enough yet. I also wonder: how good is good enough? Doctors aren't 100% accurate, but they also aren't 83% inaccurate, at least not on average.

Another downside is its potential to exacerbate our loneliness epidemic and worsen the poor health outcomes associated with loneliness. More on that here: https://www.christenseninstitute.org/blog/generative-ai-will-fuel-the-loneliness-epidemic/

Ending on a positive note, if we can create industry guardrails that enable both providers and patients to trust AI, the upsides are much greater. But without trust, we won't get very far. More on that here: https://www.christenseninstitute.org/blog/why-jobs-to-be-done-theory-is-helpful-when-evaluating-genais-use-in-health-care/

Thank you for your work! I love your newsletter.


I follow Dr. Eric Topol and his many guest experts, so based on all of that, yes, I feel optimistic.


Book club? Yes!


For first-pass scanning of mammograms, colonoscopies, etc. -- it has to be useful! And as you described for EHR documentation, magical!


I don't have access to the complete NEJM article noted above, which found that tirzepatide substantially reduced dangerous events among OSA patients a year later. Did the study include OSA patients without an elevated body weight?

The follow-up question being: were the reduced events the result of an effect in addition to the weight loss itself?

Thank you
