- So this morning I woke up with a headache, which is kind of annoying, and I'm gonna ask ChatGPT what it thinks.
Why did I wake up with a headache?
- There can be several reasons why you might have woken up with a headache.
Common causes include dehydration.
If you didn't drink enough water before going to bed, you may wake up with a headache.
Sleep deprivation.
Not getting enough sleep can also cause a headache.
Stress.
- I am pretty stressed.
- Caffeine withdrawal, if you... - So let's just ask ChatGPT the blunt question.
Do I have a brain tumor?
- Waking up with a headache does not necessarily mean that you have a brain tumor, as I mentioned earlier.
- As I mentioned earlier?
Okay, get hostile on me.
- I'm sorry, but as an AI language model, I don't have the capability to perform a physical examination or diagnose medical conditions.
The best way to determine the cause of your headache is to see a doctor.
- Artificial intelligence, or AI, might not be ready to replace your doctor just yet, as you just saw, but what about in a few years?
- For real, these tools are helping big pharma look for new drugs.
They're replacing paper charts in hospitals.
Nurses can use them to keep track of medications.
The list goes on.
- The National Academy of Medicine put out a report this fall which says we should find ways to use AI more widely across the practice of medicine.
It says it could improve diagnosis, reduce errors, and even reduce burnout for nurses and doctors.
So are we on track for our robot doctor revolution, or not so much?
(intriguing music) Doctors and nurses already rely on these algorithms a huge part of the time, entering patients' vital signs that may be concerning for sepsis.
The computer then looks at the patient's most recent labs.
If the labs are also concerning for sepsis, which means a blood infection, the computer triggers a sepsis alert.
It not only tells me why it has triggered the alert, but also will not let me pass that screen without selecting what I will do about that sepsis and creating a care plan.
Bossy, direct, and AI don't play.
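To make that concrete, here's a minimal Python sketch of that kind of rule-based screen. The function names and thresholds are simplified, SIRS-style criteria chosen for illustration, not any hospital's actual algorithm.

```python
# A minimal sketch of a rule-based sepsis screen like the one described
# above. Thresholds are simplified, SIRS-style values for illustration;
# real hospital alerts use validated, vendor-specific logic.

def screening_flags(vitals: dict, labs: dict) -> list:
    """Return which screening criteria this patient meets."""
    flags = []
    if vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0:
        flags.append("abnormal temperature")
    if vitals["heart_rate"] > 90:
        flags.append("elevated heart rate")
    if vitals["resp_rate"] > 20:
        flags.append("elevated respiratory rate")
    if labs["wbc"] > 12.0 or labs["wbc"] < 4.0:
        flags.append("abnormal white blood cell count")
    return flags

def sepsis_alert(vitals: dict, labs: dict) -> None:
    flags = screening_flags(vitals, labs)
    if len(flags) >= 2:  # two or more criteria trigger the alert
        print("SEPSIS ALERT triggered by:", ", ".join(flags))
        # Like the screen the nurse describes, don't let the clinician
        # move on without choosing a response.
        plan = input("Select a care plan (cultures/antibiotics/fluids/escalate): ")
        print("Care plan recorded:", plan)

sepsis_alert(
    vitals={"temp_c": 38.6, "heart_rate": 112, "resp_rate": 24},
    labs={"wbc": 15.2},
)
```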
- So when you ask, can a robot do things better than a human being, the answer is, it depends.
Computers are really good at pattern recognition.
They go through large piles of data and they learn from this information.
A program that looks for cancer, for example, has reviewed thousands and thousands of scans and has, quote unquote, learned to find subtle signs that something is really wrong.
- We have really, really efficient methods to decompose a large image, a complex image, into a smaller dimensional representation.
So we can take this big x-ray that you might have to stare at for, you know, 20 minutes when you're really tired, encode it using a neural network into a small set of, say, six different variables, and then we can build a classifier on top of those that tells you whether somebody has pneumonia, or a pneumothorax, or they're fine and can go home.
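Here's a rough PyTorch sketch of that pipeline: an encoder squeezes the image down to six variables, and a small classifier maps those to the three outcomes. The layer sizes and architecture are illustrative guesses, not a real radiology model.

```python
# A toy version of the encode-then-classify pipeline described above.
import torch
import torch.nn as nn

class XrayClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # pneumonia, pneumothorax, fine
        super().__init__()
        # Encoder: shrink a 1-channel 224x224 x-ray down to 6 variables.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> 112x112
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 56x56
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> 32 numbers per image
            nn.Flatten(),
            nn.Linear(32, 6),         # the "six different variables"
        )
        # Classifier built on top of those six variables.
        self.head = nn.Linear(6, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

model = XrayClassifier()
fake_xray = torch.randn(1, 1, 224, 224)  # stand-in for a real scan
print(model(fake_xray).softmax(dim=1))   # probabilities for the 3 outcomes
```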
- When it comes to analyzing detailed images like x-rays or CTs, for example, the answer might be yes.
In some cases, a computer program may do things better than a human being, as long as there's still a human perspective as a backstop.
- What machine learning methods are not well suited for is automating decisions in healthcare settings.
We often don't know what the right decision is in many medical decision-making settings, and so there are no cases I can think of in which we would want to shortcut critical decision-making skills in a clinician.
- Various studies have found that AI does better than human doctors at diagnosing many different conditions such as asthma, at early identification of sepsis, and at early detection of many kinds of cancer.
- But it's not always perfect.
Now, there's been a lot of headlines, for example, about AI being really great when it comes to mammograms.
But a meta-study two years ago showed that these small studies could not be replicated, and that AI was not better than a human when it comes to diagnosing breast cancer.
- Here's one thing to remember.
AI is only as good as the data that it's been fed.
There are studies where AI did a great job identifying which marks on skin were really skin cancer, but it only worked for people with light skin.
That's because researchers didn't teach the software by showing it examples of dark skin, at least not enough examples.
And that goes to show that no matter how much of the population is melanated, if the software isn't fed information about melanated skin, then we're still not gonna be thought of in spaces where we absolutely need to be cared for.
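One way to catch that kind of gap is to score the model separately for each skin-tone group instead of reporting a single pooled accuracy. Here's a toy sketch; all the records are invented for illustration.

```python
# Per-group accuracy audit: a pooled score can hide failures on an
# under-represented group. All records here are made up.
from collections import defaultdict

records = [
    # (skin_tone_group, model_was_correct)
    ("light", True), ("light", True), ("light", True), ("light", False),
    ("dark", False), ("dark", True), ("dark", False), ("dark", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [correct, seen]
for group, correct in records:
    totals[group][0] += int(correct)
    totals[group][1] += 1

for group, (correct, seen) in totals.items():
    print(f"{group}: {correct}/{seen} correct ({correct / seen:.0%})")
```

Pooled together, this toy model scores 50 percent; split by group, it works for one group and clearly fails the other.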
But technology changes, and some people really want to see if AI could take on more of the work that human nurses and doctors do now.
- ChatGPT, from the company OpenAI, is a good example.
Now it's not a medical program per se.
Doctors aren't exactly using it for diagnosis and treatment, but a lot of us have been playing around with it to give us a sense of what artificial intelligence may be able to do, and we're noticing some pitfalls.
Dr. Jeremy Faust, an ER doctor in Boston, has posted some pretty entertaining videos about this.
- I actually asked OpenAI, 35 female, no past medical history, presents with chest pain which is pleuritic, worse with breathing, and she takes oral contraceptive pills.
What's the most likely diagnosis?
And OpenAI comes out with costochondritis, inflammation of the cartilage connecting the ribs to the breastbone.
Then it says, and it'll come back to this, typically caused by trauma, overuse and is exacerbated by the use of oral contraceptive pills.
Now this is impressive.
First of all, everyone who read that prompt, 35, no past medical history with chest pain that's pleuritic, a lot of us are thinking, oh, pulmonary embolism, blood clot.
That's what that is going to be.
Because on the boards, that's what that would be, right?
But in fact, OpenAI is correct.
The most likely diagnosis is costochondritis.
- So far, so good, but then... - And it said, by the way, that can be made more likely by the use of oral contraceptive hormone pills.
And I was taken aback.
You learn something every day.
So I thought, oh, maybe I didn't know that one.
So I asked OpenAI, wait, where did you get that idea?
Can you give me a reference or a link?
And it came up with a paper that looked like a peer reviewed paper in a medical journal.
I clicked the link that it gave me and it went nowhere.
And I went back and I said, well, that didn't work.
Try another one.
And it gave me another link.
Try this one.
And that didn't work.
And then I started to push back, and I was saying, well, wait a second.
I went to this journal and I don't see it there.
And it said, look in the table of contents.
So I looked in the table of contents, not there.
It said, why don't you call the journal, because there must be some mistake.
- It was like pushing back on you, almost like it was this combative colleague who was like, no, I'm right.
- It took a real journal, The European Journal of Internal Medicine, it took the last names, I think, and first names of authors who have published in said journal, and it sort of confabulated out of thin air a study that would apparently support this viewpoint.
- And I'm just curious if you had a takeaway thought or some thesis after this encounter?
- Well, the immediate thing I thought of was the movie, "2001," where the computer says, "Well, that must be human error."
- So can I ask your human reaction?
Like, were you creeped out?
Were you like, okay, the machines are taking over.
Like, what was your thought process as a human?
- I was just floored.
Like, wow, not only is it sort of confabulating this thing, it's really not giving up.
So yeah, I got a little nervous.
- So we did our own little experiment.
We tried something similar.
We asked ChatGPT a question.
The answer was good.
It was what we needed to know.
But when we asked for more evidence, it made up the reference, just made it up.
We could not find the sources.
The articles you see here, they actually do not exist.
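The check Dr. Faust did by hand is easy to automate: request each link the model cites and see whether it actually resolves. Here's a small sketch using the requests library, with a placeholder URL rather than the ones from the video.

```python
# Verify that a citation URL a chatbot provides actually resolves.
import requests

def link_resolves(url: str) -> bool:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Placeholder URL; a fabricated citation would typically return 404.
print(link_resolves("https://example.com/some-cited-paper"))
```

Of course, a link that resolves doesn't prove the paper says what the model claims; this only catches dead links.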
- Still, as a language model, ChatGPT can help physicians in many ways.
Medical terminology assistance.
ChatGPT can help physicians understand and use complex medical terminology.
Clinical decision support.
ChatGPT can provide evidence-based information and suggestions for diagnosis and treatment options based on a patient's symptoms and medical history.
It's important to note, though, that while ChatGPT can provide helpful information and support, it should not be used as a substitute for physicians' professional medical judgment.
Also, in case you couldn't tell, everything I just said was written, word for word, by ChatGPT.
ChatGPT basically told me why ChatGPT is great.
If you wanna learn more, you can read more about this in our show's description.
- Something that AI does help us with is being able to remember what to spit out.
- I'll give you examples right now.
The other day, I typed into ChatGPT, "Provide instructions for a school nurse to give albuterol," and it laid it out perfectly.
Now, yes, number one, a lot of school nurses already know how to do this.
Number two, we do have discharge instructions, but this was so fast, faster than what I'm going to be able to say or type myself.
It was done in a matter of seconds.
I printed it out.
I said to the patient, who was actually a teenager, "Hey, give this to your coach, to your school nurse," and like, boom, there it is.
There are ways to actually kind of add more content to patient education so that we can do our jobs faster.
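For anyone comfortable with a little scripting, the same request can go through OpenAI's Python API instead of the chat window. A minimal sketch, assuming an API key is set in the environment; the model name is just an example, and the output would still need a clinician's review before it goes to a patient.

```python
# Minimal sketch: ask for patient-education text via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; any available chat model works
    messages=[{
        "role": "user",
        "content": "Provide instructions for a school nurse to give albuterol.",
    }],
)
print(response.choices[0].message.content)
```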
- Absolutely, like we're humans, y'all, just in case y'all didn't know.
And we're dealing with short staffing at the time and just a lack of resources.
So not all of this information is just gonna pop into our head at one time, but that's what I feel like AI should be used for.
And it's kinda like backing us up a little bit.
Like, can I get an assist from an AI?
You know, it's not gonna replace us at all.
- There's also the whole concept of building trust.
Now, a patient doesn't just walk into an ER and say, like, "Hey, how's it going?
"I'm a 19 year old with chest pain.
"I have a family history of congestive heart disease.
"I've currently been smoking two packs a week, "and X, Y, and Z.
"Here are the medications I'm on."
No, you gotta earn that information.
You have to earn that information by knowing what questions to ask, by building good patient rapport, and doing all that in, like, a matter of seconds.
(bright music) - That does it for this week's episode of PBS Vitals.
Tell us what you think in the comments, and tell us some fun ways that AI is used at your workplace or even in your personal life.
Hit the Like button, follow along on all of our socials, and we'll see you next time.
(upbeat music)