A new story is shining a spotlight on the capabilities of a recent crop of (somewhat questionable) AI doctors. It turns out that, even with years of data from an Apple Watch and other fitness trackers to work with, they can hand out wildly inconsistent health "grades" that might leave you more confused than comforted. Unsurprisingly, these tools also appear to be jumping to some pretty scary conclusions.
What's the deal with these new AI doctors?
In a new report from The Washington Post (paywalled), the author decided to put some newly released AI doctors to the test by providing them with a decade's worth of personal data from their Apple Watch. The tools used were ChatGPT Health, which OpenAI released earlier this month, and its competitor, Claude for Healthcare, which Anthropic released just a few days later. The goal of this experiment was to see if the new AI doctors could make sense of the data and provide a clear picture of the author’s well-being.
However, the results from the new AI doctors were anything but comforting. The same data produced grades ranging from an F to a B, depending on when the question was asked. ChatGPT gave an F when first asked to grade the author's heart health, then bumped the grade up to a D after the author provided more medical records. Claude, on the other hand, gave a C. When the same data was shown to real-life doctors, they called the AI's conclusions baseless and said the author was actually in excellent health.
The report also noted that these bots lean heavily on estimated metrics such as VO2 max, which an Apple Watch cannot measure precisely; an accurate reading typically requires a treadmill and a breathing mask. At times, the AI also forgot basic details, such as the user's age or gender, partway through a conversation.
Furthermore, while the companies assure users that their data is encrypted, they are not bound by HIPAA, the law that sets the privacy standards a real doctor must follow, essentially leaving users with nothing more than a pinky promise covering their most personal data.
Will the rumored Apple Health+ be any better?
ChatGPT's virtual doctor gets its info from Apple Health after being granted permission. | Image credit — Apple
The spotty results of these bots come at an awkward time, as the long-rumored Apple Health+ service is expected to launch sometime this year. The service is said to include an "AI Health Coach" that would act as a virtual doctor. However, considering the struggles that industry leaders like OpenAI and Anthropic are having, it remains to be seen whether Apple's version will be any better.
Apple prides itself on its commitment to data privacy, but the real question here is not data leaks so much as the reliability of the AI. If Apple Health+ is to deliver personalized coaching and medical advice, it will need to sift through the data better than the competition. Given the shaky results of the tools in this experiment, Apple has a tough road ahead if it hopes to convince users that its AI is different.
Can you trust a digital doctor?
If you're a data geek like me and enjoy poring over pretty charts of your health data from the last five years, these AI tools are fun to play around with. In my opinion, though, if you need real medical advice, you should stick with a real doctor.
I have no doubt that we'll be seeing a lot more beta health features come out from different tech companies this year. However, while they sound like a lot of fun, I think we can all agree that they'll never be able to replace the real thing.
Johanna 'Jojo the Techie' is a skilled mobile technology expert with over 15 years of hands-on experience, specializing in the Google ecosystem and Pixel devices. Known for her user-friendly approach, she leverages her vast tech support background to provide accessible and insightful coverage of the latest technology trends. As a recognized thought leader and former member of #TeamPixel, Johanna stays at the forefront of Google services and products, making her a reliable source for all things Pixel and ChromeOS.