New report reveals why giving your Apple Watch data to AI doctors might be a bad idea

One person's struggle with ChatGPT Health and Claude is a cautionary tale.

A new story is shining a spotlight on the capabilities of these recently released (and somewhat questionable) AI doctors. It turns out that, even with years of Apple Watch and fitness tracker data to work with, they can hand out wildly inconsistent health "grades" that may leave you more confused than comforted. Unsurprisingly, these tools also appear to jump to some pretty scary conclusions.

What's the deal with these new AI doctors?


In a new report from The Washington Post (paywalled), the author decided to put some newly released AI doctors to the test by providing them with a decade's worth of personal data from their Apple Watch. The tools used were ChatGPT Health, which OpenAI released earlier this month, and its competitor, Claude for Healthcare, which Anthropic released just a few days later. The goal of this experiment was to see if the new AI doctors could make sense of the data and provide a clear picture of the author’s well-being.

However, the results were anything but comforting. The same data yielded grades ranging from an F to a B, depending on when the question was asked. ChatGPT gave an F when first asked to grade the author's heart health, then bumped the grade up to a D after the author provided more medical records; Claude, meanwhile, gave a C. When the same data was shown to real-life doctors, they called the AI's conclusions baseless and declared that the author was actually in excellent health.

The report also noted that these bots lean heavily on estimated metrics such as VO2 max, which an Apple Watch can only approximate; an accurate VO2 max reading typically requires a treadmill test with a breathing mask. The bots also sometimes forgot basic information, such as the user's age or gender, partway through a conversation.

Furthermore, while the companies assure users that their data will be encrypted, they are not bound by HIPAA, the law that sets the privacy standards a real doctor must adhere to. That essentially leaves users with nothing more than a pinky promise protecting their most personal data.

Will the rumored Apple Health+ be any better?


The spotty results of these bots come at an awkward time, as Apple's long-rumored Apple Health+ service is expected to launch sometime this year. The service is reported to include an "AI Health Coach" that would act as a virtual doctor. However, considering the struggles of industry leaders like OpenAI and Anthropic, it remains to be seen whether Apple's version will be any better.

Apple prides itself on its commitment to data privacy, but the real question here is not so much data leaks as the reliability of the AI itself. If Apple Health+ is to deliver personalized coaching and medical advice, it will need to make sense of health data better than the competition does. Given the results of this experiment, Apple has a tough road ahead if it hopes to convince users that its AI tool is different.

Can you trust a digital doctor?


If you're a data geek like me and enjoy poring over pretty charts of your health data from the last five years, these AI tools are fun to play around with. But in my opinion, if you need real medical advice, you should stick with a real doctor.

I have no doubt we'll see plenty more beta health features from tech companies this year. And while they sound like a lot of fun, I think we can all agree they'll never replace the real thing.