Written by Yannah Robles | Art by Tanisha Arora
The future is now: AI is no longer limited to virtual assistants; it is now also assisting in the field of medicine, specifically in medical screening and diagnosis. Hospitals and startups use machine learning to identify illnesses from a patient's symptoms, voice recordings, images, and other data. For example, smartphone apps and chatbots like Ada Health or K Health let users enter their symptoms and receive instant risk assessments. A recent survey found that about 66% of physicians used AI tools in healthcare in 2024, up from 38% in 2023. Researchers say this trend is driven by physician shortages and the ability to harness big data for earlier diagnoses. In practice, AI can work through smartphone apps, chatbots, or voice agents, analyzing patient inputs such as text, images, and speech to identify risks and suggest a next course of action.
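To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of how a text-based symptom checker might turn a patient's description into a risk level. Every symptom, weight, and threshold below is invented for illustration; it does not reflect how Ada Health, K Health, or any real product actually works.

# Toy symptom-triage sketch. All symptoms, weights, and thresholds are
# invented for illustration; real products use validated clinical models.

SYMPTOM_WEIGHTS = {
    "chest pain": 5,
    "shortness of breath": 4,
    "severe headache": 3,
    "fever": 2,
    "cough": 1,
}

def assess_risk(description: str) -> tuple[int, str]:
    """Score a free-text symptom description and map it to triage advice."""
    text = description.lower()
    score = sum(weight for symptom, weight in SYMPTOM_WEIGHTS.items()
                if symptom in text)
    if score >= 5:
        return score, "Seek urgent care now."
    if score >= 3:
        return score, "Book a doctor's appointment soon."
    return score, "Self-care and monitoring are likely sufficient."

score, advice = assess_risk("I have chest pain and some shortness of breath")
print(f"Risk score {score}: {advice}")

Notice how brittle even this toy version is: a patient who writes "pain in my chest" instead of "chest pain" would score zero, which is exactly the kind of false negative discussed below.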
This technology is greatly beneficial because AI can screen people anytime, anywhere. It can also handle routine tasks, such as appointment scheduling and reminders through voice agents, freeing doctors to focus on critical care and improving efficiency in the workplace. Despite this, AI has its limits: symptom checkers are not foolproof. Past evaluations found that many got the right diagnosis only a minority of the time. If a patient enters dangerous symptoms but the AI reassures them, the result could be a life-threatening "false negative," or vice versa. There are also issues of privacy and bias: patient data must be kept secure, and models trained on unrepresentative data can produce skewed results for some groups of patients.
Doctors and patients alike worry about liability and transparency; after all, who pays the price if AI produces an error? As such, the AMA stresses that any AI tool should be "designed, developed and deployed in a manner that is ethical, equitable and responsible," and that patients should know when it is being used. Medical societies advocate using the tool as an aid, not a replacement. Overall, AI is already useful for the mundane tasks in healthcare, and more research is still required before it can be trusted with increasingly complicated ones. AI is not inherently bad, but as usual, its usage must be regulated for the benefit of patients and doctors alike, because medicine remains deeply rooted in care and compassion, something AI could never provide the way humans do.
Works Cited:
https://www.ama-assn.org/practice-management/digital-health/augmented-intelligence-medicine
https://www.nature.com/articles/s41746-025-01776-y
https://www.wired.com/story/health-apps-test-ada-yourmd-babylon-accuracy/