How effective is AI-powered healthcare?
Decoding Digital Intelligence+ ①
Editor's Note
Have you used AI today?
Today, our lives and work seem increasingly inseparable from digital technologies such as artificial intelligence (AI), and these technologies are still "growing" every day. From "digital intelligence + consumption" and "digital intelligence + culture and tourism" to "digital intelligence + sports," from autonomous driving and intelligent manufacturing to smart cities, digital technologies are accelerating their integration into all walks of life, constantly opening up new application scenarios, and continuously changing people's production and lifestyles.
Starting today, this section launches the "Decoding Digital Intelligence+" column, inviting readers to explore emerging new application scenarios of digital intelligence technologies and observe their infinite possibilities.
From triage robots to early cancer screening in medical imaging, the application scenarios of artificial intelligence (AI) technology in the medical field are becoming increasingly diverse. Amidst the surge of interest in large-scale AI models, the idea of "asking AI for help when you're sick" has attracted particular attention. When patients walk into the clinic with AI-generated treatment suggestions, and when AI's "opinions" even challenge doctors' judgments, a series of questions urgently need answers: Is AI-driven healthcare reliable? Will it replace doctors? While embracing efficiency, how can patients and doctors mitigate the risks?
AI becomes a helpful tool in diagnosis and treatment
Opening the "AI Hepatobiliary Hospital" mini-program on the Beijing Tsinghua Changgung Hospital WeChat account, the reporter entered "discomfort in the upper right abdomen" into the chat box. Soon, the AI began "chatting" with the reporter: "Are your symptoms persistent or intermittent?" "Are they accompanied by fever, nausea, or vomiting?"... After a few questions, the AI suggested consulting a hepatobiliary specialist.
"Tsinghua Chang Gung Hospital is developing a comprehensive model for the full-cycle management of liver diseases. The initial version has been launched on the hospital's WeChat mini-program. Currently, it can conduct pre-diagnosis based on patients' symptoms and provide triage suggestions," said Yang Ming, chief physician of the Department of Hepatobiliary Medicine at Tsinghua University Beijing Tsinghua Chang Gung Hospital. He added that this AI system combines patients' symptoms and laboratory tests to provide triage suggestions with a high accuracy rate.
During the interviews, many doctors reported that more and more patients are coming to their clinics with AI-generated diagnostic suggestions.
"Some patients will use AI to organize their thoughts before seeing a doctor, so they come to the clinic with a relatively clear idea," Chen Xiuyuan, deputy chief physician of thoracic surgery at Peking University People's Hospital, told reporters. Patients will use large AI models to obtain preliminary explanations of their diseases and possible treatment directions based on their medical history and test data.
"This is equivalent to providing patients with disease education in advance, giving them a preliminary understanding of the disease, making it easier for them to understand the doctor's professional judgment and advice, and making subsequent communication smoother and more efficient," Yang Ming said.
AI is being used not only by patients for consultations, but also by doctors in diagnosis and treatment.
In medical imaging, for example, several doctors interviewed said that the detection rate for pulmonary nodules smaller than 5 millimeters in diameter used to be low, but has improved significantly since AI was introduced. "It works very well," commented Wang Yi, chief physician of the Department of Radiology at Peking University People's Hospital.
"If we compare surgery to driving, then a CT scan is like a precise paper map, while AI 3D reconstruction is like having a more accurate and intuitive electronic map." The "AI 3D reconstruction" algorithm that Chen Xiuyuan mentioned has been deployed at Peking University People's Hospital for many years. This system uses AI to present the complex structure of the lungs more accurately, and the accuracy of anatomical structure recognition has been improved.
"The use of such systems can help doctors free themselves from the heavy workload of initial image screening, allowing them to focus more on in-depth comprehensive analysis of the imaging results, developing personalized treatment plans, and dealing with more complex diagnostic problems," said Zhang Yun, deputy chief physician of the Department of Thoracic Surgery II at Shandong Provincial Public Health Clinical Center.
Beyond medical imaging, AI also demonstrates surprising effectiveness in surgical planning.
Li Haifeng, deputy chief physician of joint surgery in the Department of Orthopedics at the General Hospital of the Chinese People's Liberation Army, used joint replacement surgery as an example: "In the past, to avoid mismatches, this type of surgery often required preparing a complete set of different prostheses for each patient, which wasted resources. Now, by analyzing the patient's CT data together with models built from a massive amount of past surgical data, AI can accurately predict the required prosthesis model in advance."
Information overload can exacerbate anxiety
AI has already demonstrated a certain level of efficiency and accuracy in assisting medical care. Will it replace doctors?
During the interviews, although doctors generally acknowledged the value of AI in assisting diagnosis and treatment, they remained cautious about the specific conclusions or treatment suggestions provided by AI.
"I only suggest that patients use AI consultations as a way to understand their illness, but I do not recommend that they follow the AI's suggestions," said Li Xiaohong, chief physician of the Department of Spleen, Stomach, Liver and Gallbladder at Dongfang Hospital of Beijing University of Chinese Medicine.
In response, Yang Ming explained: "Currently, AI diagnosis is mainly based on large models, and the data they draw on has a significant impact on the results they generate."
"This content is indeed very logical and systematic, but whether it is applicable to different patients still needs further judgment," Wang Yi pointed out. If patients do not have much knowledge about the disease, they may find it difficult to identify the problem.
Regarding AI-assisted medical advice, many experts say that information "overload" can actually exacerbate patients' anxiety.
Li Haifeng said, "Sometimes the AI-generated reports patients bring in are very detailed, listing every possible problem, and patients come to us confused or even panicked, seeking confirmation. In reality, many of these possibilities are not clinically significant."
"Some diseases are systemic problems caused by multiple factors, and it is difficult to make an accurate diagnosis based solely on the symptom descriptions provided by patients," Li Xiaohong admitted. As people's understanding of AI deepens, they are gradually realizing that the content generated by AI needs to be verified for authenticity.
A prescription does not just treat symptoms; it also carries the doctor's responsibility and commitment.
"Doctors start from their professional knowledge in diagnosis and treatment, but for each patient they need to adjust the treatment plan according to that patient's individual characteristics. AI may fall short in this respect," Yang Ming explained, using liver disease as an example. A patient with elevated transaminase levels may have a history of hepatitis B and fatty liver disease, along with recent heavy drinking and statin use. "During diagnosis and treatment, AI may only proactively ask about the patient's past medical history and, based on the hepatitis B history, suggest that 'antiviral drugs are needed,' while missing key details the patient did not volunteer, such as the alcohol and medication history. The suggestions it gives are therefore prone to bias."
"In the field of imaging, although AI has been widely used, under the current screening technology, relying solely on AI may miss some very early and atypical lesions," Zhang Yun once said. He added that without combining medication history, previous imaging comparisons, and other multi-dimensional information for comprehensive judgment, misdiagnosis is very likely to occur.
"Large models can free up our hands, but they cannot replace the brain." Li Xiaohong believes that medical decisions rely on complex clinical judgment and rich experience, especially with atypical cases or multiple coexisting diseases. Experienced doctors can pick up on subtle symptoms and signs, something AI cannot currently do.
"Doctors are not only healers of diseases, but also psychological supporters of patients." Zhang Yun once said frankly, "AI has difficulty providing psychological support to patients, while medicine has warmth, and this warmth is conveyed through doctors."
Making AI more "rule-abiding" and "transparent"
It is important to be wary of "AI hallucination," the phenomenon in which large AI models fabricate information when generating content.
"In clinical use, AI occasionally makes absurd mistakes," said Zhang Daoqiang, dean of the School of Artificial Intelligence at Nanjing University of Aeronautics and Astronautics. He gave an example: "Clinical imaging changes are extremely complex. Sometimes what we see may be 'interference signals' with no diagnostic value, yet AI may judge them to be lesions. Some users have also found that when AI is used to help generate content, it may fabricate the sources of medical terms and invent references."
Regarding such fabrication, Zhang Daoqiang believes improvements should come from both the algorithmic and the data side. "The medical field is highly specialized and requires strict control over errors. This demands rigorous control over the uniqueness and accuracy of data from the early stages of developing a large model. As for algorithms, improving AI's resistance to interference and its reliability is a crucial issue. When AI moves from the laboratory into real-world environments, how can we improve the system's recognition accuracy and reliability there? Any change in the input information can cause subtle deviations," Zhang Daoqiang said.
Explainability is another development direction for AI in healthcare. Qin Jie, a professor at the School of Artificial Intelligence at Nanjing University of Aeronautics and Astronautics, explained: "The decision-making process of AI is more like a 'black box,' and patients may not be able to judge how the results were reached. The decision path should therefore be explained to help people make better judgments. Making AI itself more 'rule-abiding' and more 'transparent' is the direction of our thinking."
As for data samples, both sample size and quality have a significant impact on the results AI generates. "How can doctors' experience be better combined with data-driven large models? How can the experience of top doctors be reproduced in AI? These are all questions we need to study," said Qin Jie. In terms of model tasks and performance, "general-specific integration" is the next direction for AI development, "that is, conducting in-depth task analysis based on large models and vertical scenarios."
When it comes to AI, we must both embrace new technologies and remain rational.
"AI is just a tool, definitely not a shortcut to laziness," Wang Yi said. When doctors use AI, they should combine it with their solid professional foundation and rich clinical practice, and apply the information provided by AI critically and rationally, rather than relying on it excessively.
During the interviews, experts also suggested that the relevant authorities integrate medical big data, coordinate research and design, and formulate standards and evaluation systems for AI "doctors," so that AI can better assist physicians.
"From a legal perspective, identifying and assigning responsibility in AI-related cases is more difficult," pointed out Ma Yide, a professor at the School of Public Policy and Management, University of Chinese Academy of Sciences. "The development and deployment of AI applications often involve multiple stages and multiple stakeholders. From algorithm design to specific operations, each stage can potentially affect the final AI behavior. The lengthened and dispersed chain of responsibility makes it difficult to pinpoint the responsible party when problems arise."
Ma Yide suggested that measures such as improving laws and regulations, strengthening data security, establishing a rights and responsibilities mechanism, and strengthening ethical supervision should be taken to promote the standardized deployment of AI medical applications.
Currently, the National Medical Products Administration has clarified that medical software that uses AI for disease diagnosis, decision support, and image recognition falls under the category of medical devices and must be registered and regulated in accordance with medical device regulations.
"When patients use AI for consultations, the AI's answers are not subject to legal responsibility, but doctors are responsible for the patient's treatment outcome," Yang Ming cautioned. He emphasized the need to ensure that AI technology is used reasonably within a legal framework, strictly adhering to data security and ethical bottom lines, and ensuring that the application of technology always serves the essence of medicine.
(By Cui Xingyi, Guangming Daily)
(Project Coordinator: Reporter Chen Haibo)