Untested AI Could Lead to Healthcare Errors and Patient Harm, WHO Warns 

The World Health Organization is calling for caution in the use of artificial intelligence large language model (LLM) tools such as ChatGPT, Bard, BERT and others that imitate human understanding, processing and communication.

The increasing use of LLMs for health-related purposes raises concerns for patient safety, the WHO said. The precipitous adoption of untested systems could lead to errors by healthcare workers and cause harm to patients, Healthcare Finance reports.

The WHO proposes that these concerns be addressed, and clear evidence of benefit be demonstrated, before LLMs see widespread use in routine healthcare and medicine – whether by individuals, care providers, or health system administrators and policymakers.

The WHO released its comments days after OpenAI CEO Sam Altman testified about AI risks before the Senate Judiciary Subcommittee on Privacy, Technology and the Law.
