WHO warns about the risks and benefits of generative AI in health

WHO recommends caution in the use of generative AI tools, such as ChatGPT and Bard, within medicine, and says safety tests should be performed before mass deployment

With the popularization of generative Artificial Intelligence (AI) through ChatGPT and Bard, the World Health Organization (WHO) has taken a position on the use of these new technologies in medicine. The institution is in favor of innovation, as long as it delivers safe answers within the medical field and manages to expand access to healthcare.



“Their meteoric public diffusion and growing experimental use [of generative AI] for health-related purposes are generating significant excitement around the potential to support people’s health needs,” says WHO. Despite the enthusiasm, the organization’s view is that, for now, caution is still warranted regarding their use.

Use of generative AI in medicine

Currently, some experiments are already measuring the impact of generative AI in healthcare, and some tools are being refined to better serve this audience.

For example, in the United States, a UC San Diego study found that OpenAI’s ChatGPT gave more empathetic answers than doctors when responding to patients’ questions. More recently, Microsoft and Nuance released a ChatGPT-based AI that can transcribe medical records.

WHO supports more tests on generative AI

At this point, the revolution already brought about by generative AI is irreversible. Without denying this shift, WHO defends a more cautious position: “It is imperative that the risks be examined carefully when using large language model tools to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity,” the organization advises.




WHO urges caution in using generative AI like ChatGPT in medicine (Image: National Cancer Institute/Unsplash)

If risks are not assessed before massive implementation, a number of complications can follow. According to WHO, the risks of “hasty adoption of untested systems” include:

  • Diagnostic and therapeutic errors by health professionals;
  • Damage to patients’ health;
  • Erosion of trust in generative AI;
  • Delaying access to the potential benefits that technology can provide, especially in the healthcare sector.

Necessary precautions in the use of generative AI

Beyond the direct relationship between patients, doctors and generative AI, WHO also raises points that should be considered in the training and development of language models. For example, health-related interactions should remain confidential, since sensitive diagnostic information belongs to the person involved.

It is also essential to pay attention to which data will be used to train an AI. If fed misleading or inaccurate information, the tool will generate incorrect content. An additional risk arises here: since the generated responses sound authoritative, the end user may be misled, with consequences for their health.

Given these issues, and the possibility of manipulating the systems involved, generative AI can also be used to “generate and disseminate highly convincing disinformation in the form of text, audio or video content,” causing confusion.

Against this backdrop, “WHO proposes that these concerns be addressed, and clear evidence of benefit be measured, before their widespread use in routine health care and medicine, whether by individuals, care providers, health system administrators or policy-makers,” the organization adds.

Source: WHO

