AI and health: how to make this combination less risky

We no longer need to debate whether AI will be part of healthcare; that question has already been settled. What we do need to discuss is how to implement it.


Not so long ago, in 2016 to be exact, Microsoft decided to test its latest artificial intelligence creation, Tay, on Twitter. The project was the flagship of the technology giant, which wanted to create a relaxed, fun and natural “persona” for light message exchanges on the social network. It took less than 24 hours for the newborn bot to adopt racist, transphobic and prejudiced discourse in many senses. The experiment had to be interrupted to put an end to the hate messages generated by the machine.

This extreme example highlights one of the greatest risks associated with the use of artificial intelligence (AI) and machine learning (ML): the share – and power – of human bias in technology. Because AI is built to evolve and learn from real interactions, in an environment where hate predominates the tool will invariably be “tainted” by its users.

I tell this story right away to reiterate that artificial intelligence can indeed bring with it some major dangers and risks, especially when used in an area as vital as Health. And that’s why we need to think about how we’re going to use this kind of technology.

Below I list some of the risks that I believe are most immediate to discuss in the health sector. Have a look, then tell me whether you agree or would like to add other elements:

  • Threat to privacy and security

We know that AI algorithms can be used to collect and analyze personal data on a massive scale, which can lead to loss of privacy and increased vulnerability to cyber-attacks. Brazil’s LGPD (the General Data Protection Law) is already an important step toward protecting sensitive data such as health records, but is it enough?

  • Economic inequality

Automation and the replacement of human workers by robots and artificial intelligence systems have always been among the biggest fears surrounding the adoption of this type of technology. They can, in fact, deepen economic and social inequality: job losses in traditional sectors such as health, and the concentration of wealth in the hands of a few players. What are we doing to address this new social order?

  • “Prejudice” and discrimination

Chatbots and AI systems are designed and trained by humans, which means they can embody the same biases and discrimination that exist in society. I have already mentioned how this happened to Microsoft’s Tay in less than 24 hours of use. Paying more attention to bias during development is therefore essential to avoid unfair outcomes and the perpetuation of inequality in services that are meant to make your life easier – not the other way around.

  • Loss of control

Since AI systems are designed to make decisions autonomously, there is a risk that they become independent and escape human control, which can lead to unpredictable and potentially dangerous outcomes. Some doctors and healthcare professionals voice this fear about the technology, although, to this day, the final human decision remains supreme.

  • “Bias” in training data

If the data used to train healthcare AI algorithms over- or under-represents some demographic groups, this can lead to misdiagnoses and inappropriate treatments for the other groups – which, in a mixed country like Brazil, are not few.
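As a minimal illustration of the point above, here is a hypothetical Python sketch that audits how each demographic group is represented in a training set before any model is built. The field name `"ethnicity"` and the record schema are assumptions for the example, not part of any real system:

```python
from collections import Counter

def representation_report(records, group_key):
    """Return the share of each demographic group in a training set.

    records: list of dicts, each describing one patient record (hypothetical schema).
    group_key: the demographic field to audit, e.g. "ethnicity".
    """
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical, deliberately skewed training set: one group dominates.
training_set = (
    [{"ethnicity": "group_a"}] * 90 +
    [{"ethnicity": "group_b"}] * 10
)

shares = representation_report(training_set, "ethnicity")
# A model trained on this data sees group_b in only 10% of examples,
# so its diagnostic accuracy for that group is likely to suffer.
```

A report like this does not fix bias by itself, but it makes the skew visible early, before it turns into misdiagnoses downstream.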

  • Loss of privacy

The collection, storage and analysis of sensitive health data can lead to a loss of privacy for patients and increase vulnerability to cyber-attacks. We have seen some major data leaks in the recent past, a clear sign that we need to be more careful with cyber security.

  • Replacement of health workers

AI algorithms can be used to replace the work of healthcare professionals, which can lead both to misdiagnoses and to job losses for those professionals.

  • Excessive dependence on technology

Another potential risk is that of overuse. If AI systems are overused, there could be a loss of healthcare professionals’ ability to evaluate patients and make decisions based on their own skills and judgements.

Importantly, all of these dangers I have listed can be mitigated through proper regulation, coupled with responsible development and transparency in the use of AI technology. Furthermore, it is imperative that we continue to discuss and question the societal impacts of AI to ensure it is used in ways that benefit society as a whole. So I ask: what do you think?

Rodrigo Guerra specializes in finance and innovation. The two fields are not usually associated, but when placed side by side they can turn innovative projects into daily practice in companies. For the theunbox project, of which he is the founder, he curates essential content to guide urgent and necessary changes in people, businesses and society.

Source: Terra
