Adjusting AI in Health: What animal is it?

As AI evolves and assumes a prominent position in society, it needs to be examined more closely.


Learning to use technology without excessive fear of it or its implications seems to be the biggest challenge posed by artificial intelligence. There is much talk about the risks and limitations of applying AI in healthcare, but far less clarity about how those risks might be mitigated, or about regulation and data protection.

And then another question arises: is the creation of laws and a regulatory agency that aims to punish those responsible for non-compliance with the agreements formulated the best way to accelerate the digital transformation?

Paulo Schor, associate professor of ophthalmology at the Paulista School of Medicine of the Federal University of São Paulo (EPM/Unifesp) and joint coordinator of Research for Innovation at FAPESP, does not believe so: “I think that, when it comes to AI, we should always aim for minimal regulation, applied with maximum speed, and punishment for anyone who prevents the technology from working properly,” he stresses.

What worries the specialist is AI regulation that acts as a “destroyer” of the tool’s modernity and progress, something that happens far more often than the opposite (telemedicine is a case in point): “I am not averse to regulation, but I think technology itself has useful and safe tools to self-regulate. From there, it is up to each nation to show greater concern for equity, in the sense of justice, taking into account its social, economic and technological particularities, rather than simply importing universal laws.”

Who’s gonna make the laws around here?

In this regard, Brazil has already begun discussing how best to regulate the technology, starting with the legal framework for artificial intelligence (Bill 21/2020), still pending approval by the legislature. Rodrigo Guerra, finance and innovation specialist and founder of unbox project, agrees that discussions should focus on striking a balance between statutory bans and free incentives to innovate, especially in a conservative sector in urgent need of corporate transformation such as healthcare: “If we let everything happen freely, the risks of using AI will be many and potentially catastrophic. On the other hand, laws that are too strict discourage innovation and technological and scientific development.”

Finding the balance is also the recommendation of Coriolano Almeida Camargo, LGPD coordinator at the Escola Superior da Advocacia Nacional, especially considering that legislators cannot keep up with the pace of current technological change. For the expert, who is also president and founder of the Digital Law Academy, not everything must or can be controlled, so as not to hinder innovation and the creation of jobs and opportunities: “And finally, the law that will regulate artificial intelligence in Brazil must have as its foundation the defense of the dignity of the human person,” he maintains.

“Man, his human essence and the right to a dignified life must be the supporting pillar of the law governing the use of artificial intelligence,” says Coriolano Almeida Camargo, LGPD coordinator at the Escola Superior da Advocacia Nacional and president and founder of the Digital Law Academy.

The caveat matters from a legal perspective, because AI is a technology with multiple applications that should increasingly be used to support important decisions in different industries. “It is known that a lot can be automated by the tool, but the experience of a lawyer or a doctor cannot be discounted,” clarifies Camargo, who continues: “For the foreseeable future, we can expect firm regulation of AI in Brazil, since a commission of jurists has been guiding Bills 5.051/2019, 21/2020 and 872/2021.” All of them aim to establish principles, rules, guidelines and foundations to regulate the development and application of artificial intelligence in Brazil.

Focus on health

The World Health Organization (WHO) has developed six principles specifically for this sector to guide the development of AI in healthcare. In essence, they establish that the technology is at the service of healthcare professionals, and not the other way around: “The doctor and other healthcare professionals will continue to be the main link between the patient and artificial intelligence, bringing empathy and humanization into this relationship,” says Aldir Rocha, partner consultant at Lozinsky Consultoria.

  • Protect human autonomy.
  • Promote human well-being, safety and the public interest.
  • Guarantee transparency and intelligibility.
  • Promote responsibility and accountability.
  • Guarantee inclusion and equity.
  • Promote responsive and sustainable AI.

And what about the developer?

For Aldir Rocha, partner consultant at Lozinsky Consultoria, regulating AI in healthcare is an existential and sensitive dilemma, since the technology will be directly linked to the life and death of human beings. “But it is clear that both those who invest in developing the technology and those who use it must have some kind of legal protection, that is, they must know the rules of the game well in order to respect them and avoid future harm.”

“Regulation is important for both consumers and developers of artificial intelligence. It provides legal protection based on well-known rules, so that tomorrow or the day after software companies are not penalized for unforeseen situations,” says Aldir Rocha, partner consultant at Lozinsky Consultoria.

Transparency is so important here that Aldir Rocha even defends the possibility of auditing the code used in developing a solution, in order to avoid harmful, restrictive and/or biased algorithms. Bill 21/2020 advances precisely on these points by establishing as principles respect for human dignity, the transparency of algorithms and the protection of personal data: “I think these three guiding pillars are a good starting point, as they focus on protecting the individual,” argues Rocha.

All these issues are also related to the General Data Protection Law (LGPD), which came into force in August 2020 and whose penalties for non-compliance only began to apply on February 27, 2023.

Look outside for inspiration

Artificial intelligence, although widely discussed, is still a very recent technology, and its popularization has only taken place in recent months. ChatGPT, for example, has made the tool more accessible, though not without first sparking controversy over copyright and the handling of bias. This, of course, has led many countries around the world to rush to create their own laws in parallel with the rapid rise of the technology and the way it will shape relationships across various sectors of society and, of course, the economy.

The recent report “Artificial Intelligence Regulation – benchmarking of selected countries”, produced by the National School of Public Administration (Enap), highlights the US model of AI regulation, which today is already carried out by regulatory agencies and by the states of the federation: “This model is under discussion and could make relationships safer and more agile without halting innovation. The fact is that there are specific coordination actions by the government to position the country as a leader in AI research and development, in both the public and private spheres. Strengthening research and development to boost the country’s competitiveness in AI is therefore a priority.”

In Japan, the second country to develop a national AI strategy and to set goals and allocate budgets for the topic, government agencies have adopted a “soft-law” approach to address possible bias in AI technology. “The country seeks to create agile governance, with the aim of not damaging investments and not hindering innovation,” the document clarifies.

For Coriolano Camargo these flexible models seem reasonable, since other authorities and courts are not excluded from the oversight process. Aldir Rocha also notes that in some countries ethical and moral issues must be highlighted in this discussion, since artificial intelligence can segregate and discriminate against individuals: “That is why I think it really makes sense to subordinate AI regulation to a body such as the National Data Protection Authority (ANPD), which will simultaneously protect people’s privacy.”

“We must be clear that AI manipulates data, and if this data is already protected by the LGPD, any distortion in the result of its transformation falls under that law’s legal responsibility. In the Netherlands, for example, the regulatory project is developing in this direction,” says Aldir Rocha, partner consultant at Lozinsky Consultoria.

But look inward too

Theoretical physicist Stephen Hawking said that “the successful creation of artificial intelligence would be the greatest event in human history”, which makes it all the more important to guarantee security from its creation.

And he continues: “If we look at the history of humanity, machines have always been adapted for war, to subjugate other peoples. And in the middle of the 21st century, we are seeing war between Russia and Ukraine, using drones, supercomputers, cyber weapons. Man is capable of great feats, but few leaders are capable of instilling in the minds of others the idea that we need technology for progress through peace and unity among peoples”.

Renata Armas is editor at Unbox.

Source: Terra
