What changes with the new artificial intelligence law in digital consumers’ safety


Data protection is the basis for changes in artificial intelligence legislation in Brazil




For many, the experience of navigating the digital world has been marked by convoluted terms of use and by the feeling of being handed over to mysterious algorithms. The good news is that the arrival of new artificial intelligence legislation in Brazil promises significant changes, particularly regarding the safety of digital consumers.

According to Maurício Eswans, a systems analyst and computer security specialist, this new law comes as a relief for those who feel lost in this scenario. But what, in practice, changes for you?


End of the AI “black box”

One of the most notable transformations is the end of the opacity that has often surrounded artificial intelligence systems. Companies will no longer be able to operate AI as a “black box”.

This means that decisions made by automated systems that directly affect the consumer – such as the refusal of a credit offer, an automated diagnosis, or a purchase recommendation – must be clearly explained. The law requires companies to disclose when an algorithm is behind decisions that affect citizens, which is considered important progress.

Your data: explicit consent is the key

Another crucial point addressed by the legislation is the processing of personal data in the training of AI systems. The new rule prohibits the indiscriminate use of sensitive information – such as sexual orientation, religion, or medical history – without the explicit consent of its owner.

In practical terms, if a platform wishes to use your data to “teach” an algorithm, it will first have to ask for your permission, and you will have the right to refuse without fear of losing access to the service.

Right to question and correct automated decisions

Have you ever imagined being mistakenly identified by a facial recognition system and having no way to contest it? That once difficult situation changes with the new law. Now the consumer will have a direct channel to question automated decisions and even request a human review.

This mechanism is seen as a fundamental step toward reducing bias and correcting errors that, until now, were met with generic justifications such as “it’s the artificial intelligence, we have no control over it”.

Companies under pressure: heavy fines

For the corporate sector, the law has a less comfortable side: the prospect of heavy fines in cases of data leaks or improper use of artificial intelligence. For the consumer, this translates into greater pressure on companies to invest seriously in IT security.

As Maurício Eswans points out, “where there is financial risk, there is priority”, indicating that the threat of sanctions should drive improvements in companies’ security practices.

What can you do?

Although the law represents progress, it is not perfect and still faces technical challenges. Even so, there are steps consumers can take to strengthen their digital safety:

  • Read privacy policy updates: companies will have to inform you how they use your data.
  • Use control tools: browsers and extensions can help block invisible tracking.
  • Ask questions: if an automated decision affects you, demand explanations. It is your right.

The new legislation tries to strike a balance between the innovation brought by AI and the fundamental rights of citizens. According to Eswans, who has already witnessed the impacts of poorly configured algorithms, “we are moving towards a ‘less wild’ digital environment”.


Source: Terra
