Does Meta AI read my WhatsApp conversations? Understand what AI can and cannot do with your data



The new tool has generated uncertainty about user privacy in the messaging app

Meta AI, the artificial intelligence (AI) assistant from Meta (owner of WhatsApp, Facebook and Instagram), has arrived in Brazil and sparked a lot of curiosity. The tool, which lets users create stickers and images and chat with an AI assistant, has also raised questions about how it uses people's data. The main one is whether the AI can read all the conversations taking place in the messaging app. The answer is not so simple.

Meta AI was announced in September 2023 with the goal of integrating AI capabilities into all of the company's applications. In other words, it serves as a large intelligent chatbot, like ChatGPT, not only for WhatsApp but also for Instagram, Facebook and Messenger.

To do this, Meta AI uses as its "brain" a large language model (LLM) created by Mark Zuckerberg's company, called Llama 3.2. According to the company, Llama was "fed" a massive and diverse dataset, including trillions of words from web pages, open-source code repositories, and scientific books and articles. All of this is done so that the artificial intelligence learns language patterns and can provide relevant and reliable information.

This meant that Meta AI already worked well before arriving on WhatsApp. The problem is that Meta also uses data and content published on the company's own services to improve Meta AI; in other words, posts in Facebook and Instagram feeds are also valuable raw material for the chatbot.

This led to an impasse with Brazil's National Data Protection Authority (ANPD) before the service debuted in the country. Meta's initial plan to collect user data for AI training purposes raised doubts about compliance with the General Data Protection Law (LGPD), culminating, in July this year, in a ban on collecting data for this purpose.

Unable to use that information, Meta, in response to the ANPD's determination, prepared a compliance plan aiming, according to the company, to guarantee user privacy and compliance with the LGPD. The plan includes measures such as transparently notifying users about data collection, ensuring the right to object, and excluding data from users under 18 from the dataset used for AI training.

After analyzing the plan, the ANPD lifted the ban in August 2024, authorizing data collection for AI training provided it is subject to users' explicit consent and guarantees the right to object. The authority then allowed Meta AI to arrive in Brazil with restrictions: the tool may collect user data for AI training only from public Facebook and Instagram feeds. The WhatsApp situation is more complex.

That is where encryption comes in. Encryption is a security mechanism that ensures the privacy of communications between users. Since April 2016, the app has implemented end-to-end encryption across all forms of communication, including text messages, voice calls, video calls and file sharing.

According to Meta, when the application is installed, it generates a unique cryptographic key pair for the device: a public key and a private key. The public key is shared with contacts, while the private key remains stored only on the user's device.

When a conversation starts, WhatsApp automatically exchanges public keys between the participants' devices. When you send a message, the app encrypts it on the sender's device using the recipient's public key. The encrypted message is transmitted through WhatsApp's servers to the recipient's device. During the journey, its contents remain inaccessible to third parties, including WhatsApp itself. After receiving the message, the recipient's device uses its private key to decrypt it.

In short: your message remains unreadable in transit to anyone else, including WhatsApp itself and Meta AI.
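As a rough illustration of that idea, the sketch below uses Python's cryptography library to show how two devices can derive a shared secret from their key pairs and encrypt a message so that the server in the middle only ever sees ciphertext. It is a minimal sketch, not WhatsApp's actual implementation (the app uses the Signal protocol, which adds prekeys, authentication and key ratcheting); all names in the snippet are illustrative.

```python
# Illustrative sketch of end-to-end encryption: NOT WhatsApp's real
# Signal-protocol implementation, only the general idea of key pairs,
# a shared secret and on-device encryption.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each device generates its own key pair on install; only public keys travel.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

def derive_session_key(my_private, their_public) -> bytes:
    """Derive a symmetric session key from my private key and their public key."""
    shared_secret = my_private.exchange(their_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"illustrative-session").derive(shared_secret)

# The sender encrypts on-device; servers relaying the message see only ciphertext.
sender_key = derive_session_key(alice_private, bob_public)
nonce = os.urandom(12)
ciphertext = AESGCM(sender_key).encrypt(nonce, b"hello, Bob", None)

# The recipient derives the same key with their own private key and decrypts.
recipient_key = derive_session_key(bob_private, alice_public)
plaintext = AESGCM(recipient_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"hello, Bob"
```

Note that in this sketch the recipient's public key is used to derive a shared session key rather than to encrypt the message directly, which is closer to how modern messaging protocols behave in practice; the article's description is a simplification of the same principle.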

So, according to Meta, the AI cannot read the content of conversations in the messaging app. "The AI can only read and respond to messages that mention '@Meta AI' and messages that are part of a specific conversation with Meta AI. Other messages in a conversation are not read by the tool," the company says. In other words, Meta AI cannot read or see a message that was not sent to it.
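Conceptually, the rule Meta describes amounts to a simple client-side check: a message only reaches the assistant if it is sent in a dedicated Meta AI chat or explicitly mentions "@Meta AI"; everything else stays between the participants. The snippet below is purely illustrative and is not Meta's actual client code.

```python
# Conceptual sketch of the routing rule Meta describes: only messages
# addressed to the assistant are ever sent to it. Not Meta's real code.
from dataclasses import dataclass

@dataclass
class Message:
    chat_is_meta_ai: bool  # True if sent inside a dedicated Meta AI chat
    text: str

def goes_to_assistant(msg: Message) -> bool:
    """Return True only if the message explicitly addresses the assistant."""
    return msg.chat_is_meta_ai or "@Meta AI" in msg.text

assert goes_to_assistant(Message(True, "make me a sticker"))
assert goes_to_assistant(Message(False, "@Meta AI what's the weather?"))
# An ordinary message between two people never reaches the assistant.
assert not goes_to_assistant(Message(False, "see you at 8?"))
```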

Data collection for AI training is therefore limited to the data users provide during interactions with the tool, such as prompts, questions and ratings, as well as public information available on the internet. "Conversations with AI are different from your personal conversations. When you use these features, Meta receives your prompts and the messages sent to the AI and evaluates them to provide you with relevant responses and to improve the quality of this technology," says the company.

The company also claims that Meta AI does not link personal data from the WhatsApp account to user data on other Meta platforms, such as Facebook and Instagram. According to the company, these clarifications are presented to users directly in the application, before and while using Meta AI on WhatsApp.

Data storage and AI removal

But there are still "gray areas". Some aspects of Meta AI's privacy terms raise doubts. The main one involves the use of data for AI training. Although Meta claims that data collection for AI training occurs only with user consent, how that consent is obtained, and how transparent the company is about what data is collected and how it is used, remain open questions.

The terms do not make it clear whether everything said to Meta AI is used or if only parts of conversations are used. There is also no way to know if it has some kind of “memory” that stores data indefinitely for future training of the system.

When asked about the matter, Meta did not say whether all information is used or how long it stores such data. On its blog, the company says only that the data exchanged with the AI on WhatsApp is used to train it, and advises against sending messages containing information the user does not wish to share with the AI. Meta also states that the AI is trained to limit the sharing of information about people, such as names, in other conversations.

According to Lucas Maldonado, a digital law specialist at FGV, "all essential information relating to data processing must be provided by the data controllers in a clear and accessible way to data subjects". In this case, it was not specified whether all the data shared with the AI or only selected data is used to train the tool.

Another point that deserves attention is the inability to remove Meta AI from WhatsApp. The ANPD told Estadão that there has been no agreement with Meta in this regard and that the presence of Meta AI in the application does not violate the LGPD, as long as data collection for AI training is carried out with users' consent. "There has been no agreement between the ANPD and Meta in this regard," the authority said.

So even though the AI does not read your personal conversations, listen to your audio or see your images, it will always be there, available to be used, and impossible to remove. To protect themselves, users can exercise the right to object to the use of their data for training Meta's AI by filling out a form made available by the company itself.

In the case of WhatsApp, the request is validated by confirming the phone number linked to the account. Users whose request is accepted will have their messages excluded from Meta's generative AI training, according to the company. It is important to highlight that each Meta platform has its own form, which must be filled out individually if the user wishes to exercise this right on Facebook and Instagram as well as on WhatsApp. Estadão found that even after the use of data was refused, the AI continued to operate, without collecting information. In other words, a person can exercise the right to object and still use the assistant. Meta had not commented on the matter by the time this article was published.

Maldonado explains that the LGPD guarantees all data subjects the right to object to the processing of personal data. "People who do not want their data used for Meta AI training can exercise this right to object through the forms made available by the company itself. The law guarantees users the right to object to the collection and use of their personal data."

Source: Terra
