A man is sentenced to 18 years in prison for creating and selling pornographic deepfakes of children


The recent conviction of Hugh Nelson, a 27-year-old from the UK, highlights the legal consequences of misusing emerging technologies. Nelson was sentenced to 18 years in prison for creating and distributing deepfake child sexual abuse images manipulated using artificial intelligence (AI), in a case British authorities described as historic.

Jeanette Smith, a prosecutor with the UK’s Crown Prosecution Service (CPS), said the decision underlines that the law applies both to real photographs and to images generated with AI technology. Nelson, a graphic design student, admitted making around £5,000 over 18 months by selling artificially generated sexually explicit images of children.

What are the legal implications of using artificial intelligence in crimes?

Nelson’s case is a milestone that underscores the urgent need to adapt investigative practices to new forms of cybercrime. Jen Tattersall, of Manchester Police, stresses that investigators must stay up to date on these techniques; otherwise, criminals may come to believe that modern technology guarantees them impunity, as Nelson wrongly assumed.

Carly Baines, a detective from the same police division, underlines that even though the images are computer-generated, behavior of this type must be strictly policed. The decision sets an important precedent for future legal action involving cybercrime.

What are deepfakes and how is this technology used?

The technique known as deepfake uses artificial intelligence to edit videos and photos. With it, a person’s face or voice can be altered in audiovisual media, creating misleading material from real content. The practice has raised particular concern over the production of fake pornographic videos, which can deeply damage the reputations of those depicted.

In a 2020 report, Sensity highlighted the circulation of fake nudes of thousands of women on the Internet. These tools, while innovative, have significant potential for misuse, requiring constant monitoring.

How can deepfakes be identified?

Recognizing deepfakes can be difficult, but some characteristics can indicate manipulated content. Inconsistencies in videos or images, such as unnatural movements or misaligned facial expressions, are warning signs. Additionally, the audio may sound robotic or have strange intonations.

  • Look for strange eye movements or unnatural, inconsistent lighting in images.
  • Use specialized software that examines the digital traces left by modified content (a minimal sketch of one such check follows this list).
  • Check the information against reliable sources, especially if the content seems surprising or alarming.
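
For readers who want to experiment, one simple automated check is error-level analysis (ELA): the image is re-saved at a known JPEG quality and compared with the original, so that edited or synthesized regions, which often recompress differently, show up as brighter areas. The sketch below is only an illustration using the Pillow library; the file names are hypothetical, and ELA is a rough heuristic rather than a definitive deepfake detector.

```python
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an amplified difference between the original and a re-saved copy.

    Edited or AI-generated regions often recompress differently from the
    rest of the photo, so they tend to appear brighter in the result.
    """
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG at a fixed quality level.
    resaved_path = "ela_resaved.jpg"  # temporary file; the name is arbitrary
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path).convert("RGB")

    # Pixel-wise difference between the original and the re-saved copy.
    diff = ImageChops.difference(original, resaved)

    # Scale brightness so the strongest difference maps to full white,
    # making subtle compression artifacts easier to see.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    # "suspect_photo.jpg" is a placeholder for the image being examined.
    error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```

Forensic investigators rely on far more sophisticated tools and metadata analysis in practice; a single heuristic like this should never be treated as proof either way.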

Source: Terra
