How the FBI once feared deepfakes


Some officials at the US Federal Bureau of Investigation (FBI) admitted they were unable to detect images manipulated with deepfake technology.

The information comes from a series of internal FBI emails obtained and published by Motherboard last week. When deepfakes emerged in 2018, particularly videos grafting one person's face onto another person's body, the FBI worried the technology would be exploited in ways that could undermine surveillance and criminal investigations.

"Can we detect this effectively?", an expert at the FBI's technology department sent an email in July 2018, attaching a Washington Post article about the risk of fake news crisis due to deepfake. .

"No," another responded.

Deepfake is a portmanteau of "deep learning" and "fake". The technology uses AI to analyze a person's gestures, facial expressions, and voice, then recreates and edits them to produce a realistic-looking photo or video of that person, complete with synthesized speech.
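For illustration, the core idea behind the early face-swap deepfakes described here was an autoencoder with one shared encoder and a separate decoder per identity. The PyTorch sketch below shows that swap trick under assumed, made-up layer sizes; it is a conceptual illustration, not the implementation of any specific product the FBI discussed.

```python
# A minimal sketch (not any vendor's actual code) of the classic autoencoder
# face-swap idea: a shared encoder learns a common face representation, and
# each identity gets its own decoder. The swap decodes person A's encoding
# with person B's decoder. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # shared face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): reconstruct each person's faces via their own decoder.
face_a = torch.rand(1, 3, 64, 64)  # placeholder for a cropped, aligned face
loss = nn.functional.mse_loss(decoder_a(encoder(face_a)), face_a)

# The "swap": encode person A's face, then decode it as person B.
swapped = decoder_b(encoder(face_a))
```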

Deepfake content spread widely on social networks in 2019. Photo: Petapixel

In some emails, experts mentioned specific products, though the names were not disclosed, and warned of scenarios that could seriously disrupt the FBI's work. The emails also show that agents studied the face-swap features closely. At the time, many deepfake face-swap tools were distributed as smartphone apps that anyone could use.

"While everyone is doing this for trivial purposes, better tools may be being applied to video surveillance or facial recognition," a January 2018 email read, saying: shows the FBI's concern with new technology.

Five years on, deepfake detection has made significant progress. Intel recently announced a solution it says can detect deepfakes with 96% accuracy. However, according to Vice, as with announcements from other vendors, accuracy figures in this field are "very difficult to verify".
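Many published detectors are, at their simplest, binary classifiers over cropped face frames. The sketch below shows that generic approach; it is an assumption-laden illustration, not Intel's method, which reportedly analyzes subtle blood-flow signals in facial video. The model architecture, input size, and decision convention are all hypothetical.

```python
# A minimal sketch of one common detection approach: a per-frame binary
# classifier that scores a cropped face as real or fake. This is a generic
# illustration, not Intel's actual technique; all sizes are assumptions.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 128x128 -> 64x64
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 64x64 -> 32x32
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # global average pooling
    nn.Flatten(),
    nn.Linear(32, 1),                           # logit: "fake" score
)

frame = torch.rand(1, 3, 128, 128)  # placeholder for a cropped face frame
prob_fake = torch.sigmoid(detector(frame)).item()
print(f"estimated probability of being fake: {prob_fake:.2f}")
```

In practice such a classifier would be trained on large labeled datasets of real and synthesized faces, and its reported accuracy depends heavily on how that test data is chosen, which is one reason figures like Intel's 96% are hard to verify independently.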

The FBI has not commented on the emails. Last month, the agency also warned about sextortion schemes carried out using deepfakes.

"Malicious actors exploit photos and videos, often from individuals' social media accounts, then use content-altering technology, turning them into sexual themes related to the victim," FBI write. The content is then sent to the victim for blackmail, harassment, or distributed on social networks, forums, and pornographic websites. "Victims will face significant challenges in blocking or removing them from the Internet," the organization said.


