Deepfake is a technique that uses artificial intelligence to create fake videos, images and audio of real people. It is widely used in films, advertising and online content, sometimes to recreate celebrities or other public figures.
For some people, it may be entertaining to have a conversation with Albert Einstein or to see a favourite singer or actor who has passed away appear in a new video clip or film. Still, the matter is more serious than it appears.
The problem is all the more serious because today a smartphone is all you need to create deepfake content. Having this tool at everyone's fingertips can be very dangerous, depending on how it is used, and can cause considerable damage.
What are the consequences of inappropriate use?
This technique is on the rise, and its inappropriate use can cause severe damage when the purpose is to distort elections, manipulate public opinion and put democracy at risk.
It is also being used to spread fake news, to create non-consensual face-swap pornography, and to commit fraud.
This technique also raises legal problems related to image rights, a right protected at the European level and recognized in Article 8 of the European Convention on Human Rights and in Articles 7 and 8 of the Charter of Fundamental Rights of the European Union.
There is currently no European or national law to regulate and tackle the fraudulent use of this technique. The European Commission did publish, in February 2020, plans for handling "high-risk" applications of Artificial Intelligence2, but nothing has been officially adopted so far.
However, there is one instrument in the European Union that addresses some of these problems: the Code of Practice on Disinformation3, which pursues the goals set out in the Commission's Communication of April 2018 through a wide range of commitments, from transparency in political advertising to the closure of fake accounts and the demonetization of purveyors of disinformation.
Fortunately, in a major step by the EU towards regulating the use of Artificial Intelligence, the European Commission published on April 21, 2021 its Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts4.
The goal is to guarantee the safety and fundamental rights of people and businesses while strengthening users' trust in Artificial Intelligence and upholding European rules and values. Article 52(1) of this Regulation deserves particular attention here, since it provides that: "Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system […]". Yet here we face the same problem: this obligation only applies to certain AI systems subject to specific transparency obligations.
How to deal with this problem?
Deepfakes could become the next major cybersecurity threat, so even while regulation is underway, these new situations will require a specific and immediate response from the EU.
In its recent report of November 19, 20205, Europol warns about the widespread use and abuse of deepfakes and recommends urgent action in response, because those behind the fraudulent use of this technique can develop and adopt new tactics to evade detection measures.
Likewise, beyond the need for regulation, there should be greater public awareness of fake news and disinformation. It is especially essential to design efficient and robust technology to detect malicious uses of deepfakes and stay one step ahead.