A New Way to Stop Deepfakes?
So Denmark seems poised to pass a new bill that would give each person exclusive rights over their own likeness, including facial features, body, and voice. This effectively treats these personal attributes as a form of intellectual property, making unauthorized deepfakes actionable under copyright law.
An individual whose likeness has been misused in a deepfake would be able to demand removal of the offending content and seek compensation for damages. Online platforms, in turn, would be legally obligated to take the content down upon notification.
The bill seems to have fairly widespread public support, but I'm wondering what builders think of this approach. It could create some legal headaches (as copyright law often does), which may be why most anti-deepfake laws so far have gone after the people creating non-consensual deepfakes, threatening fines and prison sentences through criminal law. Personally, though, I find this approach of using copyright law to stop distribution on online platforms very creative, and I'm hoping more countries try something like it.


Replies
Fakeradar
We share your view. In addition to legal measures, there needs to be a reliable technical tool that can distinguish a deepfake from a real person. Our team has been working on this challenge for over a year.
I believe that distinguishing an AI-generated image from an original one (such as a photo taken with a phone, a scanned document, or an image created in a graphics editor) cannot yet be done with a sufficient level of certainty. However, when it comes to video conferences, the outlook is much more optimistic. At the current stage of AI development, it is indeed possible to protect video conference participants from impostors using deepfake technology.
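One reason the video-conference setting is more tractable is that it is interactive: you can issue unpredictable prompts in real time. As a purely illustrative sketch (not Fakeradar's actual method; all names and the challenge list are hypothetical), a challenge-response liveness check might look like this:

```python
import secrets

# Hypothetical challenge-response liveness check for a video call.
# Idea: a live participant can follow a random, unpredictable prompt in
# real time, while a pre-rendered or laggy deepfake pipeline struggles to.
CHALLENGES = ["turn head left", "turn head right", "cover one eye", "raise hand"]

def issue_challenge() -> str:
    """Pick a cryptographically random challenge so it can't be pre-recorded."""
    return secrets.choice(CHALLENGES)

def verify_response(challenge: str, observed_action: str) -> bool:
    """In a real system, observed_action would come from a vision model
    analysing the participant's feed; here it is passed in directly."""
    return observed_action == challenge

def liveness_check(get_action, rounds: int = 3) -> bool:
    """Run several rounds; every round must pass to trust the participant."""
    for _ in range(rounds):
        challenge = issue_challenge()
        if not verify_response(challenge, get_action(challenge)):
            return False
    return True
```

For example, `liveness_check(lambda c: c)` models a participant who follows every prompt and passes, while `liveness_check(lambda c: "no response")` fails on the first round. A real deployment would replace the string comparison with a pose or gesture classifier, which is the genuinely hard part.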
Have you encountered cases in your work where impostors tried to infiltrate a video conference using deepfakes?