Aroosa Virk

Is liveness detection enough to block deepfakes, or do you need behavioral signals too?

Hi everyone,

We’ve been seeing more sophisticated deepfake attempts lately, especially ones that retry multiple times with tiny changes. It made us wonder:

  • Is passive liveness detection really enough to stop them?

  • We’re exploring behavioral signals (like facial patterns, micro-expressions, blinking, and pupil movement) as an added layer to detect deepfakes and synthetic media, but I’d love to know how others here are approaching this.

Are you relying solely on liveness, or combining it with behavioral intelligence to spot repeaters?

Open to any thoughts, ideas, or examples of what’s worked for you.

Really curious to learn from others in this space.
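On the "retries with tiny changes" point: one lightweight way to spot repeaters is to fingerprint each attempt with a perceptual hash and flag new attempts that land within a few bits of a recent one. A minimal sketch, assuming face crops are already resized to 8×8 grayscale; the names `ahash` and `is_repeat` are illustrative, not any product's API:

```python
def ahash(pixels):
    """Average hash: one bit per pixel, set if the pixel is >= the frame mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_repeat(new_hash, recent_hashes, max_distance=6):
    """Flag an attempt whose hash is within `max_distance` bits of a prior one."""
    return any(hamming(new_hash, h) <= max_distance for h in recent_hashes)
```

Small pixel-level tweaks barely move the hash, so near-identical resubmissions cluster together, while a genuinely different face or scene lands far away. It's only a first-pass signal, not a deepfake detector in itself.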

Replies

Artem Anikeev

Our team has been fighting deepfakes in video conferences for over a year now, and during this time we’ve achieved very strong results. Microexpressions, blinking, pupil movements, the appearance of capillaries — all of these are analyzed by our neural network, which then decides whether there’s a real person in front of the camera or not. If a person’s face has been replaced through face swap, we classify that person as not real.

From our perspective, liveness detection (both passive and active) is more about identifying physical spoofing attempts — for example, when someone attaches a printed photo of the person they’re trying to impersonate. That’s a clear problem, and it’s already well-solved with many existing products. We separate deepfake detection into its own distinct class of liveness tasks.
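Treating deepfake detection as its own class, separate from physical-spoof liveness, also suggests gating the two independently rather than averaging them into one score (an average would let a strong liveness result mask a weak behavioral one). A minimal sketch of that decision logic, with made-up field names and thresholds purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class SessionScores:
    liveness: float  # passive/active liveness, 0..1, higher = more likely live
    behavior: float  # behavioral deepfake detector, 0..1, higher = more likely genuine

def decide(scores, liveness_min=0.7, behavior_min=0.6):
    """Two independent gates: physical spoofing and face-swap/deepfake
    each have their own threshold and rejection reason."""
    if scores.liveness < liveness_min:
        return "reject: physical spoof suspected"
    if scores.behavior < behavior_min:
        return "reject: deepfake suspected"
    return "accept"
```

Separate rejection reasons also make it easier to audit which layer is catching what, which matters when tuning thresholds per channel.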

In which behavioral scenarios do you see the need for protection against deepfakes?