This year’s third-place winner of the Joseph Fourier Prize is Anton Firc, a PhD student at FIT BUT. His research focuses on the security implications of voice deepfakes. It’s an issue that concerns most of us: simply put, anyone who uses a computer or mobile phone connected to the internet. The reason is that today, theoretically, anyone with basic technical equipment, the kind they could even have at home, can create a synthetic recording that the average person cannot tell apart from a real human voice. Reliable deepfake detection is a major challenge, and it is also a key focus of Anton Firc’s research. What does he see as the biggest “nut to crack” in his work, and what motivates him? We asked him about that, too.
It all started basically by chance. During my master’s studies, I was thinking about what thesis topic to choose. I knew I wanted to work in cybersecurity, and I had identified someone who I thought could be a good supervisor, both professionally and personally. Kamil Malinka offered me several topics I could pursue, and one of them was deepfakes. I looked up more detailed information, and the potential of a technology that can, figuratively speaking, create another person really intrigued me. You could say I found my place in the topic; it genuinely started to interest me. Back then, around 2020, it was an unexplored field, and whatever we did was, in a way, new. Moving on to a PhD was then just a natural step. The topic still fascinates me today, and let’s face it, there’s a certain societal “hype” around it, a demand from the public.
No. Back then, we were, simply put, ahead of the curve. When we started sharing our research with parties who might be interested, such as the police and banks, the response was always that it was interesting but not a current concern or a real problem. We knew that would change, though. And today, security agencies are indeed reaching out to us more and more often for lectures, tools, and the like.
Unfortunately, we haven’t yet reached the worst point. That will come with automation and large-scale attacks, where automated systems will, for example, call seniors en masse and pretend to be grandchildren in distress. We’re not there yet. Paradoxically, things will have to get worse before major tech players respond in a more substantial way. And then, of course, another type of risk will emerge—one that will keep us security folks on our toes.
It’s definitely an obstacle, but we don’t see it as an extreme problem. This aspect has been known for a long time, and anyone developing “defensive methods” goes into it knowing that users often aren’t motivated to behave securely. So the task is to find ways to minimize the chance of a wrong user decision. Educating users definitely makes sense, but solutions also have to account for the fact that people won’t always do the right thing. One factor is how clearly detection tools communicate their findings: how to inform users about a deepfake detection result and the appropriate follow-up action in a way that is understandable and helps them make a reasonable decision. And we also have to remember that the technology itself can fail.
Going after awards wasn’t and isn’t a specific motivation for me. This time, too, we simply decided to give it a try; it wasn’t something planned. I’m very grateful for the award and thrilled about it. I value it as proof of success at the national level and as recognition of the quality of my scientific work. There has been a lot of buzz around our research lately: people are requesting lectures, and the public is interested in the topic. That brings time demands; even a half-hour talk in another region of the country means a full day invested. If it didn’t make sense to us socially, we wouldn’t do it. Personally, I’m now at the stage of finishing my PhD, and knowing that we’re addressing a problem that affects a large part of the public is a huge motivation to continue this work. Another motivation, of course, is the way our research team works together.
Source: Faculty of Information Technology, BUT