"The voice is natural, but when it comes to dialogue, the dialogue doesn't go as well, it doesn't go as well as when people talk to each other. The automaton doesn't give you space, it doesn't wait, there are weird delays, it doesn't let you jump into its speech, it behaves differently than when two real people talk to each other. But we're going to see more and more of that, and it's going to be dangerous. Because, for example, with the help of the voice of your loved ones you can attack very well."Today, we often read fraudulent text messages about having to pay for a shipment with great caution, but what if you pick up the phone and your grandmother is on the other end asking for help? From just a few seconds of recording, artificial intelligence can create a voice almost indistinguishable from the original speaker. So a fake president can speak to the masses when a candidate in an election creates compromising material on an opponent. But equally, we are approaching a time when we probably won't be able to be confident even when talking to our loved ones. Of course, there are upsides to artificial intelligence. Scientists are working on tools that will detect such scams, we can communicate more easily across the world or be faster in dealing with emergency calls and disaster management. Jan Černocký from FIT BUT, who with his team is one of the world's top researchers in the field of speech data mining, came to the studio to talk about deepfake voices, cooperation with US intelligence services, playing gangsters, and why AI cannot detect sadness from voices.
You can listen to the next episode of the podcast at www.vut.cz/podcast (in Czech only) or on any of the usual platforms (e.g. Spotify, Apple Podcasts, etc.).

Author: Václav Koníček
Responsibility: Mgr. Marta Vaňková