
PLEASE NOTE THE CHANGE OF ROOM FROM Bowland North SR02 TO Bowland North SR20

Recent technological advances mean that AI-generated speech has rapidly improved in quality, to the point where it can be virtually indistinguishable from genuine human speech. This opens the door to a new breed of cybercrime: one which utilises AI to create fraudulent representations of a target's speech in an attempt to deceive listeners into believing that they are genuine. Curbing this deception is presently difficult, since the conditions that determine accurate discrimination between genuine and synthetic speech are little understood. This talk examines one factor: a potential victim's familiarity with the speaker's voice. When the speaker's voice is well known to a listener (e.g. a celebrity or loved one), does this affect the listener's ability to recognise an AI-generated sample of that speaker's voice? If so, why? What other factors may also be at play, and how might they interact with familiarity to influence the potential victim's performance? The insights gained by addressing these questions begin to give us the tools to mitigate AI-mediated cybercrime.


Date:
Thursday, October 31, 2024
Time:
15:00 - 16:00
Location:
Bowland North SR20
Presenter:
Hope McVean
Type:
Talk / Public Lecture
Categories:
Events - Campus and Community Life, Events - Doctoral Academy, Events - Staff Channel, Events - Student Channel, Library Channel, Web - Embrace Digital Staff, Web - Embrace Digital Students

This talk will be given by FACTOR PhD student Hope McVean (LAEL). Hope's thesis investigates whether attempted frauds carried out using spoofed voices are more likely to fail because of the quality of the AI-generated speech or the quality of the content.

Registration is now closed. See the events page for details of future sessions.
