Utilization of Voice Embeddings in Integrated Systems for Speaker Diarization and Malicious Actor Detection

2024, pp. 54–66
1 Lviv Polytechnic National University, Department of Information Security
2 Lviv Polytechnic National University, Department of Measuring Information Technologies
3 Lviv Polytechnic National University, Department of Information Security
4 Lviv Polytechnic National University, Department of Information-Measurement Technologies
5 Lviv Polytechnic National University, Department of Information Security
6 Lviv Polytechnic National University

This paper explores the use of diarization systems, which employ advanced machine learning algorithms to detect and separate individual speakers in audio recordings, as the basis for an intruder detection system. Several state-of-the-art diarization models, including Nvidia's NeMo, Pyannote and SpeechBrain, are compared. Their performance is evaluated with the standard diarization metrics: diarization error rate (DER) and Jaccard error rate (JER). The models were tested under varied audio conditions, including noisy and clean recordings and both small and large numbers of speakers. The findings show that Pyannote delivers the highest diarization accuracy, so it was used to implement the intruder detection system. This system was further evaluated on a custom dataset of Ukrainian podcasts, where it achieved 100 % recall and 93.75 % precision: it missed no criminal in the dataset, but occasionally flagged a non-criminal as a criminal. The system proves effective and flexible for intruder detection in audio files of varying length and with varying numbers of speakers.
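For illustration, the following is a minimal sketch of how a diarization hypothesis can be scored with DER and JER using the Pyannote toolchain; the checkpoint name, file names and access token are assumptions for the example rather than details taken from the paper.

from pyannote.audio import Pipeline
from pyannote.database.util import load_rttm
from pyannote.metrics.diarization import DiarizationErrorRate, JaccardErrorRate

# Load a pretrained diarization pipeline (checkpoint name and token are assumptions).
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1",
                                    use_auth_token="HF_TOKEN")

# Diarize a test recording (hypothetical file name).
hypothesis = pipeline("episode_01.wav")

# Ground-truth annotation from an RTTM file whose URI matches the recording name.
reference = load_rttm("episode_01.rttm")["episode_01"]

# Score with the two metrics used in the paper: DER and JER.
der = DiarizationErrorRate()(reference, hypothesis)
jer = JaccardErrorRate()(reference, hypothesis)
print(f"DER = {der:.3f}, JER = {jer:.3f}")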

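The intruder detection step relies on comparing voice embeddings of diarized speakers against enrolled voiceprints of known persons of interest. Below is a minimal sketch of that idea, assuming a SpeechBrain ECAPA-TDNN embedding model and a cosine-similarity threshold; the model choice, threshold value and file layout are illustrative assumptions, not the paper's exact implementation. Precision and recall over the flagged speakers are then computed as reported in the abstract.

import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

# Pretrained ECAPA-TDNN speaker-embedding model (an assumed model choice).
encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")

def embed(path):
    """Return one speaker embedding for an audio file or diarized segment
    (assumes 16 kHz input; resample beforehand otherwise)."""
    waveform, _ = torchaudio.load(path)
    return encoder.encode_batch(waveform).squeeze()

# Voiceprints of known intruders (hypothetical enrollment recordings).
watchlist = {"intruder_A": embed("enroll/intruder_A.wav"),
             "intruder_B": embed("enroll/intruder_B.wav")}

def is_intruder(segment_path, threshold=0.6):
    """Flag a diarized speaker if its embedding matches any enrolled voiceprint."""
    emb = embed(segment_path)
    scores = [torch.nn.functional.cosine_similarity(emb, ref, dim=0).item()
              for ref in watchlist.values()]
    return max(scores) >= threshold

def precision_recall(predicted, actual):
    """Precision and recall over per-speaker intruder flags (booleans)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall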