
Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning

Full Text (939.5 KB)

Date

2022

Author

Ayvaz, Uğur
Gürüler, Hüseyin
Khan, Faheem
Ahmed, Naveed
Whangbo, Taegkeun


Citation

Ayvaz, U., Gürüler, H., Khan, F., Ahmed, N., Whangbo, T. et al. (2022). Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning. CMC-Computers, Materials & Continua, 71(3), 5511–5521.

Abstract

Automatic speaker recognition (ASR) systems belong to the field of human-machine interaction, and researchers have long used feature extraction and feature matching methods to analyze and synthesize speech signals. One of the most commonly used feature extraction methods is Mel-Frequency Cepstral Coefficients (MFCCs). Recent research shows that MFCCs process voice signals with high accuracy; they represent a sequence of voice-signal-specific features. This experimental analysis is proposed to distinguish Turkish speakers by extracting MFCC-style features from speech recordings. Since human perception of sound is not linear, after the filterbank step of the MFCC method we converted the obtained log filterbanks into decibel (dB) spectrograms without applying the Discrete Cosine Transform (DCT). A new dataset was created by converting each spectrogram into a 2-D array. Several learning algorithms were evaluated with 10-fold cross-validation to detect the speaker. The highest accuracy, 90.2%, was achieved using a Multi-layer Perceptron (MLP) with the tanh activation function. The most important output of this study is the inclusion of the human voice as a new feature set.
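
Below is a minimal sketch of the pipeline described in the abstract, assuming librosa for the mel filterbank and dB conversion and scikit-learn for the MLP with 10-fold cross-validation. The file paths, speaker labels, filterbank size, excerpt length, and network size are hypothetical placeholders, not the paper's actual settings.

    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    def db_filterbank_features(wav_path, sr=16000, duration=2.0, n_mels=40):
        # Load a fixed-length excerpt so every recording yields the same feature size.
        signal, _ = librosa.load(wav_path, sr=sr, duration=duration)
        signal = librosa.util.fix_length(signal, size=int(sr * duration))
        # Mel filterbank energies converted to decibels; the DCT step of a full
        # MFCC computation is deliberately skipped, as in the abstract.
        mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=n_mels)
        mel_db = librosa.power_to_db(mel)
        # Flatten the 2-D (mel bands x frames) spectrogram into one feature vector.
        return mel_db.flatten()

    # Placeholder file list and speaker labels; a real run needs several
    # recordings per speaker for 10-fold cross-validation to be meaningful.
    wav_files = ["speaker01_utt01.wav", "speaker02_utt01.wav"]
    labels = np.array([0, 1])

    X = np.array([db_filterbank_features(f) for f in wav_files])

    # Multi-layer Perceptron with the tanh activation, scored by 10-fold CV.
    mlp = MLPClassifier(hidden_layer_sizes=(100,), activation="tanh",
                        max_iter=500, random_state=0)
    scores = cross_val_score(mlp, X, labels, cv=10)
    print("mean accuracy: %.3f" % scores.mean())

Keeping the dB-scaled log filterbanks and omitting the DCT mirrors the choice stated in the abstract; whether to flatten the 2-D spectrogram or feed it to a convolutional model is a design choice left open here.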

Source

Computers, Materials and Continua

Volume

71

Issue

3

URI

https://doi.org/10.32604/cmc.2022.023278
https://hdl.handle.net/20.500.12809/9795

Collections

  • Bilişim Sistemleri Mühendisliği Bölümü Koleksiyonu [75]
  • Scopus İndeksli Yayınlar Koleksiyonu [6219]
  • WoS İndeksli Yayınlar Koleksiyonu [6466]


