Triplet entropy loss: improving the generalisation of short speech language identification systems

dc.contributor.advisor: Er, Sebnem
dc.contributor.author: Van Der Merwe, Ruan Henry
dc.date.accessioned: 2021-09-16T10:51:14Z
dc.date.available: 2021-09-16T10:51:14Z
dc.date.issued: 2021
dc.date.updated: 2021-09-16T10:50:31Z
dc.description.abstract: Spoken language identification systems form an integral part of many speech recognition tools today. Over the years many techniques have been used to identify the language spoken, given just the audio input, but in recent years the trend has been to use end-to-end deep learning systems. Most of these techniques involve converting the audio signal into a spectrogram, which can be fed into a Convolutional Neural Network (CNN) that then predicts the spoken language. This technique performs very well when the data being fed to the model originates from the same domain as the training examples, but as soon as the input comes from a different domain these systems tend to perform poorly. An example would be a system trained on WhatsApp recordings but put into production in an environment where it receives recordings from a phone line. The research presented investigates several methods to improve the generalisation of language identification systems to new speakers and to new domains. These methods involve spectral augmentation, where spectrograms are masked in the frequency or time bands during training, and CNN architectures that are pre-trained on the ImageNet dataset. The research also introduces the novel Triplet Entropy Loss training method, which involves training a network simultaneously using Cross Entropy and Triplet loss. Several tests were run with three different CNN architectures to investigate the effect all three of these methods have on the generalisation of an LID system. The tests were done in a South African context on six languages, namely Afrikaans, English, Sepedi, Setswana, Xhosa and Zulu. The two domains tested were data from the NCHLT speech corpus, used as the training domain, with the Lwazi speech corpus being the unseen domain. It was found that all three methods improved the generalisation of the models, though not significantly.
Even though the models trained using Triplet Entropy Loss showed a better understanding of the languages and higher accuracies, it appears that the models still memorise word patterns present in the spectrograms rather than learning the finer nuances of a language. The research shows that Triplet Entropy Loss has great potential and should be investigated further, not only in language identification tasks but in any classification task.
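The abstract describes Triplet Entropy Loss as training a network simultaneously with Cross Entropy and Triplet loss. A minimal NumPy sketch of such a combined objective is given below; the function names and the balancing weight `alpha` are illustrative assumptions, not details taken from the thesis itself.

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross entropy for a single example.
    z = logits - logits.max()                      # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Embedding-space objective: pull the anchor toward a clip of the
    # same language (positive) and push it away from a clip of a
    # different language (negative), up to a margin.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def triplet_entropy_loss(logits, label, anchor, positive, negative,
                         alpha=1.0, margin=0.2):
    # Weighted sum of the two objectives; alpha is an assumed
    # balancing hyperparameter.
    return cross_entropy(logits, label) + alpha * triplet_loss(
        anchor, positive, negative, margin)
```

In practice both terms would be computed from the same CNN forward pass (classification logits plus an embedding layer) and backpropagated jointly; the sketch only shows the scalar loss arithmetic.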
dc.identifier.apacitation: Van Der Merwe, R. H. (2021). <i>Triplet entropy loss: improving the generalisation of short speech language identification systems</i>. Faculty of Science, Department of Statistical Sciences. Retrieved from http://hdl.handle.net/11427/33953
dc.identifier.chicagocitation: Van Der Merwe, Ruan Henry. <i>"Triplet entropy loss: improving the generalisation of short speech language identification systems."</i> Faculty of Science, Department of Statistical Sciences, 2021. http://hdl.handle.net/11427/33953
dc.identifier.citation: Van Der Merwe, R.H. 2021. Triplet entropy loss: improving the generalisation of short speech language identification systems. Faculty of Science, Department of Statistical Sciences. http://hdl.handle.net/11427/33953
dc.identifier.ris: TY - Master Thesis AU - Van Der Merwe, Ruan Henry AB - Spoken language identification systems form an integral part of many speech recognition tools today. Over the years many techniques have been used to identify the language spoken, given just the audio input, but in recent years the trend has been to use end-to-end deep learning systems. Most of these techniques involve converting the audio signal into a spectrogram, which can be fed into a Convolutional Neural Network (CNN) that then predicts the spoken language. This technique performs very well when the data being fed to the model originates from the same domain as the training examples, but as soon as the input comes from a different domain these systems tend to perform poorly. An example would be a system trained on WhatsApp recordings but put into production in an environment where it receives recordings from a phone line. The research presented investigates several methods to improve the generalisation of language identification systems to new speakers and to new domains. These methods involve spectral augmentation, where spectrograms are masked in the frequency or time bands during training, and CNN architectures that are pre-trained on the ImageNet dataset. The research also introduces the novel Triplet Entropy Loss training method, which involves training a network simultaneously using Cross Entropy and Triplet loss. Several tests were run with three different CNN architectures to investigate the effect all three of these methods have on the generalisation of an LID system. The tests were done in a South African context on six languages, namely Afrikaans, English, Sepedi, Setswana, Xhosa and Zulu. The two domains tested were data from the NCHLT speech corpus, used as the training domain, with the Lwazi speech corpus being the unseen domain. It was found that all three methods improved the generalisation of the models, though not significantly.
Even though the models trained using Triplet Entropy Loss showed a better understanding of the languages and higher accuracies, it appears that the models still memorise word patterns present in the spectrograms rather than learning the finer nuances of a language. The research shows that Triplet Entropy Loss has great potential and should be investigated further, not only in language identification tasks but in any classification task. DA - 2021 DB - OpenUCT DP - University of Cape Town KW - Statistical Sciences LK - https://open.uct.ac.za PY - 2021 T1 - Triplet entropy loss: improving the generalisation of short speech language identification systems TI - Triplet entropy loss: improving the generalisation of short speech language identification systems UR - http://hdl.handle.net/11427/33953 ER -
dc.identifier.uri: http://hdl.handle.net/11427/33953
dc.identifier.vancouvercitation: Van Der Merwe RH. Triplet entropy loss: improving the generalisation of short speech language identification systems. Faculty of Science, Department of Statistical Sciences; 2021 [cited yyyy month dd]. Available from: http://hdl.handle.net/11427/33953
dc.language.rfc3066: eng
dc.publisher.department: Department of Statistical Sciences
dc.publisher.faculty: Faculty of Science
dc.subject: Statistical Sciences
dc.title: Triplet entropy loss: improving the generalisation of short speech language identification systems
dc.type: Master Thesis
dc.type.qualificationlevel: Masters
dc.type.qualificationname: MSc
Files
Original bundle (1 of 1)
- Name: thesis_sci_2021_van der merwe ruan henry.pdf
  Size: 18.46 MB
  Format: Adobe Portable Document Format
License bundle (1 of 1)
- Name: license.txt
  Size: 0 B
  Format: Item-specific license agreed upon to submission
Collections