Evaluation of different Support Vector Machines (SVM) for speaker identification
Thesis / Dissertation
2004
Permanent link to this Item
Authors
Supervisors
Publisher
Department
License
Series
Abstract
This study investigates four support vector machine (SVM) kernels. SVMs have gained wide acceptance in classification tasks since their inception in the 1990s. The central feature of an SVM is the implicit mapping of input data into a higher-dimensional feature space, achieved through the use of kernel functions. Popular kernel functions include the Gaussian, polynomial, sigmoid and linear kernels, though this list is by no means exhaustive. The work in this thesis compares these four kernels. Attaining maximum performance with an SVM requires optimizing the hyperparameters embedded in the kernel function. The experimental results indicate that the linear kernel performed the worst of the four, which can be attributed to the fact that the hyperplane separating the classes of data is not linear. The other three kernels achieved comparable performance on each data set considered. The results also show that the Gaussian kernel took excessive time to converge, a behaviour also reported in [52]. SVM was then applied in a hybrid GMM/SVM system using the optimized hyperparameters of each kernel function. The Gaussian SVM kernel provided the best performance, at the expense of computational time. The identification error rate of the hybrid system was further reduced by 7.7%.
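The four kernel functions named in the abstract can be written out directly. This is a minimal sketch of their standard forms; the hyperparameter names and default values below (gamma, degree, coef0) are common conventions used for illustration, not the optimized values from the thesis:

```python
import math

def dot(x, y):
    """Inner product of two feature vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))

def linear_kernel(x, y):
    # K(x, y) = <x, y>
    return dot(x, y)

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    # K(x, y) = (<x, y> + coef0)^degree
    return (dot(x, y) + coef0) ** degree

def gaussian_kernel(x, y, gamma=0.5):
    # RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

def sigmoid_kernel(x, y, gamma=0.1, coef0=0.0):
    # K(x, y) = tanh(gamma * <x, y> + coef0)
    return math.tanh(gamma * dot(x, y) + coef0)

if __name__ == "__main__":
    a, b = [1.0, 2.0], [2.0, 0.5]
    for name, k in [("linear", linear_kernel),
                    ("polynomial", polynomial_kernel),
                    ("gaussian", gaussian_kernel),
                    ("sigmoid", sigmoid_kernel)]:
        print(f"{name}: {k(a, b):.4f}")
```

Each function computes the implicit feature-space inner product without ever constructing the mapped vectors, which is the "kernel trick" the abstract refers to; note that the Gaussian kernel of any vector with itself is exactly 1.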
Description
Keywords
Reference:
Jhumka, R. 2004. Evaluation of different Support Vector Machines (SVM) for speaker identification. Faculty of Engineering and the Built Environment, Department of Electrical Engineering. http://hdl.handle.net/11427/40110