Browsing by Author "Tsoeu, Mohohlo"

  • Absolute electrical impedance tomography and spectroscopy: an Orthogonal Chirp Division Multiplexed (OCDM) approach (Open Access)
    (2021) America, Ezra Luke; Tsoeu, Mohohlo
    Absolute Electrical Impedance Tomography and Spectroscopy (aEITS) is a non-intrusive imaging technique that reconstructs images from estimates of the absolute internal impedance distribution of an object. Without a reference frame, however, it suffers from poor image quality when generic assumptions are used to form the prior information about the object. The problem is intensified when the chosen multiplexing technique introduces significant data inconsistencies. Recent attempts to solve this problem use data from earlier empirical studies that acquired Magnetic Resonance Imaging (MRI) scans; another approach uses statistical methods to estimate the boundaries of the expected internal domains of the object. These approaches improve the reconstructed images, but they either rely on data from other imaging modalities or continue to use a reference frame taken at an earlier time, so forming reliable priors without such external data remains a non-trivial problem. In this thesis, Orthogonal Chirp Division Multiplexed aEITS (OCDM-aEITS) is introduced as an alternative multiplexing technique. OCDM-aEITS applies orthogonal wideband chirp current waveforms simultaneously at all stimulation electrodes while the resultant boundary potentials are measured. From a single wideband measurement frame, a reference set, prior information, and several absolute images can be reconstructed. Consequently, there is no longer a need to acquire reference data from an earlier time or prior information from other imaging modalities. Furthermore, OCDM-aEITS overcomes some of the data inconsistencies of other multiplexing techniques (such as those caused by sequential stimulation, or the spikes introduced by fast pseudorandom pulse stimulation) while reconstructing images of quality comparable to those in the related literature. The experimental results of this thesis, obtained from reconstructed images of a phantom test tank containing biological specimens, achieved average position and size errors of 3.88% and 2.49%, respectively.
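
The core OCDM idea behind this abstract, independent of any EIT specifics, is that the rows of the discrete Fresnel transform matrix form a set of mutually orthogonal linear chirps, so all electrodes can be driven simultaneously and each channel recovered by correlation. The following is a minimal numerical sketch of that idea; the chirp length, electrode count, and amplitudes are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

def fresnel_chirps(N: int) -> np.ndarray:
    """Rows are N mutually orthonormal discrete linear chirps
    (the discrete Fresnel transform basis; this closed form assumes N even)."""
    assert N % 2 == 0
    n = np.arange(N)
    m = n[:, None]
    return np.exp(-1j * np.pi / 4) / np.sqrt(N) * np.exp(1j * np.pi * (m - n) ** 2 / N)

chirps = fresnel_chirps(64)
# Orthonormality check: Phi @ Phi^H should equal the identity matrix.
assert np.allclose(chirps @ chirps.conj().T, np.eye(64), atol=1e-9)

# Hypothetical simultaneous stimulation: electrode k drives amplitude a[k]
# on chirp k; the composite waveform is the sum over all electrodes.
rng = np.random.default_rng(0)
a = rng.normal(size=16)                              # 16 electrodes, say
composite = (a[:, None] * chirps[:16]).sum(axis=0)

# Each channel is recovered by correlating against its own chirp.
recovered = (chirps[:16].conj() * composite).sum(axis=1).real
assert np.allclose(recovered, a, atol=1e-9)
```

Because each chirp is wideband, a single such frame carries information across many frequencies at once, which is presumably what allows one measurement frame to supply both the reference set and the multi-frequency spectroscopic data described in the abstract.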
  • Vision-Based automatic translation for South African Sign Language (SASL) (Open Access)
    (2024) Setshekgamollo, Mokgadi; Verrinder, Robyn; Tsoeu, Mohohlo
    There are more than four million South Africans who are deaf and hard of hearing (DHH), yet most hearing people neither understand sign language nor know how to sign. This creates a communication barrier between the DHH and the hearing, to the disadvantage of the DHH. In 2018, South African Sign Language (SASL) became an official subject in South African schools, and in 2023 it became South Africa's 12th official language. However, neither measure obliges institutions or service providers to support it. Although some provision is made for DHH people in the form of sign language interpreters, interpreters are not always readily available and are costly, charging in excess of R500.00 per hour. In this research, we developed the first vision-based Neural Sign Language Translation model for SASL as a first step towards bridging the communication gap. To this end, we recorded a sizeable parallel SASL and English corpus with the help of six sign language interpreters, three of whom are native signers whose recordings constitute around 90% of the dataset. The dataset comprises 5047 sentences in the domain of government and politics, recorded in a studio setting with a uniform green background. At an average of 3.83 seconds per segment, this equates to around five hours of sign language data. We conducted comprehensive experiments with various visual feature extraction and translation architectures, and found that recurrent translation models outperform transformer models. We also investigated pretraining our feature extractor on a Continuous Sign Language Recognition task before fine-tuning on the SASL dataset, and found this effective for improving feature extraction. Our best models achieved a BLEU-4 score of 1.35 on the SASL test set, comparable to the best BLEU-4 score of 1.73 reported on the How2Sign dataset but much lower than the 13.23 achieved on the RWTH-PHOENIX-Weather 2014T dataset without gloss supervision. Our experiments also showed that annotating fingerspelled words as individual letters improves model performance. Our model might benefit from the collection of more data and the addition of gloss annotation. Our results on the SASL dataset are still very poor and far from practical, indicating that more resources and experiments are required before the language barrier between the hearing and the Deaf can be removed. This would be most effectively achieved by working in collaboration with the Deaf community to produce high-quality datasets, annotations, and models.
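
The pipeline this abstract describes, a pretrained visual backbone feeding per-frame features into a recurrent encoder-decoder that emits English tokens, can be sketched roughly as below. The feature dimension, hidden size, and vocabulary size are hypothetical placeholders, not the thesis's actual configuration.

```python
import torch
import torch.nn as nn

class SignTranslator(nn.Module):
    """Minimal recurrent sign-to-text sketch: a bidirectional GRU encodes a
    sequence of per-frame visual features, and a GRU decoder emits tokens."""

    def __init__(self, feat_dim=1024, hidden=512, vocab_size=8000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.bridge = nn.Linear(2 * hidden, hidden)   # merge the two directions
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, tgt_tokens):
        # feats: (B, T_frames, feat_dim) from a pretrained visual backbone,
        # e.g. one fine-tuned on a Continuous Sign Language Recognition task.
        # tgt_tokens: (B, T_text) shifted-right targets for teacher forcing.
        _, h = self.encoder(feats)                    # h: (2, B, hidden)
        h0 = torch.tanh(self.bridge(torch.cat([h[0], h[1]], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt_tokens), h0)
        return self.out(dec_out)                      # (B, T_text, vocab_size)

# Smoke test with random tensors standing in for extracted frame features
# (~3.8 s per segment at 25 fps is roughly 96 frames).
model = SignTranslator()
feats = torch.randn(2, 96, 1024)
tgt = torch.randint(0, 8000, (2, 12))
assert model(feats, tgt).shape == (2, 12, 8000)
```

Training such a model would minimize cross-entropy over the target tokens, and the quality figures quoted in the abstract would come from corpus-level BLEU-4 (e.g. via sacrebleu) against the English references.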