Browsing by Subject "CNN"
- Item (Open Access): Dynamic Hand Gesture Recognition Using Ultrasonic Sonar Sensors and Deep Learning (2021)
  Lin, Chiao-Shing; Abdul, Gaffar Mohammed Yunus; Son, Jarryd
  The space of hand gesture recognition using radar and sonar is dominated mostly by radar applications, and the machine learning algorithms these systems use are typically convolutional neural networks, with some applications exploring long short-term memory networks. The goal of this study was to design and build a sonar system that can classify hand gestures using a machine learning approach, and secondly to compare convolutional neural networks with long short-term memory networks as classifiers for sonar-based hand gesture recognition. A Doppler sonar system was designed and built to sense hand gestures. It is a multi-static system containing one transmitter and three receivers, and it measures the Doppler frequency shifts caused by dynamic hand gestures. Since the system uses three receivers, three different Doppler frequency channels are measured; three additional differential frequency channels are formed by computing the pairwise differences between the receivers' frequencies. These six channels are used as inputs to the deep learning models. Two deep learning algorithms were used to classify the hand gestures: a Doppler biLSTM network [1] and a CNN [2]. Six basic hand gestures (two in each of the x-, y-, and z-axes) and two rotational hand gestures were recorded using both left and right hands at different distances. Ten-fold cross-validation was used to evaluate the networks' performance and classification accuracy. The LSTM classified the six basic gestures with an accuracy of at least 96%, but with the addition of the two rotational gestures the accuracy dropped to 47%.
  This result is still acceptable, since the basic gestures are used far more often than the rotational gestures. The CNN classified all the gestures with an accuracy of at least 98%. The LSTM network was also able to distinguish left-hand from right-hand gestures with an accuracy of 80%, and the CNN with an accuracy of 83%. The study shows that the CNN is the better-suited algorithm for hand gesture recognition, as it consistently classified gestures of varying complexity, while the LSTM network can still classify hand gestures with a high degree of accuracy. More experimentation, however, is needed to increase the complexity of recognisable gestures.
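  The six-channel input described above (three raw Doppler channels plus three pairwise differential channels) can be sketched as follows. This is a minimal illustration, not the thesis code; the array shapes and the synthetic Doppler traces are assumptions.

  ```python
  import numpy as np

  # Hypothetical Doppler frequency-shift traces (Hz) from the three
  # receivers over time; synthetic data for illustration only.
  rng = np.random.default_rng(0)
  f_rx = rng.normal(loc=0.0, scale=50.0, size=(3, 200))  # 3 receivers x 200 samples

  def build_six_channels(f_rx):
      """Stack the 3 raw Doppler channels with the 3 pairwise
      differential channels to form the model input."""
      diffs = np.stack([
          f_rx[0] - f_rx[1],  # receiver 1 minus receiver 2
          f_rx[1] - f_rx[2],  # receiver 2 minus receiver 3
          f_rx[0] - f_rx[2],  # receiver 1 minus receiver 3
      ])
      return np.concatenate([f_rx, diffs], axis=0)  # shape (6, T)

  channels = build_six_channels(f_rx)
  print(channels.shape)  # (6, 200)
  ```

  Either a biLSTM (treating the time axis as a sequence of 6-dimensional frames) or a CNN (treating the 6 x T array as an image-like input) can then consume this representation.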
- Item (Open Access): Evaluating deep learning for enhanced breast cancer diagnosis: a comparative analysis of CNN architectures (2025)
  Frankle, Solyle; Sinkala, Musalula
  Artificial Intelligence (AI), particularly its machine learning (ML) subfield, has revolutionised various sectors, including healthcare. In breast cancer care, AI's ability to analyse vast datasets and extract complex patterns from medical images has the potential to transform diagnostics and treatment strategies. Breast cancer remains one of the most prevalent cancers affecting women globally, and early, accurate diagnosis is crucial for effective treatment. Through its advanced image analysis capabilities, AI can significantly improve the accuracy and efficiency of breast cancer diagnosis, specifically in distinguishing between cancer subtypes. Here, we explore the application of deep learning, particularly convolutional neural networks (CNNs), to breast cancer subtype classification using histology images. A custom CNN model and the well-established ResNet50 and EfficientNetB0 models were developed and evaluated for their accuracy in predicting benign and malignant breast cancer subtypes. The results demonstrated that while the custom CNN achieved an accuracy of 65% for malignant and 67% for benign subtypes, with ROC-AUC scores of 0.86 and 0.90 respectively, ResNet50 significantly outperformed both the custom model and EfficientNetB0. ResNet50 attained an accuracy of 77% for both malignant and benign subtypes, accompanied by ROC-AUC scores of 0.92 and 0.96 respectively. Additionally, ResNet50 exhibited higher precision (0.68 for malignant, 0.67 for benign), recall (0.65 for malignant, 0.67 for benign), and F1 scores (0.65 for malignant, 0.67 for benign) across most subtypes, underscoring its robust performance and reliability in clinical settings.
  In conclusion, AI, specifically through advanced CNN architectures, can greatly enhance breast cancer diagnosis by providing more accurate subtype classifications. Future work should focus on integrating these models into clinical workflows, enabling faster and more personalised treatment planning. Moreover, continued refinement of these models, including addressing the complexities of tumour heterogeneity and incorporating multimodal data, will be crucial for their widespread adoption in oncology.
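  The abstract above summarises model quality with ROC-AUC scores. As a reference point for how such a score is computed (a generic sketch, not the paper's evaluation code; the toy labels and scores are invented for illustration), ROC-AUC equals the probability that a randomly chosen positive example is ranked above a randomly chosen negative one:

  ```python
  import numpy as np

  def roc_auc(y_true, y_score):
      """ROC-AUC via the Mann-Whitney U formulation: the fraction of
      positive/negative pairs in which the positive example receives
      the higher score, with ties counting as half."""
      y_true = np.asarray(y_true)
      y_score = np.asarray(y_score)
      pos = y_score[y_true == 1]
      neg = y_score[y_true == 0]
      # compare every positive score against every negative score
      greater = (pos[:, None] > neg[None, :]).sum()
      ties = (pos[:, None] == neg[None, :]).sum()
      return (greater + 0.5 * ties) / (len(pos) * len(neg))

  # Toy binary labels and predicted scores (illustrative only)
  labels = [0, 0, 1, 1]
  scores = [0.1, 0.4, 0.35, 0.8]
  print(roc_auc(labels, scores))  # 0.75
  ```

  A score of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why ResNet50's reported 0.92 and 0.96 indicate strong discrimination between subtypes.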