December 8, 2023

Key Takeaways from the DataCamp-Ishango.ai Final Event for 2023

We concluded our final DataCamp-Ishango.ai event for this year, a platform for sharing ideas, networking, and exploring the latest advancements in data science. During the event, our scholarship recipients, Jam Rodrick Joh and Nadine Cyizere Bisanukuli, each presented on a data science topic of their choice, drawing on what they have learnt so far. Below, we delve into the key takeaways from the two presentations.

Nadine’s presentation explored spam detection techniques in text messages, focusing on three key methodologies: Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Bidirectional Encoder Representations from Transformers (BERT). RNNs are tailored for sequential data, LSTMs mitigate the vanishing-gradient problem that plain RNNs suffer from, and BERT is a transformer-based model that reads word context bidirectionally. The presentation also outlined the preprocessing steps, carried out with the BERT tokenizer.
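To illustrate the kind of preprocessing the BERT tokenizer performs, here is a minimal, from-scratch sketch of WordPiece-style subword tokenization. The toy vocabulary, word list, and function names are illustrative assumptions, not BERT's actual vocabulary or the code used in the presentation.

```python
# Toy vocabulary; real BERT ships with ~30,000 subword pieces.
VOCAB = {"[CLS]", "[SEP]", "[UNK]", "free", "ring", "##tone",
         "win", "now", "call", "##er", "prize"}

def wordpiece(word, vocab=VOCAB):
    """Greedy longest-match-first split of one word into subwords, BERT-style."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = word[start:end]
            if start > 0:                 # continuation pieces carry a "##" prefix
                candidate = "##" + candidate
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:                 # no piece matches: whole word is unknown
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

def encode(sentence):
    """Tokenize a sentence and wrap it in BERT's special tokens."""
    out = ["[CLS]"]
    for word in sentence.lower().split():
        out += wordpiece(word)
    out.append("[SEP]")
    return out

print(encode("Win a free ringtone now"))
# → ['[CLS]', 'win', '[UNK]', 'free', 'ring', '##tone', 'now', '[SEP]']
```

Note how "ringtone" is split into `ring` + `##tone`: subword splitting is what lets BERT handle words it has never seen whole.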

Nadine detailed the architecture for spam detection: an Embedding layer, an LSTM layer, a Dense output layer, and the Adam optimizer. She then explained approaches to fine-tuning BERT, including manual parameter adjustment, Grid Search, Random Search, and Bayesian Optimisation. Grid Search was used to optimise the BERT model for spam detection, sweeping over the number of layers, attention heads, hidden units, and learning rates.
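The grid search described above can be sketched in a few lines: every combination of the candidate hyperparameters is tried, and the best-scoring one is kept. The candidate values and the `train_and_score` callback are hypothetical stand-ins for fitting and evaluating the model, not the actual values from the talk.

```python
from itertools import product

# Hypothetical search space over the four hyperparameters mentioned.
GRID = {
    "layers": [2, 4],
    "attention_heads": [4, 8],
    "hidden_units": [128, 256],
    "learning_rate": [1e-4, 1e-5],
}

def grid_search(train_and_score, grid):
    """Exhaustively evaluate every hyperparameter combination; return the best."""
    best_params, best_score = None, float("-inf")
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(params)   # e.g. validation accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The trade-off is cost: a grid of 2 × 2 × 2 × 2 values already means 16 full training runs, which is why Random Search or Bayesian Optimisation are often preferred for larger spaces.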

Nadine concluded her presentation by highlighting Experiment 8 from her Transformer Neural Network (TNN) hyperparameter sweep. This configuration, with 4 layers, 8 attention heads, 256 hidden units, and a learning rate of 0.00001, achieved an impressive 99.10% accuracy in just 1.35 seconds of training time. The result suggests that Transformers offer superior predictive capability, though their complexity exceeds that of LSTM and RNN models, a trade-off to weigh when choosing them for improved predictions.

Jam Rodrick’s presentation focused on diagnosing melanoma skin cancer through image classification techniques. Highlighting the severity of melanoma, which is responsible for the majority of skin cancer-related deaths, he emphasised the need for early detection. The current diagnostic process, which takes about a week, involves visual examination, dermoscopy, and biopsies.

Using a dataset from Kaggle, Rodrick organised the data into train, validation, and test sets. He applied transfer learning with pre-trained models such as VGG16, modifying them for this project. The workflow involved image preprocessing, data augmentation, model training, and evaluation against accuracy and other metrics, with the goal of building an algorithm to aid melanoma diagnosis. While the results were promising, Rodrick suggested further improvements before deploying such deep learning solutions in healthcare.
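A minimal sketch of the transfer-learning setup described, assuming Keras: a VGG16 convolutional base with its original classifier removed, frozen, and topped with a new head for binary melanoma classification. The image size, head layout, and unit counts are illustrative assumptions; in practice `weights="imagenet"` would load the pre-trained features (`weights=None` here just keeps the sketch offline).

```python
import tensorflow as tf
from tensorflow.keras import layers

# Pre-trained convolutional base, without VGG16's original 1000-class head.
base = tf.keras.applications.VGG16(
    weights=None,              # "imagenet" in practice; None avoids a download here
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False         # freeze the learned visual features

# New classification head for melanoma vs. benign.
model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

Freezing the base means only the small head is trained at first, which is what makes transfer learning practical on a modest Kaggle-sized dataset; the base can later be partially unfrozen for fine-tuning.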

Together, the two presentations offered participants useful insights into the use of data to solve real-world challenges.