Department of Computer Science
Recent Submissions
Item An Improved AlexNet Convolutional Neural Network Model for Brain Tumor Detection and Classification (Lead City University, Ibadan, 2024-12) Kofoworola Folakemi FAMUREWA
Brain tumors are frequently categorized as malignant or benign. Treatment of brain tumors requires an early diagnosis, and the usual method of detecting them is Magnetic Resonance Imaging (MRI) scanning, from which information about abnormal tissue growth in the brain is identified. Human inspection, which can be time-consuming and is unsuitable for large numbers of MRI images, is the traditional method used in contemporary clinical routines for tumor detection and classification. Recently, convolutional neural networks (CNNs) have made imaging-based artificial intelligence solutions possible. When CNN models are applied to MRI images, brain tumors can be predicted very quickly, and higher accuracy helps in providing treatment to the patient. These predictions also help radiologists make quick decisions. Even though CNNs have achieved great results in many tasks and domains, their sensitivity to input size remains a major problem that limits practical use. This work modified the AlexNet CNN architecture to accept brain tumor images of varying sizes and then classify each tumor as cancerous or non-cancerous. The specific objectives were to acquire and preprocess MRI brain tumor images, develop a CNN model that accepts brain tumor images of varying sizes, and evaluate the performance of the model. The implementation was done in Python and TensorFlow and executed on a desktop computer with an Intel Core i5 processor and 16 GB of RAM. At the end of training, the model achieved 89.86% training accuracy and 85.08% validation accuracy. An accuracy of 84.18% was achieved when assessing the model on test data. Evaluation of the model's performance revealed that this approach holds great potential.
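The abstract does not say how the modified architecture handles varying input sizes. One common way to achieve this (an assumption here, not necessarily the thesis's method) is a global average pooling step, so the fully connected classifier always receives a fixed-length vector regardless of the spatial size of the feature maps. A minimal pure-Python sketch:

```python
def global_average_pool(feature_maps):
    """Collapse each H x W feature map to one value per channel.

    feature_maps: list of 2-D lists (one H x W grid per channel).
    Returns a fixed-length vector of channel means, whatever H and W are,
    which is what lets a classifier head accept variable input sizes.
    """
    pooled = []
    for channel in feature_maps:
        values = [v for row in channel for v in row]
        pooled.append(sum(values) / len(values))
    return pooled

# Two "images" of different spatial sizes produce same-length vectors.
small = [[[1, 2], [3, 4]]]                    # 1 channel, 2x2
large = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]]]   # 1 channel, 3x3
assert len(global_average_pool(small)) == len(global_average_pool(large)) == 1
```

In a real network the same idea is applied to the convolutional output before the dense layers; only the pooling operation changes, not the classifier.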
Keywords: Artificial Intelligence, Brain Tumor, Convolutional Neural Network, Input Size Limitation, Magnetic Resonance Imaging.
Word Count: 274

Item Evaluation of Machine Learning-Based Algorithm for Predicting Loan Default in Nigeria (Lead City University, Ibadan, 2024-12) Kingsley Oghenekaro EFEKODO
In the financial sector, accurately predicting loan defaults is critical. Traditional creditworthiness assessment methods, while thorough, often do not capture the dynamic and complex interactions within financial data, which necessitates advanced solutions such as machine learning (ML). Traditional credit scoring systems are frequently unable to handle high-dimensional, non-linear data effectively, leading to significant financial losses from inaccurate predictions of loan defaults. This study harnesses advanced machine learning techniques to enhance the accuracy of loan default prediction, aiming to outperform traditional statistical models. Various machine learning algorithms, including Logistic Regression, Decision Trees, Gradient Boosting Classifiers, Random Forest, and Gaussian Naive Bayes, were applied to a dataset comprising diverse borrower characteristics and loan details. The selected dataset was open source and contained three separate datasets, each with train and test portions: demographic data, performance data, and previous-loans data. The sample submission has two outcomes: good (1) or bad (0). The dataset was systematically divided in two: 70% for the training set and 30% for the test set. The models underwent rigorous training and validation to ensure their robustness and reliability. The Gradient Boosting Classifier emerged as the most effective model, with an accuracy of 78.8%. It significantly outperformed the others by effectively capturing complex patterns in the dataset, thereby substantially reducing both false positives and false negatives.
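The 70/30 split described above can be sketched as follows; the `train_test_split` helper and the fixed seed are illustrative, not taken from the thesis:

```python
import random

def train_test_split(rows, train_fraction=0.7, seed=42):
    """Shuffle rows reproducibly, then split them into train and test subsets."""
    shuffled = rows[:]                 # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

rows = list(range(100))
train, test = train_test_split(rows)
assert len(train) == 70 and len(test) == 30
assert sorted(train + test) == rows    # no row lost or duplicated
```

Shuffling before the cut matters: loan records are often ordered by date or branch, and an unshuffled split would put systematically different borrowers in the test set.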
The study confirms that machine learning models, particularly the Gradient Boosting Classifier, offer superior predictive power in the context of loan default risk assessment. Financial institutions should consider integrating these models into their credit evaluation processes to enhance decision-making accuracy and minimize risk. Additionally, future research should explore the integration of more diverse data sources, including non-traditional variables that could affect credit risk assessments, and the application of deep learning techniques to further refine prediction accuracy.
Keywords: Accuracy, Classifier, Defaults, Financial, Machine Learning Models, Predicting, Cross-Validation, Data Imputation, Customer Segmentation, Nigerian Lending Market, Class Imbalance
Word Count: 300

Item Fuzzy AHP Based Decision Support System for Prioritizing Challenges of Adopting Internet of Things (IoT) Technologies (Lead City University, Ibadan, 2023-12) Ayuba ATUMAN
The adoption and utilization of Internet of Things (IoT) technologies present numerous, complex, and expensive challenges for developing countries as they work to claim their share of the global IoT market. Multi-stakeholder decision makers rely on multi-criteria decision support systems (MCDSS) as essential tools for evaluating and prioritizing complicated competing options such as these. This paper presents an approach to the design and development of a web-based prototype Multi-Criteria Decision Support System (MCDSS) for prioritizing the difficulties associated with the adoption and utilization of IoT technologies. The prototype Decision Support System (DSS) is expected to be an essential tool for IoT policy makers, IoT industry experts, researchers, and other IoT stakeholders.
Most of the work done over time by academics, researchers, and industry specialists to construct MCDSS specifically for prioritizing challenges in adopting and utilizing IoT technologies suffers from some level of mismatch: much of it ends up on paper without a real-world application to work with; some applications are too complex for typical decision-making stakeholders; and most solutions are not explicitly designed to prioritize IoT concerns. The goal of this work is to develop a prototype MCDSS for prioritizing difficulties associated with the adoption and exploitation of IoT technology. The Fuzzy Analytic Hierarchy Process (FAHP) multi-criteria decision analysis approach was employed as the core logic component of the Decision Support System (DSS). The ASP.NET Model-View-Controller (MVC) framework and the C# programming language were used for the GUI and logic development. The system's default IoT challenges and dataset were adapted from the works of A.K. Mohammadzadeh (the baseline dataset). The system usability test results show that the system is friendly and usable. The output weights and ratings of the IoT technology adoption challenges and sub-challenges exhibit over 80% similarity with the baseline dataset.
Keywords: Internet of Things (IoT), Decision Support System (DSS), Fuzzy Analytic Hierarchy Process (FAHP), Multi-criteria, Multi-criteria Decision Making, Prioritization, Technology Challenges, IoT Challenges, IoT Difficulties.
Word Count: 300

Item An Automatic Wireless-based Android Controlled Ground Robotic Spy Vehicle (Lead City University, Ibadan, 2024-12) Nurudeen Babatunde YISAU
The pervasive fear among residents, the loss of both military and civilian lives, and the drain on government resources during crises and conflicts necessitate innovative solutions.
This study explores the creation of an Unmanned Ground Vehicle (UGV) designed for remote-controlled surveillance to help address the insurgency and terrorism issues that pose significant threats to national and international security. The aim of this research is to design and implement a wireless, Android-based ground robotic system capable of performing sophisticated spying tasks, thereby reducing risk in hostile environments. The methodology employs a modular design approach, integrating off-the-shelf components such as the Arduino IDE, an ESP32 camera, motor drivers, and various power sources. These components were selected for their reliability, availability, and alignment with the objectives of the system. The UGV prototype was developed to patrol and monitor environments potentially dangerous to human operators, such as military zones and conflict areas. Results from the implementation demonstrated the vehicle's ability to navigate and provide real-time feedback through live video streaming of what the robot "sees". Its independence from external network connectivity ensures reliability even in remote environments. The vehicle operates via commands transmitted from an Android application, enabling it to move in pre-determined directions and relay visual data back to the base station through the ESP32 camera and a web server that controls the robot, programmed with the Arduino IDE. This project successfully demonstrates a cost-effective and efficient approach to surveillance and reconnaissance in high-risk areas. The study recommends enhancing the UGV's capabilities, including extended battery life, improved sensor range, and autonomous navigation algorithms, to ensure a more reliable and performance-optimized vehicle suitable for various surveillance tasks while maintaining affordability.
Keywords: Android-based, ESP32-camera, Modular design, Remote-controlled, Surveillance, Unmanned ground vehicle.
Word Count: 279

Item An Improved Traffic Light Colour Detection and Recognition System for Autonomous Vehicles (Lead City University, Ibadan, 2023-12) Temilade Temitope FASINA
This study introduces significant advancements in traffic light detection and recognition using an improved YOLOv4 algorithm. Two key optimization techniques, shallow feature enhancement and bounding box uncertainty prediction, were incorporated to address the limitations of the original YOLOv4 algorithm. The results demonstrate substantial improvements in accuracy for traffic light detection and recognition. In the experiments, the AUC (Area Under the Curve) for traffic light detection increased to 97.03% and 95.31% on the LISA and LaRa datasets, respectively. Additionally, the mAP (mean Average Precision) improved to 81.34% and 78.88% in the recognition trials. Despite a slight increase in detection time, the system remained capable of real-time traffic light detection. Bounding box uncertainty prediction further enhanced the YOLOv4 algorithm, resulting in AUC values of 96.84% and 94.73%, and mAP values of 79.93% and 78.23%, for the LISA and LaRa datasets in traffic light detection. Importantly, this enhancement reduced detection times to 27.59 and 33.45 milliseconds, respectively. To further improve traffic light detection and recognition systems, it is recommended to collect diverse and extensive datasets, annotate data accurately, and employ data augmentation, semantic segmentation, real-time object tracking, deep learning models, transfer learning, proper calibration, multimodal sensor fusion, redundancy, real-time processing, machine-learning-based anomaly detection, continuous testing, and regulatory compliance.
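Detection metrics such as the AUC and mAP reported above rest on matching predicted boxes to ground truth by Intersection over Union (IoU). As background (not code from the study), a minimal IoU computation looks like this:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])          # intersection rectangle
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0    # identical boxes
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0    # disjoint boxes
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7  # overlap 1, union 7
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice), which is how precision-recall curves behind mAP are built.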
Keywords: Machine Learning, Traffic Light Recognition, Deep Learning, Autonomous Vehicle
Word Count: 225

Item Impact of Integrated Machine Learning Models, Background-Traffic and Bandwidth-Limit on the Performance of Software-Defined Networking (Lead City University, Ibadan, 2024-12) Isiaka Babatunde SADIKU
Efficient data flow in computer networks is crucial for modern applications, but network performance faces challenges due to the complexity of network types and configurations. Understanding the impact of different networking approaches on packet flow, bandwidth, latency, jitter, and throughput is essential for improving network performance. Traditional Computer Networks (TCN) and emerging technologies like Software-Defined Networking (SDN) have distinct advantages and trade-offs in terms of bandwidth usage, latency, throughput, and jitter. This study assesses the influence of background traffic, bandwidth limits, and dataflow features on SDN performance, and the ability of machine learning models to predict network behavior. The analysis reveals several key findings. Traditional networks exhibited higher throughput, while hybrid TCN-SDN showed reduced bandwidth usage. Latency varied across network types, with SDN networks showing potential increases. Jitter was significantly impacted by non-homogeneous networks, raising concerns about overall performance stability. ANOVA and Duncan's tests confirmed the importance of latency, bandwidth, and throughput in influencing network behavior. Background traffic and bandwidth limits were shown to have a complex relationship with SDN performance, particularly in terms of TCP bandwidth, throughput, and latency. Correlation analyses highlighted strong relationships between network parameters, providing deeper insights into dataflow dynamics.
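As background for the metrics discussed above, latency, jitter, and throughput can all be derived from per-packet timestamps. This small sketch is illustrative only; it uses a simple mean-variation definition of jitter rather than any particular tool's formula:

```python
def link_metrics(send_times, recv_times, packet_bytes):
    """Derive average latency, jitter, and throughput from per-packet data.

    send_times / recv_times: timestamps in seconds, one pair per packet.
    packet_bytes: payload size of each packet in bytes.
    """
    latencies = [r - s for s, r in zip(send_times, recv_times)]
    avg_latency = sum(latencies) / len(latencies)
    # Jitter: mean absolute variation between consecutive one-way delays.
    jitter = (sum(abs(latencies[i] - latencies[i - 1])
                  for i in range(1, len(latencies)))
              / (len(latencies) - 1))
    duration = recv_times[-1] - send_times[0]
    throughput_bps = sum(packet_bytes) * 8 / duration   # bits per second
    return avg_latency, jitter, throughput_bps

lat, jit, thr = link_metrics([0.0, 1.0, 2.0], [0.1, 1.1, 2.3], [1000] * 3)
assert abs(lat - 0.5 / 3) < 1e-9        # delays 0.1, 0.1, 0.3 s
assert abs(jit - 0.1) < 1e-9            # mean of |0.1-0.1| and |0.3-0.1|
```

High jitter with a stable average latency is exactly the pattern the study flags for non-homogeneous networks: individual delays swing even when the mean looks healthy.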
Among the machine learning models, a Support Vector Machine with a Radial Basis Function kernel (SVM_RBF) consistently outperformed the others, while the 5-stacked model demonstrated superior accuracy in predicting SDN performance across different datasets and scenarios. This study offers valuable insights into the interplay of network types, traffic conditions, and performance metrics. The results indicate that while traditional networks offer higher throughput, hybrid TCN-SDN configurations present advantages in bandwidth efficiency but may incur higher latency. The machine learning models successfully predicted network performance, with the 5-stacked model emerging as the most accurate across a range of conditions.
Keywords: Performance Metrics, Programmable Network, Data Flow, Machine Learning, Bandwidth-traffic
Word Count: 290

Item Improved Sentimental Response System for Classifying Emergency Incidence Through Hybridized Mining Techniques (2024-12) Oluwatobi Akanbi JOHNSON
Emergency occurrences can be caused by both natural disasters and human error. This study addresses the classification of emergency incidence, stemming from both natural disasters and human errors, emphasizing the critical need for swift response and effective mitigation. Governments typically implement measures to mitigate negative effects, with outcomes dependent on their responsiveness. The research aims to enhance sentiment analysis for emergency incidence through a hybridized mining technique. The system combines Natural Language Processing and Bayesian belief learning, focusing on data mining, machine learning, and NLP for effective classification and sentiment analysis. Social media data from Facebook is gathered using the Facebook API and the Graph function 'Requests' for training. Pre-processing involves eliminating unwanted characters and transforming text into lowercase.
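The pre-processing step just described, removing unwanted characters and lowercasing, can be sketched in a few lines. The exact character set kept here is an assumption, not the thesis's specification:

```python
import re

def preprocess(text):
    """Lowercase a post and strip unwanted characters, keeping words and digits."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # drop punctuation, emoji, symbols
    return " ".join(text.split())              # collapse repeated whitespace

assert preprocess("FLOOD!!! on Main St. #help") == "flood on main st help"
```

Normalizing case and punctuation this way keeps "FLOOD", "Flood!!" and "flood" from being treated as three distinct tokens during classification.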
Experimental analysis involves 450 data samples with four characteristics, creating a multivariate time series dataset for classification tasks. Python with the requests library and the Graph API is used for live data capture, while MySQL manages the backend database, and XML and PHP handle the frontend for the sentimental response. The study unveils a linear dimension in the classification algorithm, transforming non-linear textual data during pre-processing. Probability computations for incidence parameters and input intervals rely on frequency distributions from emergency observations. Experimental scenarios instill confidence in the improved framework, which incorporates supervised learning into NLP for improved precision. The system achieves over 90.93% efficiency in signal precision, a substantial enhancement compared to existing models. Performance evaluation uses emergency datasets for training (75%) and testing (25%), demonstrating the system's high precision through a confusion matrix. The improved sentimental response system represents a significant advancement, leveraging social media data for proactive emergency management. With a precision rate exceeding 90.93%, the system adeptly identifies and categorizes emergency signals, enabling timely and targeted response strategies.
Keywords: Emergency, Hybrid, Incidences, Mining, Response, Sentiment, Social Media
Word Count: 297

Item Improved Network Intrusion Detection System Using Hybridized Feature Selection Methods (Lead City University, Ibadan, 2024-12) Olakunle Titus FADEYI
Machine Learning (ML) and feature selection have been used in the development of Intrusion Detection Systems (IDS). From the review of the literature, developing an effective IDS requires a large amount of data with many features. Some of these features are not important to the operation of the IDS and slow down the detection of threats.
Therefore, in this thesis, an IDS that detects threats with a reduced feature set was developed. Machine learning was incorporated in training the model using three machine learning algorithms: hybrid decision trees, Naive Bayes (NB), and Random Forest (RF). The work was organized into three stages: dataset loading and preprocessing, building the improved intrusion detection system, and testing and evaluating the developed system. The dataset initially had 143 columns; after processing steps such as one-hot encoding of categorical features and the SelectKBest technique, these were reduced to the 15 best columns. The correlation matrix computed on the final sub-datasets shows that features with NaN values have zero correlation with other related features in each sub-dataset. Features with near-zero variance, more than 25% missing values, or high correlation with another numerical variable have minimal discriminatory power and were therefore removed from both sub-datasets. On the reduced columns, the logistic regression model achieved an accuracy of approximately 0.8377, the K-nearest neighbours model approximately 0.7538, the DecisionTreeClassifier model approximately 0.8127, and the LinearSVC model approximately 0.8101. The developed IDS using the feature selection technique significantly improved the performance of the network intrusion detection system, improving learning accuracy, reducing learning time, and simplifying learning results.
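The filtering criteria described above, dropping features with excessive missing values or near-zero variance, can be sketched as follows; the `filter_features` helper, the thresholds' exact handling, and the toy data are illustrative, not from the thesis:

```python
def filter_features(table, missing_limit=0.25):
    """Drop columns with too many missing values or near-zero variance.

    table: dict mapping column name -> list of values (None marks missing).
    Returns the names of the columns worth keeping.
    """
    kept = []
    for name, values in table.items():
        present = [v for v in values if v is not None]
        missing_fraction = 1 - len(present) / len(values)
        if missing_fraction > missing_limit:
            continue                  # too sparse to be informative
        if len(set(present)) <= 1:
            continue                  # near-zero variance: no discriminatory power
        kept.append(name)
    return kept

data = {
    "duration": [1, 5, 2, 9],            # varies and is complete -> keep
    "protocol": [6, 6, 6, 6],            # constant -> drop
    "flag":     [None, None, None, 1],   # 75% missing -> drop
}
assert filter_features(data) == ["duration"]
```

In practice a correlation check between the surviving numerical columns would follow, removing one of each highly correlated pair, as the abstract describes.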
Keywords: Machine Learning, Hybrid Decision Tree, Hot-encoding Category Feature
Word Count: 295

Item Real-time Surveillance Network System for Traffic Monitoring (Lead City University, Ibadan, 2024-12) Abiodun AKANNI
As urban populations continue to surge, the prevalence of traffic-related issues escalates, leading to heightened concerns over public safety, property damage, and various offenses posing significant risks to both life and assets. Traditional solutions have relied heavily on infrastructure-integrated systems, which are often costly to install and maintain and lack flexibility and scalability. This study develops an approach to address these challenges by creating a low-cost, real-time vehicular monitoring and reporting system. The system employs readily available technology, built on a foundation of electronic architecture encompassing a Network Unit (Tunnel Server), a Mobile Unit (Mobile App), and a number plate detection unit. The process involves establishing an HTTP connection between the Tunnel Server and the Mobile App. A tunnelling server, a web application, and a number plate detection unit collaborate to detect license plates in real time. ML5.js and OpenCV.js are employed to process captured frames, identify objects, and extract license plate numbers. The number plate is identified using the find-number-plate function (OpenCV.js), which analyses the image, converts it to grayscale, performs edge detection, and then identifies contours to determine the presence of a number plate based on its distinctive features. The system's performance is evaluated in terms of response time (80%), stability (70%), and usability (84%). The system demonstrates exceptional compatibility with various operating systems and browsers and boasts good scalability and throughput.
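The first step of the pipeline described above, grayscale conversion, can be illustrated in pure Python. The BT.601 luminance weights used here are a common convention, assumed rather than taken from the study (which does this via OpenCV.js):

```python
def to_grayscale(pixels):
    """Convert an image given as rows of (R, G, B) tuples to luminance values.

    Uses the common ITU-R BT.601 weights, mirroring the first step of the
    plate-detection pipeline (grayscale -> edges -> contours).
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in pixels]

image = [[(255, 255, 255), (0, 0, 0)],
         [(255, 0, 0), (0, 0, 255)]]
gray = to_grayscale(image)
assert gray[0] == [255, 0]      # white -> 255, black -> 0
```

Edge detection and contour finding then operate on this single-channel image, which is why grayscale conversion comes first: it reduces the data threefold without losing the intensity transitions a plate boundary creates.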
This research marks a significant technological achievement in the realm of web and mobile applications, computer vision, and artificial intelligence. The developed system successfully detects license plate numbers, promising enhanced public safety, property protection, and traffic management. It is therefore recommended that future enhancements expand its object recognition capabilities and maintain a robust testing and quality assurance process to ensure its continued excellence.
Keywords: Computer Vision, Electronic, Infrastructure-Integrated Systems, License Plates, Number Plate Detection, Object Recognition, OpenCV.js, Tunnel Server
Word Count: 293

Item An Improved Feature Selection Approach for Prediction of Students' Academic Performance in a Virtual Learning Environment (Lead City University, Ibadan, 2024-12) Felicia Ojiyovwi ADELODUN
In the past, only machine learning algorithms were used for predicting students' academic performance. In recent times, both feature selection methods and machine learning algorithms have been important in the prediction process. Previous research has focused on demographic information; research specifically analyzing learners' video interaction is limited. This study investigates the interactions of learners in a Virtual Learning Environment (VLE). It further examined whether feature selection should be skipped during the prediction process, as some previous studies have suggested. The study proposed a novel model named PF-PSO, an improved Feature Selection (FS) method combining three existing feature selection methods to improve machine learning models' accuracy in predicting students' academic performance in a VLE. The chosen feature selection methods are Principal Component Analysis (PCA), the Forward Selection Method (FOR), and Particle Swarm Optimization (PSO).
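As background on one of the chosen methods, greedy Forward Selection can be sketched as below. The scoring callback and toy features are illustrative assumptions; in practice the score would be cross-validated model accuracy or R²:

```python
def forward_selection(features, score_fn, min_gain=1e-6):
    """Greedy Forward Selection: repeatedly add the feature that most improves
    the score of the current subset; stop when no addition helps."""
    selected, remaining = [], list(features)
    best_score = score_fn(selected)
    while remaining:
        gains = [(score_fn(selected + [f]), f) for f in remaining]
        top_score, top_feature = max(gains)
        if top_score - best_score < min_gain:
            break                     # no remaining feature improves the model
        selected.append(top_feature)
        remaining.remove(top_feature)
        best_score = top_score
    return selected

# Toy score: clicks and watch_time help; noise does not.
useful = {"clicks": 0.3, "watch_time": 0.2, "noise": 0.0}
score = lambda subset: sum(useful[f] for f in subset)
assert forward_selection(useful, score) == ["clicks", "watch_time"]
```

The wrapper nature of this method, re-scoring the model for each candidate subset, is what makes combining it with cheaper filters such as PCA attractive.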
Students' educational datasets were retrieved from secondary sources such as Kaggle.com. This unbiased study used two approaches, with FS and without FS, to train the machine learning models. The evaluation metrics include MSE, R2, and MAE for the regression tasks, and accuracy, precision, and F1 measure for the classification tasks. The results showed that while PSO proved promising, the proposed system achieved great success, with Random Forest and Gradient Boosting performing very well in both regression and classification tasks and explaining 65% to 89% of the variance in the target variable. Logistic Regression proved best for the classification tasks, with accuracy in the range of 61% to 75%. The proposed system can contribute to enhancing students' academic prediction. The findings show the importance of incorporating hybrid feature selection when predicting students' academic performance.
Keywords: Feature Selection, Pearson Correlation Coefficient, Forward Selection Method, Particle Swarm Optimization, Prediction, Students' Academic Performance, Virtual Learning Environment
Word Count: 301

Item Development of a Computerized Hospital Laboratory Operations' Support System (Lead City University, Ibadan, 2024-12) Abiodun Timothy ADEGBIJI
A manual hospital laboratory operation system is characterized by a lack of prompt information retrieval, which results in time wastage, loss of information, and misplacement and misallocation of results. To proffer a solution to these problems, this study developed a computerized hospital laboratory operations support application aimed at using information technology to solve the problems associated with the manual hospital laboratory information system. The system is a web-based model built on Laravel 7.29 and a WAMP (Windows, Apache, MySQL, PHP) server.
A total of four laboratories (two government-owned and two private) were visited, with ethical approval, to collect the various types of tests carried out in the laboratories using both the conventional and the developed systems. During the implementation of the developed system at the Oyo State hospital management laboratories in Oyo and Ibadan, the system was installed on the hospital laboratories' database and subsequently utilized for the registration of patient data and data processing. The results showed that 65% of the respondents tested with the developed system used between 30-45 minutes and 48% used between 46-60 minutes, while 88% of those who used the manual system used between 2-8 hours before the result was ready. The adoption of this research will allow prompt release and retrieval of test results, reduce patients' wasted test time, give accurate laboratory test results, reduce loss of vital information, and reduce misplacement and misallocation of test results to the barest minimum, if not eradicate them totally.
Keywords: Computerized, Information Technology, Hospital, Laboratory, Laravel
Word Count: 250

Item Hybridized Dimensionality Reduction Model for Blurred Text Detection in Natural Scene Images (Lead City University, Ibadan, 2023-12) Chinonyelum Vivian NWUFOH
Scene Text Recognition (STR) is synonymous with text recognition in the wild, and it is a difficult task since it necessitates removing artifacts around text strokes and their fuzzy borders, such as embossing, shading, and flare, from an image and then sharpening the latent text. STR has become widely researched because its application areas cover most everyday activities for both humans and their technological advancement (such as self-driven vehicles and artificial intelligence gadgets).
Existing approaches have yet to fully address the complex problems that arise in the wild regarding text recognition and detection, such as blurred images, which is the aspect of the task this study targets. Research has recommended using Dimensionality Reduction (DR) and a Genetic Algorithm (GA) to instantiate text recognition, which gives rise to using a GA for DR as recommended. Here, we develop two major DR models: a DR model using Independent Component Analysis (ICA) for pre-processing of the dataset, and a DR model using ICA with an enhanced Genetic Algorithm based on the Bird Approach (BA-GA), yielding the ICA-BA-GA model for text deblurring in the wild, coupled with SVM, K-NN, and an ensemble for evaluating the models. The study uses Large-Scale Street View 2019 ICDAR Text (LSVT19), which has 20,000 test images, 30,000 annotated training images, and 400,000 unlabeled or partially labeled training images. Evaluation parameters such as accuracy, precision, and F1-score were used for benchmarking. Compared to the state of the art, ICA-BA-GA gives an impressive 99% accuracy. For the improved model (ICA-BA-GA) with the classifiers, the ensemble gives the best result (99.30%), followed by K-NN at 98.65% and SVM at 94.01%. Further research could investigate a hybrid using neural networks, and a different dataset, preferably one curated by scholars with various categories of blurriness, could be used.
Keywords: Scene Text Recognition, Independent Component Analysis, ICDAR Text (LSVT19), Deblurring Text, Pattern Recognition
Word Count: 291

Item An Improved Call Quality for Call Drop Minimization during Handover in Mobile Communication (Lead City University, Ibadan, 2023-12) Temilola Adedamola JOHN-DEWOLE
Mobile devices have become essential and significant aspects of everyone's life in the modern technological era. Call drops are a significant problem for telecommunications network providers.
Mobile call drops degrade users' call quality and lower revenue generation for telecom service providers. From the literature, the call quality for a group of calls can be predicted from a combination of call-success factors. This study aims to develop improved call quality through call drop minimization during handover in mobile communication. To address call drops, a neural network model was created to enhance call performance and effectiveness. Top-performing Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models were selected, and their predictions were combined using a weighted average ensemble approach. This ensemble machine learning approach was implemented in Python. The features used were signal strength, call drop rate, data usage, call types, congestion level, call setup success rate, and traffic control congestion rate. The study utilized a dataset with a total of 3000 data points across 30 cell towers, with each cell running for 5 minutes. Performance was evaluated using accuracy, precision, recall, F-score, and AUC-ROC. The research achieved an accuracy of 97.18%, 96.64% precision, 96.58% recall, a 96.11% F-score, and an AUC-ROC of 98.79% for call drop quality. This correlates strongly with existing results of 90% accuracy, 93% precision, 92% recall, and a 90% F-score, but no AUC-ROC; another study showed an overall accuracy of 95%. It is therefore recommended that telecommunications companies implement deep learning techniques on cellular network data to reduce and fix call drops so that consumers will have higher call quality in the future, providing continuous communication.
Keywords: Call Drop, Call Quality, Call Setup Success Rate, LSTM, CNN, Ensemble Models, Deep Learning, Telecommunication Providers.
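The weighted average ensemble described above can be sketched as follows; the 0.6/0.4 weights are illustrative assumptions, not the thesis's values, which would be derived from each model's validation performance:

```python
def weighted_ensemble(lstm_probs, cnn_probs, lstm_weight=0.6):
    """Combine two models' predicted call-drop probabilities with a weighted average."""
    cnn_weight = 1 - lstm_weight
    return [lstm_weight * a + cnn_weight * b
            for a, b in zip(lstm_probs, cnn_probs)]

lstm = [0.9, 0.2]    # hypothetical per-call drop probabilities from the LSTM
cnn = [0.7, 0.4]     # hypothetical per-call drop probabilities from the CNN
combined = weighted_ensemble(lstm, cnn)
assert abs(combined[0] - 0.82) < 1e-9    # 0.6*0.9 + 0.4*0.7
assert abs(combined[1] - 0.28) < 1e-9    # 0.6*0.2 + 0.4*0.4
```

Averaging the probabilities (rather than the hard class labels) preserves each model's confidence, which is what lets a weighted ensemble outperform either member.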
Word Count: 295

Item Performance Evaluation of Homogeneous Boosting Technique for Intrusion Detection in Online Banking (Lead City University, Ibadan, 2023-12) JIBOKU, Folahan Joseph
In recent times, many users have been migrating to online banking. However, security in online banking is a matter of great concern for most users. This thesis presents a performance evaluation of a homogeneous boosting technique for online banking network intrusion detection. The study aims to determine the effectiveness of the boosting technique in improving the detection of network intrusion attempts in online banking systems. The research methodology includes applying a fuzzy-logic feature selection technique to the dataset to determine the objectivity of the homogeneous boosting ensemble machine learning algorithms. The experimental results showed that the homogeneous boosting technique performed well on the datasets, achieving high levels of accuracy and recall. The study also shows that the homogeneous boosting technique has a relatively low false-positive rate, indicating a high level of precision in detecting network intrusion attempts. Furthermore, the study evaluates the impact of various feature selection techniques on the performance of the boosting technique. The results demonstrate that the boosting technique performed better with selected feature subsets, which implies that the technique can be optimized for different online banking network intrusion detection scenarios. In conclusion, this thesis demonstrates the effectiveness of the homogeneous boosting technique for online banking network intrusion detection and provides valuable insights into the use of boosting techniques and feature selection for improving the detection of network intrusion attempts in online banking systems.
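The metrics the evaluation reports, accuracy, recall, precision, and false-positive rate, all derive from confusion-matrix counts. A minimal sketch with hypothetical counts (not the study's data):

```python
def detection_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and false-positive rate from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)               # how many alerts were real intrusions
    recall = tp / (tp + fn)                  # how many intrusions were caught
    false_positive_rate = fp / (fp + tn)     # normal traffic wrongly flagged
    return accuracy, precision, recall, false_positive_rate

# Hypothetical: 90 intrusions caught, 10 missed, 5 false alarms on 900 normal sessions.
acc, prec, rec, fpr = detection_metrics(tp=90, fp=5, tn=895, fn=10)
assert acc == 0.985 and rec == 0.9
assert abs(fpr - 5 / 900) < 1e-12
```

For intrusion detection, recall and the false-positive rate pull against each other: a low FPR is what keeps legitimate banking sessions from being blocked, which is why the thesis reports it alongside recall.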
The findings of this study could help enhance the security of online banking systems and improve customers' overall trust in online banking.
Keywords: Online Banking, Intrusion Detection, Fuzzy Logic, Homogeneous Boosting.
Word Count: 263

Item Advanced Surveillance Technology Multicast Using Optical Wireless Transceiver in Smart Environment (Lead City University, Ibadan, 2023-12) Israel Oluwagbejamija FAKUNLE
Security practice is crucial to peaceful living. In the old times, before the advancement of technology, security was a major concern due to invasions, robbery, and wars. According to history, the security personnel of those days, known as vigilantes, also served as police. Security responsibilities then required 100% human effort: patrolling an assigned geographical area, restlessly and sleeplessly, to secure lives and property. Today, with technological advancements, people are able to live in security without the need for constant human guarding; technology has relieved humans of a great deal of security threats and stress. This study aims to develop a real-time surveillance system that utilizes multicast technology to prevent and detect crime in an enclosed geographical location. The objective is to empower residents to work together and contribute to the security of their environment, lives, and properties. Real-time surveillance multicast faces numerous challenges, such as lags and interruptions in transmission due to framework or internet-connection errors, high internet data consumption due to the enormous volume of data transmitted, and a limited number of allowed users. A closed-circuit television (CCTV) system will be designed using an analogue camera and a digital video recorder with a hard drive for data capture and storage, allowing decentralization of the system through integration with a wireless video transceiver.
Overall, this study aims to develop a surveillance system that empowers residents to work together and contribute to the security of their community. The system will leverage advanced technologies such as wireless video transceivers and multicast technology to improve the efficiency and effectiveness of surveillance. Keywords: Technological Advancement, Security Threats, Need for Protection, Real-Time Surveillance, Multicast, Lags in Transmission, Empower Residents. Word Count: 260

Item A Web Based Chatbot for Mental Health Support (Lead City University, Ibadan, 2023-12) Samuel Ejomafuvwe LUCKY
Despite the significance attributed to mental health, a considerable number of individuals have difficulty accessing prompt and tailored mental health interventions. This predicament can be attributed to various factors, including societal stigmatisation, limited availability of resources, and residence in geographically isolated areas. This study addresses the persistent challenge of providing timely and individualized mental health treatment through the development of a web-based chatbot for personalized therapy. The study utilises a dataset of frequently asked questions (FAQs) related to mental health. Preprocessing techniques, including lemmatization, lowercasing, and duplicate removal, are employed to prepare the data for analysis. The machine learning model, which utilises neural networks, undergoes training and exhibits a negative association between epochs and loss magnitude, suggesting enhanced performance as the training progresses. The findings indicated that the developed chatbot demonstrated a high level of proficiency in delivering personalised mental health care relevant to the individual, providing fast responses, and offering appropriate recommendations for therapy.
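The FAQ preprocessing steps the chatbot abstract describes (lowercasing and duplicate removal; full lemmatization would typically use something like NLTK's WordNetLemmatizer) can be sketched as follows. The helper name and sample pairs are illustrative, not taken from the thesis:

```python
def preprocess(faq_pairs):
    """Lowercase questions, strip punctuation, and drop duplicates,
    mirroring the cleaning steps described in the abstract."""
    seen = set()
    cleaned = []
    for question, answer in faq_pairs:
        # Lowercase and keep only letters, digits, and spaces.
        norm = "".join(ch for ch in question.lower()
                       if ch.isalnum() or ch.isspace())
        norm = " ".join(norm.split())  # collapse repeated whitespace
        if norm not in seen:           # duplicate removal
            seen.add(norm)
            cleaned.append((norm, answer))
    return cleaned
```

After this pass, two surface variants of the same question ("What is anxiety?" and "what is ANXIETY") collapse to one training example, which keeps the intent model from over-counting popular questions.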
Additionally, the user feedback received during the performance evaluation highlights a high level of satisfaction and a strong inclination to utilise the chatbot again in the future. The study highlights the potential of chatbots, particularly those based on LSTM architecture, in effectively addressing mental health issues and enhancing the availability of resources. The study therefore recommends continuous improvement: refining and enhancing the chatbot's capabilities by regularly updating its knowledge base, therapy recommendations, and conversational abilities to ensure it remains relevant and effective. Keywords: Epochs, Frequently Asked Questions (FAQs), Lemmatization, Lowercasing, LSTM Architecture, Machine Learning Model, Mental Health, Personalized Therapy. Word Count: 247 Words

Item Predicting the Severity of Vehicle Accidents Based on Traffic Accident Attributes Using Machine Learning (Lead City University, Ibadan, 2023-12) Segun Abayomi Sofoluwe
The occurrence of accidents on global road networks results in a considerable loss of human life every year, underscoring the urgency of ensuring road safety. This research aims to predict the severity of road traffic accidents and enhance prediction performance by employing two machine learning algorithms: the Random Forest model and the Decision Tree Classifier model. The study employs a dataset obtained from Kaggle.com, which is subjected to comprehensive data mining, pre-processing, and exploratory data analysis. The dataset was divided into training and testing subsets for model development and evaluation. The evaluation of model performance involved the computation of key performance metrics such as precision, recall, and F1-score. The findings of the study revealed that the Random Forest (RF) model consistently exhibited better performance than the Decision Tree (DT) model across all evaluation metrics, including precision, recall, F1-score, and overall accuracy.
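The per-class metrics and the weighted-average F1-score used in this comparison can be computed from first principles. The sketch below uses toy severity labels for illustration and is not the thesis's code:

```python
from collections import Counter

def per_class_metrics(y_true, y_pred, label):
    # Treat `label` as the positive class and count the confusion cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def weighted_f1(y_true, y_pred):
    # Weight each class's F1 by its support, which is why this average
    # is preferred when severity classes are imbalanced.
    support = Counter(y_true)
    n = len(y_true)
    return sum(support[c] / n * per_class_metrics(y_true, y_pred, c)[2]
               for c in support)
```

With labels `["minor", "minor", "minor", "severe"]` and predictions `["minor", "minor", "severe", "severe"]`, the minority class contributes only a quarter of the weighted score, showing how class imbalance is absorbed into the average.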
The evaluations consistently exhibited higher values for RF across all accident severity classes, indicating its greater predictive capability in accurately determining accident severity. The RF algorithm also achieved a higher weighted-average F1-score, which accounts for the class imbalances present in the dataset. Therefore, based on the findings of this study, it can be concluded that the Random Forest (RF) model demonstrates superior performance in accurately predicting accident severity across all categories, with an overall accuracy of 0.84, compared with 0.73 for the Decision Tree (DT) model. It is therefore recommended that additional analysis be conducted to gain a deeper understanding of the underlying causes of misclassifications, with the aim of enhancing model performance for these particular classes. Additionally, optimizing the models' hyperparameters can improve performance, and cross-validation methodologies such as k-fold cross-validation can more accurately evaluate the models' performance and mitigate the potential for overfitting. Keywords: Accuracy, Accident Severity, Algorithms, Data Analysis, Exploratory Data Analysis, F1-Score, Fine-Tuning, Machine Learning, Precision, Random Forest Model, Severity Prediction. Word Count: 311 Words

Item Intrusion Detection Performance in Cloud Network Environment: A Hybrid of Deep Belief Network and Multilayer Perceptron (Lead City University, 2023-12) Simon Olufikayo AWODELE
It is nearly impossible in today's world to contemplate the digital evolution of businesses, entertainment, organizations, and government without cloud computing. It is therefore no surprise that many organizations and companies are increasing their investments in cybersecurity. Malicious attackers are increasingly targeting unprotected web applications and Internet-connected systems.
This makes IT networks, systems, and the data they contain more vulnerable to threats, attacks, and intrusions that can harm business operations, inflict substantial costs, and damage a company's reputation. As a result, cloud network security systems are essential and must not be compromised. It is therefore necessary to develop a network intrusion detection system, using an anomaly detection approach, for a cloud computing network that can identify as many intrusions as possible with better detection accuracy and a reduced false-positive rate. In this research, a hybrid model was developed for intrusion detection in a cloud network environment using the UNSW-NB15 detection dataset. Multilayer Perceptron (MLP) and Deep Belief Network (DBN) techniques were combined in a parallel integration pattern to form a single optimal model through a voting classifier, aiming for higher precision, lower error, increased consistency, and reduced bias. The experimental results showed that the hybrid model achieved a lower false-positive rate, which makes it more promising for intrusion detection in cloud network environments, while the MLP model, a conventional method, achieved better performance in terms of accuracy, recall, precision, and F1-score. The DBN model, also a conventional method, showed lower performance across all categories of the implementation results. Keywords: Intrusion Detection, Cloud Computing, Deep Learning, Voting Classifier, UNSW-NB15. Word Count: 282 Words

Item A Dual-Mode Radio-Frequency Identification and Facial Recognition System for Attendance Capturing (Lead City University, 2023-12) Yinka John ADEGOKE
In today's rapidly evolving technological landscape, efficient and secure attendance tracking systems are essential for various organizations.
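The voting-classifier combination described in the cloud intrusion-detection abstract can be illustrated with a minimal soft-voting sketch. The probability vectors and function name below are illustrative assumptions, standing in for the outputs of trained MLP and DBN models:

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several base models and
    pick the argmax: the parallel voting integration described above.

    prob_lists: one probability vector per model, all over the same classes.
    """
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    # Average each class's probability across models.
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    # Return the index of the most probable class.
    return max(range(n_classes), key=lambda c: avg[c])
```

For example, if a hypothetical MLP outputs [0.3, 0.7] (benign vs. intrusion) and a DBN outputs [0.6, 0.4], the average [0.45, 0.55] flags an intrusion; averaging tends to suppress the individual models' biases, which matches the hybrid model's lower false-positive rate reported above.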
This study introduces a novel solution that combines Radio-Frequency Identification (RFID) and Facial Recognition technologies to create a robust attendance management system. By leveraging the capabilities of both hardware and software components, this system offers a seamless and accurate approach to recording and managing attendance data. The hardware component of the system utilizes Arduino microcontrollers and RFID modules to provide individual identification through RFID cards or tags. Each user is assigned a unique RFID card that triggers the RFID module to record the attendance information. Simultaneously, the system captures facial images using a camera module for facial recognition. A Python program processes the data using OpenCV, associating it with the respective user's profile and initiating the facial recognition process. The facial recognition system identifies users by comparing the captured facial features with the pre-stored templates in the database. The system offers several advantages, including high accuracy in attendance recording, enhanced security, and rapid processing of data. Moreover, the combined approach reduces the incidence of proxy attendance, ensuring the integrity of attendance records, and provides flexible options for capturing attendance. The system also provides real-time attendance tracking and generates comprehensive reports for administrative purposes. This research presents a step-by-step implementation guide for setting up the RFID and Facial Recognition Attendance System using Arduino and Python, making it accessible for educational institutions, businesses, and organizations looking to streamline attendance management. The system's effectiveness is demonstrated through extensive testing, highlighting its reliability and robustness. The system represents a cutting-edge solution for modern attendance management needs.
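The template-comparison step described above can be sketched as a nearest-encoding match, in the spirit of the Euclidean-distance check used by the face_recognition library (whose default tolerance is 0.6). The encodings, user ids, and tolerance below are illustrative assumptions:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two face-encoding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(captured_encoding, templates, tolerance=0.6):
    """Match a captured face encoding against pre-stored templates.

    templates: dict mapping user id -> stored encoding vector.
    Returns the closest user id, or None if nobody is close enough.
    """
    best_id, best_dist = None, None
    for user_id, stored in templates.items():
        d = euclidean(captured_encoding, stored)
        if best_dist is None or d < best_dist:
            best_id, best_dist = user_id, d
    # Reject the match if even the closest template is too far away;
    # this is what stops an unknown face from being logged as present.
    return best_id if best_dist is not None and best_dist <= tolerance else None
```

In the dual-mode design, the RFID card narrows the search to one stored template, so the distance check only confirms that the cardholder's face matches the card's owner.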
By harnessing the capabilities of the adopted technologies, this system offers a secure, accurate, and efficient approach to attendance tracking, paving the way for improved organizational efficiency and data integrity. Keywords: Python, Facial Recognition, Arduino, Dual-Mode, Radio-Frequency Identification, Attendance Capturing. Word Count: 299

Item Employee Attendance Tracking Using Facial Recognition System (Lead City University, 2023-12) Bukola Meka OWOLABI
Traditional pen-and-notebook methods for employee attendance are often susceptible to inaccuracies and falsification. Biometric systems, despite being more secure, face issues such as high acquisition costs and inefficiencies in capturing fingerprints, especially when hands are unclean or injured. In this study, a cutting-edge Employee Attendance Tracking System using Facial Recognition is developed, addressing the shortcomings of conventional attendance methods and biometric systems. The proposed system employs an array of Python libraries including Django, face_recognition, OpenCV (cv2), NumPy, and PCA. These libraries are utilized for their strengths in image processing, facial recognition, and efficient data management. The primary objective is to create a reliable, cost-effective, and efficient alternative for recording employee attendance, overcoming the limitations of existing methods. The system utilizes advanced image processing techniques to tackle common challenges in facial recognition, such as noise interference, varying lighting conditions, and physical obstructions like occlusions. This is achieved through approaches such as noise reduction, illumination normalization, and occlusion handling, significantly improving the accuracy of facial recognition under diverse environmental conditions. A key component of the system is the "Capture_Image" module, which establishes a reference database by capturing and storing employee images.
Concurrently, the "Recognize" module employs machine learning algorithms for facial recognition, ensuring accurate and timely recording of attendance. The effectiveness of the system is demonstrated by its ability to adapt to a variety of environments, attributed to its advanced image processing capabilities and robust algorithmic framework. This system is particularly advantageous for institutions, corporate offices, and industries seeking secure, precise, and efficient attendance tracking solutions. It marks a significant advancement in the field of attendance management, offering a blend of enhanced security, accuracy, and operational efficiency. The study recommends further enhancements, such as incorporating advanced algorithms to improve recognition accuracy under different lighting and noise conditions. Keywords: Accuracy, Biometric System, Employee Attendance Tracking, Facial Recognition, Machine Learning Algorithm. Word Count: 295 Words
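One simple form of the illumination normalization mentioned in this abstract is a min-max contrast stretch. The sketch below operates on a grayscale image given as nested lists of pixel values and is illustrative rather than the study's actual implementation:

```python
def normalize_illumination(gray):
    """Min-max contrast stretch of a grayscale image (rows of 0-255 pixels).
    Remaps the darkest pixel to 0 and the brightest to 255, so a uniformly
    dim or bright capture ends up on the same scale as a well-lit one."""
    pixels = [p for row in gray for p in row]
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        # Flat image: nothing to stretch.
        return [[0 for _ in row] for row in gray]
    return [[round((p - lo) * 255 / (hi - lo)) for p in row] for row in gray]
```

Production systems usually go further (histogram equalization, as in OpenCV's `cv2.equalizeHist`), but even this stretch shows why normalization helps: two captures of the same face under different lighting become directly comparable before encoding.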