The Ph.D. student Mr. Mohanad Azeez Joodi was examined on the contents of his thesis entitled:

“Design and Implementation of a Robot-Cloud Architecture for Detection and Tracking in the Region of Interest”

which he prepared in the Department of Electrical Engineering/College of Engineering/Baghdad University in partial fulfillment of the requirements for the degree of Ph.D. in Electrical Engineering, under the supervision of Asst. Prof. Dr. Muna Hadi Saleh and Prof. Dr. Dheyaa Jassem Kadhem, on Thursday, 22-6-2023. The examining committee consisted of Prof. Dr. Tareq Zeyad Ismael (Chairperson of the Examining Committee), Prof. Dr. Faisal Ghazi Mohammad, Prof. Dr. Mohammad Essam Younis, Asst. Prof. Dr. Zainab Tawfeeq Baqer, and Asst. Prof. Dr. Ekhlas Kadhem Hamza (Members of the Examining Committee).

The abstract of the student’s thesis is as follows:

Image classification is the process of finding common features in images from various classes and using them to categorize and label the images. The key obstacles to image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. This work proposes a CNN model built from scratch with low computational complexity, few layers, and the smallest filter sizes, for training and classifying images from different datasets across several approaches, implemented in two compact hardware system scenarios.
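The abstract does not give the exact architecture, so the snippet below is only a minimal NumPy sketch of the kind of small-filter building block (3x3 convolution, ReLU, 2x2 max pooling) that a low-complexity, built-from-scratch CNN stacks a few times; all sizes here are assumptions, not the thesis design.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv -> ReLU -> pool stage; input size and filter are illustrative.
img = np.random.rand(32, 32)
kernel = np.random.randn(3, 3)  # small filter size, as emphasized in the thesis
feat = max_pool(relu(conv2d(img, kernel)))
print(feat.shape)  # (15, 15)
```

A full model would stack a few such stages and end in a dense softmax layer; keeping the filters at 3x3 and the layer count low is what keeps the computational cost down.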

In the first approach, CNN classification was used to improve face mask detection accuracy. The proposed model has two stages: a Haar cascade detector, followed by applying the proposed model to the MAFA dataset. The results were then compared with other algorithms that used the same dataset. The second approach proposes a CNN classification model that classifies faces in three stages, applied to a newly created real dataset named MAJFA. The first stage examines the effects of augmentation (online, offline, and without augmentation); the second stage involves denoising with median, Gaussian, and mean filters; and the third stage involves a proposed multi-class model covering 12 classes of images trained on the MAJFA real dataset. To enhance test prediction accuracy and test time, pre-trained networks such as AlexNet, VGG16, VGG19, ResNet50, and GoogLeNet Inception V3 were fine-tuned (transfer learning) on the real dataset; the proposed model's test accuracy and test time compare favorably with these models, reaching 99.7% and 4 seconds, respectively. The third approach is a hybrid learning algorithm that combines deep learning with machine learning for image classification, extracting convolutional features with the VGG-16 deep learning model and feeding them to seven classifiers.
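The second stage's denoising step (median, Gaussian, and mean filtering) can be sketched with SciPy's standard image filters. The abstract does not specify window sizes or sigma, so the 3x3 windows and sigma=1 below are assumptions:

```python
import numpy as np
from scipy import ndimage

# A stand-in noisy grayscale image; in the thesis these would be
# face images from the MAJFA dataset.
noisy = np.random.rand(64, 64)

# The three denoising filters named in the abstract.
median_out = ndimage.median_filter(noisy, size=3)      # median filter
gaussian_out = ndimage.gaussian_filter(noisy, sigma=1)  # Gaussian filter
mean_out = ndimage.uniform_filter(noisy, size=3)        # mean (box) filter

for name, out in [("median", median_out),
                  ("gaussian", gaussian_out),
                  ("mean", mean_out)]:
    print(name, out.shape)
```

Each filter preserves the image shape, so the denoised images can be fed to the same CNN without any resizing.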

In addition, this work proposes a face detection and classification model based on a designed AWS cloud platform that classifies faces into two classes (permission and non-permission). The designed cloud platform was implemented and tested through our camera system, which captures images and uploads them to AWS S3. Two detectors, Haar cascade and MTCNN, were then run on AWS EC2, and their output results were compared in terms of accuracy and execution time.

The experimental results of the first approach show that, in the dense layer and at various feature-vector values and learning rates, the model accuracy ranges from 97.55% to 98.43%. The second approach's results reveal that the model accuracy reached 98.81% when offline augmentation was applied and 97.62% when the median filter was applied to the real dataset; it reached 97.48% when the proposed multi-class CNN model was applied to identify the non-permission class. In the third approach, simulation results show that the support vector machine (SVM) achieves a 0.011 mean square error, a 98.80% overall accuracy, and a 0.99 F1 score. The LR classifier comes in second place with a 0.035 mean square error, a 96.42% overall accuracy, and a 0.96 F1 score, and the ANN classifier comes in third with a 0.047 mean square error, a 95.23% overall accuracy, and a 0.94 F1 score. The RF, WKNN, DT, and NB classifiers follow with accuracy ratios of 91.66%, 90.47%, 79.76%, and 75%, respectively.
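As a rough illustration of the third approach's classifier stage, the sketch below trains an SVM and reports the same three metrics (overall accuracy, F1 score, mean square error) with scikit-learn. The digits dataset stands in for the VGG-16 convolutional features used in the thesis, and the SVC settings are assumptions:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score, mean_squared_error

# Feature vectors and labels; in the thesis these would be
# VGG-16 convolutional features of the face images.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An RBF-kernel SVM, one of the seven classifiers compared.
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("F1 (macro):", f1_score(y_test, pred, average="macro"))
print("MSE:", mean_squared_error(y_test, pred))
```

Swapping `SVC` for `LogisticRegression`, `MLPClassifier`, `RandomForestClassifier`, and so on reproduces the kind of head-to-head comparison the abstract reports.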
