Abstract
Optimizing a multi-layer perceptron (MLP) network with a meta-heuristic algorithm can be challenging, but it has the potential to greatly enhance the network's overall performance. Meta-heuristic algorithms can help determine suitable values for a network's parameters, such as its connection weights and biases, learning rate, number of neurons per layer, and number of hidden layers. Tuning an MLP network with a meta-heuristic algorithm requires careful consideration and experimentation to find a set of parameters that improves its performance. In this thesis we present adaptive trainer models based on the Artificial Gorilla Troops Optimizer (GTO) and the African Vultures Optimization Algorithm (AVOA) to tune the MLP network. The GTO and AVOA algorithms improve an MLP network by adjusting its weights and biases, which can lead to more accurate results and better performance. The efficiency of the proposed techniques was measured on five standard classification datasets (breast cancer, XOR, heart, balloon, and Iris) by comparing them against established benchmarks. GTO and AVOA are used here for the first time as multi-layer perceptron trainers. Furthermore, the effectiveness of the proposed approach was evaluated by comparing it with three commonly used optimization techniques: the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), and the Sine Cosine Algorithm (SCA). The experimental results showed that both the GTO_MLP and AVOA_MLP algorithms allow the MLP network to explore different regions of the search space, avoid poor local optima, and converge towards solutions that achieve classification rates between 90% and 100%.
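As a rough illustration of the general idea (not the exact formulation used in this thesis), a meta-heuristic trainer typically encodes all MLP weights and biases as a single flat vector and minimizes the network's classification error as its fitness function. The minimal sketch below uses a toy random-search routine as a stand-in for a population-based optimizer such as GTO or AVOA; the function names, network size, and XOR example are illustrative assumptions only.

import numpy as np

def mlp_forward(x, params, n_in, n_hidden, n_out):
    """One-hidden-layer MLP forward pass; params is a flat weight/bias vector."""
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = params[i:i + n_hidden]; i += n_hidden
    W2 = params[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = params[i:i + n_out]
    h = np.tanh(x @ W1 + b1)          # hidden-layer activation
    return h @ W2 + b2                # output scores (argmax gives the class)

def fitness(params, X, y, n_in, n_hidden, n_out):
    """Fitness = classification error rate; the meta-heuristic minimizes this."""
    preds = mlp_forward(X, params, n_in, n_hidden, n_out).argmax(axis=1)
    return np.mean(preds != y)

def random_search_minimize(obj, dim, iters=2000, seed=0):
    """Toy stand-in for a meta-heuristic optimizer (GTO/AVOA would go here)."""
    rng = np.random.default_rng(seed)
    best = rng.normal(0.0, 1.0, dim)
    best_f = obj(best)
    for _ in range(iters):
        cand = best + rng.normal(0.0, 0.1, dim)   # perturb the current best solution
        f = obj(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

if __name__ == "__main__":
    # Tiny XOR problem, one of the benchmark datasets mentioned above
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0])
    n_in, n_hidden, n_out = 2, 4, 2
    dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    best, err = random_search_minimize(lambda p: fitness(p, X, y, n_in, n_hidden, n_out), dim)
    print(f"training error: {err:.2f}")

In the actual trainers, the random-search loop would be replaced by the GTO or AVOA population update rules, while the flat-vector encoding and error-based fitness function remain the same.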