LAPSE:2023.28357
Published Article

Training Feedforward Neural Networks Using an Enhanced Marine Predators Algorithm
April 11, 2023
Abstract
Feedforward neural networks (FNNs) consist of three layers of neural processing units: an input layer, a hidden layer, and an output layer. Evolutionary algorithms have been widely employed to train FNNs, which can correctly realize any finite training sample set. In this paper, an enhanced marine predators algorithm (EMPA), which augments the marine predators algorithm (MPA) with a ranking-based mutation operator, is presented for training FNNs; the objective is to minimize classification, prediction, and approximation errors by adjusting the connection weights and bias values. The ranking-based mutation operator not only identifies the best search agent and strengthens exploitation, but also delays premature convergence and accelerates the optimization process. The EMPA balances exploration and exploitation to mitigate search stagnation, and it has sufficient stability and flexibility to locate high-quality solutions. To assess the significance and stability of the EMPA, experiments were conducted on seventeen distinct datasets from the University of California Irvine (UCI) machine learning repository. The experimental results demonstrate that the EMPA achieves faster convergence, higher computational accuracy, higher classification rates, and strong stability and robustness, making it an effective and reliable method for training FNNs.
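The abstract describes evolving an FNN's connection weights and biases with a ranking-based mutation operator, where better-ranked agents are perturbed less (exploitation) and worse-ranked agents more (exploration). The toy sketch below illustrates that general idea on a tiny network; it is not the paper's exact EMPA update, and all names, network sizes, and step-size rules are assumptions for illustration.

```python
import math
import random

random.seed(0)

# Tiny FNN: 2 inputs -> 2 hidden units -> 1 output, parameters flattened
# into one vector (weights + biases). Sizes are illustrative assumptions.
N_W = 2 * 2 + 2 + 2 * 1 + 1  # = 9 parameters

def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

# XOR (targets in {-1, 1}) as a minimal stand-in for a training set.
DATA = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def ranking_based_mutation(pop, scale=0.5):
    """Perturb each agent with a step size that grows with its fitness rank:
    the best agent moves least (exploitation), the worst most (exploration).
    This captures the general ranking-based idea, not the paper's EMPA."""
    pop.sort(key=mse)  # rank 0 = best (lowest error)
    n = len(pop)
    out = []
    for rank, w in enumerate(pop):
        step = scale * (rank + 1) / n
        out.append([wi + random.gauss(0, step) for wi in w])
    out[0] = pop[0]  # elitism: keep the best agent unchanged
    return out

pop = [[random.uniform(-1, 1) for _ in range(N_W)] for _ in range(20)]
init_err = min(mse(w) for w in pop)

for _ in range(300):
    cand = ranking_based_mutation(pop)
    # Greedy selection: keep whichever of parent/child has lower error,
    # so the best error never increases across generations.
    pop = [min(p, c, key=mse) for p, c in zip(pop, cand)]

best = min(pop, key=mse)
```

Because selection is greedy and the best agent is carried over unchanged, the best training error is non-increasing over generations; the rank-scaled step size keeps weaker agents exploring while the leader is refined locally.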
Record ID
LAPSE:2023.28357
Keywords
experimental results, feedforward neural networks, marine predators algorithm, ranking-based mutation operator
Suggested Citation
Zhang J, Xu Y. Training Feedforward Neural Networks Using an Enhanced Marine Predators Algorithm. (2023). LAPSE:2023.28357
Author Affiliations
Zhang J: School of Electrical and Optoelectronic Engineering, West Anhui University, Lu’an 237012, China
Xu Y: School of Electrical and Optoelectronic Engineering, West Anhui University, Lu’an 237012, China
Journal Name
Processes
Volume
11
Issue
3
First Page
924
Year
2023
Publication Date
2023-03-17
ISSN
2227-9717
Version Comments
Original Submission
Other Meta
PII: pr11030924, Publication Type: Journal Article
Record Map
Published Article (This Record)
LAPSE:2023.28357
External Link (Publisher Version)
https://doi.org/10.3390/pr11030924
Record Statistics
Record Views
205
Version History
[v1] (Original Submission)
Apr 11, 2023
Verified by curator on
Apr 11, 2023
This Version Number
v1
Record URL
https://psecommunity.org/LAPSE:2023.28357
Record Owner
Auto Uploader for LAPSE
