The single-hidden-layer feedforward neural network (SLFN) is widely used in pattern recognition, automatic control, and data mining. However, the speed of traditional learning methods falls far short of practical needs, which has become the main bottleneck restricting its development. There are two main reasons: (1) the traditional error back-propagation (BP) method is based on gradient descent and requires many iterations; (2) all network parameters must be determined iteratively during training. As a result, the algorithm incurs a large amount of computation and a large search space. To address these problems, a fast learning method (RELM) is proposed, based on the one-shot learning idea of the extreme learning machine (ELM) and on structural risk minimization theory; it avoids repeated iterations and local minima, and offers good generalization, robustness, and controllability. Experiments show that the overall performance of RELM is better than that of ELM, BP, and SVM.
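The one-shot learning idea mentioned above can be illustrated with a minimal sketch: hidden-layer weights are assigned randomly and never retrained, and the output weights are obtained in a single regularized least-squares solve (the ridge term standing in for structural risk minimization). This is an assumption-laden illustration, not the paper's exact algorithm; the activation function, neuron count, and regularization constant below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative; any (X, y) pair works the same way).
X = rng.normal(size=(200, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

L = 50   # number of hidden neurons (illustrative choice)
C = 1.0  # regularization strength (stands in for structural risk minimization)

# Step 1: randomly assign input weights and biases; they are never updated.
W = rng.normal(size=(X.shape[1], L))
b = rng.normal(size=(1, L))

# Step 2: compute the hidden-layer output matrix H with a sigmoid activation.
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Step 3: solve for the output weights in one shot,
#   beta = (H^T H + I/C)^{-1} H^T y,
# a regularized variant of ELM's plain pseudo-inverse solution.
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ y)

# Prediction is a single forward pass; no iterative training occurred.
y_hat = H @ beta
print("training MSE:", float(np.mean((y - y_hat) ** 2)))
```

Because the only trained parameters (`beta`) come from one linear solve, there is no gradient descent loop and no risk of stalling in a local minimum, which is the speed and robustness argument the abstract makes.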