Parallel training of a set of online sequential extreme learning machines

Date
2022
Abstract
Database sizes have grown constantly with advances in technology and the Internet, so processing this vast amount of information has become a great challenge. The Extreme Learning Machine (ELM) neural network has been widely accepted in the scientific community due to its simplicity and good generalization capacity. This model randomly assigns the hidden-layer weights and analytically computes the output-layer weights through the Moore-Penrose generalized inverse. High-Performance Computing has emerged as an excellent alternative for tackling problems involving large-scale databases and for reducing processing times. The use of parallel computing tools in Extreme Learning Machines and their variants, especially the Online Sequential Extreme Learning Machine (OS-ELM), has proven to be a good alternative for tackling regression and classification problems on large-scale databases. In this paper, we present a parallel training methodology in which several Online Sequential Extreme Learning Machines run on different cores of the Central Processing Unit, using a balanced fingerprint database of 2,000,000 samples distributed across five classes. The results show that training and validation times decrease as the number of processes increases, since the number of samples to train in each process decreases. In addition, once several Online Sequential Extreme Learning Machines have been trained, new samples can be classified on any of them.
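The training scheme the abstract describes can be sketched in a few lines of NumPy: the hidden-layer weights are drawn at random and never retrained, the initial output weights come from the Moore-Penrose generalized inverse, and OS-ELM then folds in new data chunks with a recursive least-squares update. This is a minimal illustrative sketch under assumed dimensions and variable names (it is not the paper's code, and each parallel process in the paper would run a loop like this on its own data partition):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_classes = 10, 50, 5  # hypothetical sizes

# Random hidden-layer parameters, fixed for the whole run (as in ELM)
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(X):
    """Hidden-layer output matrix H for inputs X."""
    return np.tanh(X @ W + b)

# --- Initial batch: solve output weights analytically ---
X0 = rng.standard_normal((200, n_features))
T0 = np.eye(n_classes)[rng.integers(0, n_classes, 200)]  # one-hot targets
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0)   # needs at least n_hidden initial samples
beta = P @ H0.T @ T0           # Moore-Penrose solution for output weights

# --- OS-ELM sequential step: absorb a new chunk without retraining ---
def os_elm_update(P, beta, Xc, Tc):
    H = hidden(Xc)
    K = np.linalg.inv(np.eye(len(Xc)) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (Tc - H @ beta)
    return P, beta

Xc = rng.standard_normal((50, n_features))
Tc = np.eye(n_classes)[rng.integers(0, n_classes, 50)]
P, beta = os_elm_update(P, beta, Xc, Tc)

# Any trained model can classify a new sample via argmax of the output layer
pred = np.argmax(hidden(Xc) @ beta, axis=1)
```

Because each process keeps its own `(P, beta)` pair, the chunks of a partitioned dataset can be consumed independently on separate CPU cores, which is what makes the methodology scale as the abstract reports.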
Source
Proceedings - International Conference of the Chilean Computer Science Society, SCCC, 2022, 1-4
DOI
doi.org/10.1109/SCCC57464.2022.10000361