Factorial design for selecting an emotion recognition model using artificial intelligence

Authors

  • Anny Astrid Espitia Cubillos, Universidad Militar Nueva Granada
  • Robinson Jiménez-Moreno, Programa de Ingeniería Mecatrónica, Universidad Militar Nueva Granada

DOI:

https://doi.org/10.53591/easi.V3i2.2617

Keywords:

Supervised Learning, Artificial Intelligence, Analysis of Variance, Factorial Design

Abstract

This article studies the effect of three supervised training hyperparameters (number of epochs, batch size, and learning rate) on the performance of an artificial intelligence model for recognizing five emotions from facial images. A full factorial design was used, with 60 factor-level combinations each run 12 times, yielding 720 experiments carried out on Google's Teachable Machine platform. The response variable is the percentage accuracy in identifying the emotions. The data were analyzed using a four-way analysis of variance and post hoc tests. The results show that raising the learning rate from 0.0001 to 0.001 increases accuracy by 46%, training for a larger number of epochs improves accuracy by 41%, and changing the batch size has the smallest improvement effect while increasing variability. In terms of emotions, neutral was the best identified and sadness the worst. Statistical analysis confirmed significant differences between factor levels and their interactions. It is concluded that properly selecting supervised training hyperparameters is critical to improving the performance of the artificial intelligence model.
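The experimental layout described above can be sketched as a full factorial grid. The article does not list the exact factor levels here, so the values below are illustrative assumptions chosen only to reproduce the reported counts: 60 combinations (3 × 2 × 2 × 5) and 720 runs (60 × 12 replicates).

```python
from itertools import product

# Hypothetical factor levels -- placeholders, not the study's actual settings.
epochs = [50, 100, 150]
batch_sizes = [16, 32]
learning_rates = [0.0001, 0.001]  # the two levels compared in the abstract
emotions = ["happiness", "sadness", "anger", "surprise", "neutral"]

# Full factorial design: every combination of every factor level.
design = list(product(epochs, batch_sizes, learning_rates, emotions))
replicates = 12

print(len(design))               # factor-level combinations
print(len(design) * replicates)  # total experimental runs
```

Each of the 720 runs would then record the accuracy percentage as the response variable, and the resulting table feeds the four-way ANOVA (epochs × batch size × learning rate × emotion).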

Author Biographies

  • Anny Astrid Espitia Cubillos, Military University Nueva Granada

    She received her undergraduate degree in Industrial Engineering from the Universidad Militar Nueva Granada in 2002 and an M.Sc. in Industrial Engineering from the Universidad de Los Andes in 2006. She is an Associate Professor in the Industrial Engineering Program at the Universidad Militar Nueva Granada, Bogotá, Colombia.

  • Robinson Jiménez-Moreno, Programa de Ingeniería Mecatrónica, Universidad Militar Nueva Granada

    He is an Electronic Engineer who graduated from the Universidad Distrital Francisco José de Caldas in 2002. He received an M.Sc. in Engineering from the Universidad Nacional de Colombia in 2012 and a Ph.D. in Engineering from the Universidad Distrital Francisco José de Caldas in 2018. He is currently an associate professor at the Universidad Militar Nueva Granada, and his research focuses on the use of convolutional neural networks for object recognition and image processing in robotic applications such as human-machine interaction.

References

Aliyev, I., Muradova, G., Aliyeva, S., Mustafazada, S., Smambayev, Z., & Shamoi, P. (2025). Public perception of feminism using sentiment and emotion analysis. IEEE 5th International Conference on Smart Information Systems and Technologies (SIST), 1–8. Astana, Kazakhstan. https://doi.org/10.1109/SIST61657.2025.11139321

Baek, C., Song, J. W., & Kong, K. (2025). Low-light face recognition for mobile robots. 2025 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 1–5. Seoul, Republic of Korea. https://doi.org/10.1109/ITC-CSCC66376.2025.11137701

Balakrishnan, S. G., Tamizh Selvan, S., Venkatesh, R., Vignesh, G. V., & Vishwa, P. (2025). Enhanced two step authentication system for ATM using multimodal facial recognition. 6th International Conference on Data Intelligence and Cognitive Informatics (ICDICI), 1–8. Tirunelveli, India. https://doi.org/10.1109/ICDICI66477.2025.11134970

Dong, X., Zhao, B., Mojaver, K. R., Liboiron-Ladouceur, O., & Meyer, B. H. (2025). Low-power face recognition using joint optical and electronic deep neural networks. IEEE Embedded Systems Letters. https://doi.org/10.1109/LES.2025.3604285

Dutta, S., & Ganapathy, S. (2024). Leveraging content and acoustic representations for speech emotion recognition. IEEE Transactions on Audio, Speech and Language Processing, 1–11. https://doi.org/10.1109/TASLPRO.2025.3603853

Harrath, Y., Bhutta, M., Adohinzin, O., & KC, N. (2025). Optimized face recognition using reinforcement learning and deep learning feature extraction. 11th International Conference on Big Data Computing Service and Machine Learning Applications (BigDataService), 218–225. Tucson, AZ, United States. https://doi.org/10.1109/BigDataService65758.2025.00040

Honcharenko, T., Dolhopolov, S., Sachenko, I., Achkasov, I., Fesan, A., & Paliy, S. (2025). Automated face recognition system using convolutional neural network. IEEE 5th International Conference on Smart Information Systems and Technologies (SIST), 1–4. Astana, Kazakhstan. https://doi.org/10.1109/SIST61657.2025.11139261

Igor, E., Toganas, N., & Shamoi, P. (2025). Emotion classification in digital art using color features and machine learning. IEEE 5th International Conference on Smart Information Systems and Technologies (SIST), 1–6. Astana, Kazakhstan. https://doi.org/10.1109/SIST61657.2025.11139293

Jinsha, K. S., & Bai, V. R. (2025). Emotion detection from distorted images using optimized CNN & GAN. 4th International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS), 908–914. Ernakulam, India. https://doi.org/10.1109/ACCESS65134.2025.11135594

Li, H., Xu, Y., Yao, J., Wang, N., Gao, X., & Han, B. (2025). Knowledge-enhanced facial expression recognition with emotional-to-neutral transformation. IEEE Transactions on Multimedia, 1–20. https://doi.org/10.1109/TMM.2025.360491

Li, N., Shen, X., Sun, L., Xiao, Z., Ding, T., Li, T., & Li, X. (2023). Chinese face dataset for face recognition in an uncontrolled classroom environment. IEEE Access, 11, 86963–86976. https://doi.org/10.1109/ACCESS.2023.3302919

Meng, S., Liu, P., Mei, X., & Jiang, J. (2024). Facial negative emotion recognition of special employees based on machine vision. 8th International Workshop on Control Engineering and Advanced Algorithms (IWCEAA), 135–138. Nanjing, China. https://doi.org/10.1109/IWCEAA63616.2024.10823939

Otarbay, Z., Kyzyrkanov, A., Tursynova, N., Turginbekov, A., Saltanat, A., & Amirov, A. (2025). Improving electroencephalography-based emotion recognition via transformer networks for subject-independent classification. IEEE 5th International Conference on Smart Information Systems and Technologies (SIST), 1–6. Astana, Kazakhstan. https://doi.org/10.1109/SIST61657.2025.11139359

Ritharson, P. I., Vidhya, K., Madhavan, G., Barath, D., & Sathish Kumar, K. (2023). GAN-based facial feature reconstruction for improved masked face recognition during COVID. 2023 International Conference on Circuit Power and Computing Technologies (ICCPCT), 1–5. Kollam, India. https://doi.org/10.1109/ICCPCT58313.2023.10245882

Septiani, D., Wahyono, & Adhinata, F. D. (2025). Face recognition under age progression using hybrid features of textures and geometric. 2025 International Workshop on Intelligent Systems (IWIS), 1–6. Ulsan, Republic of Korea. https://doi.org/10.1109/IWIS66215.2025.11142411

Yadav, U., Bondre, S., Thakre, B., & Likhar, K. (2024). Speech-to-text emotion detection system using SVM, CNN, and BERT. 2024 IEEE International Conference on Smart Power Control and Renewable Energy (ICSPCRE), 1–5. Rourkela, India. https://doi.org/10.1109/ICSPCRE62303.2024.10675267

Published

2026-01-05

Issue

Section

Research articles

How to Cite

Espitia Cubillos, A. A., & Jiménez-Moreno, R. (2026). Factorial design for selecting an emotion recognition model using artificial intelligence. EASI: Engineering and Applied Sciences in Industry, 4(3), 16–27. https://doi.org/10.53591/easi.V3i2.2617