Factorial design for selecting an emotion recognition model using artificial intelligence
DOI: https://doi.org/10.53591/easi.V3i2.2617
Keywords: Supervised Learning, Artificial Intelligence, Analysis of Variance, Factorial Design
Abstract
This article studies the effect of three supervised-training hyperparameters (number of epochs, batch size, and learning rate) on the performance of an artificial intelligence model that recognizes five emotions from facial images. A full factorial design was used: 60 factor-level combinations, each run 12 times, for a total of 720 experiments developed on Google's Teachable Machine platform. The dependent variable is the percentage of accuracy in identifying emotions. The data were analyzed with a four-way analysis of variance and post hoc tests. The results show that raising the learning rate from 0.0001 to 0.001 increases accuracy by 46%, training for a larger number of epochs improves accuracy by 41%, and changing the batch size has the smallest improvement effect while increasing variability. Among the emotions, neutral was identified best and sadness worst. The statistical analysis confirmed significant differences between factor levels and their interactions. It is concluded that properly selecting supervised-training hyperparameters is critical to improving the performance of the artificial intelligence model.
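The experimental plan described above can be sketched as a full factorial grid. The abstract only states that there were 60 combinations and 12 replicates, so the factor levels below (epoch counts, batch sizes) are hypothetical values chosen to produce 60 combinations; only the two learning rates and the five-emotion outcome are taken from the abstract.

```python
from itertools import product

# Hypothetical factor levels: 2 x 3 x 2 x 5 = 60 combinations, matching the
# 60 x 12 = 720 runs reported in the article. The actual level splits for
# epochs and batch size are not given in the abstract.
learning_rates = [0.0001, 0.001]                               # from the abstract
epochs = [50, 100, 150]                                        # assumed levels
batch_sizes = [16, 32]                                         # assumed levels
emotions = ["happiness", "sadness", "anger", "surprise", "neutral"]  # 5 emotions

# Every combination of factor levels (the full factorial design).
combinations = list(product(learning_rates, epochs, batch_sizes, emotions))

# Each combination is replicated 12 times, giving the 720 experimental runs.
REPLICATES = 12
runs = [combo for combo in combinations for _ in range(REPLICATES)]

print(len(combinations))  # 60
print(len(runs))          # 720
```

In a replicated design like this, each of the 720 runs would record the accuracy percentage, and the resulting table feeds the four-way ANOVA on the factors (learning rate, epochs, batch size, emotion).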
License
Copyright (c) 2026 Anny Astrid Espitia Cubillos, Robinson Jiménez-Moreno

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Contributions published in the EASI journal follow the open access Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. This license empowers authors and ensures wide dissemination of their research while still protecting their rights.
For authors:
- Authors retain copyright without restrictions under the CC BY-NC-ND 4.0 license.
- The journal obtains a license to publish the first version of the original manuscript.
For readers/users:
- Free access and distribution: Anyone can access, download, copy, print, and share the published article freely under the terms of the CC BY-NC-ND 4.0 license.
- Attribution required: Any third party that uses the published material must credit the creator by providing the author's name, the article title, and the journal name, ensuring the intellectual property of the author(s) and helping to build their scholarly reputation.
- Non-commercial use: Only noncommercial use of the published work is permitted. Noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation by any third party.
- No modifications allowed: The content of the published article cannot be changed or remixed, and derivative works may not be built upon it. This ensures the integrity and accuracy of the research findings.







