Application of transformer nets for discrimination of liquid cleaning products
DOI: https://doi.org/10.53591/easi.v4i2.2600
Keywords: Artificial intelligence, Discrimination of products, Labeling, Transformer neural networks
Abstract
This paper presents an artificial intelligence algorithm based on transformer neural networks that discriminates, from camera images, liquid products identified by different labels and presented in various colors, so that a computer can manage their subsequent handling and, in manufacturing environments, connect the physical and digital worlds. The process begins with the digitization of the products to build a database. Next, the network training parameters are defined, and the trained network is evaluated by measuring learning time, accuracy, and classification time, all within a virtual environment. The results show that even with a small dataset, including label images that are incomplete or of low quality, processing times do not exceed 0.5 seconds, and a recognition rate of 100% is achieved, with no confusion between the considered categories, owing to the robustness of the selected transformer network.
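For context, the pipeline the abstract describes (digitize labeled product images into a database, train a transformer classifier, then measure accuracy and classification time) can be sketched as follows. This is a minimal hypothetical example using PyTorch and torchvision's pretrained ViT-B/16, not the authors' implementation; the dataset path, category count, and training schedule are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): fine-tune a Vision Transformer
# to discriminate product categories from camera images, then time a
# single classification. Assumes PyTorch + torchvision; the folder layout
# "data/train/<category>/*.jpg" and NUM_CLASSES are hypothetical.
import time
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_CLASSES = 4  # assumed number of product categories; the paper's count may differ

weights = ViT_B_16_Weights.DEFAULT
preprocess = weights.transforms()  # resize/normalize as the backbone expects

# Hypothetical database of digitized product images organized by label
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Replace the 1000-class ImageNet head with one sized for our categories
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # training schedule is illustrative only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Time one classification, mirroring the paper's sub-0.5 s criterion
model.eval()
sample, _ = train_set[0]
start = time.perf_counter()
with torch.no_grad():
    pred = model(sample.unsqueeze(0)).argmax(dim=1)
print(f"class={pred.item()}  time={time.perf_counter() - start:.3f}s")
```

Whether inference stays under the 0.5-second threshold reported in the abstract depends on hardware and image resolution, so the timing line is a measurement harness, not a guarantee.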
License
Copyright (c) 2025 Anny Astrid Espitia Cubillos, Robinson Jiménez-Moreno, Esperanza Rodríguez-Carmona

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Contributions published in the EASI journal are released under the open access license CC BY-NC-ND 4.0 (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0). This license empowers authors and ensures wide dissemination of their research while protecting their rights.
For authors:
- Authors retain copyright without restriction under the CC BY-NC-ND 4.0 license.
- The journal obtains the right of first publication of the original manuscript.
For readers/users:
- Free access and distribution: Anyone may access, download, copy, print, and share the published article freely under the terms of the CC BY-NC-ND 4.0 license.
- Attribution required: Any third party that uses the published material must credit the creator by providing the author's name, article title, and journal name, preserving the intellectual property of the author(s) and helping to build their scholarly reputation.
- Non-commercial use: Only noncommercial use of the published work is permitted. Noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation by any third party.
- No modifications allowed: The published article may not be changed, remixed, or built upon. This ensures the integrity and accuracy of the research findings.