Development of a Jetson Nano drone for monitoring and detection of forest fires

Ortiz-Mosquera Neiser, Delgado-Vera Meillyn, Espinoza-Álvarez Rolando, Trujillo-Borja Ximena

Faculty of Industrial Engineering, Universidad de Guayaquil, Guayaquil, Ecuador.

Corresponding author: neiser.ortizm@ug.edu.ec


Vol. 04, Issue 02 (2025): October
ISSN-e 2953-6634
ISSN Print: 3073-1526
Submitted: October 15, 2025
Revised: October 18, 2025
Accepted: October 18, 2025
Ortiz-Mosquera, N. et al. (2025). Development of a Jetson Nano drone for monitoring and detection of forest fires. EASI: Engineering and Applied Sciences in Industry, 4(2), 41-47. https://doi.org/10.53591/easi.v4i2.2674


Abstract

This project presents the development of an artificial intelligence-based prototype for the monitoring and detection of forest fires. The prototype consists of a drone equipped with a high-resolution camera connected to a Jetson Nano. The system was programmed in Python to develop Convolutional Neural Network models for identifying fire indicators, such as smoke and flames, in the images captured by the drone. Unlike traditional methods that rely on fixed or satellite sensors or cameras, this mobile approach enables flexible surveillance of large forest areas, even in hard-to-reach terrain. During testing, an accuracy rate of 95% to 99% was achieved, demonstrating the system's capability to process large volumes of visual data in real time. The maximum video transmission range is approximately 3.7 km without interference. Smoke and fire detections were transmitted to the ThingSpeak platform, allowing visualization and analysis to facilitate fast and effective decision-making. It is concluded that the prototype enhances response capabilities by enabling early fire detection and minimizing environmental and economic damage.

Keywords: Forest Fires, Drones, Jetson Nano, Convolutional Neural Networks.

1. INTRODUCTION

In Ecuador, forest fires have escalated into a significant threat, showing a concerning increase in both frequency and intensity in recent years (Vaca et al., 2024). These fires have caused severe damage to ecosystems, wildlife and human communities. Factors such as climate change, deforestation and irresponsible human behavior have exacerbated this issue. Despite efforts to control these fires, traditional detection methods, based on fixed surveillance, have proven inadequate for large areas and adverse conditions. Early detection and rapid response are crucial for mitigating damage and protecting natural resources. However, advances in technology and artificial intelligence present an opportunity to revolutionize these processes.

This study proposes the development of a prototype drone integrating NVIDIA's Jetson Nano, designed for the automatic detection of fire and smoke. The detection process will focus on real-time analysis using artificial intelligence (AI) to identify fire indicators. The system will send alerts via the ThingSpeak platform, providing aerial surveillance. The computational capability of the Jetson Nano will enable the execution of complex AI models, optimizing detection processes. This research aims to enhance the early detection of wildfires through the integration of drones and artificial intelligence. It will focus on the design, implementation and evaluation of an innovative system, addressing challenges such as adaptation to diverse ecosystems, transmission issues in remote areas and implementation costs. The study seeks to provide an advanced solution for fire management, protecting natural resources and reducing the impact of such events.

Studies (Bouguettaya et al., 2022; Choudhary et al., 2025) examine the use of drones equipped with cameras and integrated programmable hardware for accurate, real-time detection of forest fires. Deep learning techniques are employed to identify fire and smoke patterns with high accuracy, demonstrating the feasibility of autonomous detection in complex natural environments.

Studies (Mahdi & Mahmood, 2022; Yuan et al., 2017) utilize integrated Graphics Processing Unit (GPU) systems, such as Jetson Nano, for local processing of images captured by drones, as these devices are efficient in terms of low latency and responsiveness in fire detection.

2. METHODOLOGY

This study employed several scientific methods, including bibliographic, descriptive, experimental and quantitative approaches. The bibliographic method was used to provide a theoretical foundation for understanding the issue of forest fires in Ecuador and to select the technological components for the development of the prototype. The descriptive method was then applied to characterize the factors and variables associated with forest fires. Finally, the experimental and quantitative methods were utilized to validate the effectiveness of the prototype. Figure 1 illustrates the scientific methods applied throughout the development of this project.

Figure 1. Scientific methods employed.

2.1. Materials used

The Jetson Nano board was used as it is a GPU platform commonly employed for artificial intelligence, specifically for processing image data. A 5-inch FPV freestyle drone was employed for aerial mobility and real-time monitoring. Additionally, a Foxeer camera was used for its high resolution, and a Radiolink CrossRace flight controller was implemented to ensure stability and precise steering of the drone. To further enhance stability, a Flycolor Raptor 5 60 A 4-in-1 ESC was used. Finally, the ThingSpeak platform was utilized to display graphs of fire and smoke detections based on the drone's real-time data transmission. Table 1 provides a breakdown of the prototype components and their corresponding technical specifications.

Table 1. Prototype components and technical specifications.

   
Technical Resources           | Specifications
----------------------------- | --------------
Jetson Nano kit               | Microcomputing unit featuring a 128-core Maxwell GPU and 4 GB of RAM
Radiolink CrossRace           | Compact, high-performance flight controller engineered for racing drones and First-Person View (FPV) applications
Flycolor Raptor ESC           | 4-in-1 ESC, 60 A, BLHeli_32; high efficiency and fast response
RadioMaster RP2 Nano receiver | 2.4 GHz, ExpressLRS; capable of long-range communication with low latency
OVONIC LiPo battery           | 1300 mAh, 4S; supports rapid charging and features a lightweight design
Propellers                    | 7 inch
Foxeer mini FPV camera        | Miniature low-latency camera with selectable 480p, 720p and 1080p resolution
Foxeer antenna                | Omnidirectional 5.8 GHz antenna featuring high gain and low signal loss
FPV motor                     | High-speed, lightweight brushless motor with rapid response, designed for FPV racing drones
RadioMaster transmitter       | High-precision multiprotocol remote control unit featuring an integrated LCD screen and operating on EdgeTX firmware
FPV VTX                       | 800 mW, 5.8 GHz; operational range of up to 3.7 km

2.2. Prototype development

The prototype is a system designed to enhance forest fire alert mechanisms by using neural networks for real-time detection of fires through the identification of fire and smoke. This helps improve the efficiency of alert systems and optimize resource management.

The prototype employs a drone equipped with a camera that simultaneously detects fire and smoke, while a computer vision algorithm processes this data. The detected results are displayed on an open-source web page for further analysis. Python (Pertuz, 2022) was selected for model development, as it supports a wide range of machine learning tasks, including object detection, image recognition and segmentation. Additionally, NVIDIA's Jetpack was used, providing a Linux-based operating system along with a set of libraries and Software Development Kits (SDKs). The model was trained with TensorFlow/PyTorch using a dataset of 2,468 images, optimized for Jetson Nano with TensorRT.
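The trained model's per-frame output is ultimately reduced to binary fire/smoke flags. A minimal sketch of this post-processing step, assuming hypothetical class scores returned by the CNN and the 30% confidence threshold used during testing:

```python
def to_flags(scores, threshold=0.30):
    """Map per-class confidence scores (0..1) from the CNN to
    binary detection flags (1 = detected, 0 = not detected)."""
    return {cls: int(conf >= threshold) for cls, conf in scores.items()}

# Hypothetical frame result: the model is 82% confident it sees smoke.
flags = to_flags({"fire": 0.12, "smoke": 0.82})
print(flags)  # {'fire': 0, 'smoke': 1}
```

These binary flags are the values later published to ThingSpeak for visualization.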

2.3. General scheme

The main architecture of the prototype for monitoring forest fires using artificial intelligence is primarily based on the Jetson Nano system. This system utilizes Python to develop Convolutional Neural Network models for identifying fire indicators, such as smoke and flames, from the images captured by the drone. The processing results are transmitted via Wi-Fi, and once processed, the system sends this data to a cloud platform, such as ThingSpeak, for visualization of the detections. Figure 2 presents the general scheme of the prototype.

Figure 2. General scheme of the prototype.

Figure 3 illustrates the flight controller connections, which are essential for the drone's operation. The controller is connected to an ESC (Electronic Speed Controller), which in turn is linked to the four motors and a 6S, 1300 mAh LiPo battery. These connections ensure proper functionality, activating the motors and enabling the vehicle to take off.

Figure 3. Flight controller connections.

3. RESULTS

Several tests were conducted on the prototype, including:

  1. Video Transmission
  2. Monitoring drone battery life
  3. Fire and Smoke Detection Accuracy
  4. ThingSpeak Integration

3.1. Video Transmission

The parameters shown indicate the technical specifications of the drone's video transmitter performance, including range and signal quality. Higher transmission power generally enables a longer range and improved signal quality. As the drone moves farther away, the video signal quality may degrade, affecting the operator's visibility and ability to control the drone effectively.

Table 2. VTX Transmitter Results.

   
Parameter      | Value
-------------- | -----
Emission Power | 800 mW
Maximum Range  | 3.7 km (interference-free)
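As a rough illustration not taken from the paper, the attenuation a 5.8 GHz video signal suffers over the 3.7 km maximum range can be estimated with the standard free-space path loss formula:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

loss = fspl_db(3700, 5.8e9)
print(round(loss, 1))  # ≈ 119.1 dB of free-space attenuation at maximum range
```

Real-world range is further reduced by obstacles and interference, which is why the 3.7 km figure is specified as interference-free.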

3.2. Monitoring drone battery life

The following chart details the drone's battery life under different flight conditions. Smooth flight refers to an operational style in which the drone moves at a moderate speed and performs gentle maneuvers. This type of flight requires less power compared to accelerated flight.

Table 3. Battery life.

   
Flight Style | Battery life (1300 mAh)
------------ | -----------------------
Smooth       | 10 to 12 minutes
Accelerated  | 5 to 8 minutes
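These endurance figures can be cross-checked with a simple estimate: usable battery capacity divided by average current draw. The current values below are assumptions chosen for illustration; the paper does not report them:

```python
def flight_minutes(capacity_mah, avg_current_a, usable_fraction=0.8):
    """Estimated flight time: usable capacity (Ah) / average draw (A), in minutes."""
    return (capacity_mah / 1000) * usable_fraction / avg_current_a * 60

# Assumed average draw: ~6 A for smooth flight, ~11 A for accelerated flight.
print(round(flight_minutes(1300, 6)))   # ≈ 10 minutes
print(round(flight_minutes(1300, 11)))  # ≈ 6 minutes
```

The estimates fall within the measured 10-12 and 5-8 minute ranges, consistent with accelerated flight roughly doubling the average current draw.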

3.3. Fire and Smoke Detection Accuracy

The following table provides a detailed overview of the accuracy and efficiency of the tests conducted for fire and smoke detection based on an image dataset. When comparing the trained models, the second model demonstrates better results in terms of accuracy and detection than the first, at the same 30% confidence level. The only variation was the increase in dataset size, suggesting that expanding the amount of data can enhance model performance. It is important to note that the datasets used for this comparison were different for each model, ensuring that no image was repeated.

Table 4. Model comparison.

   
Model | Images | TP  | FP | FN  | Confidence level | Accuracy | Identified
----- | ------ | --- | -- | --- | ---------------- | -------- | ----------
1     | 2468   | 210 | 10 | 198 | 30%              | 95%      | 51%
2     | 8172   | 314 | 2  | 102 | 30%              | 99%      | 75%
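The Accuracy and Identified columns are consistent with precision, TP/(TP+FP), and recall, TP/(TP+FN), computed from the counts in the table; a quick check:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

p1, r1 = precision_recall(210, 10, 198)  # Model 1
p2, r2 = precision_recall(314, 2, 102)   # Model 2
print(f"Model 1: {p1:.0%} precision, {r1:.0%} recall")  # 95%, 51%
print(f"Model 2: {p2:.0%} precision, {r2:.0%} recall")  # 99%, 75%
```

Under this reading, Model 2 improves both the reliability of its alarms (fewer false positives) and the fraction of actual fire instances it catches (fewer false negatives).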

3.4. ThingSpeak Integration

The fire and smoke detection values are binary, where 1 indicates a detection and 0 indicates the absence of a detection, as shown in Figure 4.

Figure 4. ThingSpeak Fire and Smoke Detection Images.

The average time required for the system to detect and send an alert regarding the presence of fire or smoke through the ThingSpeak platform is approximately 1 to 20 seconds.
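ThingSpeak ingests such values through its HTTP update endpoint. A minimal sketch of how a detection could be published; the write API key is a placeholder, and the mapping of fire and smoke to fields 1 and 2 is an assumption:

```python
from urllib.parse import urlencode

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_API_KEY"  # placeholder, channel-specific

def build_update_url(fire, smoke):
    """Encode the binary fire/smoke flags as ThingSpeak fields 1 and 2."""
    query = urlencode({"api_key": WRITE_API_KEY, "field1": fire, "field2": smoke})
    return f"{THINGSPEAK_URL}?{query}"

url = build_update_url(fire=1, smoke=0)
print(url)
# urllib.request.urlopen(url) would publish the update to the channel.
```

Note that free ThingSpeak channels rate-limit updates, which contributes to the latency between detection and visualization.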

4. DISCUSSION

Studies (Kaufmann, 2010; Vaca et al., 2024) investigate the deployment of drones equipped with cameras and programmable electronics for the autonomous real-time detection of wildfires. These studies leverage deep learning techniques to accurately identify fire and smoke patterns, thereby demonstrating the viability of such systems in complex natural environments. However, it is important to note that these approaches rely on high-performance hardware platforms such as the Jetson TX2 and Xavier, which provide significant advantages in terms of low latency, high throughput, and advanced parallel processing capabilities. These hardware choices support the execution of complex neural network models in real-time with minimal computational constraints. In contrast, the present study proposes the implementation of a wildfire detection system based on the Jetson Nano platform—a more cost-effective and accessible alternative. Nevertheless, the Jetson Nano presents inherent limitations in terms of processing power, particularly when faced with simultaneous real-time tasks such as object detection, image transmission, and system control. This disparity emphasizes the trade-off between affordability and computational capacity. It also poses critical challenges in the optimization and deployment of deep learning models on resource-constrained embedded systems. Addressing these challenges requires careful model selection, algorithmic optimization, and potential compromises in accuracy or speed to ensure functional viability within limited hardware environments.

Furthermore, the adoption of convolutional neural networks (CNNs) for fire and smoke detection represents a robust and well-established strategy, as demonstrated by previous studies such as (Ghali et al., 2022). In this study, the MobileNet architecture was selected for its computational efficiency and lightweight design, which make it particularly suitable for embedded platforms with limited processing capabilities, such as the Jetson Nano. To optimize performance, the model was trained using transfer learning, a method that leverages pre-trained feature representations (such as edge, texture, and shape detection) from large, general-purpose datasets. This method significantly reduces training time and computational cost while maintaining high levels of accuracy. In addition, data augmentation techniques such as image rotations, horizontal flips, scaling, and brightness adjustments were applied during training. These transformations increase the diversity of the datasets, improve the model's ability to generalize to unseen data, and effectively mitigate the risk of overfitting, a particularly important consideration when working with limited or unbalanced datasets.
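The augmentation transforms described above can be sketched framework-independently with NumPy; the array used here is a placeholder image, not data from the study:

```python
import numpy as np

def augment(image):
    """Yield simple augmented variants of an HxWxC uint8 image:
    horizontal flip, 90-degree rotation, and a brightness shift."""
    yield image[:, ::-1]                  # horizontal flip
    yield np.rot90(image)                 # 90-degree rotation
    bright = image.astype(np.int16) + 40  # brightness shift, clipped to 0..255
    yield np.clip(bright, 0, 255).astype(np.uint8)

img = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder black image
variants = list(augment(img))
print(len(variants), variants[2].max())  # 3 40
```

In practice, training frameworks apply such transforms on the fly each epoch, so the model never sees exactly the same image twice.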

The study (Abdusalomov et al., 2025) introduces real-time predictive capabilities, allowing the drone to autonomously adjust its flight path by integrating artificial intelligence-based adaptive path planning algorithms, such as A*, RRT or Deep Reinforcement Learning, as further proposed in (Rashida Farsath et al., 2024), to improve the autonomy of the drone. On the contrary, the present project still relies on manual or semi-automated operation.

The use of ThingSpeak for sensing data transmission stands out as a simple and functional solution; however, it has limitations in terms of scalability, security and latency. Recent studies such as (Mahdi & Mahmood, 2022; Yuan et al., 2017) have already adopted edge computing platforms combined with optimized communication protocols such as MQTT, or cloud services such as AWS IoT Core, which offer greater scalability and efficiency for real-time applications.

It is also important to note that drone flight autonomy is insufficient for extended missions. In smooth flight conditions, the autonomy ranges from 10 to 12 minutes, while in accelerated flight it is reduced to approximately 5 to 8 minutes. This limitation is recognized in the present study and has been similarly noted and addressed in previous research, such as (Abdusalomov et al., 2025).

5. CONCLUSIONS

During the research, Convolutional Neural Networks were found to be highly effective for detecting fire signals. Models trained with appropriate datasets achieved high accuracy rates ranging from approximately 95% to 99%, demonstrating their ability to process large volumes of visual data in real time.

A key point is that converting signals from RCA to HDMI and then to USB introduces additional latency in the process of transmitting images from the drone to the Jetson Nano. Although the Jetson Nano is capable of efficiently processing images, the latency introduced by this signal conversion can affect the speed at which image data is processed and detections are made, which can be critical in emergency situations.

During the research, two significant limitations were identified in the use of drones for wildfire monitoring. First, the drone's range can be limited by the capacity of the control signal and data transmission, which restricts the coverage of the monitored area, an issue that is critical in large-scale fires requiring extensive monitoring. Second, battery life limits the drone's flight time, restricting its ability to conduct extended patrols or cover large areas without recharging or replacing the battery, thereby preventing continuous monitoring.

During the development of the prototype, several challenges and difficulties were encountered. A variability in the model’s accuracy was observed, primarily due to the quality and diversity of the images in the dataset. To mitigate this issue, the dataset was expanded with more representative images, and the model’s parameters were adjusted to enhance generalization.

References

  Abdusalomov, A., Umirzakova, S., Tashev, K., Egamberdiev, N., Belalova, G., Meliboev, A., Atadjanov, I., Temirov, Z., & Cho, Y. I. (2025). AI-Driven UAV Surveillance for Agricultural Fire Safety. Fire, 8(4), 142. https://doi.org/10.3390/fire8040142

  Bouguettaya, A., Zarzour, H., Taberkit, A. M., & Kechida, A. (2022). A review on early wildfire detection from unmanned aerial vehicles using deep learning-based computer vision algorithms. Signal Processing, 190, 108309. https://doi.org/10.1016/j.sigpro.2021.108309

  Choudhary, R., Sharma, P., Kumar, A., Thakur, T., & Singh, A. (2025). AI-Driven Forest Fire Prediction and Monitoring System: Enhancing Early Detection and Response. 2025 7th International Conference on Energy, Power and Environment (ICEPE), 1–6. https://doi.org/10.1109/ICEPE65965.2025.11139737

  Ghali, R., Akhloufi, M. A., & Mseddi, W. S. (2022). Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and Segmentation. Sensors, 22(5), 1977. https://doi.org/10.3390/s22051977

  Kaufmann, J. J. (2010). The Practice of Dialogue in Critical Pedagogy. Adult Education Quarterly, 60(5), 456–476. https://doi.org/10.1177/0741713610363021

  Mahdi, A. S., & Mahmood, S. A. (2022). An Edge Computing Environment for Early Wildfire Detection. Annals of Emerging Technologies in Computing, 6(3), 56–68. https://doi.org/10.33166/AETiC.2022.03.005

  Pertuz, C. M. P. (2022). Aprendizaje automático y profundo en Python. Ra-Ma Editorial.

  Rashida Farsath, K., Jitha, K., Mohammed Marwan, V. K., Muhammed Ali Jouhar, A., Muhammed Farseen, K. P., & Musrifa, K. A. (2024). AI-Enhanced Unmanned Aerial Vehicles for Search and Rescue Operations. 2024 5th International Conference on Innovative Trends in Information Technology (ICITIIT), 1–10. https://doi.org/10.1109/ICITIIT61487.2024.10580372

  Vaca, C., Calahorrano, J., & Manzano, M. (2024). Spatial and Temporal Analysis of Wildfires in Ecuador Using Remote Sensing Data. Colombia Forestal, 27(1). https://doi.org/10.14483/2256201X.20111

  Yuan, C., Liu, Z., & Zhang, Y. (2017). Fire detection using infrared images for UAV-based forest fire surveillance. 2017 International Conference on Unmanned Aircraft Systems (ICUAS), 567–572. https://doi.org/10.1109/ICUAS.2017.7991306

  Zhao, Y., Ma, J., Li, X., & Zhang, J. (2018). Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery. Sensors, 18(3), 712. https://doi.org/10.3390/s18030712