
Companion Computers in Drones

· 41 min read
Ben Li
Software engineer
ChatGPT
AI Assistant

Introduction

Drones across various industries are increasingly equipped with onboard computers (also called companion computers) that serve as the “brain” alongside the flight controller. These onboard systems process sensor data (especially camera feeds) in real time to enable high-level functions such as autonomous navigation, obstacle avoidance, and AI-based analysis (Top 5 Companion Computers for UAVs | ModalAI, Inc.). Traditionally, a drone’s flight controller handled basic stabilization and GPS waypoint following, but modern use cases demand greater autonomy and on-site intelligence. Advances in compact, powerful processors (GPUs, NPUs, etc.) now allow drones to perform complex tasks like object detection, tracking, and mapping locally at the edge, which was not feasible just a few years ago (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs). This shift to onboard edge computing yields low-latency decisions and reduces reliance on constant communication links – an important benefit, since drones often operate beyond reliable network coverage (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs) (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs).

Onboard Hardware Choices: The choice of onboard computer depends on the required complexity and performance. NVIDIA’s Jetson family (e.g. Nano, TX2, Xavier, Orin) is popular for drone AI workloads, featuring CPU/GPU architectures that can run neural networks for vision tasks in real time (Real-Time Weed Control Application Using a Jetson Nano Edge Device and a Spray Mechanism). These deliver high TOPS (trillions of operations per second) for deep learning, enabling on-device inference for applications like image classification, object detection (e.g. using YOLO models), and SLAM. In contrast, hobbyist boards like the Raspberry Pi or Radxa single-board computers offer a low-cost, lightweight platform suitable for less intensive tasks or early prototyping (Top 5 Companion Computers for UAVs | ModalAI, Inc.). However, such boards lack dedicated neural accelerators – heavy computer vision models run slowly on them, incurring significant latency when running deep learning frameworks (Top 5 Companion Computers for UAVs | ModalAI, Inc.). To bridge this gap, developers sometimes augment lower-power boards with USB AI accelerators (e.g. Intel Neural Compute Stick or Google Coral) to offload neural network inference. In practice, real-world drone deployments span this spectrum: from simple microcontrollers paired with a Raspberry Pi for basic image capture, up to powerful AI mission computers like Jetson Xavier or Qualcomm Snapdragon Flight in fully autonomous drones. The following sections provide a comprehensive overview of how onboard computers are used in major industries – agriculture, security, delivery, mapping/inspection, and other industrial applications – detailing the typical hardware, vision vs. non-vision workloads, and why on-board computing is essential in each context.
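To make the companion-computer pattern concrete, here is a minimal sketch (assuming Python with pymavlink, and a flight controller wired to a serial port such as /dev/ttyTHS1 – both details vary by airframe) of how an onboard computer typically subscribes to autopilot telemetry that it then fuses with its vision stack:

```python
# Minimal sketch: a companion computer opening a MAVLink link to the flight
# controller over a serial port. The port and baud rate are illustrative.
from pymavlink import mavutil

# Connect to the autopilot's telemetry port.
link = mavutil.mavlink_connection("/dev/ttyTHS1", baud=921600)

# Block until the autopilot announces itself.
link.wait_heartbeat()
print(f"Heartbeat from system {link.target_system}, component {link.target_component}")

# Stream position messages and hand them to higher-level logic.
while True:
    msg = link.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    lat, lon = msg.lat / 1e7, msg.lon / 1e7   # MAVLink sends degrees * 1e7
    alt_m = msg.relative_alt / 1000.0         # millimetres above home
    # ... feed (lat, lon, alt_m) into the vision/planning stack here ...
```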

Agriculture (Precision Farming)

Agriculture was an early adopter of drone technology for monitoring crops and automating fieldwork. Today, many precision farming drones carry onboard computers to analyze sensor data and make split-second decisions in the field. In crop scouting and surveying, a drone might capture multispectral or RGB images of fields and use onboard processing to stitch images or calculate vegetation indices on the fly. More advanced systems perform real-time computer vision: for example, identifying crop stress, detecting diseased plants, or locating weeds among crops during the flight. UAVs are now widely used in precision agriculture for crop monitoring and targeted spraying, which improves farming efficiency and reduces environmental impact (Frontiers | Ag-YOLO: A Real-Time Low-Cost Detector for Precise Spraying With Case Study of Palms). An onboard computer can directly interpret camera feeds to differentiate healthy crops from weeds or pests, enabling immediate action such as precision spraying of agrochemicals only where needed. This is a leap from the traditional method of capturing images for later analysis – instead, the “see-and-spray” drone can act during the same flight.

On the hardware side, agricultural drones often leverage lightweight AI computers. NVIDIA Jetson modules (Nano, TX2, etc.) have been used in research prototypes to run neural networks that segment weeds vs. crops in real time, guiding an attached spray mechanism (Real-Time Weed Control Application Using a Jetson Nano Edge Device and a Spray Mechanism). For instance, one study deployed a Jetson Nano onboard a drone to perform semantic segmentation of weeds at ~25 FPS, enabling a UAV to spray herbicide precisely on detected weeds (Real-Time Weed Control Application Using a Jetson Nano Edge Device and a Spray Mechanism). Another low-cost approach integrated an Intel Neural Compute Stick 2 (Myriad X VPU) with a small single-board computer to run a custom “Ag-YOLO” object detection model for palm tree disease, achieving 36 FPS detection with only a 1.5 W, 18-gram device (Frontiers | Ag-YOLO: A Real-Time Low-Cost Detector for Precise Spraying With Case Study of Palms). These examples highlight that both high-end GPUs and specialized accelerators are employed to meet the performance needs of vision-based farming tasks.
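The trained models from these studies aren’t publicly reproduced here, so the following sketch substitutes a classical excess-green (ExG) vegetation index for the deep segmentation network and simply prints the spray decision; the GPIO pin, thresholds, and camera index are illustrative assumptions:

```python
# Minimal sketch of a see-and-spray loop, using an excess-green (ExG)
# threshold as a lightweight stand-in for the deep segmentation models cited
# above. Pin number and Jetson.GPIO wiring are illustrative assumptions.
import cv2
import numpy as np

SPRAY_PIN = 12              # hypothetical GPIO pin driving the spray valve
WEED_AREA_THRESHOLD = 0.05  # spray if >5% of the frame looks like vegetation

def vegetation_mask(frame_bgr):
    """Excess-green index: ExG = 2G - R - B, thresholded to a binary mask."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    exg = 2 * g - r - b
    return (exg > 40).astype(np.uint8)  # threshold tuned per camera/lighting

cap = cv2.VideoCapture(0)  # downward-facing camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = vegetation_mask(frame)
    coverage = mask.mean()  # fraction of pixels flagged as vegetation
    spray = coverage > WEED_AREA_THRESHOLD
    # On a real Jetson one would toggle the valve here, e.g. with Jetson.GPIO:
    # GPIO.output(SPRAY_PIN, GPIO.HIGH if spray else GPIO.LOW)
    print(f"coverage={coverage:.2%} spray={'ON' if spray else 'OFF'}")
```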

Not all agricultural use cases are vision-based, however. Some drones carry other sensors (such as thermal cameras, LiDAR altimeters, or hyperspectral sensors) and use onboard computing to interpret this data. For example, a crop-spraying drone might use a LiDAR or ultrasonic sensor to maintain ultra-low flight altitude over uneven fields; the onboard computer reads this sensor input in real time to adjust the drone’s height, ensuring even coverage. Another non-vision example is soil and microclimate sensing: a drone might measure temperature, humidity, or soil moisture via IoT sensors and map these readings to GPS coordinates, requiring onboard logic to synchronize sensor data with location. In all these cases, the onboard computer is essential because farmland environments often lack reliable internet connectivity – the drone must make decisions on-site (where to spray, which areas need attention) without offloading data. This on-the-fly intelligence boosts efficiency (e.g. less chemical usage by targeting only weed-infested spots) and enables greater autonomy in agricultural operations. Below are some key use cases in agriculture leveraging onboard computation:

  • Real-Time Weed Detection & Spraying: Drones equipped with AI vision can distinguish weeds from crops in real time and actuate spot-spraying. For example, a Jetson-based drone uses a trained deep learning model (YOLO/segmentation) to identify weeds among crops and trigger a targeted herbicide spray, significantly reducing chemical use (Real-Time Weed Control Application Using a Jetson Nano Edge Device and a Spray Mechanism). This requires considerable processing power on the drone to run inference with low latency as the UAV moves.

  • Crop Health Monitoring: Onboard computers process multispectral or RGB images to assess crop health indicators (NDVI, pigment indices) during flight. The drone can immediately flag stressed crop regions (due to drought or disease) and even alter its route to inspect problem spots more closely. These calculations can run on modest hardware (a Raspberry Pi or Radxa board) since they involve simple arithmetic (see the NDVI sketch after this list), though more advanced analysis (like identifying specific diseases from images) would require an AI module.

  • Autonomous Field Navigation: Farmland drones often fly beyond visual line of sight, so onboard processors handle path planning and obstacle avoidance. For instance, navigating around trees, power lines, or terrain is handled by computer vision (stereo cameras) or LiDAR sensors feeding into the onboard computer, which in turn instructs the flight controller to adjust course. This autonomy ensures safe operation at low altitudes over crops.

  • Variable-Rate Application: By merging sensor data and GPS maps, an onboard computer can control the variable release of seeds, water, or fertilizer. As the drone surveys a field, it might decide to increase fertilizer drop on an area of poor crop growth and decrease it elsewhere, all based on real-time processed data. While not strictly “vision,” this use of onboard analytics leads to site-specific farming that maximizes yield.
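As a worked example of the crop-health math mentioned in the Crop Health Monitoring bullet above, here is a minimal NDVI computation in NumPy; the band arrays and the 0.3 stress threshold are illustrative stand-ins for real multispectral captures:

```python
# Minimal sketch: computing NDVI from co-registered red and near-infrared
# bands, as a multispectral drone camera would provide.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero

# Flag stressed regions: healthy vegetation typically sits well above ~0.3 NDVI.
bands_nir = np.random.rand(480, 640)  # stand-in for a captured NIR band
bands_red = np.random.rand(480, 640)  # stand-in for the red band
stress_mask = ndvi(bands_nir, bands_red) < 0.3
print(f"{stress_mask.mean():.1%} of pixels flagged as potentially stressed")
```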

Overall, agriculture showcases a range of onboard computing needs – from relatively low-complexity tasks like geo-tagging sensor readings (achievable with basic SBCs) to high-complexity AI tasks like vision-based weed control (demanding GPU-level performance). The autonomy level also varies: some drones merely assist a farmer by collecting data, whereas others (with onboard AI) can autonomously take actions like precision spraying with minimal human input. In each case, the onboard computer is a critical enabler for making timely decisions in the field, which improves efficiency, conserves resources, and reduces the need for constant human supervision.

Surveillance and Security

In surveillance, security, and public safety applications, drones act as mobile observers – often tasked with detecting intruders, monitoring crowds, or securing perimeters. These scenarios heavily rely on computer vision running on the drone’s onboard computer to interpret video feeds in real time. A security drone might patrol a fenced facility and use onboard object detection to spot humans or vehicles in restricted areas, immediately alerting security personnel. Similarly, law enforcement can deploy drones at events or in crime response, where the UAV’s onboard AI could track persons of interest through a crowd or follow a fleeing suspect from the air. The common thread is that the drone must “understand” what its camera sees, without waiting to stream video back to a control center – this demands robust edge processing. Indeed, one of the key requirements for UAV surveillance is the ability to detect and track objects of interest in real time (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs). Modern deep-learning algorithms (object detectors like YOLO or trackers like SORT/DeepSORT) enable this but are computation-intensive, so a powerful onboard computer (often a GPU-equipped module) is typically used.
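To illustrate the detect-then-track loop described above, here is a minimal sketch in which a stub detector stands in for a YOLO-class model and a greedy IoU matcher stands in for SORT-style association – a simplification of real trackers, with all boxes and thresholds illustrative:

```python
# Minimal sketch of per-frame detection-to-track association by IoU.
import itertools

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

next_id = itertools.count()
tracks = {}  # track_id -> last known box

def update_tracks(detections, iou_threshold=0.3):
    """Greedily match new detections to existing tracks by IoU."""
    global tracks
    new_tracks = {}
    unmatched = list(detections)
    for tid, box in tracks.items():
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(box, d))
        if iou(box, best) >= iou_threshold:
            new_tracks[tid] = best
            unmatched.remove(best)
    for det in unmatched:              # unmatched detections start new tracks
        new_tracks[next(next_id)] = det
    tracks = new_tracks
    return tracks

# Per frame: run the detector (YOLO etc.) and feed its boxes to the tracker.
frame_detections = [(100, 120, 160, 220), (400, 90, 450, 200)]  # stub output
print(update_tracks(frame_detections))
```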

Typical Onboard Setup: Drones for surveillance and security often carry high-end companion computers such as the NVIDIA Jetson TX2, Xavier NX, or newer Orin, which provide the CUDA cores and tensor accelerators needed for real-time vision. For example, researchers demonstrated a drone-based 4K pedestrian tracking system that achieved real-time performance by exploiting both the CPU and GPU of an onboard Jetson TX2 (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs). This system could detect and follow people in ultra-high-resolution video (captured from 50 meters altitude) without any ground server, proving that onboard computing can handle even demanding surveillance tasks (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs). In testing, the Jetson-based drone successfully detected and tracked individuals in 4K footage at full frame-rate, illustrating the level of performance now attainable on a compact drone (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs). Another example is border security drones, which may use thermal cameras for night monitoring; the onboard computer can run thermal image analytics to pick out human heat signatures or vehicles in darkness. These use cases might leverage specialized models (e.g. thermal-trained object detectors) running on the same kind of GPU hardware. Some security drones also employ multiple sensors – optical cameras for day, IR cameras for night, and even acoustic or RF sensors – requiring sensor fusion on the onboard computer to make coherent decisions about potential threats.

Why is onboard processing essential for surveillance? Firstly, drones often operate in remote or wide areas (e.g. national borders, large industrial sites) where sending high-bandwidth video to a central server is impractical. The drone must be able to analyze video autonomously because real-time response is critical (it might need to decide to follow an intruder immediately). As one paper noted, wireless connections are not guaranteed aloft, so drones must rely on onboard mission computers for real-time tasks even though deep learning workloads demand high computing power (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs). Secondly, latency needs to be minimal – a few seconds delay in detecting a person could mean the difference between losing or maintaining visual contact. Onboard AI eliminates the round-trip latency of streaming to the cloud and back. Finally, there’s a privacy aspect: processing video on the device means only relevant alerts (e.g. a detected face or license plate) might be transmitted, rather than raw video, which can be important for data security in sensitive surveillance (this benefit of edge computing for privacy is also documented in research (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs)). Key vision-centered use cases in the security domain include:

  • Perimeter Intrusion Detection: Drones can autonomously patrol fences or borders using onboard vision to detect humans, vehicles, or boats entering restricted zones. A Jetson Xavier-class computer running object detection (like YOLOv5) can identify an intruder and geo-tag their location for response teams (a geo-tagging sketch follows this list). The drone might also track the intruder’s movement with onboard multi-object tracking algorithms, maintaining visual contact until ground units arrive. This use case demands robust performance to avoid missed detections and often uses thermal imaging at night (the onboard computer handles both IR and daylight video streams).

  • Crowd Monitoring and Anomaly Detection: Law enforcement or event security drones monitor large gatherings for public safety. Onboard image processing can count people, detect fights or unrest by recognizing specific motion patterns, or spot dangerous objects (e.g. a drone scanning a parking lot for unattended bags). These tasks use computer vision and sometimes machine learning classifiers (for action recognition) running on the drone’s GPU. The autonomy level can range from decision support (flagging something for a remote operator) to fully automated response (e.g. circling a flagged person).

  • Follow and Track Missions: In police or military operations, a drone may be tasked to follow a target vehicle or person semi-autonomously. Using onboard vision, the drone locks onto the target and navigates to keep the target in frame. This involves real-time object tracking and continuous adjustment of flight – a compute-heavy loop handled by the onboard computer. Skydio drones, for example, are known for their autonomy in tracking moving subjects through complex environments by leveraging their onboard NVIDIA Orin AI engines (Skydio Autonomy™ | Skydio). Such capability has been applied in tactical situations to reduce risk to human officers.

  • Non-Visual Sensors for Security: Although vision is primary, some security drones use other onboard sensor processing. For instance, a drone could carry a mini radar or LIDAR to detect other drones (counter-UAS scenarios) – the onboard computer would process these sensor signals to identify and triangulate rogue drones in protected airspace. Similarly, acoustic sensors onboard can pick up gunshot sounds and the computer can classify and localize them (functioning like a flying ShotSpotter). These examples show onboard computing beyond cameras, often supplementing vision for a more robust security solution.
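As referenced in the perimeter-intrusion bullet, here is a hedged sketch of geo-tagging a detection: it assumes a nadir-pointing, north-aligned camera over flat ground and uses a small-offset approximation to convert the pixel offset into latitude/longitude; all camera parameters are illustrative:

```python
# Minimal sketch: project a detection's pixel position to approximate ground
# coordinates via the ground sampling distance (GSD). Assumes flat terrain
# and a north-up, nadir camera; focal length and sensor width are illustrative.
import math

def geotag(px, py, img_w, img_h, drone_lat, drone_lon, alt_m,
           focal_mm=8.0, sensor_w_mm=6.3):
    # Ground sampling distance (metres per pixel) for a nadir camera.
    gsd = (sensor_w_mm * alt_m) / (focal_mm * img_w)
    east_m = (px - img_w / 2) * gsd    # +x in image -> east (north-up frame)
    north_m = (img_h / 2 - py) * gsd   # +y in image points south
    # Metres to degrees (small-offset approximation).
    dlat = north_m / 111_320.0
    dlon = east_m / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon

# Detection at pixel (1500, 400) in a 4K frame, drone at 50 m altitude.
print(geotag(1500, 400, 3840, 2160, 47.6097, -122.3331, alt_m=50.0))
```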

In summary, surveillance/security drones typically represent high-complexity, high-autonomy use cases. They often operate independently for extended periods, making on-board AI indispensable. The performance requirements are among the highest in the drone world: real-time processing of high-res video, detection of small or fast-moving objects, and reliable operation in unpredictable environments. This is why we see cutting-edge embedded GPUs and sophisticated AI models deployed on security drones – effectively turning them into flying edge computers that “see” and interpret the world in order to keep it secure.

Delivery and Logistics

Drone delivery – transporting packages by air – presents another major industry segment benefiting from onboard computers. In a delivery scenario, an unmanned aircraft must navigate from a distribution center to a customer’s location, then execute a safe drop-off or landing, all with minimal human oversight. This requires a high degree of autonomy and environmental awareness, since the drone will encounter dynamic obstacles (birds, wires, trees, buildings) and must ensure safety around people on the ground. To achieve this, delivery drones rely on a suite of sensors (visual cameras, depth sensors, sonars, sometimes radar) connected to powerful onboard computing for Detect-and-Avoid (DAA) and precision landing. In fact, for drone delivery to be truly autonomous, the UAV “needs to be able to see the world around it” via computer vision (Computer vision is key to Amazon Prime Air drone deliveries - Yahoo). Companies like Amazon Prime Air and Alphabet’s Wing have invested heavily in onboard vision systems that allow drones to identify hazards in flight and at the drop site in real time.

A prominent example is Amazon’s Prime Air delivery drones. Amazon has developed an onboard “detect-and-avoid” system with thermal cameras, depth cameras, and sonar working in concert, and machine learning models on the onboard computer to automatically identify obstacles and navigate around them (Amazon Drone Delivery Service Prime Air Is Advancing | Vaughn College). This means the drone’s AI can detect objects such as people, animals, power lines, and even something as thin as a clothesline, and then alter course or abort a landing if needed (Amazon Drone Delivery Service Prime Air Is Advancing | Vaughn College). The drone effectively makes split-second decisions based on its perceptions – for instance, if during descent to a customer’s yard the cameras spot a dog running underneath, the onboard computer will command the drone to pause or climb to avoid a close encounter. These capabilities are essential not only for safety but also for regulatory approval, as aviation authorities (FAA, etc.) require robust DAA for beyond-visual-line-of-sight flights.

The hardware used in delivery drones is typically at the high end of the spectrum. NVIDIA Jetson platforms or custom system-on-chips with AI accelerators are common, because the drone may be running multiple neural networks: one for sense-and-avoid in the air (finding other aircraft or birds), one for ground hazard detection while landing, perhaps another for reading markers (some delivery systems use fiducial markers or QR codes on landing pads to guide final approach). The autonomy level is very high – the goal is a drone that can depart, fly complex routes, and deliver with zero human intervention. Onboard computing is what makes this possible, handling tasks such as: route planning and re-planning (if it encounters a new no-fly zone or bad weather en route), real-time adjustment to wind or weather (using IMU and airspeed sensor data in control algorithms), and coordinating the delivery mechanism (like lowering a package by tether and releasing it when conditions are correct).

Importantly, delivery drones must also communicate and integrate with logistics systems, and the onboard computer often manages this. For example, it might encrypt and transmit telemetry and video snapshots back to an operations center over LTE/5G, or receive last-second updates (e.g. “customer moved 100m, update drop location”). While a lot of heavy lifting (like large-scale route optimization) is done by cloud services, the drone’s onboard system is the real-time executor that adapts on the fly.

Let’s break down key use cases and why onboard computing is crucial:

  • In-Flight Obstacle Avoidance: Delivery routes often go through suburban areas with trees, poles, buildings, and possibly low-flying aircraft. Onboard sensors feed into algorithms (stereo vision or LiDAR-based obstacle detection) that create a 3D map of the drone’s surroundings as it flies. The onboard computer then either autonomously deviates around obstacles or notifies the autopilot to change course. This must happen within fractions of a second. For instance, Prime Air drones scan the skies and ahead for other aircraft or unexpected objects using computer vision, rather than relying solely on ADS-B or external trackers (A drone program taking flight - About Amazon) (Amazon gets FAA approval to expand drone deliveries - Axios). Such real-time perception and navigation absolutely require an onboard AI engine given the latency and reliability constraints.

  • Precision Landing and Drop-off: When arriving at the destination, a delivery drone might use downward-facing cameras and rangefinders to identify a safe landing zone or to position for dropping a package. Some systems use visual markers on the ground (like an augmented reality tag placed by the customer) – the drone’s onboard vision recognizes the marker and homes in on that spot (see the marker-detection sketch after this list). Other times, the drone simply analyzes the camera feed to ensure no people, pets, or obstacles are in the immediate landing area. All of this logic lives on the drone. If any hazard is detected at the last moment (e.g., the customer walks under the drone), the onboard computer can decide to delay or move the drop location. These decisions can’t wait for a human and are enabled by onboard image recognition and path planning.

  • Adaptive Route Autonomy: Delivery drones may be given a pre-planned route, but conditions can change. If wind conditions deteriorate or light rain begins, the drone’s onboard computer might adjust its flight envelope or speed. If an area becomes GPS-denied (perhaps near tall buildings), the drone could switch to visual-inertial navigation, again handled by onboard processing of camera/IMU data. In congested airspace, some delivery drones might coordinate via vehicle-to-vehicle comms; the onboard computer would handle such coordination protocols to avoid other drones. These are non-vision computations but still require robust processing and software logic running locally for autonomy.

  • Payload Management and Other Sensors: In specialized deliveries (like medical deliveries of blood samples or organs, as done by Zipline and others), maintaining the payload’s condition is critical. Onboard systems might regulate a cooler or monitor temperature sensors, ensuring the payload stays within safe conditions, and adjusting mid-flight if needed (e.g., return to base if a problem is detected). While not vision-related, this is another task for the onboard computer, integrating sensor data and acting on it. Additionally, post-delivery, the drone’s computer might run a quick self-diagnostic before returning to ensure no damage occurred during landing – for example, analyzing motor currents or camera feed for any anomalies like a snagged tether.
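As referenced in the precision-landing bullet, the sketch below shows a generic marker-based approach using OpenCV’s ArUco module (the class-based API from opencv-contrib-python 4.7+); it is not any vendor’s actual system, and the marker ID, camera index, and controller hookup are assumptions:

```python
# Minimal sketch of marker-based precision landing: find a landing tag and
# derive a horizontal correction from its offset in the image.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # downward-facing camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None and 7 in ids.flatten():       # 7 = our landing pad ID
        idx = list(ids.flatten()).index(7)
        cx, cy = np.mean(corners[idx][0], axis=0)    # marker centre in pixels
        h, w = gray.shape
        # Normalised offset from image centre; feed into a velocity controller
        # (e.g. via MAVLink SET_POSITION_TARGET_LOCAL_NED) to centre the pad.
        err_x, err_y = (cx - w / 2) / w, (cy - h / 2) / h
        print(f"pad offset: x={err_x:+.2f}, y={err_y:+.2f}")
```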

In essence, drone delivery pushes the envelope on autonomy and complexity. These drones operate in unstructured environments among the general public, so the performance bar for onboard systems is very high: they must be fast, reliable, and redundant. It’s common to have redundancy (multiple cameras, perhaps two onboard computers cross-checking in critical systems) for safety. The onboard computer is the linchpin that ties together all sensor inputs and executes the flight logic that makes autonomous delivery possible. Without powerful onboard processing, a delivery drone would be limited to very controlled environments or would require a human pilot, which negates the scalability. Thanks to modern edge computers and AI, companies have demonstrated drones that can deliver packages beyond visual line of sight while dynamically reacting to their surroundings in real time (Amazon Drone Delivery Service Prime Air Is Advancing | Vaughn College).

Mapping and Inspection

One of the most widespread drone applications is aerial mapping and infrastructure inspection. This spans use cases like land surveying (creating orthomosaic maps and 3D terrain models), inspecting bridges, wind turbines, power lines, solar farms, and more. Traditionally, many of these missions were semi-autonomous: the drone followed a pre-set path to capture photos, and the heavy data processing (stitching photos into maps or analyzing them for defects) was done later on a ground station or cloud service. However, the advent of powerful onboard computers has begun to transform mapping and inspection into real-time or near-real-time endeavors. Drones in this domain increasingly carry companion computers to perform tasks such as in-flight image processing, immediate detection of anomalies, or even constructing 3D models on the fly. The level of autonomy is also rising – some advanced inspection drones can navigate around structures and decide their own camera angles using onboard Spatial AI engines.

(Image: a drone inspecting power substation infrastructure.)

Modern inspection drones utilize onboard vision and AI for collision avoidance and data analysis around complex structures. For instance, Skydio – a leading drone in autonomous inspection – is backed by an onboard NVIDIA Jetson Orin GPU, giving it the compute capacity to see, understand, and react in real time while inspecting assets (Skydio Autonomy™ | Skydio). With six 360° navigation cameras feeding its neural networks, the drone builds an understanding of the environment and avoids even small obstacles (like wires) automatically (Skydio Autonomy™ | Skydio). This allows it to be flown in cluttered spaces by users with minimal training. Furthermore, Skydio’s system can conduct targeted inspections and mapping on-board – its Spatial AI engine enables features like automated cell-tower scans and the creation of 3D models on the vehicle, in the field, in minutes (Skydio Autonomy™ | Skydio). This is a breakthrough for mapping jobs: rather than waiting to process data later, the drone itself can generate a usable 2D map or 3D point cloud before it even lands, letting teams evaluate results immediately and take action (or re-fly if needed) (Skydio Autonomy™ | Skydio).

There are two broad categories of tasks here: mapping (surveying, reconstruction) and inspection (defect or feature identification). Both benefit from onboard computing in different ways.

  • Aerial Mapping & Surveying: In a typical mapping mission (say, mapping a construction site or farmland for a survey), a drone captures hundreds of images with GPS tags. While high-precision processing (to create an orthomosaic or 3D mesh) is usually done on powerful computers after the flight, an onboard computer can perform quick intermediate processing. For example, it might stitch a low-resolution preview map on the fly, or use SLAM (Simultaneous Localization and Mapping) techniques to ensure it has covered the area without gaps. In long linear mapping (like pipeline or railway surveys), an onboard computer can monitor image quality and coverage and prompt the drone to take additional images of any missed segment. Some research projects have even implemented on-board photogrammetry to continuously build maps – though limited by compute, this is improving with devices like the Jetson Orin. The advantage is instant insights: field crews can know right away if they have the data they need. Also, if the drone has RTK GPS and a companion computer, it could georeference images in real time, outputting a map almost ready for use upon landing (a footprint/GSD sketch follows this list).

  • Infrastructure Inspection (Vision-Based): When inspecting physical structures, drones now often carry computer vision models to detect faults or areas of interest. For instance, an inspection drone may use an onboard convolutional neural network to analyze video frames for cracks in a concrete bridge or corrosion on a tower. If the model flags a potential defect, the drone can autonomously hover and circle that spot to collect more data. This kind of adaptive inspection is empowered by onboard processing – the drone doesn’t have to send all video to an operator; it can make preliminary judgments itself. Thermal cameras are also used (e.g. detecting hot spots in power lines or solar panels), and onboard analysis can immediately highlight abnormal heat signatures for the pilot. Using edge AI for this speeds up the inspection workflow and can reduce human error (the AI might see a subtle crack that a human could overlook in a live feed).

  • SLAM and GPS-Denied Navigation: Many inspection targets are indoors or in GPS-denied environments (inside large storage tanks, boilers, caves, or industrial plants). Here, drones rely on SLAM algorithms running on onboard computers to both map and navigate. For example, Flyability’s Elios 3 drone is equipped with a LiDAR and onboard SLAM engine called FlyAware™; as it flies inside a dark, cluttered space, it builds a 3D map in real time and uses that for stabilization and pilot feedback (Elios 3 vs. Elios 2: How do the Flyability drones compare). The Elios can instantaneously render a 3D map of its surroundings on the pilot’s tablet by processing LiDAR and visual odometry data on-board (Elios 3 vs. Elios 2: How do the Flyability drones compare). This is invaluable for inspecting confined spaces safely – the pilot can see areas the drone might have missed and navigate accordingly, and the final map ensures no spot is left uninspected. SLAM is computationally heavy (involves processing point clouds and running algorithms like ICP or graph optimization), so a robust onboard computer (often an ARM CPU paired with GPU or specialized VIO hardware) is required. Drones like Elios or indoor mapping drones may use Qualcomm Snapdragon-based flight cores or Jetson NX modules to achieve this balance of weight and processing.

  • Adaptive Flight Planning: With on-board intelligence, an inspection drone can make decisions like a human pilot would. For example, Skydio’s 3D Scan software (running partly onboard) allows the drone to autonomously map out a scanning pattern around a structure, adjusting angles to get complete coverage. It leverages the drone’s real-time mapping to know where it has a line-of-sight and where it hasn’t captured yet (Skydio Autonomy™ | Skydio). This level of autonomy turns what used to be a manual, labor-intensive flight into an automated routine, made possible by the drone’s onboard continuous computation of its position relative to the structure and the camera view planning.
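As referenced in the mapping bullet, here is a small worked example of the ground sampling distance (GSD) and per-image footprint math an onboard computer can use to check coverage mid-survey; the camera parameters roughly resemble a common survey camera but are illustrative:

```python
# Minimal sketch: estimate GSD and the ground footprint of one photo from
# altitude and camera geometry, then derive flight-line spacing for a target
# side overlap. All camera parameters are illustrative.
def survey_footprint(alt_m, focal_mm=8.8, sensor_w_mm=13.2, sensor_h_mm=8.8,
                     img_w_px=5472, img_h_px=3648):
    gsd_cm = (sensor_w_mm * alt_m * 100) / (focal_mm * img_w_px)  # cm/pixel
    width_m = (sensor_w_mm * alt_m) / focal_mm   # ground width of one frame
    height_m = (sensor_h_mm * alt_m) / focal_mm  # ground height of one frame
    return gsd_cm, width_m, height_m

# Spacing between flight lines for 70% side overlap at 100 m altitude:
gsd, w, h = survey_footprint(100.0)
line_spacing = w * (1 - 0.70)
print(f"GSD {gsd:.1f} cm/px, footprint {w:.0f}x{h:.0f} m, "
      f"flight lines every {line_spacing:.0f} m")
```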

It’s clear that mapping and inspection use cases can range from moderate to very high complexity. Some mapping missions might use the onboard computer lightly (just for navigational assistance), whereas advanced inspection in tight environments pushes the limits of onboard processing (doing SLAM, AI defect detection, and path planning concurrently). In all cases, the onboard computer’s role is to increase the autonomy and data quality of the mission: ensuring the drone can go places and capture information that a human or a less-intelligent drone might miss or struggle with. The performance needs scale with the difficulty – a basic quadcopter surveying an open field might get by with a Raspberry Pi coordinating camera triggers, but a drone inspecting a wind turbine in gusty winds while identifying blade damage in real time will likely have an NVIDIA Orin or similar at its core. The trend in the industry is clearly toward edge computing on drones for mapping/inspection, as it reduces the amount of data to transfer (only results or models are sent back) and enables faster decision-making. Research has shown that equipping drones with edge AI drastically cuts the latency and bandwidth needed for remote sensing tasks – drones can now detect and recognize objects onboard, making quick local decisions (e.g., find a person to rescue) before sending any data to the cloud (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs). This paradigm is making drones more efficient tools for mapping our world and inspecting critical infrastructure in real time.

Industrial Applications (Warehousing, Manufacturing, and Beyond)

Beyond the well-defined sectors above, onboard computers in drones are also driving innovative use cases in general industrial and enterprise environments. These include warehouse inventory drones, drones in manufacturing or chemical plants for monitoring, and other specialty uses like mining or oil-and-gas inspections that don’t squarely fall under “mapping” or “security.” In many of these scenarios, drones operate indoors or in close proximity to industrial equipment and workers, requiring extremely reliable autonomous navigation and often integration with business systems. Onboard computers play a pivotal role by handling the necessary computer vision, sensor fusion, and planning to allow drones to carry out tasks that used to be manual.

(Image: an inventory drone scans warehouse shelves. Source: Startup’s autonomous drones precisely track warehouse inventories | MIT News)

In warehouses, drones are being used to automate inventory tracking by scanning barcodes on high shelves – a task traditionally done via forklifts and manual barcode readers. A prime example is Corvus One, an autonomous inventory drone. It carries an array of 14 cameras and an AI-based onboard system that lets it navigate in GPS-denied warehouse aisles and read pallet labels from the air (Startup’s autonomous drones precisely track warehouse inventories | MIT News | Massachusetts Institute of Technology). Using computer vision, the drone localizes itself among the racks (without needing any external markers or infrastructure) and identifies barcodes on products or QR codes on shelving, associating them with the warehouse database. The on-board computer cross-references these scans with expected inventory and flags discrepancies automatically (Startup’s autonomous drones precisely track warehouse inventories | MIT News | Massachusetts Institute of Technology). Corvus Robotics (the company behind it) had to develop a learning-based autonomy stack for robust indoor flight, noting that traditional vision techniques alone were insufficient for “lifelong” autonomy in such environments (Startup’s autonomous drones precisely track warehouse inventories | MIT News | Massachusetts Institute of Technology). This means the drone likely uses deep neural networks (perhaps for visual localization or obstacle avoidance) running onboard to adapt to changing warehouse scenes. The result is an infrastructure-free solution – no need for AR tags or external sensors – the drone’s onboard intelligence handles it all, making deployment much easier (Startup’s autonomous drones precisely track warehouse inventories | MIT News | Massachusetts Institute of Technology).

The complexity here is significant: the drone must avoid moving obstacles (workers, forklifts), adjust to varying lighting, and navigate narrow spaces, which demands high-performance processing and reliable algorithms on-board. Drones like these often utilize specialized computers; while exact hardware isn’t always disclosed, a powerful NVIDIA Jetson or Qualcomm QRB board with multiple camera inputs and NPUs is a likely choice, given the need to process many video streams and run neural networks for navigation and reading text (OCR).
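Corvus’s actual stack is proprietary, but a minimal sketch of the scan-and-reconcile idea might look like the following, using OpenCV frames and the pyzbar barcode decoder; the expected-inventory table and camera index are hypothetical:

```python
# Minimal sketch of the shelf-scanning loop: decode barcodes in camera frames
# and reconcile them against an expected-inventory snapshot.
import cv2
from pyzbar import pyzbar

expected = {"PALLET-0042": "aisle 3, level 4"}  # hypothetical WMS snapshot

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for code in pyzbar.decode(frame):
        label = code.data.decode("utf-8")
        if label in expected:
            print(f"confirmed {label} at {expected[label]}")
        else:
            print(f"discrepancy: unexpected label {label}")  # flag for review
```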

Another industrial use is inspection and monitoring inside facilities (complementing what we discussed in mapping/inspection, but focusing on routine operations in factories or plants). For example, a drone can fly along pipelines or machinery to check for leaks, abnormal vibrations, or temperature spikes. Equipped with an infrared camera and perhaps a sniffer sensor (for gas leaks), the drone’s onboard computer can detect anomalies: e.g., using thermal image processing to spot an overheating motor, or reading analog gauges via computer vision to log pressure readings. Some drones have been prototyped to read instrument panels – using OCR on onboard video to record meter values, which is far faster than a human doing rounds. These tasks might not require as much AI as other examples; often classical image processing or simple threshold-based alerts are enough, which a Raspberry Pi-level computer could handle. However, as these drones become more autonomous (finding their own way through a factory floor), adding AI for navigation is key. We see some deployments combining SLAM for indoor navigation with specific sensor payloads. The onboard computer thus simultaneously maps the facility (so it knows its location and path) and analyzes sensor data for the monitoring task. This fusion ensures the drone can, say, autonomously inspect a series of checkpoints in a large plant at scheduled times, and alert personnel if it finds an anomaly – all without human control.
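As a hedged sketch of the gauge-reading idea, the snippet below crops a known gauge region from a frame and runs Tesseract OCR over it (assuming pytesseract and the Tesseract binary are installed); the image path, crop coordinates, and alert threshold are illustrative:

```python
# Minimal sketch: read a digital gauge from an inspection frame with OCR.
import cv2
import pytesseract

frame = cv2.imread("panel.jpg")            # frame from the inspection video
gauge = frame[200:260, 340:520]            # known region of the digital gauge
gray = cv2.cvtColor(gauge, cv2.COLOR_BGR2GRAY)
_, binarised = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(binarised, config="--psm 7")
try:
    reading = float(text.strip())
    if reading > 8.5:                      # hypothetical pressure limit (bar)
        print(f"ALERT: gauge reads {reading}")
except ValueError:
    print(f"could not parse gauge text: {text!r}")  # re-fly or flag the frame
```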

A particularly challenging environment is underground mining or industrial confined spaces. In mines, drones are used to map tunnels or assess ore stockpile volumes. They rely on onboard LiDAR processing to avoid collisions in tight shafts and to create volumetric models of mined material (to calculate how much has been extracted or remains). The onboard computer might run point cloud processing to estimate the volume in real time (a volume-estimation sketch follows the list below). Likewise, in large oil tanks or pressure vessels, drones (like the aforementioned Elios) fly inside to look for corrosion; their onboard SLAM and lighting systems allow them to operate without GPS or external light. The data (videos, 3D models) is recorded onboard and often also streamed to operators, but initial processing (stabilization, mapping) is done locally to facilitate the mission.

From a performance standpoint, industrial applications can vary widely. Warehouse inventory drones like Corvus require advanced vision and AI comparable to security drones – in fact, Corvus claims to be the first to deploy neural network-based autonomy for indoor drones (Startup’s autonomous drones precisely track warehouse inventories | MIT News | Massachusetts Institute of Technology), highlighting the cutting-edge nature of that application. On the other hand, a drone that simply carries a gas sensor around a refinery might have modest computing needs (mostly for navigation and data logging). That said, even “simple” tasks benefit from edge computing: by processing data on-board, the drone can react immediately (e.g., hover in place when a gas reading spikes to take more measurements, instead of continuing on a pre-planned path). Edge computing also ensures that data isn’t lost if communication drops out in a steel-and-concrete facility – the drone keeps a local record and can upload when back in range.

Let’s summarize some industrial use case categories and how they use onboard computers:

  • Warehouse Inventory Management: Drones navigate indoor warehouses to scan inventory (barcodes, RFID, images of products). They use onboard vision for localization and item identification. This is a vision-heavy use case requiring AI/ML on board. The drone’s computer interfaces with warehouse management systems, updating stock counts in real time. The benefit is continuous, error-free inventory tracking without interrupting operations (Startup’s autonomous drones precisely track warehouse inventories | MIT News | Massachusetts Institute of Technology). The complexity is high: multi-camera SLAM, object detection (for codes), and safe autonomous flight in tight spaces are all handled onboard.

  • Industrial Inspections (Interior): Drones perform scheduled inspections of equipment (pipes, tanks, conveyors) in factories or oil/gas facilities. Onboard computing may run anomaly detection on sensor data – for example, comparing current thermal images to baseline to catch a hotspot, or using sound/vibration analysis if the drone has a microphone. These drones often have to navigate cluttered interiors, so obstacle avoidance via LiDAR or vision is needed (again handled by the companion computer). Many such solutions emphasize removing humans from dangerous inspection jobs, so the drone must be trusted to fly safely and get the data. High autonomy and moderate to high compute is needed, depending on the sensors used.

  • Logistics and Parcel Movement: Beyond last-mile delivery outdoors, some drones are being tested for moving parts or goods within large facilities (e.g., shuttling components between factory stations or from storage to assembly line). Here the onboard computer manages routing in dynamic indoor environments. It might integrate with elevator or door controls, signaling them to open, etc. This is similar to an indoor warehouse drone but with a focus on carrying payloads. Vision or LiDAR can be used for navigation instead of relying on pre-laid routes, giving flexibility.

  • Non-Visual Sensing Missions: Industrial drones also carry non-camera payloads like chemical sensors, magnetometers, radiation detectors (for nuclear plant inspection). The onboard computer reads these sensors and can perform immediate data processing. For example, a radiation drone might map radiation levels to locations in real time to guide itself to hotspots; a magnetometer drone could follow a magnetic anomaly to find a crack in a pipeline. These tasks involve sensor fusion and decision-making by the onboard computer, albeit not as computationally heavy as image analysis. They demonstrate that onboard computing isn’t just about vision – it’s equally important for other sensor-driven missions.
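As referenced before the list, here is a minimal sketch of the stockpile volume estimation mentioned for mining: it rasterises a LiDAR point cloud onto a ground grid in pure NumPy and integrates heights above a floor level; the grid resolution, floor height, and synthetic cloud are illustrative:

```python
# Minimal sketch of onboard stockpile volume estimation: keep the highest
# LiDAR return per grid cell, then sum (height x cell area) over the grid.
import numpy as np

def stockpile_volume(points_xyz, cell=0.25, floor_z=0.0):
    """points_xyz: (N, 3) array in metres. Returns approximate volume in m^3."""
    xy = points_xyz[:, :2]
    z = points_xyz[:, 2] - floor_z
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    grid = {}
    for (i, j), h in zip(map(tuple, ij), z):
        grid[(i, j)] = max(grid.get((i, j), 0.0), h)  # highest return per cell
    return sum(grid.values()) * cell * cell

cloud = np.random.rand(10_000, 3) * [20, 20, 3]  # stand-in for a LiDAR scan
print(f"approx. volume: {stockpile_volume(cloud):.1f} m^3")
```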

Across these industrial use cases, the differences in requirements are notable. Warehouse drones and complex inspection drones require cutting-edge autonomy, rivaling that of security drones, because they work around people and assets in a complex 3D environment (with no GPS). They often use the highest-performance onboard computers available (e.g., Jetson AGX Xavier or Orin with multiple CPU cores and GPU acceleration) to run their AI and SLAM pipelines. Simpler tasks in industrial sensing might be done with lower-cost boards or microprocessor-based flight controllers, but even these benefit from at least some onboard computing to format data and respond to triggers. Industrial users also value reliability and integration – the onboard computer might need industrial-grade safety certifications or the ability to log data in formats directly usable for compliance. This adds another layer of complexity: the computing platform must be robust and secure.

Finally, a common theme is connectivity: Industrial drones usually form part of a bigger system (warehouse management software, maintenance scheduling systems, etc.). The onboard computer enables this integration by running middleware or APIs that sync data when a link is available. But if the link is down or the drone is in a radio shadow, it can continue its mission thanks to local processing (improved reliability) (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs). It can store the collected info (inventory data or inspection footage) and later upload it. Thus, onboard computing in industrial contexts not only provides intelligence but also a buffer against network disruptions, ensuring the drone’s operation is autonomous and fail-resilient.

Conclusion and Comparative Overview

From the above, it’s evident that onboard computers have become a fundamental enabling technology for drones in virtually every industry. They allow drones to go from remote-controlled cameras in the sky to truly autonomous robots that can interpret and interact with their environment. By processing data on the drone, these systems drastically reduce latency, improve reliability, and unlock capabilities that would be impossible with a strictly human-in-the-loop or offboard processing model (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs).

Differences Across Use Cases: The complexity and performance requirements of onboard computing vary across applications. In agriculture, many missions are pre-planned and relatively structured (flying over flat fields), so the autonomy demands are lower – a mid-range onboard computer can handle tasks like image capture and even basic AI for weed detection. However, for precision tasks (e.g. weed spraying) the complexity jumps, approaching that of security drones, as the system must recognize small objects (weeds) and act immediately. Surveillance and security drones consistently require high complexity and high autonomy – they deal with unpredictable environments and critical real-time constraints (tracking fast or small targets), so they use the most powerful processors (Jetson, Snapdragon, etc.) and sophisticated algorithms. Delivery drones also occupy the high end: the safety-critical nature and need to handle all corner cases (wind, moving objects, privacy of people below) force a robust, high-performance onboard computing solution with redundancy. Mapping and inspection drones cover a wide range: a simple mapping drone might not push its onboard computer hard (saving most processing for later), whereas an advanced inspection drone essentially functions as an AI-powered coworker, navigating and analyzing concurrently, which is a high-performance, high-autonomy scenario. Industrial indoor drones tend to need the most reliable autonomy (to operate around valuable equipment and people safely), often matching the complexity of security drones but in a constrained setting – they typically run cutting-edge SLAM and vision onboard. That said, some industrial tasks can be achieved with simpler means if the environment is controlled (e.g. following fixed routes with a light sensor payload).

Vision vs. Non-Vision: Another important distinction is between vision-centric use cases and non-vision ones. Many of the celebrated advances (obstacle avoidance, object tracking, etc.) are vision-based and thus demand GPUs and complex models. Non-vision tasks (like carrying a gas sensor or reading a digital meter) may not need as much compute; they could run on microcontrollers or low-power CPUs. Yet, even those often get paired with vision for navigation. For example, a gas-sensing drone might still use a camera to avoid obstacles. So in practice, pure non-vision drones are uncommon in advanced use cases – most have at least a camera for navigation if not for the primary mission. The onboard computer’s job is to balance all these sensor inputs.

Why Onboard is Essential: Across all industries, a few common reasons explain why having a computer on the drone (versus processing data on the ground or cloud) is essential:

  • Latency and Real-Time Action: Drones often must react in milliseconds (to avoid a collision or target a spray). Onboard processing enables this instantaneous response; sending data to a remote server would be too slow (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs).

  • Autonomy/Offline Capability: Drones operate in areas without reliable comms (farms, disaster zones, indoor facilities). Onboard intelligence lets them complete missions and make decisions even with zero connectivity (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs). This autonomy also reduces the burden on pilots – one operator can supervise multiple drones that largely pilot themselves.

  • Bandwidth and Data Reduction: HD cameras generate huge volumes of data. Instead of streaming all of it, drones can process video onboard and send back only key results or alerts. This makes beyond-visual-line-of-sight operation feasible over limited-bandwidth radio links (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs). It also lowers cloud compute costs and can preserve privacy by not broadcasting raw feeds (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs).

  • Task-Specific Optimization: An onboard computer can be tightly integrated with the drone’s control system. For instance, a vision algorithm can directly influence the drone’s flight (closing the control loop internally). This synergy can achieve maneuvers (like precision landings or smooth target tracking) that would be hard if the processing were offboard due to induced lag.

  • Increasing Capabilities: As drone roles expand (swarm coordination, package delivery networks, etc.), onboard computing provides a scalable way to add capabilities. Each drone can carry its own “intelligence” without needing a proportionate increase in manpower or constant remote control. This is crucial for scaling drone operations commercially.

In practice, we see a blend of computing happening – some things are still done offboard (e.g., detailed 3D reconstruction for survey-grade maps) when not time-sensitive. But the frontier is continually moving towards more onboard processing as hardware improves. The NVIDIA Jetson series, Qualcomm Flight platforms, Intel Movidius VPUs, Google Coral TPUs, and others are all evolving to provide greater AI performance in smaller, more power-efficient packages, directly benefiting drone applications. For example, the latest Jetson Orin modules deliver dozens of TOPS of AI performance at under 20 W, enabling complex multi-model AI tasks on a drone in real time that used to require a full laptop or desktop GPU a few years ago (Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs). This trend will likely continue, with drones getting even more capable of understanding their environment – blurring the line between a flying camera and a thinking agent.

Commercial and Research Momentum: Many commercial solutions mentioned (DJI in agriculture, Skydio in inspection, Amazon in delivery, Corvus in inventory, etc.) showcase what is already possible with today’s onboard computers. Research projects often push the envelope further – for instance, swarms of drones performing cooperative tasks with minimal communication, each relying on onboard processing and only high-level coordination. The use cases will keep expanding as technology allows. One can envision future drones in emergency response that map a collapsing building interior in real time while searching for survivors using thermal and CO2 sensors, or agricultural drones that identify individual pest insects via onboard vision and precisely dispense biocontrol agents. All these hinge on edge AI and computing on the drone.

In conclusion, onboard computers have moved drones into a new era of autonomous, intelligent operation. Different industries leverage this capability in unique ways, but all benefit from the drone’s ability to make sense of its world and act immediately. The complexity ranges from basic autopilot assistance to full AI-driven missions, and choosing the right onboard computer (from Raspberry Pi class to Jetson Orin class) is a matter of matching the use case needs for vision processing, decision speed, and reliability. What’s common is that the drone is no longer just a remote sensor platform – it’s an active node in the IoT/edge computing network, often deciding locally and only then sharing results. This improves efficiency and opens up missions that were once thought too difficult or too risky for unmanned systems. As hardware and algorithms continue to advance, we will see even greater autonomy and a proliferation of drone applications across industries, all made possible by that little onboard computer humming away in the sky. (Top 5 Companion Computers for UAVs | ModalAI, Inc.) (AERO: AI-Enabled Remote Sensing Observation with Onboard Edge Computing in UAVs)