
Vigilant Aerospace Systems CEO Kraettli L. Epperson recently presented at the 2021 Business of Automated Mobility (BAM) Forum, a virtual event co-hosted by SAE International and the Association for Uncrewed Vehicle Systems International (AUVSI) on June 23 and 24. In his presentation, “Real-World Testing of Multi-Sensor Detect-and-Avoid Systems,” Epperson focused on the technical requirements for beyond visual line of sight (BVLOS) flight and the role that field-tested, multi-sensor systems will play in making those operations more practical and scalable.
The presentation centered on “detect-and-avoid” (DAA) systems for uncrewed aircraft systems (UAS). Epperson explained that BVLOS operations still require operators to demonstrate how they will maintain safe separation from crewed aircraft. That challenge becomes more complex in U.S. airspace, where operators must account for both cooperative aircraft, which broadcast position information, and non-cooperative aircraft, which may not be transmitting and therefore require other sensing methods such as radar.
Why Multi-Sensor Detect-and-Avoid Matters
A key theme of the presentation was that scalable BVLOS operations require more than traffic awareness alone. They require a layered safety approach that can detect traffic, predict trajectories, identify conflicts, and support timely avoidance actions. Epperson noted that long-range BVLOS operations, as well as future advanced air mobility (AAM) missions, will depend on systems that combine trusted autonomy, reliable sensor inputs, and standards-based safety logic.
He also emphasized that DAA exists within a broader deconfliction framework. Strategic flight planning, in-flight coordination, and future networked systems such as unmanned traffic management (UTM) all contribute to safety, but the aircraft or operator must still be able to respond tactically to nearby traffic. In that environment, the industry is moving toward multi-sensor approaches that provide multiple layers of detection and risk mitigation.
How FlightHorizon Supports DAA Operations
Epperson described how Vigilant Aerospace’s system combines aircraft telemetry, cooperative traffic data, and radar detections into a real-time airspace model. The software tracks aircraft, predicts future positions, and calculates avoidance guidance when needed. That guidance can be presented to a pilot through a display with audio alerts or delivered to an autopilot for response support.
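To make the multi-sensor idea concrete, here is a minimal sketch of how detections from a cooperative source (ADS-B) and a non-cooperative source (radar) might be merged into one track list. This is an illustrative simplification, not FlightHorizon's actual fusion logic; the function name, the dictionary fields, and the 250 m match radius are all assumptions for the example, and real systems use statistical gating and track filters rather than a single distance check.

```python
import math

def fuse_tracks(adsb_tracks, radar_tracks, match_radius_m=250.0):
    """Merge cooperative (ADS-B) and non-cooperative (radar) detections into
    a single track list. A radar return within match_radius_m of an existing
    ADS-B track is treated as a duplicate of that aircraft and dropped;
    everything else is kept as a new non-cooperative track."""
    fused = list(adsb_tracks)
    for r in radar_tracks:
        duplicate = any(
            math.hypot(r["x"] - a["x"], r["y"] - a["y"]) <= match_radius_m
            for a in adsb_tracks
        )
        if not duplicate:
            fused.append(r)
    return fused

adsb = [{"id": "N123AB", "x": 0.0, "y": 0.0}]
radar = [{"id": "R-1", "x": 100.0, "y": 0.0},    # near the ADS-B track: dropped
         {"id": "R-2", "x": 5000.0, "y": 0.0}]   # new non-cooperative track: kept
print([t["id"] for t in fuse_tracks(adsb, radar)])  # ['N123AB', 'R-2']
```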
Epperson also outlined the company’s background in NASA-derived DAA development. FlightHorizon is based on two licensed NASA patents and subsequent NASA prototyping and research. Vigilant Aerospace has also participated in the Federal Aviation Administration’s Integration Pilot Program, conducted radar testing in Alaska for the FAA, and supported deployments at airparks and UAS test sites, including the Northern Plains UAS Test Site in Grand Forks.
Field Testing and Certification Pathways
A substantial portion of the session focused on how DAA systems are tested in practice. Epperson described testing that includes sensor performance evaluation, encounter testing with multiple aircraft, radar filtering and false-track rejection, and collection of baseline data for larger simulation campaigns. He explained that this testing supports the development of a safety case that can be used in waiver or certification processes.
The presentation also reviewed the role of standards and autonomy logic in future DAA systems. Epperson referenced NASA DAIDALUS, ACAS X, and ACAS sXu as examples of conflict detection and collision avoidance logic relevant to uncrewed aircraft. He also pointed to ASTM and RTCA standards as important frameworks for how these systems are designed, tested, and eventually certified.
Implications for Routine BVLOS Operations
The operational examples in the presentation ranged from infrastructure inspection and airpark deployments to pipeline routes and biomedical delivery testing. Epperson’s broader point was that different mission profiles may require different DAA architectures. Fixed inspection areas may be well served by ground-based infrastructure, while route-based missions and future AAM operations will likely require onboard multi-sensor systems.
He concluded by describing Vigilant Aerospace’s approach as focused on continuous automatic safety through multiple layers of deconfliction, including onboard and ground-based sensing, future UTM integration, and tactical avoidance when necessary. The long-term goal, he said, is full aerospace integration for uncrewed aircraft operating across a wide range of environments.
See the full presentation and transcript below for more information.
Full Presentation
Full Transcript
I am Kraettli Epperson, CEO of Vigilant Aerospace, and we build detect-and-avoid systems. We’re going to have a conversation about detect-and-avoid for unmanned aircraft, with a particular focus on real-world testing.
A few themes and topics we’re going to cover: first, the problem. What is detect-and-avoid, and why is it needed?
Then I’ll cover some system fundamentals. I want everyone to come away from this session with a clear understanding of what detect-and-avoid is for, even if you haven’t encountered it before.
I’ll also talk a little bit about our detect-and-avoid system as a predicate system. I’ll provide some background about us, explain how detect-and-avoid works, and discuss some of the projects we’ve completed that have brought us to this point.
From there, we’re going to dive into what detect-and-avoid system components are made of, along with the algorithms and standards that govern these systems today. Those are extremely important, particularly when you’re designing testing schemes. Then we’ll cover testing types and considerations, followed by real-world testing. Finally, I’ll close with a short discussion about next steps for us and for the industry, and what we see as the ultimate goal for systems like this.
Why Is Detect-and-Avoid Important?
Let’s start with a short discussion about the problem and why this is such an important technology.
This is a critical technology for the unmanned aircraft systems industry. You’ll note that I have a diagram here. This comes from the ASTM F38 detect-and-avoid systems standard. It is a timing diagram showing how a system like this would work under that standard.
The reality is that beyond visual line of sight flight for drones currently requires individualized authorization from the FAA if you’re operating in the U.S. If you’re flying internationally, the rules may differ, but many are very similar to what you encounter with the FAA.
The reason for that is that the technologies needed to provide safety for unmanned aircraft flying beyond the visual line of sight of the pilot are still being developed. Drone pilots and UAS pilots have a requirement to remain well clear of manned aircraft. That means it is the UAS pilot’s responsibility to have some kind of system that allows them to detect and then avoid encounters with manned aircraft.
A complication is that they must be able to detect both cooperative aircraft and non-cooperative aircraft. Cooperative aircraft are aircraft with transponders that are participating in air traffic control. Non-cooperative aircraft are not transmitting that information.
In the U.S. in particular, you have general aviation aircraft that may not be interacting with air traffic control, but as a UAS operator, you still have a responsibility to avoid them. These are the fundamental problems that a detect-and-avoid system is designed to solve.
In addition, onboard detect-and-avoid, rather than a ground-based system, is really required to make long-range beyond visual line-of-sight operations practical and economical. It is also going to be absolutely critical for UAM and AAM operations, which we’ll touch on shortly.
A scalable solution really needs automatic avoidance. In other words, you should not have to grab a joystick every time a sensor tells you there is another aircraft, figure out where that aircraft is, track the two aircraft in relation to one another, and then decide what to do. To truly scale, detect-and-avoid needs autonomy, specifically trusted autonomy. It needs to be well tested, and it needs to exist within a broader framework of safety as part of an overall system that is trusted.
Right now, detect-and-avoid can be complex and expensive, which is why overcoming these obstacles is so important.
This is another diagram from the ASTM standard, and it helps illustrate the idea that the unmanned aircraft in the middle has to maintain a certain separation distance, called the well-clear boundary, from manned aircraft. It also has to stay outside the near mid-air collision, or NMAC, boundary. This is really the fundamental purpose of a detect-and-avoid system.
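The two boundaries can be illustrated with a toy separation check. The thresholds below are placeholders, not the values from the ASTM standard, which defines well clear and NMAC using combined horizontal and vertical criteria rather than a single radius:

```python
import math

# Placeholder thresholds for illustration only; the standards define well
# clear and NMAC with combined horizontal and vertical criteria.
WELL_CLEAR_M = 2000.0   # hypothetical horizontal well-clear radius
NMAC_M = 150.0          # hypothetical near mid-air collision radius

def separation_status(own_xy, intruder_xy):
    """Classify the horizontal separation between ownship and an intruder."""
    dist = math.hypot(intruder_xy[0] - own_xy[0], intruder_xy[1] - own_xy[1])
    if dist < NMAC_M:
        return "NMAC"
    if dist < WELL_CLEAR_M:
        return "well-clear violation"
    return "clear"

print(separation_status((0.0, 0.0), (3000.0, 0.0)))   # clear
print(separation_status((0.0, 0.0), (500.0, 500.0)))  # well-clear violation
```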
Current Industry and Regulatory Trends
A little bit now about current industry and regulatory trends.
Detect-and-avoid typically exists in the smallest of these circles. It is usually a tactical system, but it works alongside other forms of deconfliction and safety that surround unmanned aircraft, UAM, and other autonomous aircraft.
Strategic pre-planning is very important. In-flight planning is important. Knowing where other aircraft are, potentially through networked systems like UTM, is also important. But ultimately, the aircraft itself is expected to be able to detect and avoid other aircraft.
Typically, the industry is moving toward the use of multiple sensors to do that, creating multiple layers of detection. The ability to detect both cooperative and non-cooperative air traffic, and then provide a defined level of risk mitigation, is extremely important. That concept runs throughout the standards. The idea is that your detect-and-avoid system is providing a measurable level of safety, combined with situational awareness and, ultimately, autonomy.
In the future, we expect networked coordination systems like UTM, or unmanned traffic management, to provide additional coordination.
Detect-and-Avoid for Unmanned Aircraft Operations
All right, so I’m going to talk a little bit about our system.
Our system is called FlightHorizon. It is software designed to detect, track, and avoid conflicts between drones and manned aircraft. It can send commands for an avoidance maneuver either to the pilot or to an autopilot. It provides a real-time 2D or 3D display of air traffic, predicted trajectories, and avoidance advisories, and it combines data from multiple sensors.
I’ll go through that in more detail in later slides, but pulling in data from multiple systems is one of the major functions of the software.
It is based on two licensed NASA patents and significant NASA prototyping and research.
Here is a diagram that gives you a better idea of how detect-and-avoid systems work, again using our product, FlightHorizon, as the predicate system.
You have telemetry coming in on the left-hand side from your drone. This comes from the flight controller and provides information over your command-and-control radio about where your aircraft is located.
You also have information from a cooperative sensor on the right-hand side, specifically the transponder receiver. If aircraft are participating in air traffic control, they’ll be transmitting that information over the air, which is very useful. But for aircraft that are not cooperative and do not have that transponder, you need radar to detect them.
All of that information comes into a system like ours. It is incorporated into an airspace model, where all that air traffic is maintained in real time. The system predicts trajectories over the next 30, 60, or 90 seconds, and sometimes longer. Then, if necessary, it calculates and delivers an avoidance maneuver. That is usually displayed on the airspace management or detect-and-avoid display so that the pilot or airspace manager is aware of it.
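The prediction step described above can be sketched with straight-line extrapolation and a closest-point-of-approach calculation. This is a simplified illustration under a constant-velocity assumption, not the trajectory logic FlightHorizon or DAIDALUS actually uses:

```python
def predict_position(pos, vel, t):
    """Linearly extrapolate a 2D position (m) t seconds ahead at velocity vel (m/s)."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def time_of_closest_approach(own_pos, own_vel, intr_pos, intr_vel):
    """Time (s) at which two constant-velocity tracks are closest.
    Returns 0.0 if the intruder is already receding or there is no
    relative motion."""
    rx = intr_pos[0] - own_pos[0]   # relative position
    ry = intr_pos[1] - own_pos[1]
    vx = intr_vel[0] - own_vel[0]   # relative velocity
    vy = intr_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                   # no relative motion: separation is constant
        return 0.0
    return max(0.0, -(rx * vx + ry * vy) / v2)

# Intruder 900 m away, closing at 30 m/s: closest approach in 30 s, which
# falls inside a 30/60/90-second prediction window.
print(time_of_closest_approach((0.0, 0.0), (0.0, 0.0), (900.0, 0.0), (-30.0, 0.0)))  # 30.0
```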
Here is what that display typically looks like. Very quickly, the white aircraft in the middle is your ownship, or your unmanned aircraft. The large yellow circle around it is your well-clear distance. The small red circle is your NMAC distance. Those boundaries have to be maintained even when a manned aircraft enters your airspace.
In this screenshot, you can see a blue aircraft being tracked. There is a predicted conflict, so the system immediately provides an advisory that says, “Collision possible 20 seconds. Ascend. Ascend. Ascend.” It is actually speaking to you, displaying the warning on screen, and also showing a green guidance line.
Interfaces like this are very common, but this is a good representation of what is happening inside the software and what might be sent to an autopilot, which in this case would be instructed to follow the new flight path.
Company Background
All right, this is just a little background.
NASA Armstrong and Ricardo Arteaga, who is the inventor of this technology, began thinking about this back in 2013 as NextGen Air Traffic Control was emerging, and they worked on filing a patent.
We were very fortunate to learn about that work, see a public presentation, license the patent, and then participate in multiple flight campaigns with both NASA and other research institutions. That has been extremely valuable for us.
We then participated in the FAA’s Integration Pilot Program from 2018 to 2020, and we conducted a variety of flights through that program. Then, in 2020, we began flight testing our system, particularly radar testing for the FAA at the Alaska Center for Unmanned Aircraft Systems Integration. That work has continued through today.
We are now part of the FAA’s BEYOND program in Alaska and are rolling out our ground-based system at a variety of air parks and UAS test sites.
We have three versions of the system. We have a ground-based version for air parks. We have a portable version for a laptop or tablet, which is actually used by NASA. And then we have the PILOT version, which is onboard. I’ll talk about that a little more here.
In terms of past projects, we have completed several projects with NASA, and that really provides much of our research background.
Increasingly, we have also been out in the field with commercial operators. In particular, we completed our radar integration work with support from Oklahoma State University, and that was a fantastic effort.
Current projects are really where I’ll focus as we move forward, and I’m going to show you information about real-world testing that I hope you’ll find interesting.
We are currently doing radar research with our system as the predicate detect-and-avoid system for the FAA. That project is ongoing.
We are also rolling out our system at the Northern Plains UAS Test Site in Grand Forks, one of the national federal test sites, and we are involved in several other FAA and NASA testing projects.
Testing a Detect-and-Avoid System
All right, I’m going to go a little deeper into the technology here, and then we’ll talk about how you test that technology.
The typical components of a DAA system include sensors for non-cooperative aircraft detection.
We use radar, which is very popular because it comes in a variety of sizes, ranges, and power levels. We test a wide range of radar systems, and I’ll show you some photos of that. Electro-optical and infrared systems are also used. These are camera systems capable of detecting and classifying aircraft. Acoustic sensors are used in some cases as well, along with other types of sensors.
Display and human factors are also very important. Typically, part of the DAA system is the display that shows traffic and, in particular, alerts you to situations that require immediate action. Over time, we expect more of that process to become automatic. So autonomous avoidance processes, along with the algorithms that support automatic avoidance, are also a key part of the system.
Ultimately, we expect there to be more coordination. Systems like UTM will likely be rolled out regionally or nationally, allowing for much greater levels of coordination. Even then, detect-and-avoid is expected both to integrate into UTM systems and to remain either with the pilot or onboard the aircraft to provide tactical deconfliction.
These are just some examples of common components. These are things we use, along with others.
We use an ADS-B in receiver like the device on the left. We use portable radar for many of our tests, although we also work with larger radars. And then we use an autopilot. We can integrate with several of those, and Pixhawk is very commonly used.
I won’t go into a lot of depth on this, but there are a few important points to keep in mind.
A detect-and-avoid system that provides some level of autonomy depends on algorithms to perform trajectory prediction, detect conflicts, and generate a resolution or avoidance advisory.
We use NASA DAIDALUS, which is a system that has undergone substantial testing at NASA. The FAA now has ACAS X, which is a new TCAS-related system, and we are implementing ACAS sXu, which is the small unmanned aircraft version of that software.
We’re very excited about that because it offers a path toward certification for systems performing collision avoidance, and we clearly see that as the future.
There are also some very important technical standards. I referenced the ASTM standard at the beginning of this talk, but there are other standards that are important in this industry. RTCA has standards, mostly for larger unmanned aircraft. The SC-228 standards are very important in this area, and the new TCAS-related standards for emerging ACAS systems from RTCA will also affect how these systems are implemented and certified.
So when we test these systems, what are we doing and why are we doing it?
We are testing sensor performance. That is very important. We are also doing encounter testing to ensure that all the data is coming together correctly and being correlated properly.
Usually, multiple aircraft are involved in those encounters, and we need to confirm that the system can properly use that data and provide a resolution advisory. Typically, we are testing either ground-based or onboard sensors using different methods and approaches.
With radar, we are sensitive to all the issues that can create problems. That includes clutter, which means we need very careful filtering. We also have to reject false tracks that do not behave like aircraft, and classification of different objects and aircraft types is important.
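As a rough illustration of false-track rejection, one simple heuristic is to require a minimum number of radar hits and a plausible average speed before treating a track as an aircraft. The function and its threshold values are hypothetical placeholders, not the filtering actually used in these tests, which involves much more careful clutter rejection and classification:

```python
def is_plausible_aircraft(track_speeds_mps, min_hits=3,
                          min_speed=15.0, max_speed=180.0):
    """Crude false-track filter: require min_hits radar updates and an
    average speed inside a plausible aircraft envelope. The speed bounds
    are illustrative placeholders, not tuned thresholds."""
    if len(track_speeds_mps) < min_hits:
        return False
    avg = sum(track_speeds_mps) / len(track_speeds_mps)
    return min_speed <= avg <= max_speed

print(is_plausible_aircraft([45.0, 50.0, 48.0]))  # True: GA-like cruise speeds
print(is_plausible_aircraft([1.0, 2.0, 1.5]))     # False: too slow, likely clutter
print(is_plausible_aircraft([45.0]))              # False: too few hits
```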
We want to establish baseline performance for these sensors. That is one of the main reasons we are doing the projects I’ve described.
That information can then be used in much larger simulations involving hundreds, thousands, or even millions of encounters. That gives you a much deeper understanding of how the sensors can be used for detection.
Ultimately, you are building a safety case that can become part of either a waiver application or a certification process with a regulator.
Real-World Testing
Here are some photos from recent testing. It was a very cold morning near Fairbanks, Alaska, and these are the systems I’ve described actually being used in the field.
You’ll see here an ADS-B in receiver in the upper left. You have an EchoGuard radar. You have a GroundAware 9120, which is a larger, longer-range radar. We have data capture running, telemetry coming in from the unmanned aircraft, and an EchoFlight radar, which you can’t quite see in this picture. That radar is actually mounted on the drone and is being used to detect manned aircraft.
Here is a system with FlightHorizon PILOT, the onboard system, running with EchoFlight radar. That is a portable radar designed to be flown onboard the aircraft. It can be used on either fixed-wing or multirotor drones, and in this case we are using it on a multirotor.
It feeds directly into the detect-and-avoid system, which records information and will eventually provide information back to the autopilot, which is supervised from the ground.
In this example, we are using two encounter aircraft. These are very useful for us because they are typical general aviation aircraft. They provide strong encounter scenarios and allow us to verify that we are seeing the aircraft where we expect to see them, both on the ground and in the air, using our sensors.
This is some additional testing with small unmanned aircraft.
I’m giving you a variety of examples and programs here so you can get a feel for what real-world testing of systems like this actually looks like.
In this case, we have two unmanned aircraft flying through a variety of encounters. You’ll see a couple of small screenshots showing those aircraft being tracked, detected, and then the system providing the ownship pilot with information about what needs to be done to avoid a collision.
These aircraft are offset by altitude, but they are still two small unmanned aircraft flying real encounter scenarios so that you can conduct real-world, sensor-based, in-field testing.
Here is another test. This one used a larger radar and a slightly larger unmanned aircraft acting as a target in order to measure range and performance characteristics. This test was performed in North Dakota with the Northern Plains UAS Test Site.
Here is another example, and this one is interesting because we set ourselves up in the field, installed the equipment, and then ran dozens or even hundreds of tests. We have completed many of these encounters, which gives us a great deal of information about how the systems behave.
We also had the opportunity in May 2020 to fly with a company that develops specialized temperature-controlled biomedical delivery containers. We conducted that testing with Oklahoma State University, and it was a very interesting use case for this system.
We fully expect that drone delivery will eventually become routine. That is one of the reasons we are doing this research. Our goal is to help make beyond visual line-of-sight flight safe and reliable enough that regulators will allow it to become routine, including delivery operations.
Next Steps
So, as we move into the final minutes of the presentation, and then hopefully a few minutes for questions, let’s talk about next steps.
The real focus of this kind of research is to establish safety cases for different kinds of systems.
In some cases, you will have a ground-based, infrastructure-supported system, particularly if you are inspecting assets. For example, if you are operating at a port, wind farm, or refinery, and you plan to fly routinely in that area, then a ground-based system may be the most economical solution.
In other cases, such as flying along a long pipeline, performing delivery flights, or operating in UAM, you are going to need an onboard system with multiple sensors collecting and delivering this information.
This work is intended for both small UAS and large UAS. We work with both, and much of the research has involved a mix of platforms. You’ve seen examples of small and medium-sized aircraft. A lot of the original research was done at NASA, and we continue to work with large UAS as well.
Establishing the risk ratio, meaning the ability to quantify the contribution of a detect-and-avoid system to the safety of the overall operation, is very important. That is what drives compliance with the standards.
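A risk ratio of this kind is commonly computed from large simulated encounter sets: the fraction of encounters ending in NMAC with the DAA system active, divided by the same fraction with it inactive. The sketch and numbers below are illustrative only and do not come from the testing described here:

```python
def risk_ratio(nmacs_with_daa, nmacs_without_daa):
    """Risk ratio over the same simulated encounter set: NMAC count with the
    DAA system active divided by the NMAC count with it inactive.
    Lower is better; 0.0 would mean every NMAC was prevented."""
    if nmacs_without_daa == 0:
        raise ValueError("baseline must contain at least one NMAC")
    return nmacs_with_daa / nmacs_without_daa

# Illustrative numbers only: 12 NMACs with the system, from an encounter
# set that produced 400 NMACs without it.
print(risk_ratio(12, 400))  # 0.03
```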
We are also testing new and longer-range radars. Radar costs are coming down, and the capabilities of small onboard radars are improving. All of that is very exciting for us.
We are not a radar manufacturer. We do not manufacture any hardware. We are really focused on systems and integration, on how to make all of these elements work together successfully.
As the industry moves toward more autonomy, system assurance and trusted autonomy are going to become extremely important.
Frameworks are emerging, and we are very involved in the industry. I serve on several working groups that are helping shape those frameworks.
Ultimately, we are very excited about the impact systems like this can have on UAM and AAM as the industry moves toward air taxis and other advanced operations that we expect to drive growth and innovation over the next several years.
There is also Remote ID for unmanned aircraft. Under the new rules, unmanned aircraft are identifying themselves by radio, essentially like a license plate for a car. That is very helpful for deconfliction between unmanned aircraft and for general tracking.
Then, ultimately, UTM systems will provide network-based coordination. As long as you can access that network, you can achieve additional coordination. All of those elements are in development across the industry, and they are things our company monitors closely.
As I close and leave a few minutes for questions, I want to emphasize that we are really focused on providing continuous, automatic safety with multiple layers of deconfliction. That means multiple sensors, pre-coordination, in-flight coordination when possible, and ultimately tactical avoidance when necessary.
Integration with both onboard and ground-based resources and assets, eventually including UTM, is very important to the way we think about detect-and-avoid and safety for unmanned aircraft.
We expect the system built into the flight controller to become an integral part of the operation, something that simply runs in the background as part of normal flight.
We expect it to be standards-based, highly flexible, and highly modular. We are always excited when we have the opportunity to integrate a new radar sensor into the system. Those are the types of developments that are driving innovation and capability.
Over time, we expect the cost of compliance to come down. That means the cost of shared infrastructure or individual sensors should decline, while capability increases. That will improve the pace of unmanned aircraft operations and dramatically increase efficiency when shared resources are available.
Ultimately, we want to develop and deliver systems that are economical and scalable for different types of users, from small unmanned aircraft all the way up to large unmanned aircraft. That is what will drive the industry.
In low-risk environments such as rural areas, agricultural settings, and pipeline inspection routes, beyond visual line-of-sight unmanned aircraft operations can deliver a high level of efficiency.
In urban environments, the setting is obviously different, but operators may also have access to different resources and ground-based systems. All of these factors come together.
Ultimately, we want to see full aerospace integration, and that is really the goal of detect-and-avoid and of systems like ours.
We would be happy to talk with you and answer any additional questions, which I’m sure you’ll have. Thank you all very much, and thank you to SAE International for allowing us to make this presentation. Thank you so much, everybody.
About the Business of Automated Mobility Forum

The Business of Automated Mobility Forum is a joint event organized by SAE International and AUVSI. The event was designed to help companies building the future of mobility create an actionable roadmap to success by bringing together experts on standards, regulations, technology, and commercialization across automated mobility sectors.
About SAE International

SAE International is a global professional association that serves engineers and technical experts in the aerospace, automotive, and commercial vehicle industries. The organization is widely known for its work in mobility-related standards development, technical resources, professional education, and industry events.
