Drone Obstacle Avoidance and Autonomy in the 2020s: Diverse Needs and No Single Solution

In my last article I told a story from the early 2000s about how we first integrated obstacle avoidance on a small drone. To recap, we were attempting to provide a fixed-wing model aircraft with a set of optical flow sensors that would detect a looming obstacle (trees, buildings, etc.) and trigger a turn to avoid a collision. We found that the sensors succeeded in detecting looming obstacles, but the drone was not agile enough to avoid them with the warning provided. We solved the problem by making the drone itself more maneuverable. The point of that story was that obstacle avoidance required optimizing not just the sensor, but the entire drone system (the drone, its actuation, the sensors, and control) in a holistic fashion.

Let us update that discussion to the current decade. In the 2020s and onward, there will be more use cases for drones involving operation close to the ground, near structures, or even inside them. The range of applications will be quite diverse, each with its own specific needs for autonomy and obstacle avoidance. I also believe that a holistic approach is still needed to implement such drone systems. If you accept this, the implication is that no “one size fits all” approach to obstacle avoidance and autonomy will be viable, at least not any time in the near future.

First, suppose we wanted to define a set of “specifications” for an implementation of obstacle avoidance and autonomy on a drone. Below is a list I came up with, along with a range of possible values. What you will note is that some of the specifications refer to the drone itself as opposed to any obstacle avoidance technology. The reader will undoubtedly think of other specifications or improvements to the list below. I solicit your feedback!

System mass: How much does the entire drone weigh, all-inclusive, at take-off? Sample values include 1) nano UAS weighing 15g to 50g, 2) “micro” UAS weighing less than 250g, and thus not requiring registration with the FAA, 3) small UAS weighing 0.5kg to 10kg for aerial photography, and 4) larger UAS for delivery. These values currently span more than three orders of magnitude while still meeting the FAA definition of an sUAS!

Distance to obstacles: What is the distance to obstacles that must be detected and negotiated? These values can range from 10cm in tight indoor or subterranean spaces to tens of meters for outdoor environments, spanning two orders of magnitude. We are not considering “see and avoid” applications requiring the ability to see platforms a kilometer or more away.

Minimum obstacle size detected: What is the smallest obstacle that must be detected for an application? Smaller values can include several millimeters for cables and bare vegetation. Modest values can include tens of centimeters for tree trunks, furniture, or a wide variety of environmental features. Larger values can include meters for buildings. Again, we see three orders of magnitude of variation. It should be noted that while all such obstacles can be troublesome to a drone, it is possible to define an application or mission scenario that does not need to avoid the smallest obstacles. For example, operators using a drone for building inspection will generally plan and set up operations away from power lines and trees, while operators for tactical (military, etc.) uses might not have that luxury.

Platform agility: How agile is the platform? A platform that is extremely agile can get closer to an obstacle before an avoidance maneuver must be triggered. Arguably, increased agility can simplify avoidance needs, as described in my previous post.

Platform collision resilience: What is the “cost” of the drone coming into physical contact with an obstacle? For most open-rotor drones, such a “contact event” will result in a crash and/or physical damage. Drones with rotor guards can usually survive gentle contact events. Some drones have surrounding rolling cages and are designed to be in constant contact with the environment. For these latter drones, the cost of a contact event may be just a short delay. This specification is important since the performance of an obstacle avoidance or autonomy system clearly depends on how the drone interacts with the environment.

Light levels: For optically based avoidance systems, what is the range of light levels that must be handled? Outdoor daylight levels range from thousands to tens of thousands of lux in the open, or hundreds of lux in shadows. Indoor office or living environments range from single-digit lux to thousands of lux, depending on, for example, the lighting type, whether one is in shadow, and whether one is near a window. Light levels drop to a small fraction of a lux at night or in the dark. This specification can clearly span six or more orders of magnitude.

Lighting dynamics: How much do light levels vary over time or by location? For example, does a vision system need to see both sunlit and shadowed areas in one image? Will the system need to transition quickly between bright and dark environments? Are there pulsed lighting sources that can affect vision algorithms? These factors can stress a vision system used to support obstacle avoidance and autonomy.

Environment geometry: What is the general shape of the environment? An easy environment is one in which floors, ceilings, and walls are flat and either horizontal or vertical. Although simplistic, this is an adequate model for most office or living spaces. Drones operating in such environments can make assumptions about obstacles based on limited sensing. Difficulty increases with features such as slopes, stairwells, and vertical shafts. At the other extreme are environments such as caves, with a variety of three-dimensional shapes for obstacles and clear spaces. Flight within, for example, a tree canopy would be even more complex. Arguably, the “difficulty” of such environments, in terms of the demands they place on perception systems, could qualitatively span one or two orders of magnitude.

Flight speed: How fast is the drone traveling? We envision operational flight speeds ranging from tens of centimeters per second to tens of meters per second, or about two orders of magnitude. Flight speed is relevant because it determines how quickly an obstacle (or a safe path around it) must be detected; I sketch this arithmetic just after the last specification below. Furthermore, faster flight causes more motion blur in any acquired visual imagery, especially at lower light levels.

Operator integration: How will a human operator interact with any autonomous or obstacle avoidance algorithms on the drone? This is a qualitative descriptor. At one level, the human operator may directly fly the drone, with avoidance algorithms taking over only when a collision is imminent or when the operator releases the sticks. At another level, the human operator may give only high-level commands, for example to initiate a flight sequence.
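
Several of these specifications interact directly. Dividing obstacle detection distance by flight speed, for example, gives the warning time available to sense, decide, and maneuver. Below is a minimal back-of-the-envelope sketch of that arithmetic in Python; the scenario values are illustrative assumptions on my part, not measurements from any particular platform.

```python
# Warning time = detection distance / closing speed, for a head-on obstacle.
# All values below are illustrative assumptions, not real platform specs.

def warning_time_s(detection_range_m: float, speed_m_s: float) -> float:
    """Seconds between first detecting a head-on obstacle and reaching it."""
    return detection_range_m / speed_m_s

scenarios = [
    # (description, detection range [m], flight speed [m/s])
    ("slow indoor flight, close-range sensing", 0.5, 0.3),
    ("fast outdoor flight, long-range sensing", 20.0, 15.0),
    ("fast flight, close-range sensing", 0.5, 15.0),
]

for label, range_m, speed in scenarios:
    print(f"{label}: {warning_time_s(range_m, speed):.2f} s of warning")
```

The third scenario is the instructive one: a fast platform with only short-range sensing gets a few hundredths of a second of warning, which only an extremely agile or collision-resilient drone could tolerate. This is the quantitative face of the holistic design argument from my earlier story.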

If we look at the above, we can see that each dimension of the specification space can span between one and six (or more) orders of magnitude. When you consider all permutations, the volume of the specification space can span twenty or more orders of magnitude. There is no practical way (with anything resembling 2020s technology) that a single implementation can handle all of the above, or even more than a tiny fraction of it.

I invite you to consider any implementation of autonomy or obstacle avoidance you may have come across, whether as a university research demonstration or as part of a product you can buy. That implementation would cover, at most, just a thin sliver of the specification space. This even includes cases where tens to hundreds of millions of dollars have been invested. The most noteworthy examples are the advanced consumer drones by Skydio and DJI. These platforms weigh on the order of one to several kilograms, operate only in daytime, and (as far as I know) only operate outdoors. They are no doubt impressive technical achievements. However, if you take them out of the environments for which they were designed, they will likely fall short of performing adequately, if not fail completely. Similarly, the technology they incorporate could almost certainly not, as of 2020, be implemented on a 50g-class nano drone. (Indeed, an NVIDIA TX2 without the requisite heat sink still weighs more than some complete autonomous nano drones my company Centeye has built!) This observation is not meant in any way to denigrate these achievements, but to remind the reader that they are practical over only a subset of mission or use case scenarios.
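
To spell out the arithmetic behind that “twenty or more orders of magnitude” figure: when independent specification dimensions are combined, their orders of magnitude add. Here is a minimal sketch, using endpoint values that are my own rough assumptions drawn from the ranges above.

```python
import math

# Rough tally of how many orders of magnitude the combined specification
# space spans. Each entry is (dimension, low, high) in consistent units;
# the endpoint values are illustrative assumptions, not hard limits.
spec_ranges = [
    ("system mass [g]",           15.0,    25_000.0),
    ("distance to obstacles [m]",  0.1,        30.0),
    ("min obstacle size [m]",      0.003,       3.0),
    ("light level [lux]",          0.1,   100_000.0),
    ("flight speed [m/s]",         0.3,        30.0),
]

total = 0.0
for name, low, high in spec_ranges:
    span = math.log10(high / low)  # orders of magnitude for this dimension
    total += span
    print(f"{name}: ~{span:.1f} orders of magnitude")

print(f"combined specification space: ~{total:.0f} orders of magnitude")
```

Even counting only the five dimensions that quantify cleanly, the tally comes to roughly seventeen orders of magnitude; assigning even one or two orders each to agility, collision resilience, lighting dynamics, and environment geometry pushes the total well past twenty.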

Students of technology innovation will have come across the theories of disruptive innovation put forth by the late Clayton Christensen, most notably in his “Innovator’s Dilemma” series of books. One observation from his book “The Innovator’s Solution” is, as I interpret it, applicable here: every technology, roughly defined by a technical problem and the solution set used to solve it, falls into one of two categories. In the first category, the technology partially solves a problem, enough to provide some utility, but does not solve it completely. The “best” solution, from the perspective of a customer, is really the “least deficient” solution. Innovations for this type of technology involve filling in the remaining deficiency gaps. A vertically integrated solution is needed for this type of technology, since only a holistic implementation can maximize overall performance.

In the second category, the technology adequately solves the problem. When a technology reaches this category, it is on the path toward becoming a commodity, as innovations focus on cost rather than utility. A holistic architecture can be a liability here, since it is technically harder to build an integrated solution than a modular one that can reuse components already built to solve different but related problems. Modularization not only can lower cost, but can also make it easier to repurpose the technology for different platforms and related applications. It is true that modularization incurs a performance penalty, but that penalty is now absorbed by the surplus capability of the core technology.

In my opinion, the technology to provide drones with obstacle avoidance and autonomy is clearly in the first category, with highly integrated solutions needed for each case. This is why, for example, all successful implementations of obstacle avoidance and autonomy on small drones either work for a narrow set of applications, such as the consumer platforms mentioned above, or are single-purpose demonstrations, such as those implemented by university research groups. This is also why there are (as far as I know) no viable “plug and play” obstacle avoidance solutions in the small drone space beyond the most rudimentary.

If you accept my reasoning above, there are many implications for those who wish to build, invest in, and/or use autonomy technology for drones. I will explore these further in follow-on articles.
