This is a new project we are performing in collaboration with Professor Pamela Abshire of the University of Maryland at College Park.
Let us suppose you want to send your nano drone deep into some near-Earth environment, whether in a dark corner of an urban alleyway, or deep inside a building. Unless you are a skilled pilot, you are going to need some sort of assistance in hovering the drone in one location, and you are going to need assistance in veering away from obstacles in your flight path. You can’t rely upon GPS, since it is unreliable in such environments. Besides, you can’t program the GPS coordinate of every single object or wall into the autopilot. And you can’t rely on just an IMU: even the best ones drift, and double-integrating noisy acceleration measurements gets you into trouble pretty fast. You are going to need a vision-based solution.
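To see just how fast double integration gets you into trouble, here is a quick illustrative simulation (not from our project; the sample rate and noise figure are assumed for the sketch). Even if the accelerometer noise is zero-mean, integrating it twice makes the position estimate random-walk away from the truth:

```python
import random

random.seed(0)

dt = 0.01            # 100 Hz IMU sample rate (assumed)
noise_std = 0.05     # accelerometer noise in m/s^2 (assumed, optimistic)

velocity = 0.0
position = 0.0
for _ in range(6000):                      # 60 seconds of perfect hover:
    accel = random.gauss(0.0, noise_std)   # measured acceleration is pure noise
    velocity += accel * dt                 # first integration -> velocity error
    position += velocity * dt              # second integration -> position error

print(f"position drift after 60 s: {position:.2f} m")
```

Run this a few times with different seeds and you will see the drift wander to meter scale within a minute, which is hopeless for holding a nano quad in one spot indoors.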
If you are using a larger air vehicle, say 50cm or larger, there are a lot of sensing and vision technologies that will enable you to do this. This includes laser rangefinders, 3D sensors such as the Kinect, stereo vision cameras, and of course a whole collection of other vision algorithms.
But on a nano drone you are more limited: a proper nano quadrotor (let’s say having a motor-to-motor distance of less than 10cm) will have a payload capacity of at most 5 to 10 grams. So to allow room for any payload, and to avoid operating at the nano quad’s limits, you really have at most a few grams left over for vision.
But what happens if it is dark? Well, you need very sensitive photoreceptor circuits to work with whatever few photons you can intercept, and then excellent algorithms to extract intelligence from those photons.
Don’t worry: we are currently working on a project to solve exactly this problem. Our goal is to build a vision system weighing just a few grams that allows autonomous hover-in-place and obstacle avoidance, and ultimately even provides mapping information, and does so in all lighting conditions, “from daylight to no light”.
We know it is possible. There are many species of nocturnal and crepuscular insects that are able to self-stabilize, avoid obstacles, visually navigate to and from their nest, and even visually recognize it, all at light levels of just a few photons per ommatidium per second! That is maybe a few tens of thousands of photons per second for the entire vision system.
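The back-of-envelope arithmetic behind that last figure is simple. The per-ommatidium rate comes from the text above; the ommatidium count is an assumption for illustration (nocturnal insects typically have on the order of a few thousand ommatidia):

```python
# Photon budget for an insect-scale vision system, order of magnitude only.
photons_per_ommatidium_per_s = 3   # "a few photons per ommatidium per second"
num_ommatidia = 5000               # assumed count, typical order of magnitude

total_photons_per_s = photons_per_ommatidium_per_s * num_ommatidia
print(total_photons_per_s)         # → 15000
```

Tens of thousands of photons per second for a whole vision system is a vanishingly small information budget, yet these insects fly competently on it.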
We are already making progress towards this goal. We recently prototyped a new chip, in a standard CMOS process, that is sensitive enough to support optical flow measurement in the dark, with illumination provided by just a single 30-milligram LED! We have already obtained successful flight tests with these chips, and are now working to integrate them with a true nano quadrotor. And we are having a blast with this…