Centeye Nano Drone Video
Enjoy our most recent video showing our nano drone with 360 degree obstacle avoidance and auto-hover!
As part of our efforts to develop practical vision-based flight control, Centeye has developed a new version of our nano drone containing 360 degree vision and obstacle avoidance. The system is built on a modified Crazyflie. Below are pictures and specifications for this system, which we are using for internal work and providing to our partners. The system is operational. New videos will be released in the coming months.
Centeye nano unmanned aircraft system (UAS) with 360-degree stereo vision
Centeye Multi-Mode Stereo Sensor:
As part of an Air Force-funded project, Centeye has prototyped a vision-based system that allows small drones to both hover in place without GPS and visually detect nearby objects to avoid collisions. The video below shows sample flights in an indoor residence, recorded in November 2015. A more detailed write-up is available on the site of our good friends at Tandem NSI. The video contains annotations, which are best viewed on a laptop or desktop computer.
Eager to make a little foray into the “Internet of Things”, I decided to experiment with the use of an ArduEye as an “Eye for the IoT”. My house is on a fairly busy street, of which I have a good vantage point from my attic home office. A car traffic counter seemed like a good choice for a first project.
I programmed an ArduEye Aphid to detect cars using a very simple algorithm: the ArduEye grabs a small block of pixels ten times a second, and if any pixel in that block changes by more than a threshold, this is considered a car detection. The block of pixels lies on the path of cars heading northbound on my street. I added the requirement that the pixels had to quiet down and stay quiet for a certain time interval (about half a second) to prevent the same car from being counted twice.
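The detection-plus-debounce logic above is simple enough to sketch in a few lines. The following is an illustrative simulation, not the actual ArduEye sketch; the threshold and quiet-interval values are assumptions.

```python
# Simulation of the car-counting logic: a detection fires when any pixel
# in the block changes by more than a threshold, and a new detection is
# only counted after the block has been quiet for ~0.5 s (5 frames at 10 Hz).
# All names and constants here are illustrative assumptions.

THRESHOLD = 20      # per-pixel intensity change that signals a car
QUIET_FRAMES = 5    # ~0.5 s of quiet required before the next count

def count_cars(frames, threshold=THRESHOLD, quiet_frames=QUIET_FRAMES):
    """frames: list of pixel blocks (each a list of intensities).
    Returns the number of distinct car detections."""
    count = 0
    quiet = quiet_frames          # start "armed" so the first car counts
    prev = frames[0]
    for block in frames[1:]:
        motion = any(abs(p - q) > threshold for p, q in zip(block, prev))
        if motion:
            if quiet >= quiet_frames:   # only count after a quiet stretch
                count += 1
            quiet = 0                   # re-arm timer resets on any motion
        else:
            quiet += 1
        prev = block
    return count
```

Note how the quiet-interval requirement means a car that causes several frames of motion in a row is still counted only once.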
Next was to upload the data feed to COSM (formerly Pachube). I made use of an excellent book, Building Internet of Things with the Arduino, by Charalampos Doukas, and used an Arduino Uno to bridge the ArduEye Aphid to the internet via a WiFly shield. COSM’s API was simple enough to follow, and for the first time, one of my image sensor chips was feeding real-time data to the Internet!
Here is a link to my data feed on COSM. I’m uploading three datastreams. The first is a simple count of how many cars are detected per minute. The second datastream shows the raw intensity value of one pixel in the window, which is a fun way to monitor the change from day to night. (This datastream is a bit glitchy; I haven’t figured that out yet.) The third datastream shows the largest “delta intensity” change of any pixel between two successive frames. The picture below shows a screenshot of the car-count datastream over a six-hour period on February 19, 2013. You can clearly see the rise in traffic due to rush hour towards the end of the plot.
The datastream you see may differ, depending on the current traffic and on whether the camera is alive at any given time.
This first traffic monitor is very simple and certainly far from perfect. But it was fun to do and gives me a few other ideas for things to try. Here’s an interesting question: What if I could make a version of the Stonyman image sensor chip that drew just a few hundred microwatts of power? Could I then hook it up to a low power microcontroller that could monitor traffic for, say, a year with just a single battery charge? I think it would be fun to try that out…
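The year-on-one-charge question can be checked with back-of-envelope arithmetic. The numbers below (sensor and microcontroller power draw, battery voltage) are assumptions for illustration, not measured figures.

```python
# Back-of-envelope energy budget for the "run for a year on one charge" idea.
# All values here are assumed for illustration.

sensor_power_w = 300e-6   # hypothetical few-hundred-microwatt Stonyman variant
mcu_power_w = 200e-6      # hypothetical average draw of a mostly-sleeping MCU
supply_v = 3.0            # nominal battery voltage

total_power_w = sensor_power_w + mcu_power_w
avg_current_a = total_power_w / supply_v           # I = P / V
hours_per_year = 365 * 24
charge_needed_ah = avg_current_a * hours_per_year  # amp-hours for one year

print(f"average current: {avg_current_a * 1e6:.0f} uA")
print(f"charge for one year: {charge_needed_ah:.2f} Ah")
```

Under these assumptions the average current is about 167 µA, and a year works out to roughly 1.5 Ah, which is in the range of a single AA-sized cell, so the idea is at least plausible on paper.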
This past January, I was contacted by Alex Sayer, Alan Kwok, and Benny Chick, students at the British Columbia Institute of Technology (BCIT) in Vancouver, who wanted to use some of our Tam2 chips for a class project: a human-computer interface (HCI) for people who suffer from ALS (Lou Gehrig’s disease) and cannot operate a computer with their hands. Their idea: use a low-resolution image sensor, mounted on a pair of eyeglasses and pointed at the eye, to track where the eye is looking and generate signals to emulate a mouse. The user would move his or her eye in different directions, causing the mouse cursor to move accordingly, and could use selected eye-movement patterns to generate mouse clicks and so forth. Intrigued, I sent them some Tam2 chips to play with…
Four months later, they had a working prototype they call the “eyeSelect”, and Alan Janzen, a patient suffering from ALS, was using this device to play solitaire and other games on a computer! These three students won the top prize in the 2012 Dr. Jim McEwen Excellence in Engineering design competition. Nice!!
Below are links to news media articles on their achievement, with pictures and video!
We’ve had a lot of inquiries about our ArduEye system, plus we’ve just prototyped a smaller, completely self-contained ArduEye (more on this in another post). So I figure it makes sense to discuss what is actually possible with one of these devices. True, the ATmega328 processing engine of an Arduino is limited compared to more advanced DSPs, but the reality is that for many applications you really don’t need a whole lot of pixels. If you can get by with the specs of the ’328 (16 MHz as an Arduino, 32 kB flash, and 2 kB SRAM), the small size of an ArduEye (we’ve gotten down to around 350 mg) and the ease of prototyping new sensors with one (I can prototype a new sensor with just a few hours of coding) make it a good development platform and reference design. So below are some sample applications:
Some of the above examples have actually been prototyped, either in our lab or by others. The rest are examples that we are pretty sure are doable within the memory constraints of an Arduino, though we haven’t actually attempted them yet.
In a previous post, I demonstrated that the ArduEye platform could be used to prototype a 6DOF vision system for optical flow odometry. The goal is to make a vision system for the Harvard University Robobee Project.
After the success of the prototype, the next step was to design a board that was as small and light as possible. The result is shown below:
The vision system consists of two back-to-back Stonyman vision chips, an Atmel ATmega328P microcontroller, a 16 MHz oscillator, and a voltage regulator. The chips have flat printed optics (as described previously) with slits, in order to take one-dimensional images of the environment. Even better, the ATmega runs the Arduino bootloader, so the sensor is an Arduino clone and can be programmed through the Arduino IDE. The entire system weighs approximately 300-350 milligrams and measures 8×11 millimeters.
The following video shows that motion along all six axes can be distinguished. Some axes are stronger than others, and the Y translation, in particular, is weak. However, the results are promising and with a little optimization this could be a useful addition to a sensor suite.
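The one-dimensional images from the slit optics lend themselves to simple gradient-based optical flow, the family of methods commonly used for this kind of odometry. The sketch below is an illustrative 1-D version, not the firmware on the board itself.

```python
# Minimal 1-D gradient-based (Lucas-Kanade-style) optical flow estimate,
# of the kind suited to one-dimensional slit images.
# Illustrative sketch; not Centeye's actual firmware.

def flow_1d(prev, curr):
    """Estimate the sub-pixel shift between two 1-D images (lists of numbers).
    A positive result means `curr` is shifted right relative to `prev`."""
    num = 0.0
    den = 0.0
    for i in range(1, len(prev) - 1):
        ix = (prev[i + 1] - prev[i - 1]) / 2.0   # spatial intensity gradient
        it = curr[i] - prev[i]                   # temporal intensity change
        num += it * ix
        den += ix * ix
    # Brightness constancy: it + ix * v = 0, solved in least squares for v.
    return -num / den if den else 0.0
```

With two such 1-D estimates per chip (one along each slit) and two chips facing opposite directions, motion along and about all six axes can in principle be disambiguated by how the flows add or cancel.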
I’d like to gauge interest in an integrated Arduino-clone vision sensor similar to this, though perhaps not as compact and minimal. This would most likely be a one-sided vision chip with optics and an Arduino-clone processor integrated on a small, single board. It would be about the size of a penny and weigh about half a gram. The user would have control over which pixels are read and how they are processed through the Arduino environment.
A while ago we (Centeye) started ArduEye, a project to implement an open-source programmable vision sensor built around the Arduino platform. The first ArduEye version used a simple Tam image sensor chip and a plastic lens attached directly to the chip. After much experimentation and some feedback from users, we now have a second-generation ArduEye.
The second generation ArduEye is meant to be extremely flexible, ultimately allowing one to implement a wide variety of different sensor configurations. A basic, complete ArduEye is shown below, and contains the following basic components:
An Arduino- Currently we support Arduino UNO-sized boards (e.g. UNO, Duemilanove, Pro) and the Arduino MEGA. When the ARM-based DUE comes out, we will surely support that as well.
A shield board- this board plugs into the Arduino, and has a number of places to mount one or more image sensor breakout boards. This shield also has places to mount an optional external ADC as well as additional power supply capacitors if desired.
A Stonyman image sensor on a breakout board- The Stonyman is a Centeye-designed 112×112-resolution image sensor chip with an extremely simple interface: five digital lines in, which are pulsed in predefined sequences, and one analog line out, which carries the selected pixel value. The Stonyman chips are wirebonded directly to a 1-inch-square breakout board, which plugs into the shield.
Optics- Possibilities include printed pinholes, printed slits, and cell-phone camera lenses, depending on what you want to do.
Example application- The “application” is an Arduino sketch programmed into the Arduino. This sketch determines what the ArduEye does. One sketch can make it track bright lights, another sketch can measure optical flow, and so on. We are releasing, initially, a base sketch that demonstrates light tracking, optical flow, and odometry. Let us know what other example applications you would like to see.
ArduEye libraries- These libraries are to be installed in your Arduino IDE’s “libraries” file, and include functions to operate the Stonyman image sensor chip as well as acquire and process images, including measuring optical flow.
GUI- Finally, we created a basic GUI that serves as a visual dump terminal for the ArduEye. You can now communicate with the ArduEye via either the GUI or the basic Arduino IDE’s serial terminal. The GUI was written in Processing.
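To make the "five digital lines in, one analog line out" interface concrete, here is a toy software model of a pointer-register readout scheme of that general kind. The line names and pulse semantics below are illustrative assumptions, not the Stonyman's actual pinout; consult the chip documentation for the real sequences.

```python
# Toy model of a pointer/value register interface like the one described
# for the Stonyman: digital lines are pulsed to select a pixel, and a
# single analog line carries that pixel's value out.
# Line names and semantics here are illustrative assumptions.

class ToyStonyman:
    def __init__(self, image):
        self.image = image   # 2-D list standing in for the pixel array
        self.row = 0
        self.col = 0

    def pulse(self, line):
        """Each pulse on a digital line resets or advances a pointer register."""
        if line == "RESET_ROW":
            self.row = 0
        elif line == "INC_ROW":
            self.row += 1
        elif line == "RESET_COL":
            self.col = 0
        elif line == "INC_COL":
            self.col += 1

    def analog_out(self):
        """Stands in for digitizing the analog output with an ADC."""
        return self.image[self.row][self.col]

def read_pixel(chip, r, c):
    """Select pixel (r, c) by pulsing the pointer lines, then read it."""
    chip.pulse("RESET_ROW")
    for _ in range(r):
        chip.pulse("INC_ROW")
    chip.pulse("RESET_COL")
    for _ in range(c):
        chip.pulse("INC_COL")
    return chip.analog_out()
```

The appeal of this style of interface is that the host needs no parallel bus or configuration protocol: a handful of GPIO pulses and one ADC channel are enough to read any subset of the array in any order.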
We designed the system to allow easy hacking to implement a wide variety of vision sensors by exploring combinations of optics, image sensing, and image processing. I personally find it useful, and actually use this system for prototyping things at Centeye- I can prototype a new vision sensor in just a couple hours. The target applications are quite broad and include just about anything that may use embedded vision, whether robotics, sensor nets, industrial controls, interactive electronic sculptures (yes this has come up), and so forth.
The video at the top shows some of the basic things you can do with this ArduEye. You’ll see the ArduEye interfacing with a host PC using both the Arduino IDE’s serial terminal and the ArduEye GUI. For more details, including links to the hardware design files and source code, go to the ArduEye wiki site. The site is a work in progress, but should be adequate to get people started. The sample “first application” and GUI are what were used to generate the above video.
Right now we have 200 Stonyman breakout boards being assembled; they should be ready within a month. We’ll make more if this is well received. We can assemble a few in-house at Centeye- I’ll do this if enough people twist my arm and promise to really play with the hardware. 🙂
Please let me know your thoughts. In particular, are there any other “sample application” sketches you’d like us to implement?
As part of Centeye’s participation in the NSF-funded Harvard University Robobee project, we are trying to see just how small we can make a vision system that can control a small flying vehicle. For the Robobee project our weight budget will be on the order of 25 milligrams. The vision system for our previous helicopter hovering system weighed about 3 to 5 grams (two orders of magnitude more!) so we have a ways to go!
We recently showed that we can control the yaw and height (heave) of a helicopter using just a single sensor. This is an improvement over the eight-sensor version used previously. The above video gives an overview of the helicopter (a hacked eFlite Blade mCX2) and the vision system, along with two sample flights in my living room. Basically a human pilot (Travis Young in this video) is able to fly the helicopter around with standard control sticks (left stick = yaw and heave, right stick = swash plate servos) and, upon letting go of the sticks, the helicopter with the vision system holds yaw and heave. Note that there was no sensing in this helicopter other than vision- there was no IMU or gyro, and all sensing/image processing was performed on board the helicopter. (The laptop is for setup and diagnostics only.)
The picture below shows the vision sensor itself- the image sensor and the optics weigh about 0.2g total. Image processing was performed on another board with an Atmel AVR32 processor- that was overkill and an 8-bit device could have been used.
A bit more about optics: In 2009 we developed a technique for “printing” optics on a thin plastic sheet, using the same photoplot process used to make masks for, say, printed circuit boards. We can print thousands of optics on a standard letter-size sheet of plastic for about $50. The simplest version is a simple pinhole, which can be cut out of the plastic and glued directly onto an image sensor chip; pretty much any clear adhesive should work. The picture below shows a close-up of a piece of printed optics next to an image sensor (the one below is a different sensor, the 125-milligram TinyTam we demonstrated last year).
The principle behind the optics is straightforward; a cross section is shown below. The plastic sheet has a higher index of refraction than air, so light from a nearly hemispheric field of view can be focused onto a confined region of the image sensor chip. You won’t grab megapixel images this way, but it works well for the hundreds of pixels needed for hovering systems like this.
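The near-hemispheric field of view follows directly from Snell's law: even a grazing ray entering the plastic is bent to no more than the critical angle, so the whole 180° of incoming directions is compressed into a cone inside the sheet. The refractive index below is an assumed typical value for clear plastic.

```python
import math

# Why a flat printed pinhole covers a near-hemispheric field of view:
# by Snell's law (sin(theta_air) = n * sin(theta_plastic)), a grazing ray
# (theta_air = 90 deg) travels at the critical angle inside the sheet.
# n = 1.5 is an assumed typical refractive index for clear plastic.

n_plastic = 1.5
theta_max_deg = math.degrees(math.asin(1.0 / n_plastic))
print(f"max ray angle inside the sheet: {theta_max_deg:.1f} degrees")
```

For n = 1.5 this gives about 41.8°, i.e. the full hemisphere in air is squeezed into a cone of roughly 84° inside the plastic, which is why the image lands on a confined region of the chip.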
We are actually working on a new ArduEye system, using our newer Stonyman vision chips, to allow others to hack together sensors using this type of optics. A number of variations are possible, including using slits to sense 1D motion or pinhole arrays to make a compound eye sensor. If you want more details on this optics technique, you can pull up US patent application 12/710,073 on Google Patents.
(Sponsor Credit: “This work was partially supported by the National Science Foundation (award # CCF-0926148). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.”)