A Computer Vision project where our objective was to pilot a drone using visual targets in the environment.
- Images from the drone's two cameras are captured in the Unreal Engine 4 simulation and sent to a Python program running in parallel.
- Pre-processing is applied to the live feed, then Harris corner detection is performed.
- Using the detected corners, candidate edges between them are tested, and only strong black-and-white edges are kept.
- Quadrilaterals are assembled from these edges using simple rules, and a perspective transform is applied to retrieve a flattened image of each candidate target.
- If the extracted image corresponds to a target (an ArUco marker), we retrieve its identification number and use some 3D geometry to estimate its location and rotation relative to the camera at hand.
- This information is finally sent back to the drone which applies time-averaging of the estimates to guide itself in the environment.
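The Harris corner step above can be illustrated with a minimal NumPy sketch. This is a stand-in, not the project's code (which likely relies on an OpenCV routine); the window radius and the constant `k` below are illustrative values:

```python
import numpy as np

def box_sum(a, r=1):
    """Sum of `a` over a (2r+1)x(2r+1) window, zero-padded at the borders."""
    p = np.pad(a, r)
    out = np.zeros_like(a)
    h, w = a.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of the image gradients summed over a local window.
    R is strongly positive at corners, negative along edges, ~0 on flats."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)   # central-difference gradients (rows, cols)
    Ixx = box_sum(Ix * Ix)
    Iyy = box_sum(Iy * Iy)
    Ixy = box_sum(Ix * Iy)
    return Ixx * Iyy - Ixy ** 2 - k * (Ixx + Iyy) ** 2
```

The sign pattern of the response (positive at corners, negative on edges) is what lets the next step keep only corner points before testing edges between them.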
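The quadrilateral-flattening step amounts to estimating the homography that maps the four detected corners onto a square, then resampling through it. A minimal direct-linear-transform sketch (the function names and the unit-square target are assumptions for illustration):

```python
import numpy as np

def homography_from_quad(src, dst):
    """Solve for the 3x3 homography H mapping the 4 `src` points to the
    4 `dst` points, with h33 fixed to 1 (standard 8-unknown DLT system)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply homography H to a 2D point (homogeneous divide included)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Sampling the camera image at `apply_h(H_inv, (u, v))` for every pixel `(u, v)` of the target square yields the flattened candidate image that is then matched against the marker dictionary.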
In the following video, the drone navigates a simulated environment. The yellow-block targets are the ones the drone must follow, while the others are decoys used to test the identification of markers. On the right are the drone's two camera views (600×600 pixels each), one facing forward and one facing down. Red-green-blue frames appear repeatedly on the targets: these are the drone's estimates of their location and rotation. The yellow arrow above the drone represents the averaged estimated direction it must take once it is ready to move to the next marker. Finally, the drone's navigation state is shown at the top right of the screen.
Final note: a PID controller is used to navigate towards a given target. To keep things simple (computer vision being the main focus of this project), gravity is automatically compensated and the drone performs perfect odometry and speed measurements.