Scalable Drone Autonomy and Obstacle Avoidance in GPS-Denied Environments
Avideh Zakhor, Professor
Electrical Engineering and Computer Science
Applications for Fall 2025 are closed for this project.
Over the past few years, we have developed a reinforcement-learning-based framework for a drone to navigate from one point to another while avoiding obstacles. In doing so, we do not rely on GPS, since the signal can be intermittent in many situations. Our approach has been to train in the Flightware simulator and deploy with zero-shot sim-to-real transfer. See this paper:
https://www-video.eecs.berkeley.edu/papers/shil-dutta/IROS2025_SUBMITTED.pdf
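For readers new to this style of pipeline, the sketch below illustrates in PyTorch the general shape of such a navigation policy: a learned network that maps a depth image plus a small state vector (VIO-derived velocity/attitude and a body-frame goal direction) to a normalized velocity command. This is a minimal illustration only; the architecture, observation layout, and all names are our own assumptions, not the code from the paper above.

    # Minimal sketch (not the project's actual code) of the kind of policy
    # interface an RL navigation stack like this typically exposes: the
    # observation is a depth image plus a small state vector (VIO velocity,
    # attitude, body-frame goal direction), and the action is a normalized
    # velocity command. All names and sizes here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class NavPolicy(nn.Module):
        """Maps a depth image and a state/goal vector to a velocity command."""

        def __init__(self, depth_shape=(1, 64, 64), state_dim=9, action_dim=4):
            super().__init__()
            # Small CNN encoder for the depth image.
            self.encoder = nn.Sequential(
                nn.Conv2d(depth_shape[0], 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            with torch.no_grad():
                enc_dim = self.encoder(torch.zeros(1, *depth_shape)).shape[1]
            # MLP head over [depth features, VIO state, goal vector].
            self.head = nn.Sequential(
                nn.Linear(enc_dim + state_dim, 128), nn.ReLU(),
                nn.Linear(128, action_dim), nn.Tanh(),  # normalized vx, vy, vz, yaw rate
            )

        def forward(self, depth, state):
            return self.head(torch.cat([self.encoder(depth), state], dim=-1))

    if __name__ == "__main__":
        policy = NavPolicy()
        depth = torch.rand(1, 1, 64, 64)  # stand-in for one depth frame
        state = torch.rand(1, 9)          # stand-in for VIO velocity + attitude + goal
        print("velocity command:", policy(depth, state))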
We have demonstrated the above method on a DJI Matrice 300 drone in field experiments at the Berkeley Marina and at Hearst Mining Circle.
In the coming year, we plan to extend the above system in three different ways:
First, we need to improve the onboard Visual-Inertial Odometry (VIO) sensors and algorithms to reduce drift. Second, we need to demonstrate the scalability of our approach to other drones such as ModalAI, Holybro, and Aero West; the goal is to show how easily an existing policy can be ported to new hardware and to what extent it works "out of the box". Third, we need to develop new algorithms that enable the drone to maneuver in more realistic environments such as forests.
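To make the drift objective concrete, here is a minimal sketch (our own illustrative example, not the lab's evaluation code) of how VIO drift is commonly quantified against a reference trajectory, via absolute trajectory error and endpoint drift. It assumes the estimated and reference trajectories are already time-aligned and expressed in a common frame.

    # Minimal sketch of quantifying VIO drift against a reference trajectory
    # (e.g. RTK or motion-capture ground truth). It assumes both trajectories
    # are already time-aligned and expressed in a common frame; a full
    # evaluation would also estimate an alignment (e.g. an SE(3)/Umeyama fit).
    import numpy as np

    def absolute_trajectory_error(est_xyz, ref_xyz):
        """RMSE of position error between N x 3 estimated and reference paths."""
        err = est_xyz - ref_xyz
        return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

    def endpoint_drift(est_xyz, ref_xyz):
        """Distance between the final estimated and reference positions."""
        return float(np.linalg.norm(est_xyz[-1] - ref_xyz[-1]))

    if __name__ == "__main__":
        t = np.linspace(0.0, 60.0, 600)
        ref = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # reference path
        est = ref + 0.002 * t[:, None]                           # slowly drifting estimate
        print("ATE   [m]:", absolute_trajectory_error(est, ref))
        print("drift [m]:", endpoint_drift(est, ref))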
In this project you will have the chance to work with real drones and use state-of-the-art computers to develop the AI algorithms needed to accomplish this.
Role: * Carry out a literature search on state-of-the-art VIO algorithms; choose the best one; implement it in software and simulation; then test it on the actual drone.
* If the above is successful, integrate the VIO with the existing RL code base for autonomy and show that it is more accurate than the existing ZED 2i VIO scheme on the Matrice 300 (see the sketch after this list for the general shape of this integration).
* If the above works, extend it to other drones such as Holybro.
* If the above works, extend the algorithms to more complex environments such as forests.
* There is also an opportunity to test your VIO system on a four-legged robot, a hexapod robot, a mobile manipulator, or a tank-style tracked robot.
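As a rough illustration of the integration step mentioned above, the sketch below shows one way a VIO odometry stream could be bridged into a velocity-command interface in ROS 2. The topic names, message types, and placeholder policy are assumptions, not the project's actual interfaces, and the lab's stack may use a different ROS version entirely.

    # Minimal ROS 2 sketch (assumed topic names and messages, not the lab's
    # actual interface) of bridging a VIO odometry stream into a velocity-
    # command interface. The policy logic is a placeholder: a real node would
    # feed the odometry (and a depth image) into the RL policy.
    import rclpy
    from rclpy.node import Node
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import Twist

    class VioToPolicyBridge(Node):
        def __init__(self):
            super().__init__("vio_to_policy_bridge")
            # Odometry from whichever VIO front end is being evaluated.
            self.odom_sub = self.create_subscription(
                Odometry, "/vio/odometry", self.on_odom, 10)
            # Velocity command consumed by the flight-controller interface.
            self.cmd_pub = self.create_publisher(Twist, "/cmd_vel", 10)

        def on_odom(self, msg):
            # Placeholder "policy": command zero velocity (hover). In practice
            # the RL policy would consume msg.pose and msg.twist here.
            self.cmd_pub.publish(Twist())

    def main():
        rclpy.init()
        rclpy.spin(VioToPolicyBridge())
        rclpy.shutdown()

    if __name__ == "__main__":
        main()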
Qualifications: * Must have taken, or currently be taking, the robotics class EECS 106A or EECS 106B.
* Must have taken CS 285 or be familiar with reinforcement learning concepts.
* Must have some experience with deep learning frameworks such as PyTorch.
* Familiarity with ROS is a plus.
* Familiarity with visual odometry and SLAM is a plus.
* Preference will be given to students who can continue the project through the spring semester.
* Familiarity with 3D mechanical mount design is a plus.
* Experience flying drones with a remote controller is a plus.
Hours: 12 or more
Related website: https://www.youtube.com/watch?v=Ti-fV5oRh1w
Related website: https://www-video.eecs.berkeley.edu/papers/shil-dutta/IROS2025_SUBMITTED.pdf