Project details

School of Electrical & Electronic Engineering


Proj No. A3066-251
Title Deep Reinforcement Learning to Navigate Robots
Summary Conventional navigation techniques have mainly relied on a global-information approach, in which pre-built environment maps are used to construct a path from a given start to a destination. While these methods have seen much success, they are largely limited by the substantial effort required for prior mapping and lack the ability to learn and generalize to new, unseen environments. These problems have motivated researchers to turn to machine-learning approaches. In particular, the advent of Deep Reinforcement Learning (DRL) has shown much promise for map-less robot navigation and collision avoidance. In this project, you will use deep reinforcement learning models (e.g., DDPG) to produce robot control commands in a simulation environment and enable the robot to navigate to the target area without collisions. The project also requires building a robot-environment interaction script to process data from the simulator (an illustrative sketch of such an interaction loop is given after the project details below). Finally, you are encouraged to explore ways to improve the feature extraction and learning ability of your learning-based robot control model.
Supervisor A/P Jiang Xudong (Loc:S1 > S1 B1C > S1 B1C 105, Ext: +65 67905018)
Co-Supervisor -
RI Co-Supervisor -
Lab Centre for Advanced Robotics Technology Innovation (CARTIN) (Loc: S2.1-B3-01)
Single/Group: Single
Area: Intelligent Systems and Control Engineering
ISP/RI/SMP/SCP?:
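
Illustrative sketch. The following is only a minimal example of a robot-environment interaction script driving a DDPG agent, not the project's actual setup. It assumes the gymnasium and stable-baselines3 packages; the environment name (MaplessNavEnv), observation layout (down-sampled laser scan plus relative goal), action definition, and reward values are all placeholder assumptions that a real simulator interface would replace.

    # Minimal sketch (assumptions: gymnasium and stable-baselines3 installed;
    # observation layout, actions, and rewards below are placeholders).
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import DDPG


    class MaplessNavEnv(gym.Env):
        """Hypothetical wrapper that would exchange data with the simulator."""

        def __init__(self):
            super().__init__()
            # Observation: 24 down-sampled laser ranges + (distance, angle) to goal.
            self.observation_space = spaces.Box(0.0, 1.0, shape=(26,), dtype=np.float32)
            # Action: normalised (linear velocity, angular velocity) commands.
            self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            # In the real project: reset the simulator and read the first sensor frame.
            return self.observation_space.sample(), {}

        def step(self, action):
            # In the real project: send velocity commands to the simulator, read the
            # next laser scan and robot pose, then compute the reward from them.
            obs = self.observation_space.sample()
            reached_goal, collided = False, False
            reward = 10.0 if reached_goal else (-10.0 if collided else -0.01)
            terminated = reached_goal or collided
            return obs, reward, terminated, False, {}


    if __name__ == "__main__":
        env = MaplessNavEnv()
        model = DDPG("MlpPolicy", env, verbose=1)
        model.learn(total_timesteps=10_000)                   # train on interaction data
        obs, _ = env.reset()
        action, _ = model.predict(obs, deterministic=True)    # one velocity command

In practice the reset() and step() methods are where the interaction script would talk to the chosen simulator, and the reward shaping and observation design are the main levers for improving the learned controller.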