Project details

School of Electrical & Electronic Engineering


Proj No. A1099-251
Title Robot Pose Imitation using D435 Camera and Simulation-based Control
Summary Traditional robot control methods often rely on predefined motion scripts, which limits their ability to adapt dynamically to human movements. With the rise of vision-based motion capture and deep learning, robots can now perceive and imitate human poses in real time. This project aims to develop a system in which an Intel RealSense D435 camera observes a human's pose and a simulated robot mirrors it in real time.
Project Scope
The project consists of three key phases:
1. Pose Estimation from Depth Camera:
o Utilize the Intel RealSense D435 RGB-D camera to capture human pose data.
o Process the depth and RGB streams using state-of-the-art pose estimation models such as OpenPose or MediaPipe.
o Extract skeletal joint positions and orientations in real time (a capture sketch follows this list).
2. Pose Processing and Motion Mapping:
o Convert detected human poses into a format suitable for robotic control.
o Implement kinematic retargeting to adapt human movement to the simulated robot’s joint constraints.
o Apply filtering techniques (e.g., Kalman or Savitzky-Golay) to smooth the motion data and reduce noise (a retargeting-and-smoothing sketch follows this list).
3. Simulation-based Imitation:
o Implement the imitation system in a simulation environment (e.g., PyBullet, MuJoCo, or Gazebo).
o Develop inverse kinematics (IK) and control strategies to produce realistic robot movement (an IK sketch follows this list).
o Evaluate performance based on motion accuracy, stability, and latency.
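
To make phase 1 concrete, below is a minimal capture sketch, assuming the pyrealsense2, mediapipe, opencv-python, and numpy packages are installed; the 640x480 @ 30 FPS streams and the model_complexity setting are illustrative choices, not project requirements. It aligns the depth stream to the colour image, runs MediaPipe Pose on each colour frame, and reads a depth value at each detected landmark.

```python
# Phase 1 sketch: RealSense D435 capture + MediaPipe pose estimation.
# Assumptions: pyrealsense2, mediapipe, opencv-python, numpy installed;
# 640x480 @ 30 FPS streams are illustrative, not requirements.
import cv2
import mediapipe as mp
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # map depth pixels onto the colour frame

pose = mp.solutions.pose.Pose(model_complexity=1)

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        color_frame = frames.get_color_frame()
        depth_frame = frames.get_depth_frame()
        if not color_frame or not depth_frame:
            continue
        bgr = np.asanyarray(color_frame.get_data())
        results = pose.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            continue
        h, w, _ = bgr.shape
        joints = []
        for lm in results.pose_landmarks.landmark:
            u, v = int(lm.x * w), int(lm.y * h)
            if 0 <= u < w and 0 <= v < h:
                # Depth (metres) at the landmark pixel: one 3D joint sample.
                joints.append((u, v, depth_frame.get_distance(u, v)))
finally:
    pipeline.stop()
```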
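For phase 2, here is a short retargeting-and-smoothing sketch using SciPy's savgol_filter. The joint limits, window length, and polynomial order are hypothetical starting points to be tuned against the actual robot model and real capture noise; the clamping step is a deliberately naive stand-in for full kinematic retargeting.

```python
# Phase 2 sketch: naive kinematic retargeting (clamp human joint angles to
# the robot's limits) followed by Savitzky-Golay smoothing.
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical per-joint limits (radians) for a 6-DoF simulated arm.
JOINT_LOWER = np.array([-2.9, -1.8, -2.9, -2.0, -2.9, -2.0])
JOINT_UPPER = -JOINT_LOWER

def retarget(human_angles: np.ndarray) -> np.ndarray:
    """Clamp estimated human joint angles into the robot's feasible range."""
    return np.clip(human_angles, JOINT_LOWER, JOINT_UPPER)

def smooth(trajectory: np.ndarray, window: int = 9, order: int = 3) -> np.ndarray:
    """trajectory has shape (frames, joints); filter each joint over time."""
    if trajectory.shape[0] < window:
        return trajectory  # not enough history yet; pass through unfiltered
    return savgol_filter(trajectory, window_length=window, polyorder=order, axis=0)

# Usage: 100 noisy frames of 6 joint angles.
t = np.linspace(0.0, 2.0 * np.pi, 100)
noisy = np.stack([np.sin(t + k) for k in range(6)], axis=1)
noisy += np.random.normal(scale=0.05, size=noisy.shape)
commands = smooth(retarget(noisy))
```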
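For phase 3, a minimal IK sketch using PyBullet's built-in damped least-squares solver. The KUKA iiwa URDF bundled with pybullet_data stands in for whichever robot the project finally targets, and the target position is a hypothetical retargeted wrist coordinate.

```python
# Phase 3 sketch: drive a simulated arm toward a target position with
# PyBullet's built-in IK. Robot model and target are placeholders.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # use p.GUI to visualise
p.setAdditionalSearchPath(pybullet_data.getDataPath())
robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)
end_effector = 6  # link index of the KUKA flange

target = [0.4, 0.2, 0.6]  # e.g. a retargeted human wrist position (metres)
joint_angles = p.calculateInverseKinematics(robot, end_effector, target)

for joint_index, angle in enumerate(joint_angles):
    p.setJointMotorControl2(robot, joint_index,
                            p.POSITION_CONTROL, targetPosition=angle)

for _ in range(240):  # step the simulation for one second at 240 Hz
    p.stepSimulation()

print("commanded joint angles:", [round(a, 3) for a in joint_angles])
p.disconnect()
```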
Expected Outcomes
• A functional pipeline that enables a simulated robot to mimic human movements using depth camera input.
• A framework that can later be extended to real-world robotic applications (e.g., teleoperation, human-robot interaction).
• Potential contributions to robotics and human motion imitation research, with scope for eventual real-world deployment.

Required Resources

• Hardware: Intel RealSense D435 camera, PC with GPU for processing, access to robotic simulation platforms.
• Software & Libraries: ROS (Robot Operating System), OpenCV, PyTorch/TensorFlow (for pose estimation), and a physics simulator (PyBullet, MuJoCo, or Gazebo).

Candidate Requirements

• Familiarity with Python and deep learning frameworks (PyTorch/TensorFlow).
• Experience with computer vision or pose estimation models is a plus.
• Understanding of ROS and robotic kinematics is advantageous.
• Willingness to engage in experimental work and debugging.
Supervisor Prof Xie Lihua (Loc: S2 > S2 B2C > S2 B2C 94, Ext: +65 67904524)
Co-Supervisor -
RI Co-Supervisor -
Lab Internet of Things Laboratory (Loc: S1-B4c-14, ext: 5470/5475)
Single/Group: Single
Area: Intelligent Systems and Control Engineering
ISP/RI/SMP/SCP?: