Proj No. | A3199-251
Title | Neuromorphic Camera and Audio-Based Scene Understanding via Foundation Models
Summary | This Final Year Project (FYP) explores the use of neuromorphic cameras and audio data for advanced scene understanding. Unlike conventional frame-based cameras, neuromorphic cameras produce an asynchronous, event-driven data stream, offering potential advantages in power efficiency and temporal resolution. This project will investigate fusing this event-based visual data with complementary audio information to build a richer scene representation. The core methodology will apply foundation models (such as pre-trained vision and audio transformers) to extract meaningful features from both data streams and then fuse those features for scene understanding tasks, potentially including object recognition, event detection, and scene categorization; an illustrative fusion sketch follows this listing. The efficiency and transfer learning capabilities of foundation models will be central to this research.
Supervisor | Ast/P Wang Lin (Loc: S2-B2C-91, Ext: +65 67905629)
Co-Supervisor | - |
RI Co-Supervisor | - |
Lab | Internet of Things Laboratory (Loc: S1-B4c-14, Ext: 5470/5475)
Single/Group | Single
Area | Digital Media Processing and Computer Engineering
ISP/RI/SMP/SCP? |
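
To make the proposed pipeline concrete, below is a minimal PyTorch sketch of the two steps the summary names: converting the asynchronous event stream into a frame-like representation a vision backbone can consume, and late-fusing frozen vision and audio embeddings for classification. Everything here is an illustrative assumption, not part of the project brief: the helper names (`events_to_voxel_grid`, `FusionClassifier`), the sensor resolution, the embedding dimensions, and especially the stand-in linear encoders, which a real implementation would replace with frozen pre-trained vision and audio transformers.

```python
# Hypothetical sketch: event-stream voxelization + late fusion of frozen
# vision/audio embeddings. Dimensions and encoders are placeholder assumptions.
import torch
import torch.nn as nn


def events_to_voxel_grid(events, num_bins=5, height=260, width=346):
    """Accumulate an (N, 4) event stream [t, x, y, polarity] into a
    (num_bins, H, W) voxel grid so a frame-based encoder can consume it.
    The default resolution assumes a DAVIS346-style sensor (an assumption)."""
    grid = torch.zeros(num_bins, height, width)
    t = events[:, 0]
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)   # normalize time to [0, 1]
    bin_idx = (t_norm * (num_bins - 1)).long()            # temporal bin per event
    x = events[:, 1].long().clamp(0, width - 1)
    y = events[:, 2].long().clamp(0, height - 1)
    pol = events[:, 3] * 2 - 1                            # map {0, 1} polarity to {-1, +1}
    grid.index_put_((bin_idx, y, x), pol, accumulate=True)
    return grid


class FusionClassifier(nn.Module):
    """Late fusion: concatenate frozen event and audio embeddings, then classify."""

    def __init__(self, event_encoder, audio_encoder, dim_v, dim_a, num_classes):
        super().__init__()
        self.event_encoder = event_encoder
        self.audio_encoder = audio_encoder
        # Transfer learning as described in the summary: freeze both backbones
        # and train only the small fusion head.
        for enc in (self.event_encoder, self.audio_encoder):
            for p in enc.parameters():
                p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(dim_v + dim_a, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, event_voxels, audio_spec):
        z_v = self.event_encoder(event_voxels)   # (B, dim_v)
        z_a = self.audio_encoder(audio_spec)     # (B, dim_a)
        return self.head(torch.cat([z_v, z_a], dim=-1))


if __name__ == "__main__":
    # Stand-in "backbones": plain linear projections keep the demo small; the
    # project would substitute pre-trained vision/audio transformers here.
    ev_enc = nn.Sequential(nn.Flatten(), nn.Linear(5 * 32 * 32, 128))
    au_enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 100, 64))
    model = FusionClassifier(ev_enc, au_enc, dim_v=128, dim_a=64, num_classes=10)

    events = torch.stack([
        torch.sort(torch.rand(1000)).values,     # timestamps
        torch.randint(0, 32, (1000,)).float(),   # x coordinates
        torch.randint(0, 32, (1000,)).float(),   # y coordinates
        torch.randint(0, 2, (1000,)).float(),    # polarity
    ], dim=1)
    voxels = events_to_voxel_grid(events, height=32, width=32).unsqueeze(0)
    audio = torch.randn(1, 64, 100)              # fake log-mel spectrogram
    print(model(voxels, audio).shape)            # torch.Size([1, 10])
```

Late fusion by concatenation is only one baseline; once frozen transformer backbones are in place, cross-attention between the two token streams is a natural extension for the fusion step.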