Project details

School of Electrical & Electronic Engineering




Proj No. B2169-251
Title Leveraging FPGA for High-Speed Data Processing and Analysis for Deep Learning
Summary This project explores hardware-based acceleration for Machine Learning (ML) inference, focusing on Field-Programmable Gate Arrays (FPGAs) to enhance speed and efficiency in real-time applications. Traditional software-based processing struggles with large datasets and time-sensitive tasks, whereas FPGAs offer optimized computational pathways, reduced latency, and improved power efficiency. Unlike GPUs, FPGAs can be customized to match specific ML models, making them ideal for low-latency and power-constrained environments such as autonomous systems, industrial automation, and portable medical devices. However, low-cost FPGAs often have limited resources, such as fewer logic elements, lower DSP block availability, and reduced memory bandwidth, which can constrain the complexity and size of ML models they can efficiently implement. These constraints necessitate careful model optimization, such as quantization and pruning, to fit within available resources, potentially sacrificing accuracy and flexibility compared to higher-end FPGA or GPU implementations. This project provides hands-on experience in hardware-accelerated machine learning and real-time data processing, bridging theoretical knowledge with practical applications.
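
To give a concrete sense of what "quantization" means in this resource-constrained setting, the sketch below (Python/NumPy, with a hypothetical layer shape) shows symmetric post-training int8 quantization of a weight tensor and the resulting 4x storage reduction. This is a minimal illustration only; the actual tooling, bit widths, and quantization scheme for the project would depend on the chosen FPGA flow.

# Minimal sketch (NumPy only): symmetric per-tensor int8 quantization of a
# weight matrix, the kind of post-training step used to shrink a model so it
# fits limited FPGA block RAM and DSP resources. The layer shape and data are
# illustrative placeholders, not taken from any specific model in this project.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=(64, 128)).astype(np.float32)  # hypothetical layer

# Symmetric quantization: map the float range [-max|w|, +max|w|] onto int8.
scale = np.max(np.abs(weights)) / 127.0
q_weights = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)

# Dequantize to estimate the accuracy cost of the 8-bit representation.
deq = q_weights.astype(np.float32) * scale
print("max abs error :", np.max(np.abs(weights - deq)))
print("storage (f32) :", weights.nbytes, "bytes")
print("storage (int8):", q_weights.nbytes, "bytes")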

Project Stages and Tasks:
1. Literature Review: Conduct a brief literature review of the topic, covering FPGAs and their programming as well as existing ML algorithms and methods used for processing large data sets, together with their performance characteristics.
2. System Implementation (Hardware): Assist with purchasing the required components, mounting them on the PCBs, and carrying out initial checks that the hardware functions correctly.
3. Deep Learning Algorithm Implementation: Implement signal processing techniques on different FPGA-based hardware platforms, apply ML algorithms in real-world scenarios, and design program workflows for effective data processing.
4. Performance Evaluation and Optimization: Optimize the ML inference models running on the FPGAs, improve their performance, and use the results to make recommendations for future work and improvements (a minimal benchmarking sketch is given after this list).
5. Final Report: Compile and document findings and results in a comprehensive final report.
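
As referenced in stage 4, the sketch below shows one simple way performance evaluation might be approached: a latency/throughput harness that times repeated inference runs and reports mean and tail latency. It is illustrative only; run_inference is a hypothetical stand-in for whatever interface the chosen FPGA board or software baseline actually exposes.

# Illustrative only: a simple latency/throughput harness of the kind stage 4
# might use to compare an optimized (e.g. quantized/pruned) model against a
# baseline. run_inference is a hypothetical placeholder, not a real board API.
import time
import statistics
import numpy as np

def run_inference(batch: np.ndarray) -> np.ndarray:
    # Placeholder workload; replace with a call into the real accelerator or baseline.
    return np.tanh(batch @ np.ones((batch.shape[1], 10), dtype=batch.dtype))

def benchmark(batch_size: int = 32, n_runs: int = 100) -> None:
    batch = np.random.rand(batch_size, 128).astype(np.float32)
    run_inference(batch)  # warm-up run, excluded from timing
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_inference(batch)
        latencies.append(time.perf_counter() - start)
    mean_s = statistics.mean(latencies)
    print(f"mean latency : {mean_s * 1e3:.3f} ms")
    print(f"p95 latency  : {sorted(latencies)[int(0.95 * n_runs)] * 1e3:.3f} ms")
    print(f"throughput   : {batch_size / mean_s:.1f} samples/s")

if __name__ == "__main__":
    benchmark()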

The student may contribute to one, several, or (ideally) all of the activities listed above, depending on the available time, their background and skills, and other factors (e.g. collaboration with other parties, time needed for various other elements, etc.).

While not mandatory, prior knowledge of convolutional neural networks (CNNs), and ideally also of GPUs and/or FPGAs, as well as proficiency in MATLAB, Python, and/or Verilog/VHDL, is highly desirable and would be a strong plus.

This project is in collaboration with the Singapore Institute of Manufacturing (SIMTech), where the student will carry out most of the work, joining a research team of experienced scientists and engineers under the guidance of Dr. Seck Hon Luen as co-supervisor. It is preferable if the student is willing to start the work early (before the semester begins), i.e. to begin gradually as soon as possible after the end of the exam session. Since this is a company-based project, the candidate(s) may first be interviewed by the company co-supervisor and the supervising professor.
Supervisor A/P Poenar Daniel Puiu (Loc:S2 > S2 B2A > S2 B2A 27, Ext: +65 67904237)
Co-Supervisor -
RI Co-Supervisor -
Lab Machine Learning and Data Analytics Lab (Loc: S2.1, B4-01)
Single/Group: Single
Area: Intelligent Systems and Control Engineering
ISP/RI/SMP/SCP?: ISP: Dr. Seck Hon Luen, Senior Scientist II, Singapore Institute of Manufacturing (SIMTech), hlseck@SIMTech.a-star.edu.sg