Proj No. | A1063-251 |
Title | Knowledge-augmented large language model (LLM)-driven detection verifier for autonomous driving |
Summary | Artificial Intelligence (AI), particularly deep learning, holds tremendous potential, but it also poses significant safety and security risks, including vulnerability to adversarial attacks. Autonomous vehicles carry a range of sensors, including pinhole cameras, fisheye cameras, and LiDARs, yet most adversarial attack and defense techniques available today have limited scope because they target data from a single sensor or modality, ignoring the collaboration among multiple sensors and modalities. To address this issue, it is essential to treat the vehicle as a cohesive unit and take into account the interactions between the various sensor inputs and modalities, so as to gain a more complete understanding of adversarial attacks and countermeasures for autonomous vehicles. Furthermore, from a dynamical-systems perspective, the vulnerability of neural networks is a system-instability issue: existing methods typically model machine learning as an open-loop control system, which results in performance degradation when adversarial attacks cause a data distribution shift. To overcome these limitations, this project aims to create a reliable detection output verifier that can accelerate the evaluation of the adversarial robustness of AI systems. The output verifier is expected to identify different types of detection anomaly, such as missed detections, misclassifications, and duplicate detections, and to address them with theoretical guarantees based on common knowledge, spatial-temporal invariance, logical inference, and other context-specific rules (an illustrative sketch of such a rule-based verifier appears after this listing). To ensure a reliable, accurate, low-cost, real-time object detection solution, the knowledge library and rule set used for parameter tuning or model construction in verification and detection correction should be scalable and transferable; this calls for capabilities such as auto-filtering and auto-tuning in response to changes in the driving environment and driver behavior. Large Language Models (LLMs), a key branch of AI, exhibit remarkable learning and adaptation capabilities within deployed environments, demonstrating an evolving form of intelligence with the potential to approach human-level proficiency, which makes them a natural match for this role (a sketch of LLM-based anomaly adjudication also follows the listing). The student undertaking this project is therefore expected to explore the potential of integrating the knowledge-based output verifier with LLMs to develop autonomous closed-loop verification for autonomous driving perception. Based on the available verification framework and driving dataset, the LLM is expected to be augmented with knowledge extracted from different use cases and to output robust detections in real time, with interpretability that supports parameter or model tuning. |
Pre-requisite requirements/knowledge | Applicants need to be familiar with Git and with C# or Python; basic knowledge of image processing and LLMs is preferred. Experience with configuring or locally deploying LLMs or foundation models is beneficial but not mandatory. Participants in this project are expected to improve their skills and knowledge in LLM tuning, training, evaluation, augmentation, and prompt engineering. |
Key words | Knowledge augmentation, large language model (LLM), anomaly detection, output verifier, autonomous driving (AV), computer vision |
Supervisor | Prof Su Rong (Loc:S1 > S1 B1B > S1 B1B 59, Ext: +65 67906042) |
Co-Supervisor | - |
RI Co-Supervisor | - |
Lab | Centre for Advanced Robotics Technology Innovation (CARTIN) (Loc: S2.1-B3-01) |
Single/Group | Single |
Area | Intelligent Systems and Control Engineering |
ISP/RI/SMP/SCP? | |
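
The anomaly types named in the summary (missed detections, misclassifications, duplicate detections) can be illustrated with a minimal rule-based verifier. The Python sketch below is illustrative only and not part of the project framework: the Detection format, the IoU thresholds, and the single-frame-lookback temporal rule are all placeholder assumptions.

from dataclasses import dataclass


@dataclass
class Detection:
    box: tuple    # (x1, y1, x2, y2) in pixels
    label: str    # predicted class, e.g. "car"
    score: float  # detector confidence in [0, 1]


def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def verify_frame(current, previous, dup_iou=0.8, track_iou=0.3):
    """Flag duplicates, class flips, and possible missed detections by
    comparing the current frame against the previous one (a crude
    spatial-temporal consistency rule)."""
    anomalies = []
    # Rule 1: two high-overlap boxes with the same label -> duplicate.
    for i, d1 in enumerate(current):
        for d2 in current[i + 1:]:
            if d1.label == d2.label and iou(d1.box, d2.box) >= dup_iou:
                anomalies.append(("duplicate", d1, d2))
    # Rules 2 and 3: match each previous detection to the current frame.
    for prev in previous:
        matches = [d for d in current if iou(prev.box, d.box) >= track_iou]
        if not matches:
            # Object seen last frame has vanished -> possible miss.
            anomalies.append(("possible_miss", prev, None))
        elif all(d.label != prev.label for d in matches):
            # Same region, different class -> possible misclassification.
            anomalies.append(("class_flip", prev, matches[0]))
    return anomalies

In the actual project, such hand-written rules would be drawn from the scalable knowledge library, with thresholds auto-tuned as the driving environment and driver behavior change.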
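Similarly, knowledge augmentation of an LLM for verification could, in its simplest form, mean prompting the model with the rule library and a flagged anomaly and asking for an interpretable verdict. The sketch below assumes a hypothetical llm_complete callable standing in for whatever locally deployed LLM or foundation model is used; the knowledge snippets and prompt wording are illustrative only.

# Hypothetical rule library extracted from driving use cases.
KNOWLEDGE_LIBRARY = [
    "Vehicles do not vanish between consecutive frames unless occluded "
    "or leaving the field of view.",
    "An object's class label should stay constant over a short track.",
    "Two same-class detections with IoU above 0.8 usually describe one "
    "physical object.",
]


def adjudicate(anomaly_kind, description, llm_complete):
    """Ask the LLM to confirm or reject a rule-flagged anomaly, citing
    the knowledge rules it relied on, so the verdict stays interpretable
    for later parameter or model tuning."""
    rules = "\n".join(f"- {r}" for r in KNOWLEDGE_LIBRARY)
    prompt = (
        "You verify object detections for an autonomous vehicle.\n"
        "Known rules:\n" + rules + "\n\n"
        f"Flagged anomaly ({anomaly_kind}): {description}\n"
        "Answer CONFIRM or REJECT, name the rule(s) you used, and give "
        "a one-sentence justification."
    )
    return llm_complete(prompt)


# Example use with any text-completion backend:
#   verdict = adjudicate(
#       "possible_miss",
#       "car at (120, 80, 220, 160) in frame 41 has no match in frame 42; "
#       "no occluder nearby",
#       llm_complete=my_local_model,
#   )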