Kolloquiumsvortrag Prof. Dr. Cristiano Premebida (University of Coimbra, Portugal)
Abstract (short version):
Robotic perception finds applications in various domains, such as object detection, scene understanding, human/pedestrian detection, and semantic place classification.
It plays a crucial role in decision-making processes, especially in safety-critical scenarios such as robotics and autonomous driving. Modern robotic perception relies heavily on deep learning (DL) approaches. However, deep models, while achieving remarkable performance, often exhibit undesirable behaviors, including overconfidence and a lack of proper probabilistic interpretation or uncertainty quantification.
This talk focuses on reliable robotic perception, in particular on perception systems that rely on DNNs equipped with post-hoc calibration. Such calibrated architectures are vital for safety-critical applications because they provide confidence scores that accurately quantify the true likelihood of their predictions (i.e., in line with probabilistic explainability), thus properly estimating predictive uncertainty and mitigating DL's well-known problem of overconfident predictions.
Extended Abstract:
In robotics and autonomous vehicles, perception can be understood as a system that endows the robot/vehicle with the ability to perceive, comprehend, and reason about the surrounding environment. In most cases, the key components of a perception system are (i) sensory data, (ii) data representation (environment modeling), and
(iii) AI/ML components (including deep neural networks, DNNs). Robotic perception is related to many application domains, such as object detection, environment representation, scene understanding, human/pedestrian detection, activity recognition, semantic place classification, target recognition, and tracking, among others. Perception provides inputs to a decision-making process; thus, in safety-critical applications (such as robotics and autonomous driving) the outputs of a perception system should, desirably, be reliable. As in many other domains, deep learning (DL) is ubiquitous in robotic perception systems. Although DL-based models have achieved remarkable performance, most deep architectures are prone to two undesirable behaviours: (i) overconfidence, and (ii) the lack of a guaranteed probabilistic interpretation or uncertainty quantification.
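The overconfidence mentioned above is commonly quantified with the Expected Calibration Error (ECE), which compares a model's average confidence with its empirical accuracy inside confidence bins. The following is a minimal NumPy sketch on synthetic predictions, offered as an illustration of the concept only, not as material from the talk:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: weighted average of |accuracy - confidence| over equal-width confidence bins
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

# Synthetic, well-calibrated predictions: each prediction is correct
# with probability equal to its reported confidence, so ECE should be near 0.
rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, 20000)
correct = rng.uniform(size=20000) < conf
ece = expected_calibration_error(conf, correct, 10)
```

An overconfident model (e.g., reporting 0.9 confidence while being right only 60% of the time) would yield an ECE of about 0.3 under this metric.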
In summary, this talk focuses on reliable robotic perception, in particular on perception systems that rely on DNNs equipped with post-hoc calibration. Such calibrated architectures are vital for safety-critical applications because they provide confidence scores that accurately quantify the true likelihood of their predictions (i.e., in line with probabilistic explainability), thus properly estimating predictive uncertainty and mitigating DL's well-known problem of overconfident predictions.
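A widely used post-hoc calibration method of the kind referred to above is temperature scaling: a single scalar T, fitted on held-out validation data, divides the network's logits before the softmax. The sketch below uses NumPy with synthetic logits and a simple grid search; it illustrates the general technique only and is not the speaker's actual method or pipeline:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 flattens the distribution
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels under temperature T
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Post-hoc calibration: pick the T minimizing NLL on a validation set
    return min(grid, key=lambda T: nll(logits, labels, T))

# Synthetic overconfident classifier: logits are scaled far beyond
# what the actual accuracy justifies, so the fitted T should exceed 1.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 500)
logits = rng.normal(0.0, 4.0, (500, 3))
logits[np.arange(500), labels] += 2.0  # modest boost for the true class
T = fit_temperature(logits, labels)
```

Because temperature scaling rescales all logits by one scalar, it changes confidence values without changing the predicted class, which is one reason it is a popular post-hoc choice.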
Short-bio:
Dr. Cristiano Premebida is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Coimbra, Portugal, where he is part of the Institute of Systems and Robotics (ISR-UC). His main research interests are robotic perception, autonomous vehicles, autonomous robots, field/agricultural robotics, machine learning, and sensor fusion. C. Premebida has collaborated on national and international research projects in areas related to robotics, autonomous driving, and applied machine learning. He is a member of the IEEE RAS and ITS societies and has been serving as an Associate Editor for the flagship conferences IROS, ITSC, and IVS.
Time & Place
25.04.2024 | 14:00 c.t. - 16:00
Takustr. 9, SR051