Ever since the first machines were invented, humans have envisioned human-like machines. Considerable effort is still needed to reach this scientific goal and to create a more natural interface between humans and machines. One prerequisite is an understanding of the observed environment, which includes deriving a symbolic representation from sensor data. This can be achieved through state estimation, for which different methods of reasoning are employed. The relations between the multimodal input and the symbolic representation are described by their likelihoods, from which Bayesian networks can be constructed that account for both certainty and uncertainty. In addition, theories of evidence can be used to improve classification results by fusing the outputs of different sensors.
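To make the evidence-based fusion step concrete, the following is a minimal sketch of Dempster's rule of combination applied to two sensors; the sensor names, class labels, and mass values are purely hypothetical and are not taken from this work.

```python
# Illustrative sketch: fusing two sensors' beliefs about a discrete class
# label with Dempster's rule of combination (Dempster-Shafer evidence theory).
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Sensors are in total conflict; fusion is undefined.")
    # Normalize by the non-conflicting mass (1 - K).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical example: a camera and a laser scanner classify an observed
# object as "person" or "robot"; mass on the full set expresses ignorance.
m_camera = {frozenset({"person"}): 0.7, frozenset({"person", "robot"}): 0.3}
m_laser  = {frozenset({"person"}): 0.5, frozenset({"robot"}): 0.2,
            frozenset({"person", "robot"}): 0.3}
print(dempster_combine(m_camera, m_laser))
# -> mass ~0.83 on {"person"}, ~0.07 on {"robot"}, ~0.10 on the full set
```

The combined mass function assigns more belief to "person" than either sensor alone, which is the intended effect of fusing partially uncertain sources; the same rule can be applied repeatedly as new sensor readings arrive.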
Research Objectives:
- Incremental information fusion