Redundancy-free omnimodal 3D detection technology
Today's systems for capturing 3D scenes are characterized by extremely high redundancy in the image data compared to 2D systems. In a 2D system, the detector chip captures the image directly in the image plane, so the generated data stream contains almost no redundancy. In contrast, currently established 3D acquisition systems produce highly redundant raw data streams, because the 3D information is mapped inefficiently onto the 2D detector arrays: for example, through multiple image sequences in pattern projection, multiple sub-images in plenoptic cameras, and temporal sampling in time-of-flight systems. The abstract cause of this inefficiency is the suboptimal conversion of image information from 3D to 2D in the respective detection principle. The practical cause is that conventional optical imaging systems apparently do not provide such an optimal mapping onto the available two-dimensional detector plane, or that no suitable transformation has yet been found.
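The redundancy described above can be made concrete with a back-of-the-envelope calculation. The sketch below compares the number of raw 2D samples recorded with the size of the final 3D data set; the frame counts and resolution are illustrative assumptions, not figures from any specific system mentioned in the project.

```python
W, H = 1920, 1080  # assumed detector resolution (illustrative)

def redundancy_factor(raw_frames: int) -> float:
    """Raw 2D samples recorded per element of the resulting depth map.

    Assumes each acquisition yields one full-resolution depth map,
    so the factor reduces to the number of raw frames captured.
    """
    return (raw_frames * W * H) / (W * H)

# Structured-light pattern projection: e.g. 12 projected patterns
# are captured to reconstruct a single depth map.
print(redundancy_factor(raw_frames=12))  # 12.0

# Time-of-flight with e.g. 4 phase-shifted exposures per depth frame.
print(redundancy_factor(raw_frames=4))   # 4.0
```

Under these assumptions, each recovered depth value costs 4 to 12 raw pixel measurements, whereas an ideal, redundancy-free conversion would cost one.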
This fact limits the performance of today's 3D detection systems in two respects. On the one hand, the redundancies in the raw data degrade the signal-to-noise ratio, since the poorly conditioned raw data cannot be fully converted into 3D scene information. On the other hand, available detector and signal-processing technologies limit the overall data stream, which ultimately limits system performance.
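To illustrate the data-stream limit, the following sketch estimates the raw bandwidth a detector and readout chain must sustain for a given 3D frame rate. All numbers (resolution, bit depth, frame rates) are assumptions chosen for the example, not properties of a particular camera.

```python
W, H = 1920, 1080  # assumed detector resolution
BITS = 12          # assumed bit depth per pixel
FPS_3D = 30        # assumed target 3D frame rate

def raw_gbit_per_s(frames_per_3d_frame: int) -> float:
    """Raw detector data rate in Gbit/s for the given redundancy."""
    return W * H * BITS * frames_per_3d_frame * FPS_3D / 1e9

# A redundancy-free system (one raw frame per depth map) would need:
print(f"{raw_gbit_per_s(1):.2f} Gbit/s")   # 0.75 Gbit/s
# With e.g. 12 pattern images per depth map, the same 3D rate needs:
print(f"{raw_gbit_per_s(12):.2f} Gbit/s")  # 8.96 Gbit/s
```

The roughly order-of-magnitude gap between the two rates is precisely the headroom that a more efficient 3D-to-2D conversion could reclaim, either as higher frame rates or as higher resolution at the same data rate.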
The aim of the OMNIdetect innovation project is to research new 3D detection technologies that, compared to the state of the art, enable more efficient sampling of 3D data and thus a significantly higher information density in the recorded raw data.