Multimedia Technology and Telecommunications Lab
Continuous-wave Time-of-Flight (ToF) cameras have attracted considerable attention from both the research community and commercial applications due to their ability to robustly measure scene depth in real time.
They have been employed in many computer vision applications, including human body tracking, 3D scene reconstruction, robotics, object detection, and hand gesture recognition.
The success of these systems stems from several benefits, e.g., the simplicity of the processing required to estimate depth maps, the absence of moving components, the ability to produce dense depth maps, and the absence of artifacts caused by occlusions or scene texture. Other depth estimation systems, such as Structured Light (SL) and stereo vision, have weaknesses in these respects, so ToF cameras are preferable in many situations.
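As a rough illustration of why the per-pixel processing is simple, the sketch below shows the standard four-phase (four-bucket) demodulation scheme used by many continuous-wave ToF cameras; the function name, the sample values, and the 20 MHz modulation frequency are illustrative assumptions, not details of a specific sensor or of our pipeline.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def cw_tof_depth(q0, q1, q2, q3, f_mod):
    """Estimate depth from four correlation samples taken at phase
    offsets of 0, 90, 180 and 270 degrees (standard 4-bucket scheme).

    q0..q3 : per-pixel correlation values (arrays of equal shape)
    f_mod  : modulation frequency in Hz
    """
    # Phase shift of the returned signal, wrapped to [0, 2*pi)
    phase = np.arctan2(q3 - q1, q0 - q2) % (2 * np.pi)
    # Convert phase to distance (the light travels to the scene and back)
    return C * phase / (4 * np.pi * f_mod)

# Example with made-up samples for a single pixel at 20 MHz modulation
print(cw_tof_depth(np.array([0.3]), np.array([1.0]),
                   np.array([0.7]), np.array([0.0]), 20e6))
```

The same closed-form expression is applied independently at every pixel, which is what keeps the depth estimation pipeline computationally light compared to stereo matching.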
Besides these advantages, ToF cameras also have some limitations that require further analysis and improvement. These include low spatial resolution, due to the complexity of the pixel hardware required for depth estimation; a maximum measurable distance; estimation artifacts at edges and corners; and incorrect depth estimates caused by the Multi-Path Interference (MPI) phenomenon. In [1,2] and [3] we propose two different approaches to correct this problem.
Related Papers:
[1] G. Agresti and P. Zanuttigh, "Deep learning for multi-path error removal in ToF sensors," Proceedings of the European Conference on Computer Vision Workshop (ECCVW): Geometry Meets Deep Learning, Munich, Germany, 2018.