Joint Color and Depth Segmentation Datasets

(Multimedia Technology and Telecommunications Laboratory, University of Padova)

 

This page contains datasets with color and depth information acquired by different devices, which can be used to evaluate joint depth and color segmentation algorithms. These datasets were used for the results of the following papers:

  1. Carlo Dal Mutto, Pietro Zanuttigh and Guido M. Cortelazzo, "Fusion of Geometry and Color Information for Scene Segmentation", IEEE Journal of Selected Topics in Signal Processing, Special Issue on "Emerging Techniques in 3D: 3D Data Fusion, Motion Tracking in Multi-View Video, 3DTV Archives and 3D Content Protection", 2012

  2. Giampaolo Pagnutti and Pietro Zanuttigh, "Scene Segmentation From Depth and Color Data Driven by Surface Fitting", ICIP 2014

  3. Giampaolo Pagnutti and Pietro Zanuttigh, "Scene segmentation based on NURBS surface fitting metrics", Smart Tools and Apps in Computer Graphics Workshop, 2015

  4. Giampaolo Pagnutti and Pietro Zanuttigh, "Joint Color and Depth Segmentation Based on Region Merging and Surface Fitting", International Conference on Computer Vision Theory and Applications (VISAPP), 2016


    ToF and Stereo Dataset

    This dataset has been used for the experiments on the data acquired with the trinocular system described in Section VI.A of Paper [1]. For each scene a .zip file is provided. Each archive contains the color image, the depth map, and 3 text files containing the x, y and z coordinates, respectively, for each sample of the image, in millimeters and arranged in row order. The depth map is provided as a 16 bpp .png image; the actual depth values (in millimeters) can be computed from the pixel values of the depth map with the following equation: z = pixel_value / 65535 * 5000.
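    As a minimal sketch, the conversion above can be applied to the whole depth map at once; the function name and the synthetic sample values below are illustrative only (in practice the uint16 array would come from reading the .png, e.g. with imageio or Pillow):

    ```python
    import numpy as np

    def depth_from_raw(raw, max_value=65535.0, max_depth_mm=5000.0):
        """Convert raw 16 bpp depth-map pixel values to depth in millimeters.

        Implements z = pixel_value / 65535 * 5000 from the dataset description.
        """
        return raw.astype(np.float64) / max_value * max_depth_mm

    # Synthetic example: minimum, mid-range and maximum raw values.
    raw = np.array([[0, 32768, 65535]], dtype=np.uint16)
    z = depth_from_raw(raw)  # 0 mm, ~2500 mm, 5000 mm
    ```

    The same scaling holds per pixel, so the function works unchanged on a full H x W depth image.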

     

    Baby

     

    Baby and Plant

     

    Plant

     

    Person

     

    Middlebury Stereo Dataset

    The data used for the experiments of Section VI.B of Paper [1] are the Aloe and Baby2 scenes from the Middlebury dataset. They are part of the 2006 dataset and can be downloaded from the Middlebury webpage at http://vision.middlebury.edu/stereo/data/scenes2006/ .

     

    Microsoft Kinect® Datasets

    This dataset has been used for the experiments on the data acquired with the Microsoft Kinect® in Section VI.C of Paper [1]. For each scene, click on the corresponding image to download the associated .ply files, or click here to download the entire dataset.

    Baby 2

    Teddy bear

    Person


    New Dataset with Ground Truth Segmentations

    A new dataset, acquired with the Kinect and the Asus Xtion, that has been used for the results of Papers [2], [3] and [4] is also available. For each of the 6 scenes, color data, depth information and manually created ground truth segmentations are provided.

    - The data acquired with the Kinect, together with calibration information, can be downloaded from here (updated: now also contains ground truth information).
    - The two scenes acquired with the Xtion and the corresponding calibration data can be downloaded from here (updated: now also contains ground truth information).


     

     



    Have a look at the two videos showing how the approach of [4] segments three scenes from this dataset and three others from the NYUv2 dataset, using an iterative merging procedure that starts from an over-segmentation of the data:

       
    [Video 1: Our Dataset]
    [Video 2: NYUv2 Dataset]

     

    Microsoft Photosynth® Dataset

    This dataset has been used for the experiments on Microsoft Photosynth® data in Section VI.D of Paper [1]. Click here to download the associated .ply file.

     

     

    If you are interested in our research, you can visit our website: http://lttm.dei.unipd.it

    For any questions or clarifications about these datasets, please write to: zanuttigh@dei.unipd.it