
DROT: A Depth Restoration Occlusionless Temporal Dataset


Daniel Rotman, Guy Gilboa

Electrical Engineering Department,

Technion – Israel Institute of Technology, Haifa, Israel




DROT is a depth dataset created to test depth restoration, rectification, and upsampling methods.


Dataset details:

  • Real sensor input from Kinect 1, Kinect 2, and RealSense R200 sensors.
  • RGB and depth images registered pixel-to-pixel, along with high-quality ground-truth depth.
  • For each sensor there are two viewpoints:
    • The RGB sensor viewpoint with the registered depth image. This allows testing upsampling methods when the RGB image is substantially larger than the depth image.
    • The IR sensor viewpoint with the registered RGB image, for straightforward depth restoration.
  • The dataset consists of five multi-frame videos with varying types of object motion.
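As a rough illustration of the upsampling setting described above, the sketch below (an assumption for illustration, not part of any official DROT tooling) enlarges a low-resolution depth map to the RGB resolution with a nearest-neighbor baseline, against which more sophisticated upsampling methods can be compared.

```python
import numpy as np

def upsample_nearest(depth, out_h, out_w):
    """Nearest-neighbor upsampling of a low-resolution depth map to (out_h, out_w).

    A trivial baseline for the RGB-viewpoint data, where the registered
    depth image is much smaller than the RGB image.
    """
    h, w = depth.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return depth[np.ix_(rows, cols)]

# Example: enlarge a 2x2 depth patch to 4x4.
patch = np.array([[1, 2], [3, 4]], dtype=np.uint16)
big = upsample_nearest(patch, 4, 4)
```

Integer index arithmetic is used deliberately: depth maps should not be interpolated across object boundaries without care, since blending foreground and background depths creates values that correspond to no real surface.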


  • Vid 1: 30 frames. Motion: turn, parallel
  • Vid 2: 21 frames. Motion: parallel
  • Vid 3: 30 frames. Motion: turn
  • Vid 4: 20 frames. Motion: parallel
  • Vid 5: 11 frames. Motion: turn, parallel, perpendicular

Downloads:

  • Sensor data: Vid1_sensor_data, Vid2_sensor_data, Vid3_sensor_data, Vid4_sensor_data, Vid5_sensor_data
  • Ground truth: Vid1_ground_truth, Vid2_ground_truth, Vid3_ground_truth, Vid4_ground_truth, Vid5_ground_truth


Note: Depth PNG images may look completely black when opened in a simple image viewer due to their 16-bit encoding; typical depth values occupy only a small fraction of the 0–65535 range.
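For example, such a file can be loaded and stretched for viewing as sketched below (function names are illustrative and assume Pillow and NumPy; this is not official DROT tooling, and treating zero as "no measurement" is a common depth-sensor convention assumed here, not stated by the dataset):

```python
import numpy as np
from PIL import Image

def load_depth_png(path):
    """Read a 16-bit depth PNG into a uint16 array of raw depth values."""
    return np.asarray(Image.open(path)).astype(np.uint16)

def depth_for_display(depth):
    """Stretch the valid (nonzero) depth range to 8 bits for viewing."""
    out = np.zeros(depth.shape, dtype=np.uint8)
    valid = depth > 0            # assume zero marks missing measurements
    if not valid.any():
        return out
    lo = int(depth[valid].min())
    hi = int(depth[valid].max())
    span = max(hi - lo, 1)
    # Integer arithmetic avoids float rounding at the extremes.
    out[valid] = ((depth[valid].astype(np.int64) - lo) * 255 // span).astype(np.uint8)
    return out
```

This makes the depth structure visible in an ordinary 8-bit viewer while leaving the raw 16-bit values untouched for quantitative use.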


If you use this dataset in your work, please cite the following publication:

  1. D. Rotman and G. Gilboa, “A depth restoration occlusionless temporal dataset,” in International Conference on 3D Vision (3DV). IEEE, 2016.