Reducing the Frame-Rate of Depth Cameras by RGB-Based Depth Prediction

Abstract

Depth cameras are becoming widely used for facilitating fast and robust natural user interaction. However, measuring depth consumes considerable power, mainly because of the active infrared illumination involved in the acquisition process of both structured-light and time-of-flight sensors. This becomes a critical issue when the sensor is mounted on a hand-held (mobile) device, where the power budget is tight.
The goal of this project is to reduce the depth acquisition frame rate, thereby saving considerable power. The missing depth frames are compensated for by computing reliable depth estimations from a coupled color (RGB) camera running at full frame rate. These predictions, which perform well in our experiments, give the end user or application the perception of a depth sensor operating at full frame rate.
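
This report does not spell out how the RGB-based predictions are computed. Purely as an illustration of the interleaving scheme, the following Python sketch uses OpenCV optical-flow warping as a stand-in predictor and assumes hypothetical grab_rgb()/grab_depth() capture callbacks; the reduction factor DEPTH_EVERY_N is also an assumption.

```python
import cv2
import numpy as np

DEPTH_EVERY_N = 4  # assumed reduction factor: one measured depth frame per 4 RGB frames


def predict_depth(last_rgb, curr_rgb, last_depth):
    """Stand-in predictor: warp the last measured depth frame using dense
    optical flow computed from the current RGB frame back to the last one.
    (The project's actual prediction method is not detailed in this report.)"""
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(curr_rgb, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(last_rgb, cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)          # flow: current -> last frame
    h, w = last_depth.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # For every pixel of the current frame, look up the depth it had
    # in the last measured depth frame.
    return cv2.remap(last_depth, gx + flow[..., 0], gy + flow[..., 1],
                     cv2.INTER_NEAREST)


def run(grab_rgb, grab_depth, num_frames):
    """grab_rgb()/grab_depth() are hypothetical capture callbacks."""
    last_rgb = last_depth = None
    for i in range(num_frames):
        rgb = grab_rgb()
        if i % DEPTH_EVERY_N == 0:
            depth = grab_depth()                  # real, power-hungry measurement
            last_rgb, last_depth = rgb, depth
        else:
            depth = predict_depth(last_rgb, rgb, last_depth)  # RGB-only estimate
        yield rgb, depth  # the application sees depth at full frame rate
```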
Quality measures based on skeleton extraction and depth inaccuracy are used to quantify the deviation from the ground-truth depth.
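
The exact depth-inaccuracy measure is not stated here; a minimal sketch of one plausible per-pixel error, assuming Kinect-style 16-bit depth maps in millimetres where 0 means "no reading", might look like this.

```python
import numpy as np


def depth_inaccuracy(predicted, ground_truth):
    """Mean absolute depth error (in sensor units, e.g. millimetres) over
    pixels where the ground-truth sensor actually returned a measurement."""
    valid = ground_truth > 0  # 0 is treated as 'no data'
    diff = predicted.astype(np.float32) - ground_truth.astype(np.float32)
    return np.abs(diff)[valid].mean()
```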

Algorithm

1. Depth and RGB Capture

In our experiments we used Microsoft's Kinect camera. There are several ways to capture RGB, depth, skeleton, metadata, and calibration data from the camera; the nature of our project constrained us to specific capture methods.
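
The capture methods used in the project are not detailed here. As one illustrative route (not necessarily the one the project took), OpenCV's OpenNI capture backend can grab synchronized RGB and depth frames from a Kinect-class sensor, provided OpenCV was built with OpenNI support:

```python
import cv2

cap = cv2.VideoCapture(cv2.CAP_OPENNI)
if not cap.isOpened():
    raise RuntimeError("No OpenNI-compatible depth sensor found")

while True:
    if not cap.grab():                                        # grab both streams at once
        break
    ok_d, depth = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)  # uint16, millimetres
    ok_c, bgr = cap.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)    # uint8, 3 channels
    if not (ok_d and ok_c):
        continue
    cv2.imshow("RGB", bgr)
    cv2.imshow("depth", depth.astype("float32") / 4500.0)  # rough scaling, ~4.5 m assumed max range
    if cv2.waitKey(1) == 27:                                # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```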

[Figure: corresponding RGB and depth frames captured by the Kinect]

Conclusions

  • We implemented the frame-reduction idea and showed that it is not only plausible but works in practice.
  • We used the OpenNI framework to build a program that extracts a skeleton from edited depth data, something that had not been done in the faculty before and for which little documentation exists online.
  • Because we demonstrated that the idea is feasible, it is now patent pending.