The growth of augmented and virtual reality, together with 3D imaging technologies, is opening up many avenues for filmmakers and media artists to tell stories and build experiences. Depth cameras in particular offer an easy way to capture 3D data from a scene and integrate it into projects. Devices like the Microsoft Kinect, ASUS Xtion, Structure Sensor, and Leap Motion give us the ability to build motion-capture interactivity into projects. One of the more unusual mixes of 2D and 3D image capture is the DepthKit software package.

DepthKit

DepthKit (http://www.rgbdtoolkit.com/) is an open-source software package created by artists and coders that combines the 3D data from a depth camera (Microsoft Kinect, ASUS Xtion) with footage from a 2D video camera. We’ve been very interested in working between the 2D and 3D worlds for storytelling and visual imagery, and DepthKit offers a way to combine more traditional filmmaking workflows with 3D imaging technologies. In this experiment, we used a Sony A7s and a Microsoft Kinect V1 (model 1414). We paired the A7s with a Rokinon 24mm cine lens, which produces a wonderful image.

Setup and Calibration

Physically, the Kinect is placed just below the A7s. We purchased a bracket from the RGBD-ToolKit website (http://www.rgbdtoolkit.com/tutorials.html), which seemed like a nice way to support the project, in addition to donating money directly to their development costs. There are also various 3D models and DIY directions for making your own mount if desired. To pair the A7s video with the Kinect, we first needed to calibrate the two devices together in the DepthKitCaptureKinect program. This stage takes some time. It involves shooting video of an A4- or A3-sized checkerboard placed at various positions in front of the camera rig. The DepthKit program then extracts still images from the video and creates a calibration for the specific lens you are using. The checkerboard needs to be placed at different distances from the cameras in order to calibrate the distance (as I more or less understand it). This step requires sufficient natural infrared light in the room you’re shooting in, and an overcast Zurich was not ideal, but in the end we got it calibrated.

After calibrating the depth stream from the Kinect against the A7s, we shot a few test takes, for example to see how dance movements would be represented in the final film. The A7s records in XAVC S, a wonderful video codec that is not directly supported by DepthKit. We ended up transcoding the A7s footage to H.264 in Adobe Media Encoder and were able to visualize the color footage in the post-processing stage.
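Going back to the calibration stage for a moment: to give a rough idea of what the checkerboard step is doing behind the scenes, here is a conceptual sketch of checkerboard-based lens calibration as OpenCV implements it. This is not DepthKit’s own code, and the pattern size, square size, and folder of extracted stills are assumptions.

```python
# Conceptual sketch of checkerboard lens calibration with OpenCV.
# Not DepthKit's actual code; pattern size, square size and the folder
# of extracted still frames are assumptions.
import glob

import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners of the printed checkerboard (assumed)
SQUARE_SIZE_MM = 25.0   # physical size of one square (assumed)

# 3D coordinates of the checkerboard corners in the board's own plane (Z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE_MM

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calibration_frames/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the lens intrinsics (focal length, principal point, distortion)
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", camera_matrix)
```

The more views of the board at different positions and distances, the better the calibration, which is why DepthKit asks you to move the checkerboard around the frame.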

Post-Processing

After filming and conversion, the footage was post-processed in the DepthKitVisualize program. In this stage, the depth and color video streams are synchronized together. For this reason, it is ideal to clap once or twice at the start of filming, similar to the way audio is synced. Once the data streams are synced correctly, you can start creating camera movements and modifying how the depth data is represented on screen. Since the Kinect records depth data, which is simply the distance from the camera to the scene, this data can be represented in different ways in the post-processing stage. For example, instead of keeping the full depth range, you can specify a depth window so that the data behind the person is removed. Also, since the movie exists in 3D space, you can control the perspective, and with that you are essentially controlling a virtual camera. This means that you can move forwards and backwards, rotate around the depth data, and set keyframes, just like you would control animations in a non-linear video editor such as Adobe Premiere Pro or Final Cut. Given this workflow, it’s very easy for a filmmaker or game developer to understand how to work within the DepthKitVisualize environment.
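As an illustration of the depth window idea, here is a minimal sketch, not DepthKit’s implementation, of how clipping a single Kinect depth frame to a near/far window might look in Python; the function and parameter names are our own.

```python
# Minimal sketch of a "depth window": samples outside [near, far] are
# discarded so geometry behind the subject disappears. Not DepthKit code.
import numpy as np

def apply_depth_window(depth_mm, near_mm, far_mm):
    """Zero out depth samples that fall outside the window.

    depth_mm -- 2D array of per-pixel distances from the camera in millimetres
                (the Kinect reports 0 where it could not measure depth).
    """
    windowed = depth_mm.copy()
    windowed[(depth_mm < near_mm) | (depth_mm > far_mm)] = 0  # 0 == "no data"
    return windowed

# Example: keep only geometry between 0.8 m and 2.5 m from the camera
frame = np.random.randint(500, 4000, size=(480, 640)).astype(np.float32)
subject_only = apply_depth_window(frame, near_mm=800, far_mm=2500)
```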

Use In Projects

The resulting footage can be rendered and exported as a 2D PNG image sequence or, additionally, as an .OBJ 3D mesh sequence. Here, the 2D image sequence was imported into After Effects to create the movie. The PNG files have transparent backgrounds, which makes them ideal for layering with background effects. There are many options for processing the depth data in DepthKitVisualize: the lines defining the 3D depth surface and the size of the grid can be modified or removed, and the depth of field can be adjusted to change the focus on the subject. Of course, these parameters can all be animated along the movie timeline.
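As a small example of why the transparent renders are convenient, here is a hedged sketch of compositing a single exported frame over a background plate with Pillow; the file names are placeholders and both images are assumed to share the same resolution.

```python
# Sketch of layering one transparent DepthKit render over a background plate.
# File names are placeholders; both images must have the same resolution.
from PIL import Image

background = Image.open("background_plate.png").convert("RGBA")
frame = Image.open("depthkit_render/frame.00001.png").convert("RGBA")

# Pillow respects the frame's alpha channel, so only the subject is overlaid
composite = Image.alpha_composite(background, frame)
composite.save("composited.00001.png")
```

In practice you would loop this over the whole image sequence, or simply do the layering in After Effects as described above.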

Coming from a background in film as well as CAD systems used in automotive and mechanical engineering and simulation, we found DepthKit very intuitive to understand. At the moment we are experimenting with importing the .OBJ sequence into Blender and then exporting an .md2 animation for integration with an augmented reality app built with Metaio (see the sketch below). If you’re coming to this without an understanding of 3D and depth cameras, don’t worry: there are some good tutorials available describing the stages of calibration and post-processing.
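For anyone curious about the Blender step, here is a hedged sketch of pulling the exported .OBJ files in from Blender’s Python console, written against the Blender 2.7x API that was current at the time; the directory path is a placeholder, and keyframing the visibility of each imported mesh (or joining them) is left as a further step before the .md2 export.

```python
# Hedged sketch: import a DepthKit .OBJ sequence into Blender (2.7x API).
# Run from Blender's Python console; the directory path is a placeholder.
import glob
import os

import bpy

OBJ_DIR = "/path/to/depthkit_obj_export"

for path in sorted(glob.glob(os.path.join(OBJ_DIR, "*.obj"))):
    # Each frame of the sequence becomes its own mesh object; visibility can
    # then be keyframed (or the meshes joined) before exporting the animation.
    bpy.ops.import_scene.obj(filepath=path)
```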

For example projects, check out: http://rgbd.tumblr.com/
For a written tutorial, check out: http://www.rgbdtoolkit.com/tutorials.html
For video tutorials, see the RGBDToolkit album on Vimeo: https://vimeo.com/album/1977644

idezo Labs: DepthKit with Microsoft Kinect + Sony A7s
