Technology trends in 3D modeling and scanning are making it easier to create 3D models for mixed reality, Augmented Reality (AR) games, or Virtual Reality (VR) experiences. I’ve been experimenting with different 3D scanning technologies and workflows for integrating the models into virtual worlds or AR apps. One workflow is to create a full-body 3D scan of a person and then use that model in different immersive projects. First, we need the ability to do 3D scans.
The release of the Microsoft Kinect brought 3D scanning into the hands of consumers and opened up a world of creative opportunities for 3D visualization projects. When I wanted to learn about the Kinect, I picked up the book Making Things See by Greg Borenstein from O’Reilly Media. This gave me the basics of understanding depth cameras like the Kinect and building applications with Processing. Scanning with the Kinect is straightforward with software like Skanect, ReconstructMe, or KScan3D. However, the Kinect is not a mobile device: it always needs to be connected to a computer and plugged into an outlet for power. The goal is to have both mobile and fixed systems for the most flexibility.
Since the Kinect release, the technology has been developed into smaller solutions such as the Senz3D depth camera from Creative and the Structure Sensor, which I picked up from their Kickstarter campaign. The Structure Sensor fits directly onto an iOS device like the iPhone or iPad, making it very convenient for scanning people or objects outside of a studio environment. I use mine with an iPad mini, which is a truly mobile solution for 3D object and environment scanning. You can scan to your mobile device and create an .obj 3D file, or send the data to your computer via a WiFi connection and use the Skanect Pro software to collect the depth and color data to build your 3D scan.
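The .obj files these scanners produce are plain text, which makes it easy to inspect a scan before pulling it into other tools. As a rough illustration (the helper name and the sample data are mine, not part of any scanner's output), here is a minimal sketch that counts the vertices and faces in OBJ-formatted text:

```python
def obj_stats(lines):
    """Count vertex ('v') and face ('f') records in Wavefront OBJ text."""
    vertices = faces = 0
    for line in lines:
        if line.startswith("v "):
            vertices += 1
        elif line.startswith("f "):
            faces += 1
    return vertices, faces

# A tiny hand-written OBJ fragment: a single triangle (illustrative only;
# a real body scan contains tens of thousands of vertices and faces).
sample = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
""".splitlines()

print(obj_stats(sample))  # (3, 1)
```

A quick check like this is handy for spotting obviously broken exports before spending time importing a scan into a game engine.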
Who Needs a 3D Model?
Once you have a model, the question is what exactly to do with it. Of course, you can print it out, allowing people to have 3D figurines of themselves, but I’m interested in a more interactive experience. The development of the Oculus Rift and the upcoming release of the Samsung Gear VR, as well as DODOcase and Google Cardboard, are offering ways for people to experience VR content. At the same time, it’s incredibly easy to integrate custom 3D models into a game engine like Unity 3D, where you can develop AR games or 3D experiences as well as mobile and desktop games.
Rigging the Model
For a model to be really useful beyond 3D printing, it needs the ability to move. When the Kinect sees a person, it superimposes a skeleton structure onto the person, and this is what allows programming and interaction with the environment. In the same way, we need to add a bone structure system to the 3D model. To do this we can use Mixamo, an online app which allows you to upload your 3D mesh and then assign points for joint positions. Then just let Mixamo do what it was designed for, and the rigged model will be available for download. Mixamo isn’t a free solution, but it is easy to use…insanely push-button easy to use, and a very straightforward way to rig your models.
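Conceptually, the rig Mixamo produces is a hierarchy of named joints (bones), each parented to another, so that moving one joint moves everything below it. A toy sketch of such a hierarchy (the joint names here are illustrative, not Mixamo's actual naming, and real rigs carry many more joints plus per-vertex skin weights):

```python
class Joint:
    """One bone/joint in a simple skeleton hierarchy."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def count(self):
        """Total joints in this subtree, including this joint."""
        return 1 + sum(child.count() for child in self.children)

# Illustrative humanoid skeleton rooted at the hips.
skeleton = Joint("hips", [
    Joint("spine", [
        Joint("head"),
        Joint("left_arm", [Joint("left_hand")]),
        Joint("right_arm", [Joint("right_hand")]),
    ]),
    Joint("left_leg", [Joint("left_foot")]),
    Joint("right_leg", [Joint("right_foot")]),
])

print(skeleton.count())  # 11
```

Assigning joint positions in Mixamo is essentially telling it where to place the nodes of a hierarchy like this inside your scanned mesh.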
The current technology trend is towards smaller 3D depth cameras integrated into mobile devices. I covered this a bit in my workshop on designing location-based experiences at the ISMAR 2014 AR conference. Google’s Project Tango has already allowed developers to play with 3D-capable mobile devices, and with the Structure Sensor on your mobile device you can start developing use cases and projects while companies like Intel develop smaller depth cameras for future mobile devices.