Soon depth sensors will be in your iPhone. In fact, in any mobile device you own: phone, tablet, laptop.
I’ve been hoping this would come true for years, and now it’s an undeniable reality. A multitude of companies are talking about putting depth sensors into mobile phones and wearables, like Structure Sensor, iSense, and Meta. And Apple just bought PrimeSense, makers of the Kinect depth sensor, for $345 million.
So what does a future with depth sensors in your iPhone look like? What will we do with them? Well, the things we usually do with our phones: take pictures and play games.
In Arto, we explore the future of photography with depth sensors. We use two depth sensors, one to capture 3D information about the world and another to capture 3D gestures. This means you can “reach into your photograph” to edit it.
You can position virtual light sources. Instead of carrying around a light kit, a photographer can just wave their hand: move your hand left and the virtual light moves left; move your hand up and the light moves up.
We use a very simple lighting model, but we envision a future of photography with more sophisticated lights (like area light sources).
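As a rough illustration of what a "very simple lighting model" can mean here, the sketch below relights an image with a single point light using surface normals estimated from the depth map. This is a hypothetical reconstruction, not Arto's actual code; the function name `relight` and the pixel-grid coordinate convention are assumptions.

```python
import numpy as np

def relight(image, depth, light_pos):
    """Diffuse (Lambertian) relighting from a depth map.
    Hypothetical sketch; not the Arto implementation.
    image: (H, W, 3) floats in [0, 1]; depth: (H, W); light_pos: (x, y, z)."""
    # Estimate per-pixel surface normals from depth gradients.
    dzdy, dzdx = np.gradient(depth)  # axis 0 is y (rows), axis 1 is x (cols)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # 3D position of each pixel (x, y in image coords, z from the depth map).
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    points = np.dstack([xs, ys, depth]).astype(float)

    # Unit direction from each surface point toward the light.
    to_light = light_pos - points
    to_light /= np.linalg.norm(to_light, axis=2, keepdims=True)

    # Lambertian diffuse term, clamped to [0, 1], modulates the image.
    diffuse = np.clip(np.sum(normals * to_light, axis=2), 0.0, 1.0)
    return image * diffuse[..., None]
```

Moving your hand simply updates `light_pos` and re-renders; the gesture maps directly onto the light's 3D coordinates.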
You can insert virtual objects into your photos. Like Justin Bieber, of course. And you can insert Bieber at his exact height of 5’7″. You can reach your hand around Justin and it will correctly occlude him (to give him a hug).
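The occlusion trick above boils down to a per-pixel depth test: the virtual object is only drawn where it is nearer to the camera than the real scene (including your hand). A minimal sketch, assuming rendered RGB and depth buffers for the object, with `inf` where the object is absent:

```python
import numpy as np

def composite(scene_rgb, scene_depth, obj_rgb, obj_depth):
    """Depth-tested compositing: draw the virtual object only where it is
    nearer than the real scene, so real geometry (like a hand) occludes it.
    Illustrative sketch; use obj_depth = inf where the object is absent."""
    visible = obj_depth < scene_depth
    out = scene_rgb.copy()
    out[visible] = obj_rgb[visible]
    return out
```

Reaching "around" the object works because the depth sensor sees your hand in front of the virtual geometry, so the depth test hides those object pixels.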
You can edit the lens blur (depth-of-field) of your photo. By moving your hand backwards and forwards in depth, you can change the focal plane depth and aperture. This is all done by simulation, using the depth-map to “fake” depth-of-field.
You know how your iPhone pics never seem to look as good as a professional photographer’s? A lot of that is due to the lack of lens blur. The tiny optics in your iPhone limit the depth-of-field, but with a depth sensor we can fake it. The end result: photos with buttery lens blur.
With Instagram you can apply filters to images.
With Arto, you can easily select the foreground of the image by moving your hand through space, then apply Instagramy filters to the foreground only, making something in the foreground “pop out” of your image.
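The foreground selection reduces to thresholding the depth map: your hand's position in depth sets the threshold, and the filter is applied only to pixels nearer than it. A minimal sketch (the function name `filter_foreground` and the filter-as-callable design are assumptions, not Arto's API):

```python
import numpy as np

def filter_foreground(image, depth, threshold, filt):
    """Apply a filter only to pixels nearer than the depth threshold.
    Hypothetical sketch: filt receives the (N, 3) foreground pixels."""
    mask = depth < threshold  # foreground = everything closer than the hand
    out = image.astype(float).copy()
    out[mask] = filt(out[mask])
    return out
```

For example, `filter_foreground(img, depth, 3.0, lambda px: px * 0.5)` darkens only the foreground, leaving the background untouched.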
Finally, you can capture photos of fast-moving subjects: pets, wild animals, children, sporting events, etc. You simply place a “3D Trigger” into your photograph. If anything enters the trigger volume, your camera takes a photograph.
This means you can take photos of fast moving objects (like these falling objects).
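At its core, a 3D trigger is just a point-in-volume test run against the depth sensor's points each frame. A minimal sketch, assuming an axis-aligned box for the trigger volume (the shape and the function name are my assumptions):

```python
import numpy as np

def trigger_fired(points, box_min, box_max):
    """Return True if any 3D point lies inside the axis-aligned trigger box.
    Illustrative sketch: points is an (N, 3) array of sensed 3D points."""
    points = np.atleast_2d(points)
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return bool(inside.any())
```

Each incoming depth frame is converted to 3D points and checked; the first frame with a point inside the box fires the shutter, which is how the falling objects get caught mid-air.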
You may have noticed that the prototype is quite large (hence the DSLR and tripod). This project was actually done back in 2012, before things like the Structure Sensor (PrimeSense Capri) existed. It was also before things like Leap Motion, so we had to build our own finger-tracking library from scratch.
We are looking into updating the technology to try these interactions in a truly mobile form factor.