April 27, 2013

IllumiRoom: Immersive Experiences Beyond the TV Screen


The television is the focal point of living room entertainment, and while our TVs have gotten bigger and brighter over the years, our content is still trapped inside a little box in our living room. We introduce IllumiRoom, a proof-of-concept system that augments the area surrounding a television with projected visualizations to enhance traditional viewing experiences. IllumiRoom can directly extend the viewing experience, turning a 40-inch television into a 15-foot television. IllumiRoom can enable augmented reality experiences where virtual objects interact with the physical environment (e.g., furniture). Finally, IllumiRoom can augment and distort the physical environment (e.g., making a living room look like a cartoon).

IllumiRoom uses a projector and a Kinect depth sensor to blur the lines between on-screen content and the surrounding physical environment. It is entirely self-calibrating and is designed to work in almost any room. IllumiRoom can change the appearance of the room, induce apparent motion, extend the field of view, and enable entirely new gaming experiences.
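
To give a rough sense of the geometric step involved, here is a minimal sketch (not the actual IllumiRoom code) of mapping Kinect depth pixels into projector coordinates so content can be pre-warped onto the room geometry. The intrinsics `K_cam`, `K_proj` and the projector pose `R`, `t` are assumed to come from the self-calibration step.

```python
# Minimal sketch (illustrative, not the IllumiRoom implementation): map each
# Kinect depth pixel to projector pixel coordinates, so content can be
# pre-warped onto the room geometry.
import numpy as np

def camera_to_projector(depth, K_cam, K_proj, R, t):
    """depth: (H, W) metric depth map. Returns (2, H, W) projector coords."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    rays = np.linalg.inv(K_cam) @ pix             # back-project pixel rays
    pts = rays * depth.reshape(1, -1)             # 3D points in Kinect space
    proj = K_proj @ (R @ pts + t.reshape(3, 1))   # into the projector's frame
    return (proj[:2] / proj[2]).reshape(2, h, w)  # perspective divide
```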

In our research paper, we present a detailed exploration of the design space of peripheral projected illusions and demonstrate ways to trigger and drive such illusions from gaming content. We also contribute specific feedback from two groups of target users (10 gamers and 15 game designers), providing insights for enhancing viewing experiences through peripheral projected illusions.

Video: http://www.youtube.com/watch?v=L2w-XqW7bF4

CHI 2013 Best Paper Award

CHI 2013 Golden Mouse Award (Best Video Award)

SIGGRAPH 2014 Best Demo Award

Paper: Download the paper

Reference: Jones, B., Benko, H., Ofek, E. and Wilson, A. D. 2013. IllumiRoom: Peripheral Projected Illusions for Interactive Experiences. In Proceedings of the 31st Annual SIGCHI Conference on Human Factors in Computing Systems (Paris, France, April 27 – May 2, 2013). CHI ’13. ACM, New York, NY.

Presentation: Download the PowerPoint

Professional news outlets: contact me for the high-res images and video.


September 8, 2014

RoomAlive: Magical Experiences Enabled by Scalable, Adaptive Projector Camera Units


Note: This was a large group effort conducted at Microsoft Research by Brett Jones, Raj Sodhi, Mike Murdock, Ravish Mehra, Hrvoje Benko, Andy Wilson, Eyal Ofek, Blair MacIntyre, & Lior Shapira.

RoomAlive is a proof-of-concept prototype that envisions a future of interactive gaming with projection mapping. RoomAlive transforms any room into an immersive, augmented entertainment experience through the use of video projectors. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. RoomAlive builds heavily on our previous research project, IllumiRoom, which explored interactive projection mapping surrounding a television screen. IllumiRoom was largely focused on display, extending traditional gaming experiences out of the TV. RoomAlive instead focuses on interaction, and the new kinds of games that we can create with interactive projection mapping. RoomAlive looks farther into the future of projection mapping and asks: what new experiences will we have in the next few years?

See the full post over at Projection Mapping Central.


November 21, 2015

Projectibles: Optimizing Surface Color For Projection


Video projectors typically display images onto white screens, which can result in a washed-out image. Projectibles algorithmically control the display surface color to increase contrast and resolution. By combining a printed image with projected light, we can create animated, high-resolution, high-dynamic-range visual experiences for video sequences. We present two algorithms for separating an input video sequence into a printed component and a projected component, maximizing the combined contrast and resolution while minimizing any visual artifacts introduced by the decomposition. We present empirical measurements of real-world results for six example video sequences and subjective viewer feedback ratings, and we discuss the benefits and limitations of Projectibles. This is the first approach to combine a static display with a dynamic display for video, and the first to optimize surface color for video projection.
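
To make the decomposition idea concrete, here is a toy sketch. It assumes a simple multiplicative model where the displayed frame is the printed albedo P times the projected light L_t, and alternates least-squares updates of the print and the per-frame light. This is an illustration of the model, not either of the paper's two algorithms.

```python
# Toy sketch of the decomposition idea (illustrative, not the paper's
# algorithms): displayed frame ~ P * L_t, where P is the printed albedo
# and L_t is the projected light for frame t.
import numpy as np

def decompose(video, iters=20, max_light=1.0):
    """video: (T, H, W) target frames in [0, 1]. Returns print P, lights L."""
    P = video.mean(axis=0)                 # initialize print with mean frame
    L = np.ones_like(video)
    for _ in range(iters):
        # Best light per frame given the print, clamped to projector limits.
        L = np.clip(video / np.maximum(P, 1e-3), 0.0, max_light)
        # Best print given all lights: minimize sum_t (P * L_t - V_t)^2.
        P = (L * video).sum(axis=0) / np.maximum((L * L).sum(axis=0), 1e-6)
        P = np.clip(P, 0.0, 1.0)           # ink can only darken (albedo <= 1)
    return P, L
```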

Additional Materials Video:

http://youtu.be/GiBBtVaxi4Y

Presentation:

http://www.youtube.com/watch?v=Jkrk65dlB8U

Reference

Jones, B. R., Sodhi, R., Budhiraja, P., Karsch, K., Bailey, B. P., and Forsyth, D. 2015. Projectibles: Optimizing Surface Color for Projection. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '15). ACM, New York, NY.

Download:

Full paper


May 28, 2010

Build Your World and Play In It

ISMAR 2010 Best Student Paper

Video: https://vimeo.com/12154930


January 15, 2012

Arto: The future of photography (with depth sensors)


Soon, depth sensors will be in your iPhone. In fact, in any mobile device you have: phone, tablet, laptop.

I've been hoping this would come true for years, but now it is an undeniable reality. A multitude of companies are talking about putting depth sensors into mobile phones and wearables, like Structure Sensor, iSense, and Meta. Also, Apple just bought PrimeSense, makers of the Kinect depth sensor, for $345 million.


So what does a future with depth sensors in your iPhone look like?


So, we can safely say that our iPhones will eventually have depth sensors. But what will we do with them? Well, the things we usually do with our phones: take pictures and play games.


In Arto, we explore the future of photography with depth sensors. We use two depth sensors: one to capture 3D information about the world, and another to capture 3D gestures. This means you can "reach into your photograph" to edit it.


You can position virtual light sources. Instead of carrying around a light kit, photographers can just wave their hand around: move your hand left and the virtual light moves left; move your hand up and the light moves up.


We use a very simple lighting model, but we envision a future of photography with more sophisticated lights (like area light sources).
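
For the curious, here is a minimal sketch of the kind of simple model we mean: a Lambertian point light with inverse-square falloff, assuming per-pixel 3D points and normals estimated from the depth map (the names and parameters below are illustrative, not our actual code).

```python
# Minimal sketch of a simple point-light Lambertian model (illustrative).
# points and normals are assumed to come from the depth map; light_pos
# tracks the user's hand.
import numpy as np

def relight(albedo, points, normals, light_pos, intensity=1.0):
    """albedo: (H, W); points, normals: (H, W, 3); light_pos: (3,)."""
    to_light = light_pos - points
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    ndotl = np.clip((normals * to_light / np.maximum(dist, 1e-6)).sum(axis=-1), 0, None)
    falloff = intensity / np.maximum(dist[..., 0] ** 2, 1e-6)  # inverse square
    return albedo * ndotl * falloff
```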


You can insert virtual objects into your photos. Like Justin Bieber, of course. And you can insert Bieber at his exact height of 5'7". You can reach around Justin and correctly occlude him (to give him a hug).
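
The occlusion works because the depth sensor tells us how far away every real pixel is. A minimal sketch of depth-correct compositing (illustrative, assuming the virtual object has already been rendered with its own color and depth buffers):

```python
# Minimal sketch of depth-correct compositing (illustrative): the virtual
# object only shows where it is nearer than the real scene, so your hand
# can correctly occlude it.
import numpy as np

def composite(photo, real_depth, virt_color, virt_depth):
    """photo, virt_color: (H, W, 3); real_depth, virt_depth: (H, W)."""
    mask = virt_depth < real_depth        # virtual pixel wins where nearer
    out = photo.copy()
    out[mask] = virt_color[mask]
    return out
```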


You can edit the lens blur (depth of field) of your photo. By moving your hand backwards and forwards in depth, you can change the focal plane depth and the aperture. This is all done by simulation, using the depth map to "fake" depth of field.

You know how your iPhone pics never seem to look as good as a professional photographer's? A lot of that is due to the lack of lens blur. The tiny optics in your iPhone limit the depth of field, but with depth sensors we can fake it. The end result: photos with buttery lens blur.
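
A rough sketch of the simulation, under assumed details (a grayscale image and a small stack of pre-blurred copies; this is not the actual Arto implementation):

```python
# Rough sketch of fake depth-of-field (illustrative): derive a per-pixel
# circle of confusion from the focal depth and aperture, then pick each
# pixel from a small stack of progressively blurred copies.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_dof(image, depth, focal_depth, aperture, max_sigma=8.0, levels=6):
    """image, depth: (H, W). Returns the image with simulated lens blur."""
    coc = np.clip(aperture * np.abs(depth - focal_depth)
                  / np.maximum(depth, 1e-3), 0, 1)
    sigmas = np.linspace(0, max_sigma, levels)
    stack = [image] + [gaussian_filter(image, s) for s in sigmas[1:]]
    idx = np.round(coc * (levels - 1)).astype(int)
    out = np.empty_like(image)
    for k in range(levels):               # per-pixel blur level selection
        out[idx == k] = stack[k][idx == k]
    return out
```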


With Instagram you can apply filters to images.

With Arto, you can easily select the foreground of the image by moving your hand through space, then apply Instagram-style filters to the foreground of the image only, making something in the foreground "pop out" of your image.
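
A minimal sketch of the selection, assuming the hand's distance simply sets a depth threshold between foreground and background (illustrative names, not our code):

```python
# Minimal sketch of depth-based selection (illustrative): apply a filter
# only where the scene is nearer than the hand-chosen depth threshold.
import numpy as np

def filter_foreground(image, depth, cut_depth, filt):
    """Apply `filt` (any image -> image function) where depth < cut_depth."""
    mask = depth < cut_depth
    out = image.copy()
    out[mask] = filt(image)[mask]
    return out
```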


Finally, you can capture photos of fast-moving subjects: pets, wild animals, children, sporting events, etc. You simply place a "3D trigger" into your photograph. If anything enters the trigger volume, your camera takes a photograph.


This means you can take photos of fast-moving objects (like these falling objects).
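
A sketch of the trigger check under assumed details: back-project the depth map to 3D points each frame, and fire the shutter when anything enters an axis-aligned trigger box.

```python
# Sketch of the 3D trigger check (illustrative): fire when any scene
# point falls inside the axis-aligned trigger volume.
import numpy as np

def trigger_hit(points, box_min, box_max):
    """points: (N, 3) scene points; box_min, box_max: (3,) box corners."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return bool(inside.any())
```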


You may have noticed that the prototype is quite large (hence the DSLR and tripod). This project was actually done back in 2012, before things like the Structure Sensor (PrimeSense Capri) existed. It was also before things like Leap Motion, so we had to build our own finger-tracking library from scratch.

We are looking into updating the technology to try the interactions in a truly mobile form factor.

Whatcha think?


See the full paper here: Arto.pdf


April 27, 2013

BeThere: 3D Mobile Collaboration with Spatial Input

Sodhi, R., Jones, B., Forsyth, D., Bailey, B. P., and Maciocci, G. 2013. BeThere: 3D Mobile Collaboration with Spatial Input. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY.

Paper: BeThere.pdf

May 28, 2012

Around Device Interaction for Multiscale Navigation

Jones, B., Sodhi, R., Forsyth, D., Bailey, B. P., and Maciocci, G. 2012. Around Device Interaction for Multiscale Navigation. In Proceedings of MobileHCI 2012. (Best Paper Nominee)

Paper


January 8, 2009

Idea Generation Techniques Among Creative Professionals

Herring, S. R., Jones, B. R., and Bailey, B. P. 2009. Idea Generation Techniques Among Creative Professionals. In Proceedings of the 42nd Hawaii International Conference on System Sciences (HICSS 2009), pp. 1-10.
