Computing in the Camera


Mobile AR, with its ubiquitous camera, is set to transform what and how human experience designers create.

“[T]he camera will bring the Internet and the real world into a single time and space.” — Allison Wood, CEO, Camera IQ

Occasionally, as you navigate the hustle and bustle of everyday life, something will happen or somebody will say something that will have a profound, disruptive, or transformational impact on how you see the world. Unfortunately, the distractions of your busy life may prevent you from realizing it for some time. Eventually, when you realize the true impact, things will fall into place. (It’s like the end of The Usual Suspects, except you didn’t just let the devil walk out your front door.)

In December of 2017, something perspective-altering happened to the Torch team, the impact of which we only began to understand in the last few months. It happened when we demoed an early version of our prototyping app for Allison Wood of Camera IQ.

One of the points Allison made repeatedly on that call (and in a wonderful blog post from the same period) was that the camera is going to be at the center of computing going forward, an indispensable element. Spatial computing could not exist without it — simple, obvious, straightforward, but not earth-shaking. We all heard what she had to say, but I don’t think any of us really understood just how profound and prophetic that statement would turn out to be.

Since that call, two things have happened. First, we started looking at every mobile AR app we could get our hands on, analyzing them as a team and sharing our analysis in mobile AR app design review blog posts, which we have also published here on DZone. We have started to learn what works, what to avoid, and what separates a great mobile AR experience from a crummy one.

Based on feedback from our platoon of Early Access users, we also spent the last several months refining and simplifying the Torch 3D prototyping platform. We have continually experimented with ways to make 3D content creation faster, easier, and more intuitive.

At every stage in the evolution of this process, one thing always happened: we moved more stuff off the screen, either into immersive space or into drawers and dropdowns. We pushed the camera into the center of the experience. Between our own work and the insights we gained from analyzing other apps, we had arrived smack at the moment when Allison’s prediction became today’s reality.

Mobile AR apps are a hybrid of 2D screen-locked elements, 3D and 2D objects arranged spatially, and the real world as captured in the camera. These are the basic layers of a mobile AR app. In the future, designers creating mobile AR experiences will learn to balance the needs of the various layers, trading off familiar on-screen UX patterns for unobstructed views of the 3D coordinate space.

The three layers of mobile AR

This is the key design challenge of mobile AR, at least in the early days as the millions of UX devs and designers begin to incorporate spatial computing into their workflows. Tools and tutorials will all need to start accounting for the needs of all three layers (2D, 3D, Reality/Camera).
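To make the layer model concrete, here is a minimal sketch in TypeScript. The type names and compositing helper are hypothetical illustrations, not any real AR framework's API; the only idea taken from the article is that a mobile AR scene composites three layers, back to front: the camera feed, spatially anchored 2D/3D content, and screen-locked 2D UI.

```typescript
// Hypothetical model of the three layers of a mobile AR app
// (names are illustrative, not a real framework API).
type ARLayer = "camera" | "spatial" | "screen";

interface SceneElement {
  id: string;
  layer: ARLayer;
}

// Composite back to front: the real world as seen by the camera,
// then spatially arranged content, then screen-locked UI on top.
const renderOrder: ARLayer[] = ["camera", "spatial", "screen"];

function sortForCompositing(elements: SceneElement[]): SceneElement[] {
  return [...elements].sort(
    (a, b) => renderOrder.indexOf(a.layer) - renderOrder.indexOf(b.layer)
  );
}

const scene: SceneElement[] = [
  { id: "record-button", layer: "screen" }, // 2D, screen-locked
  { id: "camera-feed", layer: "camera" },   // the real world
  { id: "placed-model", layer: "spatial" }, // anchored in 3D space
];

console.log(sortForCompositing(scene).map((e) => e.id));
// → ["camera-feed", "placed-model", "record-button"]
```

The design trade-off the article describes shows up directly here: every element added to the `screen` layer obstructs the two layers beneath it, which is why moving controls into the `spatial` layer (or hiding them in drawers) keeps the camera at the center of the experience.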

The constraints imposed by the need to preserve this balance will spark the next big wave of UX innovation. The future of 3D design is in the hands of mobile app UX designers who probably haven’t even worked in 3D yet. Hopefully, one day, they will look back at posts like this and reflect on just how much their world has changed for the better.


Published at DZone with permission of Paul Reynolds.

