How Useful Are Google's AR Guidelines?
Google's ARCore guidelines are a good place to start when getting into AR design and development...but do they need to be more ambitious?
When Google and Apple announced their mobile augmented reality (AR) platforms (ARCore and ARKit, respectively) last summer, it sparked interest in 3D from a potentially huge group of designers and developers who have traditional 2D mobile application experience but are new to AR.
Accustomed to mature tools that formed (mostly) integrated workflows, these newcomers found a 3D workflow cobbled together using legacy tools and design patterns borrowed from gaming, video and film entertainment, and the architecture and engineering disciplines. This hodgepodge made the learning curve steep, development expensive, and cross-discipline collaboration nearly impossible.
To complicate matters, there were no resources – like how-tos, reference applications, and design guidelines – to help. Designers were forced to borrow liberally and make things up as they went along, resulting in an often awkward marriage between mobile gaming or architecture and traditional 2D UX patterns.
This improvisation was no surprise, given the newness of the platforms. It would take time for patterns to emerge, then be evaluated, classified and organized into something systematic.
The recent announcement of Google’s Augmented Reality Design Guidelines (GARDG) caught our attention. How far had design guidelines progressed? Did they reflect the priorities or address the needs of designers, both as expressed to us during interviews and as we have experienced while building a complex 3D application? Most importantly, would we recommend these guidelines to designers looking to get started in AR?
The short answer to the last question is "yes." But with a caveat: we'd recommend them only as a starting point, and only for a few critical concepts.
The Best of GARDG
Overall, the GARDG excels when it encourages designers to build applications that focus on motion and environmental engagement, and it draws deserved attention to the critical role that movement, specifically user movement, plays in AR. Mobile app developers must account for new device orientations driven by camera position. Users no longer simply cradle a device; they hold it more deliberately. Designers must therefore account for user fatigue, and for whether UI elements could cause users to cover the camera, especially in landscape mode. Different device types and weights also shape the experience: a tablet offers a larger screen but weighs more than a phone, potentially increasing user fatigue.
The GARDG also reminds designers of one of the most overlooked aspects of AR in our experience: end-user mobility and how it shapes interactions with immersive designs. Consider whether the user is on a plane, uses a wheelchair, or is unable to move or hold a device. So that everyone has access, developers should design for four user modes: seated with hands fixed; seated with hands moving; standing with hands fixed; and standing with hands moving (full range of motion). Intertwined with its guidance on mobility, the GARDG stresses awareness of the surrounding environment and insists that designs never sacrifice user safety. Making users back up blindly, or encouraging them to move forward while the device is pointed in a different direction, is strongly discouraged.
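The four mobility modes above lend themselves to a small model that an app can query before enabling motion-heavy interactions. This is a hypothetical sketch; the enum and helper names are ours, not part of the GARDG or any ARCore API:

```kotlin
// Hypothetical model of the four mobility modes; all names are our own.
enum class UserMode(val seated: Boolean, val handsMoving: Boolean) {
    SEATED_HANDS_FIXED(seated = true, handsMoving = false),
    SEATED_HANDS_MOVING(seated = true, handsMoving = true),
    STANDING_HANDS_FIXED(seated = false, handsMoving = false),
    STANDING_HANDS_MOVING(seated = false, handsMoving = true) // full range of motion
}

// Gate interactions on what the current mode allows: an experience that
// requires walking should offer a fallback for seated users, and one that
// requires gestures should offer a fallback when hands are fixed.
fun supportsWalking(mode: UserMode) = !mode.seated
fun supportsGestures(mode: UserMode) = mode.handsMoving
```

The point is less the code than the habit: treating mobility as an explicit input to the design, rather than assuming every user can stand and move freely.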
Where GARDG Needs to Work Harder
The GARDG struggles, however, to keep pace with the demands of designers and developers. And it reveals how both Google and Apple may well trail behind users in understanding the potential of their platforms for leveraging 3D to accomplish complex tasks or communicate complex experiences.
The current design guidelines make no allowance for complex mechanics, or really anything beyond simple object placement and sticker-like functionality. Yet designers are already attempting to build apps that include interactivity: object selection, conditional behaviors, branching scene flows or storyboards driven by user behavior, movement between scenes using teleportation, portals, and a wide range of physical gestures.
The GARDG is also missing multi-scene use cases, which automatically excludes many modes of interactivity and the conditional behaviors that lead to transitions, complex or more interesting changes of state, personalization, and ultimately a deeper, more immersive experience.
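Branching scene flows of this kind are naturally modeled as a small state machine, with transitions keyed by user events. A hypothetical sketch of the kind of pattern the guidelines could describe but currently do not (the scene and event names are invented for illustration):

```kotlin
// Hypothetical scene graph: each (scene, event) pair names the next scene.
val flow = mapOf(
    ("lobby" to "tap_portal") to "gallery",
    ("gallery" to "select_artwork") to "detail",
    ("detail" to "back") to "gallery"
)

// Advance the experience, staying put on events the current scene ignores.
fun nextScene(current: String, event: String): String =
    flow[current to event] ?: current
```

Even a toy model like this surfaces questions a single-scene guideline never has to answer: what state persists across a transition, and how the user is oriented when a new scene appears around them.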
Similarly, there is no discussion of animations (a common topic in our interviews with designers), either triggered or timed, or of the notion of a shared or collaborative environment. One of the most sought-after features is the ability for users to collaborate in the same scene, from the same location, on different devices or, in some cases, to share the same camera view. The latter is a case we frequently encounter when designers wish to let remote collaborators and clients see their AR prototypes in the environment for which they were intended and provide feedback, all in real time.
Even in the relatively tame realm of static object creation and placement, designers are already searching for the best way to design for complex behaviors, such as selecting objects that might be hidden by other objects.
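One workable pattern for selecting occluded objects is to raycast for every object along the tap, not just the nearest, and let repeated taps cycle deeper through the distance-sorted hit list. A minimal, engine-agnostic sketch; the types and the cycling rule are our assumptions, not a GARDG recommendation:

```kotlin
// Hypothetical hit record: an object the selection ray passed through.
data class Hit(val objectId: String, val distance: Float)

// Repeated taps on the same spot select successively deeper objects,
// wrapping back to the nearest one after the farthest.
fun selectNext(hits: List<Hit>, currentSelection: String?): String? {
    if (hits.isEmpty()) return null
    val sorted = hits.sortedBy { it.distance }
    // indexOfFirst returns -1 when nothing is selected yet, so the
    // first tap lands on the nearest object (index 0).
    val idx = sorted.indexOfFirst { it.objectId == currentSelection }
    return sorted[(idx + 1) % sorted.size].objectId
}
```

This mirrors how 2D tools handle stacked layers (click again to select underneath), which suggests some of these "missing" AR patterns may be adaptations rather than inventions.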
The guidelines also make assumptions about optimal object placement range (within the reach of the user) that we see no reason to codify at this time. What about throwing objects as a method of placement, or pointing at, grabbing, and interacting with objects at a distance?
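To illustrate why placement need not stop at arm's reach: a throw-to-place gesture can be resolved with basic projectile kinematics, solving for where the object's arc meets the floor. A hedged sketch under simplifying assumptions (flat floor, no air drag; the function and its parameters are ours):

```kotlin
import kotlin.math.sqrt

// Landing distance (metres) of an object released at height h0 with
// horizontal speed vx and vertical speed vy, under gravity g.
// Solves h0 + vy*t - 0.5*g*t^2 = 0 for the positive root, then x = vx * t.
fun landingDistance(h0: Double, vx: Double, vy: Double, g: Double = 9.81): Double {
    val t = (vy + sqrt(vy * vy + 2 * g * h0)) / g // time until the arc hits the floor
    return vx * t
}
```

A real implementation would intersect the trajectory with detected planes rather than an idealized floor, but the math is simple enough that "within reach" looks like a convention, not a constraint.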
Overall, the GARDG is a good starting point. Most importantly, it situates the user and their needs, in particular how they interact with the physical environment while using augmented reality, squarely at the center of the design guidelines. The overall tone of the guidelines focuses mainly on being mindful of the humans out in the world using your (simple) application, and that is okay.
But for designers in a hurry, this conservative approach by Google (and Apple) presents both an obstacle and an opportunity. The workflows currently available for these platforms increase friction, cost time and money, and ultimately limit the scope of projects. Designers are already developing clever workarounds, and companies like CameraIQ, Wikitude, and my own company, Torch 3D, are starting to develop tools.
This brings us to the opportunity. The patterns and best practices of mobile AR UX design are wide open for anyone to help define. What works well will win. Innovations by the pioneering designers of today will inform the design guidelines of tomorrow.
The mobile AR space may seem chaotic and disorganized right now. But in a matter of years, I believe we will look back fondly on these days as ones of incredible experimentation, freedom, and learning.
About the Author
Paul Reynolds has been a software developer and technology consultant since 1997. In 2013, after 10 years of creating video games, Paul joined Magic Leap, where he was a Senior Director overseeing content and SDK teams. In 2016, Paul moved to Portland, OR, where a year later he founded Torch 3D, a prototyping, content creation, and collaboration platform for the AR and VR cloud.
Opinions expressed by DZone contributors are their own.