ARKit Development Case Studies: Augmented Reality in iOS Applications
Take a look at just a few of the possible applications of augmented reality in iOS apps and the solutions making them possible.
The rapid rise of Augmented Reality from being just one more tech industry buzzword to a part of everyday life has been impressive. Millions of mobile device users now have access to AR technologies thanks to the advent of tools like ARKit for iOS devices. In a variety of real-life situations, we've been able to apply a number of non-standard AR solutions to good effect. It's also important, however, to be aware of the technological constraints that must be overcome to achieve our clients' goals.
In many ways, augmented reality applications are similar to how visual effects are utilized in the film industry. The big difference, though, is that AR performances are done on mobile devices in real time.
The augmentation process has three steps. First, an existing scene must be tracked. Second, it has to be transformed into something a machine can understand. Finally, computed 3D overlays have to be rendered onto the scene for the viewer.
Data is fed into ARKit from cameras, accelerometers and gyroscopes. To be properly processed, a scene must be well lit and properly textured. There must also be a flat surface for visual odometry and a static scene for motion odometry. Depending on how well the environment meets these conditions, ARKit reports one of three tracking states: not available, limited or normal.
It's generally possible for us to place an object on or close to a surface in the real world. With ARKit 1.5, recognition of vertical surfaces and images has been made possible.
A number of standardized solutions are available, and they can be rapidly built into any app. Our interest, however, is in creating brand new experiences. The App Store is currently littered with AR apps, but we want to take a big-picture look at what augmented reality can accomplish. To be noticed in the market, it's critical to create something new. Additional attention goes to those who can accomplish what was previously claimed to be impossible.
Case: AR in Non-Static Scenes
Following the announcement of ARKit, we initiated development of a product that allows vehicle passengers to see AR objects applied to the outside world while looking out from moving vehicles. The official documentation for ARKit said it was only capable of being applied to static scenes. Consequently, it was expected to match visual and motion data unreliably if the user was moving at the same time, as would occur in a car or bus.
There is actually a very simple solution to this problem. Geographic coordinates beyond the vehicle can be employed to create a custom tracking solution that makes use of periodic local updates. By adding compass data and calculating intermediate positions, we were able to place objects in the scene accurately.
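The core of such a scheme is converting geographic coordinates into a local metric space and interpolating between periodic GPS fixes. The Python sketch below illustrates the math only; the function names and the equirectangular approximation are our illustration, not the product's actual code:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def geo_to_local(lat, lon, origin_lat, origin_lon):
    """Convert a geographic coordinate to metres east/north of an origin.

    Equirectangular approximation -- accurate enough over the few hundred
    metres visible from a vehicle, which is all this use case needs.
    """
    east = math.radians(lon - origin_lon) * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    north = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return east, north

def interpolate(p0, p1, t):
    """Estimate the vehicle's position between two periodic GPS fixes."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

# One thousandth of a degree of latitude is roughly 111 metres:
print(geo_to_local(55.001, 37.0, 55.0, 37.0))
# Halfway between two fixes:
print(interpolate((0.0, 0.0), (10.0, 20.0), 0.5))  # (5.0, 10.0)
```

With positions in local metres, AR objects can be anchored at fixed coordinates while the interpolated vehicle position moves past them.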
A new challenge emerged: vibration. Barely noticeable jitter of virtual objects occurred whenever the real image was jostled. This posed a variety of technical issues, as common image stabilization solutions would not necessarily provide high enough accuracy. For example, naively averaging headings of 360 and 2 degrees yields 181 degrees, an entirely different direction from the correct intermediate heading of 1 degree.
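The wrap-around problem is easy to demonstrate: a plain arithmetic mean breaks at the 0/360 boundary, while averaging on the unit circle does not. A minimal Python illustration (not the production code):

```python
import math

def naive_mean(a_deg, b_deg):
    """Arithmetic mean of two headings -- breaks at the 0/360 wrap-around."""
    return (a_deg + b_deg) / 2

def circular_mean(a_deg, b_deg):
    """Average two headings on the unit circle, handling wrap-around."""
    a, b = math.radians(a_deg), math.radians(b_deg)
    s = (math.sin(a) + math.sin(b)) / 2
    c = (math.cos(a) + math.cos(b)) / 2
    return math.degrees(math.atan2(s, c)) % 360

print(naive_mean(360, 2))     # 181.0 -- points the opposite way
print(circular_mean(360, 2))  # ~1.0 -- the heading we actually want
```

Any smoothing or averaging of compass data has to be done in this circular form, or jitter near north produces wild swings.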
We obviously don't want the image to lag when the user quickly moves the phone. Our solution was an adaptive algorithm applied to the input data. Small vibrations, such as hand shake, lend themselves to a smoothing algorithm. Major movements change the view angle rapidly, and the solution is to recognize the difference and adjust the scene without smoothing.
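This idea can be sketched as a threshold-based filter: heading changes below a jitter threshold are smoothed heavily, while larger deliberate movements pass through unfiltered. The threshold and smoothing factor below are illustrative values, not parameters from the actual app:

```python
def adaptive_filter(prev_deg, new_deg, jitter_threshold=5.0, alpha=0.1):
    """Smooth small vibrations, but follow large deliberate movements instantly.

    jitter_threshold and alpha are illustrative tuning values.
    """
    delta = new_deg - prev_deg
    # Wrap the difference into [-180, 180] so 359 -> 1 counts as +2 degrees.
    delta = (delta + 180) % 360 - 180
    if abs(delta) > jitter_threshold:
        return new_deg % 360                  # large motion: snap, no lag
    return (prev_deg + alpha * delta) % 360   # small motion: heavy smoothing

print(adaptive_filter(90.0, 91.0))   # 90.1  -- jitter is damped
print(adaptive_filter(90.0, 140.0))  # 140.0 -- a fast pan passes through
```

In practice the threshold has to be tuned against real sensor traces, and the same wrap-aware logic applies per rotation axis.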
ARKit has a built-in preference for real-world scale. For the majority of applications, this readily simplifies placing objects on surfaces. Perspective and distance, however, start creating trouble quickly. Placing a sign above a building that's far away, for example, will lead to the sign being barely noticeable. Signs either have to be scaled proportionally by distance, or they have to be transformed based on coordinates that are projected on an unseen sphere.
We used the second solution. The coordinate system is different from the user's perspective, but it renders the same in AR. This prevents large numbers of objects mapped onto a scene from appearing too close to each other.
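Projecting onto an unseen sphere amounts to re-placing each distant object at a fixed radius from the camera along its true direction, so bearing and elevation are preserved while apparent size stays constant. A Python sketch of the transform (the 50 m radius is an arbitrary illustrative choice):

```python
import math

def project_to_sphere(obj_pos, camera_pos, radius=50.0):
    """Re-place a far-away object on a fixed-radius sphere around the camera.

    The direction from the user is preserved, so the object renders at the
    same bearing and elevation, but at a constant distance -- and therefore
    a constant apparent size. radius is an illustrative choice.
    """
    dx = [o - c for o, c in zip(obj_pos, camera_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    scale = radius / dist
    return tuple(c + d * scale for c, d in zip(camera_pos, dx))

# A sign 2 km away is re-placed 50 m away, along the same line of sight.
print(project_to_sphere((2000.0, 30.0, 0.0), (0.0, 0.0, 0.0)))
```

The projection has to be recomputed as the camera moves, since the sphere is centred on the user.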
Case: AR and Indoor Navigation
Mobile users have a strong expectation of being able to pull out their devices and instantly have access to maps and routing information, even in a foreign city. This breaks down, however, when people are inside buildings like airport terminals or shopping complexes. Google and Apple have created detailed maps of many of these types of sites in order to accommodate demand.
One of the biggest problems that emerges, though, is that GPS tends to operate poorly in indoor spaces. It also struggles to derive accurate information about users who are at different levels of the same building.
One possible solution is iBeacon, a small Bluetooth device that periodically broadcasts small packets of data. Beacons need to be positioned at multiple locations to allow devices to triangulate their positions within a large, multi-story structure.
A number of problems accompany this method, though. First, beacons have operating ranges limited to between 10 and 100 meters, depending on the model in use. Second, signals can be blocked by everything from walls to people. Third, a device must be able to obtain pings from at least three active beacons to triangulate accurately. In many settings, it would be prohibitively expensive to deploy enough beacons to provide precise triangulation for indoor navigation.
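For reference, here is what the triangulation step looks like in the ideal case. This 2D Python sketch solves for a position from three known beacon positions and measured ranges by subtracting the circle equations pairwise; real signals are noisy and obstructed, which is precisely why dense beacon coverage is needed:

```python
def trilaterate(b1, b2, b3, r1, r2, r3):
    """Locate a device from distances to three beacons (2D, noise-free sketch).

    Subtracting the circle equations pairwise gives two linear equations
    in (x, y), solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a1, c1 = 2 * (x2 - x1), r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d1 = 2 * (y2 - y1)
    a2, c2 = 2 * (x3 - x2), r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    d2 = 2 * (y3 - y2)
    det = a1 * d2 - a2 * d1
    return ((c1 * d2 - c2 * d1) / det, (a1 * c2 - a2 * c1) / det)

# A device at (2, 3), given exact ranges to beacons at (0,0), (10,0), (0,10):
print(trilaterate((0, 0), (10, 0), (0, 10), 13**0.5, 73**0.5, 53**0.5))
```

With noisy ranges, a least-squares fit over more than three beacons replaces this exact solve.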
The arrival of ARKit 1.5 opens up new opportunities in this sector. Through the integration of machine vision technologies, we were now able to take a different approach. By placing marks with location-based metadata in an area, we could use floors and walls to provide accurate reference information. The device then only needed to scan a single mark to obtain accurate 3D coordinates for the user. As an added bonus, printed images, signs and panels could all be incorporated as marks.
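Recovering the user's absolute position from one scanned mark is a small coordinate transform: the mark's metadata supplies its map position and orientation, and image tracking supplies the device's offset relative to the mark. A simplified 2D Python sketch with illustrative names, not the app's real data model:

```python
import math

def locate_user(mark_map_xy, mark_heading_deg, offset_right, offset_forward):
    """Turn one scanned mark into absolute map coordinates for the user.

    mark_map_xy / mark_heading_deg come from the mark's metadata;
    (offset_right, offset_forward) is the device's position relative to
    the mark, as reported by image tracking. All names are illustrative.
    """
    h = math.radians(mark_heading_deg)
    # Rotate the mark-relative offset into map coordinates.
    dx = offset_right * math.cos(h) - offset_forward * math.sin(h)
    dy = offset_right * math.sin(h) + offset_forward * math.cos(h)
    return (mark_map_xy[0] + dx, mark_map_xy[1] + dy)

# A mark at map position (120, 45) facing 90 degrees; user 3 m in front of it.
print(locate_user((120.0, 45.0), 90.0, 0.0, 3.0))
```

From that point on, ordinary visual-inertial tracking keeps the estimate current until the next mark is scanned.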
That provided us an excellent starting point, but we also needed to supply routing information. 3D overlays, though, presented a unique perceptual challenge for users. Humans expect routes that pass behind barriers such as walls and corners to be occluded, but the software, unaware of the building's geometry, renders the entire route.
Three possible solutions were worth considering.
The first was to utilize a compass instead of mapping out the entire route. This, however, would break norms that are expected by users.
The second was to clip the route past a certain distance from the user. Its chief virtue was speed of implementation.
The third was to build a model of the structure with a low polygon count using the existing maps—that was the solution we opted to pursue.
The third approach had the advantage of being both high-quality and cost-effective. When looking down a long corridor, the route would be visible the whole way to the next corner, creating a sense of naturalness in large areas. If a corner was present, the route would be visibly clipped. Users would have an easy time understanding it, and they would be able to use it from many vantage points in expansive spaces, such as airports.
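The effect of clipping a route against building geometry can be illustrated in 2D: route points stay visible until the line of sight from the user crosses a wall. This Python sketch uses a standard segment-intersection test and is only a flat approximation of what the low-poly occlusion model does in the renderer:

```python
def _ccw(a, b, c):
    """True if the points a, b, c are in counter-clockwise order."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def crosses(p, q, wall):
    """Standard test: does segment p-q intersect the wall segment?"""
    a, b = wall
    return _ccw(p, a, b) != _ccw(q, a, b) and _ccw(p, q, a) != _ccw(p, q, b)

def visible_route(user, route_points, walls):
    """Keep route points until the line of sight from the user hits a wall."""
    shown = []
    for pt in route_points:
        if any(crosses(user, pt, w) for w in walls):
            break  # everything past the corner is hidden
        shown.append(pt)
    return shown

# A corridor with one wall: the leg of the route around the corner is clipped.
walls = [((3.0, 2.0), (15.0, 2.0))]
route = [(5.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
print(visible_route((0.0, 0.0), route, walls))  # [(5.0, 0.0), (10.0, 0.0)]
```

In the shipped approach this clipping is not computed point by point; the low-poly building model simply occludes the route geometry in the depth buffer.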
Case: Faces and AR
The TrueDepth camera is presently designed exclusively for use on the iPhone X. The underlying principles are still the same with AR, as scenes continue to be tracked, understood and rendered. Once ARKit has done the job of processing incoming data, we have a great deal of information to work with. Tracking data, as always, is available. We also have access to facial geometry and a corresponding face mesh to work with in the AR environment. A number of blend shapes are available, allowing us to calculate various facial parameters, such as how open the user's eyes are, how much they've raised their eyebrows and whether their mouth is moving.
This type of AR has already been in use for facial masking for a while. Anyone familiar with the classic Snapchat filter or the animoji craze will recognize the common consumer-grade applications. We're driven, though, to look at applications that are more sophisticated.
Here is an example. Sportspeople benefit from seeing demonstrable progress in order to sustain motivation. An individual trying to lose weight or gain muscle mass may have a hard time perceiving the results they've achieved. Fluctuations of several kilograms as a consequence of recent food consumption are completely normal. Something as simple as clothing may make information inaccurate.
One of the simplest ways to monitor these changes in fitness is to track changes to the face. As a person becomes more fit, the face itself tends to become sharper. Utilizing the TrueDepth camera, an app can monitor and log changes in order to deliver motivational data.
The role that ARKit can play in creating unique and compelling products is huge, especially when the skills of a capable development team are applied. Augmented reality provides a number of functional solutions for a variety of business needs, and it will become more commonplace in the near future. We're thrilled by the opportunities within the market, and our team looks forward to continuing to work with clients to move the limits of what can be achieved using AR.
Published at DZone with permission of Andrew Makarov. See the original article here.