Posture Recognition: Natural Interaction Brought to Life

Learn how to recognize human postures in your app to implement AR-based interactions.

By Jackson Jiang · Nov. 21, 2022 · Tutorial

Augmented reality (AR) provides immersive interactions by blending real and virtual worlds, making human-machine interactions more interesting and convenient than ever. A common application of AR involves placing a virtual object in a real environment, where the user is free to control or interact with the virtual object. However, there is so much more AR can do beyond that.

To make interactions easier and more immersive, many mobile app developers now allow users to control their devices without having to touch the screen by identifying the body motions, hand gestures, and facial expressions of users in real time and using the identified information to trigger different events in the app. For example, in an AR somatosensory game, players can trigger an action by striking a pose, which spares them from frequently tapping keys on the control console. Likewise, when shooting an image or short video, the user can apply special effects to the image or video by striking specific poses without even having to touch the screen. In a trainer-guided health and fitness app, the system powered by AR can identify the user's real-time postures to determine whether they are doing the exercise correctly and guide them to exercise in the correct way. All of these would be impossible without AR.

How, then, can an app accurately identify the postures of users to power these real-time interactions?

If you are also considering developing an AR app that needs to identify user motions in real time to trigger a specific event, such as controlling the interaction interface on a device or recognizing and controlling game operations, integrating an SDK that provides the posture recognition capability is a no-brainer. Integrating this SDK will greatly streamline the development process and allow you to focus on improving the app design and crafting the best possible user experience.

HMS Core AR Engine does much of the heavy lifting for you. Its posture recognition capability accurately identifies different body postures of users in real time. After integrating this SDK, your app will be able to use both the front and rear cameras of the device to recognize six different postures from a single person in real time, and to output and display the recognition results in the app.

The SDK provides the core features that motion-sensing apps need, and enriches your AR apps with remote control and collaborative capabilities.

Here I will show you how to integrate AR Engine to implement these amazing features.

How to Develop

Requirements for the development environment:

  • JDK: 1.8.211 or later
  • Android Studio: 3.0 or later
  • minSdkVersion: 26 or later
  • targetSdkVersion: 29 (recommended)
  • compileSdkVersion: 29 (recommended)
  • Gradle version: 6.1.1 or later (recommended)

Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.

If you need to use multiple HMS Core kits, use the latest versions required for these kits.

Preparations

1. Before getting started with development, integrate the AR Engine SDK into your development environment via the Maven repository.

2. The procedure for configuring the Maven repository address in Android Studio depends on the Gradle plugin version: earlier than 7.0, exactly 7.0, or 7.1 and later. Configure it according to the Gradle plugin version you are using.

3. Take Gradle plugin 7.0 as an example:

Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.

Go to buildscript > repositories and configure the Maven repository address for the SDK.

 
buildscript {
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}


4. Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.

 
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}


5. Add the following build dependency in the dependencies block.

 
dependencies {
    implementation 'com.huawei.hms:arenginesdk:{version}'
}


App Development

1. Check whether AR Engine has been installed on the current device. If so, your app will be able to run properly. If not, you need to prompt the user to install AR Engine, for example, by redirecting the user to AppGallery. The sample code is as follows:

 
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
    // ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
    startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
    isRemindInstall = true;
}


2. Initialize an AR scene. AR Engine supports up to five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).

3. Call the ARBodyTrackingConfig API to initialize the human body tracking scene.

 
mArSession = new ARSession(context);
ARBodyTrackingConfig config = new ARBodyTrackingConfig(mArSession);
config.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
// Configure the session information.
mArSession.configure(config);

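The snippet above creates and configures the session but does not show how it follows the activity lifecycle. As a minimal sketch (not the demo's exact code), the session can be resumed and paused together with the activity; this assumes ARSession's standard resume()/pause()/stop() methods and a GLSurfaceView kept in a field named mSurfaceView, a name used here only for illustration:

@Override
protected void onResume() {
    super.onResume();
    if (mArSession != null) {
        try {
            // Resume the AR session before rendering resumes.
            mArSession.resume();
            mSurfaceView.onResume();
        } catch (Exception e) {
            // If the camera or AR Engine is unavailable, release the session.
            mArSession.stop();
            mArSession = null;
        }
    }
}

@Override
protected void onPause() {
    super.onPause();
    if (mArSession != null) {
        // Pause rendering first, then the session, so no frame uses a paused session.
        mSurfaceView.onPause();
        mArSession.pause();
    }
}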

4. Define the BodyRelatedDisplay interface, which is used to render data related to the main AR type.

 
public interface BodyRelatedDisplay {
    void init();
    void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix);
}


5. Initialize the BodyRenderManager class, which is used to render the body data obtained by AR Engine.

 
public class BodyRenderManager implements GLSurfaceView.Renderer {

    // Implement the onDrawFrame() method of GLSurfaceView.Renderer.
    public void onDrawFrame(GL10 unused) {
        ARFrame frame = mSession.update();
        ARCamera camera = frame.getCamera();
        // Obtain the projection matrix of the AR camera (example near/far clipping planes).
        float[] projectionMatrix = new float[16];
        camera.getProjectionMatrix(projectionMatrix, 0, 0.1f, 100.0f);
        // Obtain the set of all trackable objects of the specified type; pass ARBody.class to return the human body tracking result.
        Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
    }
}

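Before BodyRenderManager can draw anything, it has to be attached to a GLSurfaceView in the activity. A brief wiring sketch, assuming the layout exposes a GLSurfaceView with the ID surfaceView and that BodyRenderManager has a no-argument constructor (both assumptions made here for illustration):

// In the activity's onCreate() method.
GLSurfaceView surfaceView = findViewById(R.id.surfaceView);
// Request an OpenGL ES 2.0 context and a config with a depth buffer.
surfaceView.setEGLContextClientVersion(2);
surfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
// Attach the render manager so its onDrawFrame() callback runs every frame.
BodyRenderManager bodyRenderManager = new BodyRenderManager();
surfaceView.setRenderer(bodyRenderManager);
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
// Keep a reference for the onResume()/onPause() handling shown earlier.
mSurfaceView = surfaceView;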

6. Initialize BodySkeletonDisplay to obtain skeleton data and pass the data to OpenGL ES, which will render the data and display it on the device screen.

 
public class BodySkeletonDisplay implements BodyRelatedDisplay {
    // Methods used in this class are as follows:

    // Initialization method.
    public void init() {
    }

    // Use OpenGL to update and draw the node data.
    public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
        for (ARBody body : bodies) {
            if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
                float coordinate = 1.0f;
                if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
                    coordinate = DRAW_COORDINATE;
                }
                findValidSkeletonPoints(body);
                updateBodySkeleton();
                drawBodySkeleton(coordinate, projectionMatrix);
            }
        }
    }

    // Search for valid skeleton points.
    private void findValidSkeletonPoints(ARBody arBody) {
        int index = 0;
        int[] isExists;
        int validPointNum = 0;
        float[] points;
        float[] skeletonPoints;

        if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
            isExists = arBody.getSkeletonPointIsExist3D();
            points = new float[isExists.length * 3];
            skeletonPoints = arBody.getSkeletonPoint3D();
        } else {
            isExists = arBody.getSkeletonPointIsExist2D();
            points = new float[isExists.length * 3];
            skeletonPoints = arBody.getSkeletonPoint2D();
        }

        // Copy only the points that were actually detected in this frame.
        for (int i = 0; i < isExists.length; i++) {
            if (isExists[i] != 0) {
                points[index++] = skeletonPoints[3 * i];
                points[index++] = skeletonPoints[3 * i + 1];
                points[index++] = skeletonPoints[3 * i + 2];
                validPointNum++;
            }
        }
        mSkeletonPoints = FloatBuffer.wrap(points);
        mPointsNum = validPointNum;
    }
}

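With the valid skeleton points collected, the app can inspect them to decide whether the user has struck a pose of interest and then trigger an event, which is the core of posture-based interaction. The following is a rough, hypothetical check based only on the raw 3D points gathered above; the point indices HEAD_INDEX, LEFT_HAND_INDEX, and RIGHT_HAND_INDEX are placeholders and must be replaced with the skeleton point order defined by AR Engine:

// Hypothetical indices; look up the real skeleton point order in the AR Engine documentation.
private static final int HEAD_INDEX = 0;
private static final int LEFT_HAND_INDEX = 1;
private static final int RIGHT_HAND_INDEX = 2;

// Returns true if both hands are above the head, e.g., to trigger a "hands raised" event.
private boolean isHandsRaised(float[] skeletonPoints3D, int[] isExists3D) {
    if (isExists3D[HEAD_INDEX] == 0
            || isExists3D[LEFT_HAND_INDEX] == 0
            || isExists3D[RIGHT_HAND_INDEX] == 0) {
        // Skip the check if any required point was not detected in this frame.
        return false;
    }
    // Each point occupies three floats (x, y, z); y is the vertical axis in camera space.
    float headY = skeletonPoints3D[3 * HEAD_INDEX + 1];
    float leftHandY = skeletonPoints3D[3 * LEFT_HAND_INDEX + 1];
    float rightHandY = skeletonPoints3D[3 * RIGHT_HAND_INDEX + 1];
    return leftHandY > headY && rightHandY > headY;
}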

7. Obtain the skeleton point connection data and pass it to OpenGL ES, which will then render the data and display it on the device screen.

 
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {
    // Render the lines between body bones.
    public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
        for (ARBody body : bodies) {
            if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
                float coordinate = 1.0f;
                if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
                    coordinate = COORDINATE_SYSTEM_TYPE_3D_FLAG;
                }
                updateBodySkeletonLineData(body);
                drawSkeletonLine(coordinate, projectionMatrix);
            }
        }
    }
}

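Putting steps 4 to 7 together, BodyRenderManager can hold a list of BodyRelatedDisplay implementations and drive them from the renderer callbacks so that the skeleton points and the connecting lines are drawn in the same frame. The following condenses that pattern into one sketch rather than reproducing the full demo code; the setArSession() setter and the near/far clipping values are assumptions introduced here:

public class BodyRenderManager implements GLSurfaceView.Renderer {
    private final List<BodyRelatedDisplay> mBodyRelatedDisplays = new ArrayList<>();
    private ARSession mSession;

    public BodyRenderManager() {
        // Register every display that knows how to draw body-related data.
        mBodyRelatedDisplays.add(new BodySkeletonDisplay());
        mBodyRelatedDisplays.add(new BodySkeletonLineDisplay());
    }

    // Assumed helper: pass the configured session in from the activity.
    public void setArSession(ARSession session) {
        mSession = session;
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // Let each display create its OpenGL resources (shaders, buffers).
        for (BodyRelatedDisplay display : mBodyRelatedDisplays) {
            display.init();
        }
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        if (mSession == null) {
            return;
        }
        ARFrame frame = mSession.update();
        ARCamera camera = frame.getCamera();
        float[] projectionMatrix = new float[16];
        camera.getProjectionMatrix(projectionMatrix, 0, 0.1f, 100.0f);
        Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
        // Hand the same body data and projection matrix to every registered display.
        for (BodyRelatedDisplay display : mBodyRelatedDisplays) {
            display.onDrawFrame(bodies, projectionMatrix);
        }
    }
}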

Conclusion

By blending real and virtual worlds, AR gives users the tools they need to overlay creative effects on real environments and interact with these virtual elements. AR makes it easy to build whimsical and immersive interactions that enhance the user experience. From virtual try-on, gaming, and photo and video shooting to product launches, training, and home decoration, everything becomes easier and more engaging with AR.

If you are considering developing an AR app that interacts with users when they strike specific poses, such as jumping, showing their palms, or raising their hands, or even more complicated motions, your app will need to identify these motions accurately and in real time. The AR Engine SDK makes this possible: it equips your app to track user motions with a high degree of accuracy and then respond to them, easing the process of developing AR-powered apps.


Published at DZone with permission of Jackson Jiang. See the original article here.

Opinions expressed by DZone contributors are their own.
