The Fundamental Steps of Leap Motion Combination in Your Unity 3D Project
Working on a Unity 3D project? See how Leap Motion can give your VR environment an added interactive boost.
Leap Motion has been trying to make virtual reality more immersive than ever with a new hand-tracking system. The future of human-computer interaction is about to undergo a major shift. Ever since the start of computing, scientists and app developers have been working toward creating interactive 3D environments.
We live in very interesting times. We are growing closer to turning these science fiction ideas into science facts; in some respects, we already have. Virtual reality is a broad subject, and one of its major challenges is letting the user interact with the digital world. This article concentrates on that interaction, using a hardware sensor developed by Leap Motion.
Here, I will show how to integrate and use the Leap Motion hand-tracking sensor with Unity 3D.
What is the Leap Motion Controller?
The Leap Motion Controller tracks the natural movement of your hands. It follows all 10 fingers to within 1/100th of a millimeter, making it dramatically more sensitive than existing motion-control technology. That is how you can draw or paint mini masterpieces inside a one-inch cube.
Leap Motion Setup and Unity 3D Integration
First, you will need to download the SDK for Leap Motion. After you download the SDK, install the runtime. The SDK supports a variety of platforms and languages that you can look into.
Assuming you have already downloaded and installed Unity 5 on your machine, the next thing you will need to do is get the Leap Motion Unity Core Assets from the asset store.
After downloading and installing the Leap Motion Core Assets, we can begin developing our simple scene and learning how to communicate with the assets, extending them for our own purposes. Go ahead and create a new empty project and import the Leap Motion Core Assets into your project.
Leap Motion Core Assets
In the project window, your main focus should be the LeapMotion folder. The folders whose names include OVR are for using Leap Motion with a VR headset; we will cover those in a future article.
You should take time to study the structure and, more importantly, the content of each folder. One of the main core assets you will want to become familiar with is the HandController. This is the central building block that enables you to communicate with the Leap Motion device in your scene, and it serves as the anchor point for rendering your hands.
The HandController prefab has the Hand Controller script attached, which enables you to communicate with the device. The advantage of this design is that you can build your own hand models and use them with the controller for visual design, custom gestures, and more.
Another key property is the Hand Movement Scale vector. A word of warning here: read the documentation and specifications so you can choose the proper values for the particular application you are working on.

The Hand Movement Scale vector is used to increase the range of motion of the hands without increasing the size of the hand models. Placing the HandController object is essential. As stated, it is the anchor point for rendering the hands, so your camera should be positioned in the same area as the HandController.
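To illustrate the placement point, here is a minimal sketch that keeps the camera co-located with the HandController at runtime. The script name and the idea of assigning the reference in the Inspector are assumptions for illustration; in most projects you would simply position both objects together in the editor.

```csharp
using UnityEngine;

// Sketch: keep the camera anchored near the HandController so the
// rendered hands appear where the user expects them to be.
public class CameraAnchor : MonoBehaviour
{
    // Assign the HandController prefab instance in the Inspector.
    public Transform handController;

    void LateUpdate()
    {
        // Follow the HandController's position each frame.
        transform.position = handController.position;
    }
}
```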
Being able to see your hand movements in the scene is, by itself, a significant achievement, and the provided assets do the heavy lifting for you. To make things more interesting, though, you need to be able to interact with the environment and manipulate objects within the scene.
For simplicity, we will set up three cubes serving as our color palette. The first will be red, the second blue, and the third orange. (You can pick any colors you want, by the way.) We need to place the cubes so that interacting with them is easy for the user.
We can place the cubes side by side, within comfortable reach of the tracked hands.
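The palette can also be created from code rather than in the editor. The following sketch is illustrative: the positions, scale, and the orange color value are assumptions, not values from the original project.

```csharp
using UnityEngine;

// Sketch: spawn three small palette cubes (red, blue, orange) side by side.
public class PaletteSetup : MonoBehaviour
{
    void Start()
    {
        Color[] colors = { Color.red, Color.blue, new Color(1f, 0.5f, 0f) };

        for (int i = 0; i < colors.Length; i++)
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.localScale = Vector3.one * 0.05f;                  // finger-sized cubes
            cube.transform.position = new Vector3(-0.1f + i * 0.1f, 0.1f, 0.2f); // in a row, in reach
            cube.GetComponent<Renderer>().material.color = colors[i];
            cube.GetComponent<Collider>().isTrigger = true;                   // so trigger events fire
        }
    }
}
```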
The next step is to generate the script that will help us communicate with the Cube GameObjects:
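A minimal sketch of such a script might look like the following. The class name, the static `selectedColor` field, and the index-finger check by collider name are all illustrative assumptions; the exact collider names depend on which Leap Motion hand model you use.

```csharp
using UnityEngine;

// Sketch: attach to each cube. Palette cubes set isSelectable = true
// and define objectColor; paintable cubes leave isSelectable = false.
public class CubeInteraction : MonoBehaviour
{
    public bool isSelectable;                        // is this cube a palette color source?
    public Color objectColor;                        // the color this palette cube represents
    public static Color selectedColor = Color.white; // the currently selected color

    void OnTriggerEnter(Collider other)
    {
        // Only react to the index finger. The name check is an assumption;
        // adjust it to match the colliders of your hand model.
        if (!other.name.ToLower().Contains("index"))
            return;

        if (isSelectable)
        {
            // Palette cube: remember its color as the selected one and exit.
            selectedColor = objectColor;
            return;
        }

        // Paintable cube: apply the currently selected color to its material.
        GetComponent<Renderer>().material.color = selectedColor;
    }
}
```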
As you can see, the code is not very complex, but it does what we need. There are three public variables: one that identifies whether the object is selectable, one for the color the object represents, and one for the currently selected color.
The core of the logic happens in the OnTriggerEnter function. We first check whether the object that touched the Cube is the index finger, then whether the Cube is selectable. If it is, we set the selected color to the Cube's predefined color and exit the function.
The same function also applies the selected color to a non-selectable Cube GameObject. In that case, we get the Renderer component from the Cube and set its material color to the selected color.
How About Moving an Object?
This is great, but what if you want richer interaction with your 3D environment? Let's look at a simple example: picking up objects and moving them around in the scene.
To make that happen, we need to write more code! Extending our example, we will implement a script that lets you pick up the Cube object and move it around in the environment.
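One simple way to sketch this is to let the cube follow a hand collider while they are in contact. Everything here is an assumption for illustration: the class name, the palm-collider name check, and the deliberately simplified grab/release logic (a production version would detect an actual grab gesture rather than mere contact).

```csharp
using UnityEngine;

// Sketch: attach to a movable cube. While a palm collider stays in
// contact, the cube follows it; when contact ends, the cube is released.
public class CubeGrab : MonoBehaviour
{
    private Transform grabbingHand; // the hand currently "holding" the cube

    void OnTriggerStay(Collider other)
    {
        // Assumption: contact with the palm collider counts as a grab.
        // Adjust the name check to match your hand model's colliders.
        if (grabbingHand == null && other.name.ToLower().Contains("palm"))
            grabbingHand = other.transform;
    }

    void OnTriggerExit(Collider other)
    {
        // Release the cube when the grabbing hand moves away.
        if (grabbingHand == other.transform)
            grabbingHand = null;
    }

    void Update()
    {
        // Follow the hand while grabbed.
        if (grabbingHand != null)
            transform.position = grabbingHand.position;
    }
}
```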
Virtual reality has always been one of those topics in the industry that gets a lot of exposure at once, then cools down. This time around, things are a bit different. The hardware and software ecosystem that enables VR is becoming more democratized. The cost of the hardware, even though it's not cheap, is at a point that most developers can afford.
This allows the development community to build better VR experiences and entertainment, and integrating hand motions and gestures into VR apps is quickly becoming a must.