An Introduction to Leap Motion in Java

I just got my hands on a Leap Motion hand-tracking device, and it’s pretty neat. For those of you who don’t know what it is, it’s a small sensor that tracks hand and finger movement. The device is tiny, measuring about 3 inches wide, 1.2 inches deep, and 0.5 inches tall, and weighing roughly 0.1 lbs. The construction is simple: a rubberized bottom, an aluminum chassis, and a tinted plastic top. Inside, two IR cameras and three IR LEDs rapidly flood the space above the device with infrared light; the cameras capture the reflections, and the data is sent back to the computer. From those images the software can identify hand orientation and fingertip position and orientation, and it can track both hands and all ten fingers.

Application Development

Developing for the device is very well documented on the official website. You have a wide array of languages to choose from, ranging from C++ to browser-based JavaScript. Each language has a “Hello, World” example page that details both of the main methods for gathering what is referred to as “Frame” data. The API documentation itself is detailed and easily accessible.

Leap Motion has its own application distribution platform and test bed known as “Airspace.” It is installed along with the drivers from the website and provides a simple introduction to the capabilities of the Leap Motion device. The applications available include a tech demonstration, a sculpting application, and more.

The community support for the Leap Motion is very active. The community forums hosted on the official website see frequent activity, and the questions asked there almost always get an answer. External support in the form of blogs and tutorials, however, is few and far between. I found developers who mentioned that they had implemented Leap Motion in their games, but they never got into the details or their thought processes. One of the problems developers face today is figuring out how the device is actually best used.

Current Applications

The website promotes many uses of the device, ranging from shooting games to sculpting, drawing, and medical applications. Leap Motion Inc. has also signed contracts with ASUS and HP to roll out laptops and keyboards with the Leap Motion device built in. The most notable application of the Leap Motion is an MRI image viewer that would let a doctor scroll through MRI sequences without having to remove their gloves or touch anything.

Leap Motion for Java

I wanted to become familiar with what the device has to offer, test the capabilities of the hardware, and identify where the precision of the tracking breaks down. I decided to use the official Java API to set up the Leap Motion.

I wanted to make a simple 2D physics game that allowed you to perform a circle gesture to create spheres and then make a fist to smack the spheres around. The purpose was to exercise the capabilities of the device, especially distinguishing a closed fist from an open hand, as well as its individual finger-tracking capabilities.

I decided to use LibGDX as my Java library because it’s very easy to set up, it comes with a physics library, and the rendering pipeline is solid. The documentation on the Leap Motion website was straightforward and easy to follow. I do want to talk about the two common methods of gathering Frame data from the Leap device. First, a “Frame” is a motion snapshot from the device that gives you access to any hands in the viewport, fingers, gestures made, and so on. The API can also hand back previous snapshots up to a given history depth, so manually comparing the last few positions to find a trend or detect a velocity is easy (although built-in functions provide these as well). The first method is to continuously poll for frame data in a consistent update loop. Alternatively, you can create a class that inherits from the “Listener” class and register it with the controller; once that is done, the Listener’s onFrame callback is invoked whenever new frame information is available.
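
To make the second method concrete, here is a minimal sketch of the listener approach. The class and method names (Listener, onFrame, Controller.frame()) come straight from the Leap Java API; the printed output is just illustrative.

	import com.leapmotion.leap.Controller;
	import com.leapmotion.leap.Frame;
	import com.leapmotion.leap.Listener;

	public class FrameListener extends Listener {
		@Override
		public void onFrame(Controller controller) {
			// Called by the Leap service whenever new tracking data is ready
			Frame frame = controller.frame();
			System.out.println("Frame " + frame.id()
					+ ": hands=" + frame.hands().count()
					+ ", fingers=" + frame.fingers().count());
		}
	}

	// Registration: new Controller().addListener(new FrameListener());

One thing worth noting is that onFrame is invoked on the controller’s own thread rather than the render thread, which is part of why polling fits a game loop more naturally.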

I went with the first method of data gathering because it was easy to implement and because the update loop is already well defined in the game engine. I tracked the fingers the device could see by checking the frame’s finger list, and rendered a circle wherever a fingertip was being tracked.

		if (frame.fingers().count() > 0) {
			for (int g = 0; g < frame.fingers().count(); g++) {
				Finger finger = frame.fingers().get(g);
				// Raw tip position in the Leap's millimeter coordinate space
				Vector position = finger.tipPosition();
				// Convert to screen-space coordinates (see scaleLeapPosition below)
				Vector2 scalePosition = scaleLeapPosition(position);
				// Nudge the cursor so it lines up with the viewport
				float offsetX = -80.0f;
				float offsetY = -100.0f;
				sprite.setPosition(scalePosition.x + camera.viewportWidth / 2.0f + offsetX,
						scalePosition.y + offsetY);
				sprite.draw(batch);
			}
		}

I also polled the gesture list to check whether any gestures existed in a frame; if I found a circle gesture, I created a circle physics object in the scene.
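
One prerequisite the snippet below assumes: gesture recognition is opt-in, so the circle gesture has to be enabled on the controller first (this call is part of the Leap API):

	controller.enableGesture(Gesture.Type.TYPE_CIRCLE);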

		if (frame.gestures().count() > 0 && canGenerateShape) {
			for (int g = 0; g < frame.gestures().count(); g++) {
				boolean createdOne = false;
				switch (frame.gestures().get(g).type()) {
					case TYPE_CIRCLE:
						// Handle circle gestures
						CircleGesture gesture = new CircleGesture(frame.gestures().get(g));

						// Get the center of the gesture
						Vector circleCenter = gesture.center();

						// Scale to accommodate the current screen size
						Vector2 scalePosition = scaleLeapPosition(circleCenter);
						float radius = gesture.radius();
						float rawX = scalePosition.x;
						float rawY = scalePosition.y;

						float x = rawX + camera.viewportWidth / 2.0f;
						float y = rawY;

						// Create a physics body for the new sphere
						Body temp = PhysicsObjectFactory.CreateCircle(x / BOX_TO_WORLD, y / BOX_TO_WORLD, radius / BOX_TO_WORLD, BodyType.DynamicBody);
						Sprite newPile = new Sprite(pileTexture);
						newPile.setOrigin(newPile.getWidth() / 2.0f, newPile.getHeight() / 2.0f);
						newPile.setScale(radius / 128.0f * 2.0f);
						temp.setUserData(newPile);
						mArray.add(temp);

						canGenerateShape = false;

						// Flag so we can break out of the loop below
						createdOne = true;

						break;
				}

				if (createdOne) {
					break;
				}
			}
		}

		// Cooldown: wait generationTimeMax seconds (assuming a 60 FPS update)
		// before generating another shape
		if (!canGenerateShape) {
			tempTime += 1.0f / 60.0f;
			if (tempTime >= generationTimeMax) {
				tempTime = 0.0f;
				canGenerateShape = true;
			}
		}

Since the application can change resolution and I track positions explicitly, at larger resolutions the finger tracking was confined to a small portion of the screen. To fix this, I scaled the Leap finger position into coordinates that reflect the current resolution:

	// 480x320 is the base resolution the raw Leap coordinates were tuned against
	LEAPSCALEX = camera.viewportWidth / 480.0f;
	LEAPSCALEY = camera.viewportHeight / 320.0f;

	public Vector2 scaleLeapPosition(Vector leapPosition)
	{
		Vector2 newPosition = new Vector2();

		newPosition.x = leapPosition.getX() * LEAPSCALEX;
		newPosition.y = leapPosition.getY() * LEAPSCALEY;

		return newPosition;
	}
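
An alternative I didn’t use, sketched below under the assumption that the rest of my setup stays the same: the Leap API exposes an InteractionBox on each frame, and its normalizePoint method maps raw millimeter coordinates into a [0, 1] range that can be multiplied by the viewport size directly.

	public Vector2 normalizedLeapPosition(Frame frame, Vector leapPosition)
	{
		// normalizePoint clamps the result to the box when the second argument is true
		Vector normalized = frame.interactionBox().normalizePoint(leapPosition, true);
		return new Vector2(normalized.getX() * camera.viewportWidth,
				normalized.getY() * camera.viewportHeight);
	}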

Capabilities

The finger tracking felt a little wonky at times, and the finger circles would occasionally flicker on the screen. I also felt the tracking itself was inconsistent at the extremes of the field of view. I think the device would benefit from wider-angle tracking cameras: a wider field of view would allow a broader range of applications, and there would be fewer problems with finicky tracking at the edges.
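
One mitigation for the flicker that I didn’t get around to trying is a simple exponential smoothing filter on the reported tip positions. A minimal sketch; the smoothing factor of 0.5 is an arbitrary starting point:

	private Vector2 smoothed = null;
	private static final float ALPHA = 0.5f; // 0 = frozen, 1 = no smoothing

	public Vector2 smoothPosition(Vector2 raw)
	{
		if (smoothed == null) {
			smoothed = new Vector2(raw);
		} else {
			// Exponential moving average: new = alpha * raw + (1 - alpha) * old
			smoothed.x = ALPHA * raw.x + (1.0f - ALPHA) * smoothed.x;
			smoothed.y = ALPHA * raw.y + (1.0f - ALPHA) * smoothed.y;
		}
		return smoothed;
	}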

Gesture recognition capabilities are limited. If you need to detect gestures that are not circles or swipes, you need to look into alternative or external libraries that can parse a list of points. I had originally planned on implementing a square-recognition algorithm, but those algorithms are fairly involved, and I wanted to focus on the out-of-the-box options that the Leap API has to offer.
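
If I had gone down that road, the starting point would have been feeding recent tip positions into an external point-based recognizer. A hedged sketch using the frame-history access mentioned earlier (controller.frame(i) returns the i-th most recent frame); the ten-frame window is arbitrary:

	// Collect a short trail of fingertip positions for an external recognizer
	List<Vector2> trail = new ArrayList<Vector2>();
	for (int i = 9; i >= 0; i--) {
		Frame past = controller.frame(i);
		if (past.fingers().count() > 0) {
			trail.add(scaleLeapPosition(past.fingers().get(0).tipPosition()));
		}
	}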

Hand-position recognition, specifically a fist, is not something that is explicitly implemented, but there are ways to detect when a fist is present. You have access to the detected hands and their palm positions. When checking for a fist, look for active hands that do not have any fingers in their individual finger lists. I did experience problems with this approach, because the device would at times “see” a finger that wasn’t there.
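
In code, that check is short. A minimal sketch; the three-frame debounce is an arbitrary guard against the phantom-finger problem mentioned above:

	private int fistFrames = 0;

	public boolean isFistPresent(Frame frame)
	{
		boolean candidate = false;
		for (Hand hand : frame.hands()) {
			// A tracked hand with no visible fingers is our fist candidate
			if (hand.fingers().count() == 0) {
				candidate = true;
			}
		}
		// Require a few consecutive frames to filter out phantom fingers
		fistFrames = candidate ? fistFrames + 1 : 0;
		return fistFrames >= 3;
	}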

Competitors and Similar Devices

There are a few other devices that compete with the Leap Motion, and some may be better at what Leap has set out to do. One notable competitor is Haptix, which claims it can turn any surface into a touch surface. Its primary use is to track objects and hands in space above a solid surface. Its most powerful and attractive feature is that it works in any orientation and does not have to sit on a flat surface facing upwards, so you are not limited in your development environments.

Of course, there is also the Kinect that ships with the Xbox One, a tracking device that can track entire bodies, hands, fingers, heart rate, and more. The Kinect is a fully developed tracking device used primarily for body tracking, and it shares a limitation with the Leap Motion: it must be mounted somewhere before use.

Conclusion

I think the technology is not where it needs to be yet, but it is definitely something society needs to become more comfortable with, given what the future has to offer. To be clear, the Leap is NOT a bad device, and the impression it leaves on the public is generally a positive one. With more precise tracking algorithms or hardware, it would be better still. On a personal note, and in recognition of the importance of tactile feedback, it is a very strange device to use simply because your hands are in the air the entire time and you are never touching anything, so that extra sense is never stimulated. Overall, it’s a really cool device, and I think with enough community support a lot of really great applications could come from it.
