Tag Archives: Depth Sensor

Intel Perceptual Computing Challenge Award

Interactive Gesture Camera


We just won $1000 – One Thousand US Dollars! Awesome! Thank you Intel!

From December 17th to February 20th, Intel held Phase 1 of its Perceptual Computing Challenge. I entered “Google Earth Controller”, an application demonstrating touch-free navigation of Google Earth using an Interactive Gesture Camera and the Intel Perceptual Computing SDK.

It lets you fly like Superman across Earth or other planets while staying seated in your favorite chair! See the video below.
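The contest build itself is not published here, but the core idea is simple: read the primary hand’s position from the gesture camera every frame and turn its offset from a neutral “hover” pose into Google Earth’s ordinary keyboard navigation. The sketch below only illustrates that mapping and is not the contest code: queryHandPosition() is a hypothetical stand-in for the hand-tracking call the Intel Perceptual Computing SDK provides, and the keystrokes are injected with the Win32 SendInput API.

```cpp
#include <windows.h>

// Hypothetical stand-in for the SDK's hand-tracking query: x/y/z are the
// primary hand's position in meters relative to the camera. In the real
// application this value would come from the Intel Perceptual Computing
// SDK's gesture module; the stub below only keeps the sketch compilable.
struct HandPos { float x, y, z; bool tracked; };
static HandPos queryHandPosition() { return HandPos{0.0f, 0.0f, 0.5f, false}; }

// Press or release a virtual key so that Google Earth, as the focused
// window, receives its normal keyboard navigation.
static void setKey(WORD vk, bool down) {
    INPUT in = {};
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = vk;
    in.ki.dwFlags = down ? 0 : KEYEVENTF_KEYUP;
    SendInput(1, &in, sizeof(in));
}

int main() {
    const float kDeadZone = 0.05f;  // ignore hand jitter within +/- 5 cm
    for (;;) {
        HandPos hand = queryHandPosition();
        if (hand.tracked) {
            // Hand left/right of the neutral pose pans the view left/right.
            setKey(VK_LEFT,  hand.x < -kDeadZone);
            setKey(VK_RIGHT, hand.x >  kDeadZone);
            // Hand above/below the neutral pose moves the view up/down.
            setKey(VK_UP,    hand.y >  kDeadZone);
            setKey(VK_DOWN,  hand.y < -kDeadZone);
        } else {
            // Tracking lost: release every key so the globe stops moving.
            setKey(VK_LEFT, false); setKey(VK_RIGHT, false);
            setKey(VK_UP, false);   setKey(VK_DOWN, false);
        }
        Sleep(33);  // poll at roughly 30 Hz
    }
}
```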

Yesterday afternoon the winners were announced, and Team “W” is happy that “Google Earth Controller” was awarded Second Place. While this was just a weekend warm-up hack, we are getting ready for Phase 2 of the challenge. Stay tuned!

Creating Depth Images with the Kinect Sensor

Microsoft’s Kinect sensor is a nice piece of hardware and remarkably cheap for a camera that also provides a 640×480 depth image. Soon after it appeared on the market in November 2010, it was reverse engineered so that it could be used from a computer without an Xbox 360.

Technology enthusiasts and roboticists all over the world picked up a Kinect to see what they could do with it. My personal motivation is to improve perception for my service robot. Shortly after the publication of the Kinect hack, PrimeSense released an official SDK: the OpenNI framework.
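The official SDK makes grabbing a depth frame quite compact. As a point of reference, the snippet below is a minimal sketch of reading a single frame with the OpenNI 1.x C++ wrapper (XnCppWrapper.h); error handling is reduced to the bare minimum, and a real setup would typically configure the device from an XML file and loop over frames instead of reading just one.

```cpp
#include <XnCppWrapper.h>
#include <cstdio>

int main() {
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    // Create a depth generator backed by the Kinect's depth stream.
    xn::DepthGenerator depth;
    if (depth.Create(context) != XN_STATUS_OK) return 1;
    context.StartGeneratingAll();

    // Block until one new depth frame is available, then read its metadata.
    context.WaitOneUpdateAll(depth);
    xn::DepthMetaData depthMD;
    depth.GetMetaData(depthMD);

    // Depth values are in millimeters; 0 means "no reading" for that pixel.
    unsigned center = depthMD(depthMD.XRes() / 2, depthMD.YRes() / 2);
    std::printf("%ux%u depth map, center pixel at %u mm\n",
                (unsigned)depthMD.XRes(), (unsigned)depthMD.YRes(), center);

    context.Shutdown();
    return 0;
}
```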

Below are the first depth images and user-tracking images that I took with the official SDK.