Making a robot is the dream of many. An important part of any robot is its eyes: its perception of the outside world. Many robots carry not only an ordinary camera but also sensors that help them perceive their surroundings and measure the distance to objects. The Asus Xtion Live depth camera is well suited for this purpose. It provides a regular RGB image as well as a depth image, an image in which each pixel stores the depth of that point in the scene. In other words, every pixel is a distance measurement: a value of 2768, for example, means the sensor sees that point about 2.77 meters away.
This information can be used, for example, to calculate the distance to obstacles and plan the robot's route. An example of a depth image is shown above (by Jorma Palmén, own work, CC BY-SA 4.0).
The Asus Xtion Live depth camera was used in one of my projects. There, the camera was connected to a Raspberry Pi 3 single-board computer, and using the depth image I wrote code to detect a box, measure the distance to it, and estimate its dimensions for further manipulation. Before any of that, however, I had to work hard on the setup needed just to get a depth image. This article is a detailed tutorial on that setup.
The most difficult part of integrating the depth camera with the Raspberry Pi was installing the OpenNI (Open Natural Interaction) framework. This is an open-source SDK widely used by developers of 3D sensing applications. The binaries and documentation of the OpenNI framework are available online, as are its open-source APIs. These APIs target depth-sensing cameras and are powerful tools that support not just reading and displaying depth images, but also built-in features like hand gesture recognition, body motion tracking, and even voice command recognition.
Installation of OpenNI2 starts with the dependencies required to build the framework and its documentation:
sudo apt-get install -y g++ python libusb-1.0-0-dev freeglut3-dev doxygen graphviz
Next, download the source code (the 2.1-Beta release) from GitHub and unpack it:
wget https://github.com/OpenNI/OpenNI2/archive/2.1-Beta.tar.gz
tar -xzf 2.1-Beta.tar.gz && mv OpenNI2-2.1-Beta OpenNI2
Two files then need to be modified.
The first is OpenNI2/ThirdParty/PSCommon/BuildSystem/Platform.Arm. The stock flags target a Cortex-A8 with the softfp ABI, which doesn't match the Raspberry Pi or Raspbian's hard-float userland. Replace this line:
CFLAGS += -march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=softfp #-mcpu=cortex-a8
with the following:
CFLAGS += -mtune=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard
The second file is OpenNI2/Redist/Redist.py. Find this line:
compilation_cmd = "make -j" + calc_jobs_number() + " CFG=" + configuration + " PLATFORM=" + platform + " > " + outfile + " 2>&1"
Comment it out and replace it with a copy that forces a single make job, since parallel compilation can exhaust the Pi's limited memory:
#compilation_cmd = "make -j" + calc_jobs_number() + " CFG=" + configuration + " PLATFORM=" + platform + " > " + outfile + " 2>&1"
compilation_cmd = "make -j1" + " CFG=" + configuration + " PLATFORM=" + platform + " > " + outfile + " 2>&1"
Building OpenNI2 (run from inside the OpenNI2 directory):
PLATFORM=Arm make
When make is done, run:
cd Redist && python ReleaseVersion.py arm
We now have a tar archive of the OpenNI2 installer. Relative to the OpenNI2 root, it is located in
./Redist/Final/
Move the archive to your home folder:
mv Final/OpenNI-Linux-Arm-2.1.0.tar.bz2 ~
Extract the archive and install OpenNI2:
cd ~ && tar -xvf OpenNI-Linux-Arm-2.1.0.tar.bz2 && cd OpenNI-2.1.0-arm && sudo sh install.sh
Make an example:
cd Samples/SimpleRead && make
Start the example:
cd ../Bin && ./SimpleRead
SimpleRead output (each line shows a frame timestamp and the depth of the frame's middle pixel, in millimeters):
[00066738] 2768
[00100107] 2768
[00133477] 2768
[00166846] 2769
[00200215] 2769
[00233584] 2767
[00266954] 2767
……
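To see what such a sample boils down to, here is a minimal sketch of the same kind of read loop written against the OpenNI2 C++ API. This is my own illustration rather than the sample's exact source, and it assumes the headers and libOpenNI2 installed by install.sh:

#include <cstdio>
#include <OpenNI.h>

using namespace openni;

int main()
{
    // Start the OpenNI2 runtime and open the first attached device.
    if (OpenNI::initialize() != STATUS_OK) {
        printf("Initialize failed: %s\n", OpenNI::getExtendedError());
        return 1;
    }
    Device device;
    if (device.open(ANY_DEVICE) != STATUS_OK) {
        printf("Device open failed: %s\n", OpenNI::getExtendedError());
        return 1;
    }

    // Create and start a stream on the device's depth sensor.
    VideoStream depth;
    if (depth.create(device, SENSOR_DEPTH) != STATUS_OK || depth.start() != STATUS_OK) {
        printf("Depth stream failed: %s\n", OpenNI::getExtendedError());
        return 1;
    }

    VideoFrameRef frame;
    for (int i = 0; i < 100; ++i) {
        if (depth.readFrame(&frame) != STATUS_OK)
            break;
        // Depth pixels are 16-bit distances in millimeters.
        const DepthPixel* pixels = (const DepthPixel*)frame.getData();
        int middle = frame.getHeight() / 2 * frame.getWidth() + frame.getWidth() / 2;
        printf("[%08llu] %8d\n", (unsigned long long)frame.getTimestamp(), pixels[middle]);
    }

    depth.stop();
    depth.destroy();
    device.close();
    OpenNI::shutdown();
    return 0;
}

Sourcing the OpenNIDevEnvironment file created by the installer sets OPENNI2_INCLUDE and OPENNI2_REDIST, so something like g++ read_depth.cpp -I$OPENNI2_INCLUDE -L$OPENNI2_REDIST -lOpenNI2 should build it.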
The MultipleStreamRead program outputs both RGB and depth information:
./MultipleStreamRead
MultipleStreamRead output (depth readings in decimal, RGB values in hex):
[00066738] 2768
[00100107] 2768
[00133477] 0x445620
[00166846] 2769
[00200215] 2769
[00212243] 0x053a46
[00233584] 2767
[00266954] 2767
[00275456] 0x272e30
[00233584] 2768
……
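Under the hood, the pattern is to start a depth stream and a color stream and poll both with waitForAnyStream(). The sketch below is again my own illustration of that pattern, with error handling trimmed for brevity (see the previous sketch):

#include <cstdio>
#include <OpenNI.h>

using namespace openni;

int main()
{
    OpenNI::initialize();
    Device device;
    device.open(ANY_DEVICE);

    // One stream per sensor, both running at once.
    VideoStream depth, color;
    depth.create(device, SENSOR_DEPTH);
    color.create(device, SENSOR_COLOR);
    depth.start();
    color.start();

    VideoStream* streams[] = { &depth, &color };
    VideoFrameRef frame;
    for (int i = 0; i < 100; ++i) {
        int ready;
        // Block until either sensor delivers a new frame, then read it.
        if (OpenNI::waitForAnyStream(streams, 2, &ready) != STATUS_OK)
            break;
        streams[ready]->readFrame(&frame);
        int middle = frame.getHeight() / 2 * frame.getWidth() + frame.getWidth() / 2;
        if (ready == 0) {
            // Depth frame: 16-bit millimeter distances.
            const DepthPixel* d = (const DepthPixel*)frame.getData();
            printf("[%08llu] %8d\n", (unsigned long long)frame.getTimestamp(), d[middle]);
        } else {
            // Color frame: print the middle pixel as packed RGB hex.
            const RGB888Pixel* c = (const RGB888Pixel*)frame.getData();
            printf("[%08llu] 0x%02x%02x%02x\n", (unsigned long long)frame.getTimestamp(),
                   c[middle].r, c[middle].g, c[middle].b);
        }
    }

    OpenNI::shutdown();
    return 0;
}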
The SimpleViewer program displays the depth stream as live video:
Then I installed OpenCV to visualize depth images as a color gradient and to make further work more convenient. There are already plenty of installation guides for it, so I see no reason to repeat them here.
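As a hypothetical starting point for that visualization (my own sketch, not the project's actual code), a depth frame obtained as in the sketches above can be wrapped in a cv::Mat, normalized to 8 bits, and color-mapped:

#include <OpenNI.h>
#include <opencv2/opencv.hpp>

// Convert a 16-bit OpenNI2 depth frame into a color-mapped 8-bit image.
cv::Mat depthToGradient(const openni::VideoFrameRef& frame)
{
    // Wrap the frame buffer without copying: one channel of 16-bit millimeters.
    cv::Mat depth16(frame.getHeight(), frame.getWidth(), CV_16UC1,
                    (void*)frame.getData());

    // Stretch the measured range to 0..255 for display.
    cv::Mat depth8;
    cv::normalize(depth16, depth8, 0, 255, cv::NORM_MINMAX, CV_8UC1);

    // Apply a color map so near versus far reads as a gradient.
    cv::Mat colored;
    cv::applyColorMap(depth8, colored, cv::COLORMAP_JET);
    return colored;
}

Calling cv::imshow("depth", depthToGradient(frame)) inside the read loop, followed by cv::waitKey(1), gives a live gradient view similar to SimpleViewer.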
Further possibilities are limited only by your imagination. There are already several open-source projects built with this depth camera. Here are some ideas to start from or reuse in your own projects: a full-body tracker, a hand and head gesture tracker, a handheld 3D scanner, a stair-step detector, and even a heart rate detector.
I wish you good luck in your new endeavor!