Running YOLO on ODROID: YODROID

Written by TomPJacobs | Published 2017/11/07
Tech Story Tags: ubuntu | tensorflow | machine-learning | odroid | ai


YOLO is a neural network model that is able to recognise everyday objects very quickly from images. There’s also TinyYOLO which runs on mobile devices pretty well. This guide tells you how to get TinyYOLO installed and running on your ODROID XU4. To follow along, log into your ODROID, and run these commands:

Step 1: Install TensorFlow

Update your system

First, we make sure everything is up to date.

sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get dist-upgrade -y
sudo reboot

Get some swap

Bazel won’t build without this on the ODROID XU4. Pop in a blank 8GB USB drive, which will get erased, and run:

sudo blkid

Check the device name, usually /dev/sda1, and with that name, run:

sudo mkswap /dev/sda1
sudo swapon /dev/sda1
sudo swapon
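swapon prints nothing when it succeeds, so it's worth confirming the kernel actually picked the drive up by reading /proc/swaps (or running swapon -s). A quick sketch of parsing that output, with an illustrative sample string standing in for the real file:

```python
def parse_swaps(text):
    """Parse /proc/swaps-style output into (device, size_kb) pairs."""
    entries = []
    for line in text.splitlines()[1:]:  # first line is the header row
        fields = line.split()
        if fields:
            entries.append((fields[0], int(fields[2])))
    return entries

# Sample output in the format /proc/swaps uses (values are illustrative).
sample = (
    "Filename    Type       Size     Used  Priority\n"
    "/dev/sda1   partition  7812092  0     -1\n"
)
print(parse_swaps(sample))  # [('/dev/sda1', 7812092)]
```

If the device shows up with a nonzero Size, Bazel has its swap.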

Install the requirements

We’ll need real Oracle Java instead of OpenJDK. I tried OpenJDK and built Bazel with it, but it failed to SHA-1 hash downloads, and so was useless. So, we install:

sudo apt-get install pkg-config zip g++ zlib1g-dev unzip
sudo apt-get install gcc-4.8 g++-4.8
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 100
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.8 100
sudo apt-get install python-pip python-numpy swig python-dev
sudo pip install wheel

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
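The SHA-1 hashing that broke under OpenJDK is the checksum verification Bazel runs on every dependency it downloads. The check itself is nothing exotic; a sketch of the equivalent in Python:

```python
import hashlib

def sha1_of(data, expected=None):
    """Hash a blob the way Bazel verifies its downloads against a known digest."""
    digest = hashlib.sha1(data).hexdigest()
    if expected is not None and digest != expected:
        raise ValueError("checksum mismatch: %s != %s" % (digest, expected))
    return digest

print(sha1_of(b"hello"))  # aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```

When the JDK's hashing is broken, every such comparison fails and no dependency can be trusted, which is why the OpenJDK build was a dead end.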

Install Bazel build system

Google builds things using Bazel. TensorFlow is from Google. Thus, we need to build Bazel first. This takes about half an hour. Go get some lunch.

wget https://github.com/bazelbuild/bazel/releases/download/0.5.4/bazel-0.5.4-dist.zip
unzip -d bazel bazel-0.5.4-dist.zip
cd bazel
sudo ./compile.sh

Now, Java will run out of heap here. So, we need to:

sudo vi scripts/bootstrap/compile.sh

And find the line with “run” on it, and add some memory flags, changing it to:

run "${JAVAC}" -J-Xms256m -J-Xmx384m -classpath "${classpath}" -sourcepath "${sourcepath}" \

And we compile again.

sudo ./compile.sh

sudo cp output/bazel /usr/local/bin/bazel

Get TensorFlow

Now we can actually download and configure TensorFlow.

git clone --recurse-submodules https://github.com/tensorflow/tensorflow.git
cd tensorflow

I couldn’t get the latest version of TensorFlow to build; it had BoringSSL C99 compile issues. So check out version 1.4.0 and configure:

git checkout tags/v1.4.0
./configure

Say no to most things, including OpenCL.

Build TensorFlow

Then, we build. If you thought Bazel took a long time to build, then you haven’t built software before. Hold on to your hats. We’re in for a ride here.

bazel build -c opt --copt="-mfpu=neon-vfpv4" --copt="-funsafe-math-optimizations" --copt="-ftree-vectorize" --copt="-fomit-frame-pointer" --local_resources 8192,8.0,1.0 --verbose_failures tensorflow/tools/pip_package:build_pip_package

Building…

1,900 / 4,909 files… error.

Oop, NEON doesn’t work. Ok, let’s turn that off. But, we’ll want to fix it later.

bazel build -c opt --copt="-funsafe-math-optimizations" --copt="-ftree-vectorize" --copt="-fomit-frame-pointer" --local_resources 8192,8.0,1.0 --verbose_failures tensorflow/tools/pip_package:build_pip_package

3,700 / 4,622 files… error.

In file included from tensorflow/compiler/xla/service/llvm_ir/llvm_util.cc:30:0:
./tensorflow/core/lib/core/casts.h: In instantiation of 'Dest tensorflow::bit_cast(const Source&) [with Dest = long long int; Source = void (*)(const char*, long long int)]':
tensorflow/compiler/xla/service/llvm_ir/llvm_util.cc:400:67:   required from here
./tensorflow/core/lib/core/casts.h:91:3: error: static assertion failed: Sizes do not match

Alright, XLA is causing problems. It’s new, and not needed. Let’s drop it for now and reconfigure and rebuild without it.

2,345 / 3,683 files…

3,112 / 3,683 files…

3,682 / 3,683 files…

Built!

Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
  bazel-bin/tensorflow/tools/pip_package/build_pip_package

Quick! Install it!

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

sudo pip2 install /tmp/tensorflow_pkg/tensorflow-1.4.0-cp27-cp27mu-linux_armv7l.whl --upgrade --ignore-installed

See the footnotes for issues here.

Is it real?

python2
>>> import tensorflow
>>> print(tensorflow.__version__)
1.4.0
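That interactive check can also be scripted, which is handy once you start rebuilding and reinstalling wheels repeatedly. A small generic sketch (nothing TensorFlow-specific; it works for any package):

```python
import importlib

def check_install(module_name):
    """Return the module's version string, or None if it can't be imported."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return None
    return getattr(mod, "__version__", "unknown")

print(check_install("math"))                 # 'unknown' (stdlib, no __version__)
print(check_install("definitely_not_here"))  # None
```

Run it with "tensorflow" as the argument; None means the wheel didn't install, a version string means you're good.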

That feels good. Onto YOLOing.

Step 2: SO YOLO.

I’m sure there are a few implementations of YOLO out there by now. Let’s pick one. This guy seems to know what he’s talking about. Let’s try his stuff.

# Get code
git clone https://github.com/experiencor/basic-yolo-keras.git
cd basic-yolo-keras

# Get weights from https://1drv.ms/f/s!ApLdDEW3ut5fec2OzK4S4RpT-SU
# Or Raccoon: https://1drv.ms/f/s!ApLdDEW3ut5feoZAEUwmSMYdPlY
wget _<authurl>/_tiny_yolo_features.h5
wget _<authurl>/_tiny_yolo_raccoon.h5

# Edit config
vi config.json  # Change model to "Tiny Yolo"
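If you'd rather script the config change than open vi, here's a small sketch. The key names below are my assumption about config.json's shape; check yours before trusting it:

```python
import json

# Assumed shape; the real basic-yolo-keras config has more keys than this.
config = {"model": {"architecture": "Full Yolo", "input_size": 416}}

config["model"]["architecture"] = "Tiny Yolo"

print(json.dumps(config, indent=4))
```

With a real file you'd json.load it, flip the field, and json.dump it back out.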

# Download a raccoon
wget https://upload.wikimedia.org/wikipedia/commons/b/be/Racoon_in_Vancouver.jpg

# Run
python2 predict.py -c config.json -i Racoon_in_Vancouver.jpg -w tiny_yolo_raccoon.h5

Missing imgaug. Ok, looks like we’ll need a few things.

sudo pip2 install imgaug
sudo pip2 install keras
sudo pip2 install h5py

H5py and scipy take a little while to install. Ok, let’s try that again. Can it find a raccoon in this image of a raccoon?

Where is it?

python2 predict.py -c config.json -i Racoon_in_Vancouver.jpg -w tiny_yolo_raccoon.h5

Yes! Now that’s a detected raccoon!
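Under the hood, each detection the model emits is essentially a box plus a confidence score, and predict.py only draws the confident ones. A rough sketch of that filtering step; the tuple layout and threshold here are illustrative, not the repo's actual code:

```python
def filter_detections(detections, threshold=0.3):
    """Keep boxes whose confidence clears the threshold.

    Each detection is (x, y, w, h, confidence), coordinates relative
    to the image size.
    """
    return [d for d in detections if d[4] >= threshold]

raccoons = [
    (0.42, 0.51, 0.30, 0.40, 0.87),  # a confident raccoon
    (0.10, 0.12, 0.05, 0.05, 0.08),  # low-confidence noise
]
print(filter_detections(raccoons))  # keeps only the first box
```

Raising the threshold trades missed raccoons for fewer false alarms; TinyYOLO on a small board usually needs a bit of tuning here.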

I see you. You can’t hide. From the ODROID.

__________________________________________________________________

Footnotes.

At first when I ran:

sudo pip install /tmp/tensorflow_pkg/tensorflow-1.4.0-cp27-cp27mu-linux_armv7l.whl --upgrade --ignore-installed

It ran using python 3 and failed to install, so after some googling and learning about pip filename rules, I figured it out and just used pip2 instead.
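The pip filename rules in question: a wheel's name encodes the Python version, ABI, and platform it was built for (PEP 427), and pip refuses wheels whose tags don't match the running interpreter. A sketch of pulling the tags apart (it ignores the optional build tag some wheels carry):

```python
def parse_wheel_name(filename):
    """Split a simple wheel filename into its PEP 427 components."""
    name, version, py_tag, abi_tag, platform = filename[:-len(".whl")].split("-")
    return {"name": name, "version": version, "python": py_tag,
            "abi": abi_tag, "platform": platform}

wheel = "tensorflow-1.4.0-cp27-cp27mu-linux_armv7l.whl"
info = parse_wheel_name(wheel)
print(info["python"])  # 'cp27' -- CPython 2.7, hence pip2, not pip3
```

The cp27 tag is why pip (running under Python 3) rejected the wheel and pip2 accepted it.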

Then, the most fun issue was when I first ran import tensorflow. I got this fun message:

>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "tensorflow/python/pywrap_tensorflow.py", line 25, in <module>
    from tensorflow.python.platform import self_check
ImportError: No module named platform

Oh no! It’s broken in some way. So I googled the issue, and it seemed to be about locales:

No module named tensorflow.python.platform · Issue #36 · tensorflow/tensorflow (github.com)

So I set a locale first (after also seeing that it needed to be capital US), and rebuilt, and it still gave me the same issue. Hm. I’ll take a look at it tomorrow.

export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8

The next day, fresh googling powers revealed that actually… I was simply running Python from the build directory. It has a directory called tensorflow in it, and Python was looking in that to find things, instead of the installed package. Just changing to another directory fixed the issue. So easy. Such a fail.
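You can reproduce the same trap without TensorFlow: any directory at the front of sys.path that contains a package with a known module's name wins the import race. A sketch using a throwaway shadow of the stdlib wave module:

```python
import os
import sys
import tempfile

# Build a directory containing an empty package named "wave",
# shadowing the stdlib module of the same name.
shadow_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(shadow_dir, "wave"))
with open(os.path.join(shadow_dir, "wave", "__init__.py"), "w") as f:
    f.write("")  # empty package: it defines nothing

sys.path.insert(0, shadow_dir)  # like running python from the build tree
import wave

# The empty shadow package won: the real wave.open is gone.
print(hasattr(wave, "open"))  # False
```

Running python from the TensorFlow checkout does exactly this, except the shadow is the source tree's tensorflow/ directory, which lacks the compiled pywrap modules.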

Importing TF in Python yields ‘cannot import name ‘build_info’ · Issue #13526 · tensorflow/tensorflow (github.com)

