
Diving into Android Things

by Evan, February 15th, 2017

I’ve tinkered with electronics since my teens, and I graduated with a Computer Engineering degree with a focus on hardware (embedded systems, ASIC design, etc.). I somehow stumbled into software after graduation and, fast-forward nine years, am now an Android developer at RadioPublic. When Google announced their IoT platform, Android Things, in late 2016, I was beyond excited, because it gave me a reason to break out my old breadboard, resistors, LEDs, and power regulators. It also gave me a reason to buy a Raspberry Pi. With Android Things, I’m finally able to leverage my expertise in Android development on a more embedded (and frankly, more interesting) platform.

I’m not going to cover a ton of Android Things fundamentals here, because a lot of really good developers have already done a great job at that.

I’m going to share a project I began on Friday, February 10th and finished prototyping on Monday, February 13th. When exploring something new, it’s important for me to find a practical application for it. I own a house built in the early ’90s, and it can use some home-built tech from 2017. A superficial problem that I and other members of my household have is parking (correctly and in alignment) in the garage. Either we park too close to the wall and can’t walk around both sides of the car, or we aren’t sure whether the car will get caught in the garage door.

Hello CantParkRight

My first Android Things project is an assistive parking device that uses several sensors to help drivers park correctly in the garage. Think of the signals you see when you enter a carwash. Normally, there are two or three lights. When you first enter, the light is green, which instructs you to keep driving forward. When you have driven far enough, the light turns red to alert you to stop. I want this in my garage.
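That carwash logic boils down to mapping a distance reading to a light. Here’s a minimal sketch; the class and enum names and both threshold values are my own illustrative assumptions, not numbers from the project:

```java
// Map a measured distance to a carwash-style light color.
// The thresholds are made-up values for illustration only.
enum Light { GREEN, YELLOW, RED }

class ParkingSignal {
    static final double SLOW_AT_CM = 100.0; // assumed: start slowing within 1 m
    static final double STOP_AT_CM = 40.0;  // assumed: stop within 40 cm

    static Light lightFor(double distanceCm) {
        if (distanceCm <= STOP_AT_CM) return Light.RED;    // stop now
        if (distanceCm <= SLOW_AT_CM) return Light.YELLOW; // almost there
        return Light.GREEN;                                // keep driving
    }
}
```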

The first step of this is prototyping CantParkRight.

Prototyping the Hardware

A huge advantage that Raspberry Pi-like devices provide is the ability to quickly and cheaply prototype assistive devices like the one I’m building. The fact that I can officially leverage Android APIs (and down the road, Google APIs) is a big plus.

The supplies I used for my prototype include:

  • Raspberry Pi Model 3 running Android Things Preview 1
  • HC-SR04 Ultrasonic proximity sensor
  • 2 resistors, 10kΩ and 20kΩ
  • 3 LEDs (Red, Yellow, Green)
  • A breadboard
  • Assortment of jumper wire

I had most of my supplies already. I bought a Raspberry Pi some time ago and recently bought a pack of five HC-SR04 ultrasonic sensors from Amazon. I settled on the HC-SR04 after quite a bit of research. Here’s how the HC-SR04 works: you send a 10µs (microsecond) pulse to the TRIGGER pin. Sometime in the next few milliseconds, the HC-SR04 emits a burst of eight 40kHz sound waves. If an object is in range, the signal bounces back and is detected by the receiver portion of the sensor. The HC-SR04 then sends a variable-length pulse to any device attached to the ECHO pin. The length of this pulse is determined by the distance the signal traveled before returning to the sensor. The HC-SR04 has a range of around 400cm (~13 feet). Perfect. Note: check out the datasheet on the HC-SR04 here.
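As a sketch of that conversion, the datasheet’s rule of thumb divides the ECHO pulse width in microseconds by 58 to get centimeters, which is the ~0.0343 cm/µs speed of sound halved for the round trip. The `Hcsr04` class name here is mine:

```java
// Convert an HC-SR04 ECHO pulse width (µs) to a distance (cm).
// pulse / 58 ≈ (pulse × 0.0343 cm/µs) / 2, halved because the sound
// travels out to the object and back.
class Hcsr04 {
    static final double MAX_RANGE_CM = 400.0; // sensor's rated range

    static double pulseToCm(long pulseMicros) {
        return pulseMicros / 58.0;
    }

    static boolean inRange(long pulseMicros) {
        return pulseToCm(pulseMicros) <= MAX_RANGE_CM;
    }
}
```

So a 580µs echo works out to about 10cm.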

After a lot of experimentation, here is how my circuit is arranged on my breadboard.

CantParkRight hardware schematic

CantParkRight IRL, messy

A few hardware gotchas:

  • The accuracy varies greatly between sensors, especially the “knockoffs”. Out of the pack of 5, some sensors were more sensitive to object movements while others exhibited less variation.
  • The signal sent to the ECHO pin is at 5V, but the GPIO ports on the Raspberry Pi are rated for 3.3V. You can damage them by applying too high a voltage, so I use the two resistors as a voltage divider to step the ECHO signal down to 3.3V.
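As a back-of-the-envelope check of that divider, assuming the 5V ECHO feeds the 10kΩ resistor and the Pi’s GPIO reads across the 20kΩ resistor to ground (the `Divider` helper is just for illustration):

```java
// Voltage divider: Vout = Vin × R2 / (R1 + R2).
// With R1 = 10 kΩ in series from ECHO and R2 = 20 kΩ to ground,
// a 5 V pulse divides down to ~3.33 V, within the Pi's 3.3 V GPIO rating.
class Divider {
    static double vout(double vin, double r1, double r2) {
        return vin * r2 / (r1 + r2);
    }
}
```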

Prototyping the Software

The best part of this project was writing the software in Android Studio, deploying it via ADB (over WiFi), and seeing the results play out in front of my eyes. I based the implementation on an article by Daniel.

Over the course of that article, Daniel builds several implementations, some synchronous and some asynchronous, using while loops, callbacks, and threads. I decided to build upon that, but use RxJava to implement asynchronous handling of the sensor data. I’ve used RxJava in most of the Android apps I’ve built. It offers quick and convenient ways to build, reuse, and arrange pieces of logic around the flow of data from one end to the other, which is basically perfect for CantParkRight.

Disclaimer: I am NOT an RxJava expert. There are likely ways to do what I did using RxJava in a more efficient manner.

The critical piece is how I go about initiating the TRIGGER and waiting for an ECHO. My first implementation used an RxJava Observable that essentially wraps a few while loops (check out my repository, then go to the first commit).

The process was:

  • Send the 10µs pulse to the TRIGGER
  • Start a while loop that runs until the ECHO goes high, and record the start time
  • Start a while loop that runs until the ECHO goes low, record the end time, and calculate the pulse width, which is used to calculate the distance
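The two loops can be sketched with the pin read and the clock injected, so the logic runs without hardware. `echoHigh` and `clockMicros` are hypothetical stand-ins for reading the ECHO GPIO and a microsecond clock, not Android Things APIs:

```java
import java.util.function.BooleanSupplier;
import java.util.function.LongSupplier;

// Sketch of the polling approach: busy-wait for the rising edge, record the
// start time, busy-wait for the falling edge, and return the pulse width.
class PulseTimer {
    static long measurePulseMicros(BooleanSupplier echoHigh, LongSupplier clockMicros) {
        while (!echoHigh.getAsBoolean()) { /* wait for ECHO to go high */ }
        long start = clockMicros.getAsLong();
        while (echoHigh.getAsBoolean()) { /* wait for ECHO to go low */ }
        long end = clockMicros.getAsLong();
        return end - start;
    }
}
```

Testable with a simulated pin whose state is derived from a fake clock, which is also how you can see the approach’s weakness: if the ECHO never rises, the first loop spins forever.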

It worked, sometimes, but often, for reasons I’m still researching, the sensor would stop responding (i.e., the ECHO never went high after a TRIGGER). The improvement came when I used a GpioCallback. A GpioCallback lets you listen for edge triggers (signal going high, signal going low, etc.) asynchronously. I combined my implementation of a GpioCallback with an RxJava Observable (more specifically, an Emitter). From what I’ve read, the advantage of the Emitter over a plain Observable (using Observable.create) is that it forces you to specify a backpressure strategy, which is important when reading values pushed from a sensor. CantParkRight uses the BUFFER BackpressureMode. Using RxJava lets me start the distance detection process simply by subscribing to the correct Observable. Using an Emitter also lets me write code that unregisters my GpioCallback when I unsubscribe in onDestroy(…), which prevents memory leaks.
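As a JDK-only illustration of what the BUFFER mode buys you (this is not the project’s actual RxJava code, and `BufferedReadings`, `onEdge`, and `nextReading` are my own hypothetical names): the edge callback pushes every reading into a queue, and the consumer drains it at its own pace instead of dropping values.

```java
import java.util.concurrent.LinkedBlockingQueue;

// JDK-only sketch of the BUFFER backpressure idea: the edge callback offers
// every reading into a queue, and a (possibly slower) consumer drains it
// later, so no reading is lost.
class BufferedReadings {
    private final LinkedBlockingQueue<Long> buffer = new LinkedBlockingQueue<>();

    // Would be invoked from the GPIO edge callback with a measured pulse width.
    void onEdge(long pulseMicros) {
        buffer.offer(pulseMicros); // buffer every reading; never drop
    }

    // Consumer side: next buffered reading, or null if none is waiting.
    Long nextReading() {
        return buffer.poll();
    }
}
```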

CantParkRight prototype in action

What’s Next

For CantParkRight, I’m working up to building an actual device I can easily mount in my garage. With the prototype complete, I turn my efforts to making that happen.

In the meantime, you can check out the source code for CantParkRight on GitHub. Be sure to follow me on Twitter or (cough) Google+ for updates on CantParkRight. I intend to post the finished project here in the coming months, but watching the repository is a great way to keep up.

Originally published at emuneee.com on February 15, 2017.