
How To Create an Audible Object Detector [DIY Tutorial]


Cleuton Sampaio (@cleuton-sampaio)

Founder: "". Full-stack dev / AI engineer / professional writer / M.Sc., Rio de Janeiro

For people with vision problems.

Click here for a video presentation

It's in Portuguese, but you can remove the translation and leave it speaking in English. Just change line #81 of the script.

Finally, I finished the audible object detector proof of concept. The goal is to create something that can be used by people with visual impairments. This is a proof of concept, an MVP.

I used:

  • Raspberry Pi 3 with Raspbian;
  • Ultrasonic detector HC-SR04;
  • Raspberry Pi Camera;
  • Yolo model;
  • OpenCV;

In this demo, I'm using Yolo (You Only Look Once) with Python and OpenCV. I was inspired by Adrian Rosebrock's article to create this PoC.

I've tested CNN models in Keras, using datasets like CIFAR and COCO, but Yolo's performance is better, although it is less accurate.

It is still an unfinished project, but I decided to share it so you can help me and develop your own solutions.

I'm using Google's gTTS library to synthesize speech from text.
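The speech step can be sketched like this. The helper names (`build_announcement`, `speak`) are mine, not the author's, and the example assumes gTTS and VLC (`cvlc`) are installed:

```python
# Sketch of the text-to-speech step, assuming gTTS and VLC are installed.
# build_announcement() and speak() are hypothetical helpers; the real
# script's names may differ.
import subprocess
import tempfile


def build_announcement(labels, distance_cm):
    """Compose the sentence to be spoken from detected labels and distance."""
    if not labels:
        return "No objects detected."
    names = ", ".join(labels)
    return f"I see: {names}. Closest object at {distance_cm:.0f} centimeters."


def speak(text, lang="en"):
    """Synthesize the text with gTTS and play it with VLC (cvlc)."""
    from gtts import gTTS  # imported lazily so the pure helper works without it
    tts = gTTS(text=text, lang=lang)
    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
        tts.save(f.name)
        subprocess.run(["cvlc", "--play-and-exit", f.name], check=False)


if __name__ == "__main__":
    speak(build_announcement(["person", "chair"], 87.0))
```

Passing `lang="pt"` instead of `"en"` would keep the Portuguese output described in the video.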

Prototype assembly

You will need:

  • Flat cable to connect the Raspberry Pi to a breadboard;
  • Raspberry PI 3;
  • Raspberry Camera;
  • Ultrasonic sensor HC-SR04;
  • 330 ohm resistor;
  • 470 ohm resistor;
  • Switch;
  • Jumpers;

To connect an HC-SR04 sensor to the Raspberry Pi, follow the instructions in this article. This is the wiring image from that article:

I used GPIO 17 (TRIGGER) and GPIO 24 (ECHO). In the image, the author used GPIO 18 (TRIGGER) and GPIO 24 (ECHO).
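A minimal reading sketch for those pins, assuming the RPi.GPIO library (only available on the Pi itself); the function names are mine:

```python
# Minimal HC-SR04 reading sketch using the pins from this build:
# GPIO 17 as TRIGGER and GPIO 24 as ECHO. Function names are hypothetical.
import time

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound at room temperature


def echo_to_cm(echo_seconds):
    """Round-trip echo time -> one-way distance in centimeters."""
    return echo_seconds * SPEED_OF_SOUND_CM_S / 2


def measure_distance(trigger=17, echo=24):
    """Fire a 10 microsecond trigger pulse and time the echo."""
    import RPi.GPIO as GPIO  # only importable on the Raspberry Pi
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(trigger, GPIO.OUT)
    GPIO.setup(echo, GPIO.IN)
    GPIO.output(trigger, True)
    time.sleep(0.00001)  # 10 us trigger pulse
    GPIO.output(trigger, False)
    start = time.time()
    while GPIO.input(echo) == 0:  # wait for the echo pulse to begin
        start = time.time()
    stop = time.time()
    while GPIO.input(echo) == 1:  # wait for the echo pulse to end
        stop = time.time()
    return echo_to_cm(stop - start)
```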

Connect the switch between the circuit ground (GND) and GPIO 25. When you press the switch, this GPIO changes state and triggers a photo.
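In code, that wiring means enabling the internal pull-up on GPIO 25, so a press pulls the pin to ground. This is a sketch, not the author's script:

```python
# Button sketch: GPIO 25 with an internal pull-up, so pressing the switch
# (wired to GND) pulls the pin LOW. Names are hypothetical.
def is_press(previous_state, current_state):
    """A press is a HIGH -> LOW transition with pull-up wiring."""
    return previous_state == 1 and current_state == 0


def wait_for_button(pin=25):
    import RPi.GPIO as GPIO  # only importable on the Raspberry Pi
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.wait_for_edge(pin, GPIO.FALLING)  # blocks until the switch is pressed
```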


Clone the Darknet project (git clone) and copy the following files to the yolo folder:


Click on this link and download the yolov3.weights file and save it in the yolo folder.
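Once the weights are in place, the model can be loaded with OpenCV's DNN module. This sketch assumes the conventional Darknet file names (`yolov3.cfg` next to `yolov3.weights`); adjust the paths if yours differ:

```python
# Sketch of loading YOLOv3 with OpenCV's DNN module. File paths assume the
# standard Darknet names inside the yolo/ folder; adjust as needed.
def filter_detections(scores_per_box, threshold=0.5):
    """Keep (class_id, score) for boxes whose best class beats the threshold."""
    kept = []
    for scores in scores_per_box:
        class_id = max(range(len(scores)), key=lambda i: scores[i])
        if scores[class_id] > threshold:
            kept.append((class_id, scores[class_id]))
    return kept


def detect(image_path):
    import cv2  # OpenCV, installed via the conda environment
    net = cv2.dnn.readNetFromDarknet("yolo/yolov3.cfg", "yolo/yolov3.weights")
    image = cv2.imread(image_path)
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    # each row of each output is [cx, cy, w, h, objectness, class scores...]
    return filter_detections([row[5:] for out in outputs for row in out])
```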

Install VLC. It is better if you have Anaconda also installed, just create a virtual environment with the command:

conda env create -f ./env.yml
conda activate object

To execute, just run the script


If you want, you can pass the path of an image file to test. I attached two images for you to test.

Oh, and I created a JSON dictionary to translate the names of the detected objects into Portuguese, but if you are an English speaker, just use the original names.
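The translation step amounts to a simple lookup with a fallback. The sample entries below are illustrative, not the author's actual file:

```python
# Sketch of the label-translation step. The JSON content here is illustrative,
# not the author's actual dictionary.
import json

SAMPLE = '{"person": "pessoa", "chair": "cadeira", "dog": "cachorro"}'


def load_translations(raw_json):
    """Parse the JSON dictionary mapping COCO labels to Portuguese names."""
    return json.loads(raw_json)


def translate(label, table):
    """Fall back to the original COCO name when no translation exists."""
    return table.get(label, label)
```

Because of the fallback, English speakers can simply ship an empty dictionary and keep the original names.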

Executing on the Raspberry PI

Install the conda environment: env-armhf.yml.

Both scripts must be installed on the Raspberry Pi; one of them starts the object detection loop.

By pressing the switch, the device will take a photo and tell you the objects in it and the distance to the closest object (see the video).
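Putting the pieces together, the loop described above can be sketched as below. All helper names (`wait_for_button`, `take_photo`, `detect_labels`, `measure_distance`, `speak`) are placeholders for whatever the real scripts define:

```python
# High-level sketch of the detection loop: wait for the button, take a photo,
# run detection, measure distance, speak the result. Helper names are
# hypothetical placeholders.
def announce(labels, distance_cm):
    """Compose the spoken message for one detection round."""
    if not labels:
        return "No objects detected."
    return (f"Detected {', '.join(labels)}. "
            f"Closest object at {distance_cm:.0f} centimeters.")


def main_loop(wait_for_button, take_photo, detect_labels,
              measure_distance, speak):
    while True:
        wait_for_button()              # blocks on GPIO 25
        image = take_photo()           # Raspberry Pi Camera capture
        labels = detect_labels(image)  # YOLO inference via OpenCV
        distance = measure_distance()  # HC-SR04 on GPIO 17/24
        speak(announce(labels, distance))
```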

Read the OpenCV installation guide to see how to install the rest of the components on your Raspberry Pi.

Previously published at

