Feedback & Odometry: Coursera’s Control of Mobile Robots with ROS and ROSbots — Part 3

by ROSbots, May 8th, 2018
Welcome to part 3 of our multi-part Coursera’s Control of Mobile Robots (CMR) series. This series implements concepts learned from CMR with ROS and a ROSbots robot.

About ROSbots

ROSbots is a ROS + OpenCV robot kit for Makers. Based on a Raspberry Pi and an Arduino-compatible UNO board, the ROSbots robot kit is extremely hackable, catering to a Maker’s desire to implement any new robotics concept they come across. All our code is open source on GitHub.

Previously in Part 2, What‘s in Part 3

Back in part 2 of our Control of Mobile Robots series, we wrote about the convenience of using a Unicycle Model to intuitively represent robot dynamics. Since our ROSbots robot is a differential drive robot, we also discussed how the Unicycle Model relates to the Differential Drive Model dynamics. We then walked through ROS code to “drive” our ROSbots robot in a systematic manner, via remote control (RC).

In this part 3, we will build upon the Differential Drive dynamics to:

  1. Introduce the concept of feedback
  2. Describe how our ROSbots’ wheel encoder sensors work, and
  3. Define the equations needed to compute the pose — the position and orientation — of our robot using feedback and our encoder readings.

Disclaimer: In this post, we actually won’t be showcasing any ROS code and will stick to talking about the equations and concepts behind feedback and odometry. But this sets us up for the next post which will have ROS code — promise!!

ROSbots’ wheel encoders used for odometry

Need for Feedback

Even though we successfully sent drive commands to our robot, we cannot guarantee that our robot has actually executed those commands. The packet may have dropped. A heavy payload may have stalled the motors.

In order to know if our robot has actually moved, we need sensors to return back some information. We need feedback.

In general, we define a couple of components in our feedback system:

  1. r → a reference target which we want to achieve, ie a goal we want to get to, a speed we want to track, a line we want to follow.
  2. u → the series of input commands we give our robot to help us achieve and track r.
  3. x → our current state. Our input u affects the state by a set of rules known as dynamics — how the system evolves over time.
  4. y → our measurement of the current state. Oftentimes it is impossible to observe the state x directly, but we can create sensors to measure it.

Credit: Magnus Egerstedt, Control of Mobile Robots, Georgia Institute of Technology

The measurement y is fed back to the beginning of the system to help us tweak our input u. Without y and a feedback process, we cannot know whether we are tracking our reference, which prevents us from implementing an effective, stable controller for our robot.
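To make the r, u, x, y loop concrete, here is a minimal sketch of a feedback loop in Python. The proportional gain, dynamics, and step count are all illustrative assumptions, not part of the ROSbots codebase:

```python
# Minimal sketch of a feedback loop: a proportional controller driving a
# simple first-order system toward a reference r. The gain k and the
# "x = x + u" dynamics are illustrative assumptions.

def feedback_loop(r, x0, k=0.5, steps=50):
    """Drive state x toward reference r using the fed-back measurement y."""
    x = x0
    for _ in range(steps):
        y = x            # y: measurement of the current state
        u = k * (r - y)  # u: input computed from the tracking error
        x = x + u        # dynamics: the state evolves with the input
    return x

final_state = feedback_loop(r=1.0, x0=0.0)
```

Because the measurement y is fed back each iteration, the error shrinks every step and the state converges to the reference.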


Like with the Quickbot robot and Khepera robot used in the Coursera course, our ROSbots robot comes equipped with wheel encoders that measure the rotational velocity of the wheel.

The notches on the encoder disk of our ROSbots robot interfere with a light switch mounted on the tips of the U-shaped arm of the speed sensor.

When the wheel turns, the notches alternate between blocking and unblocking the light switch — ie a “tick”. By counting the number of “ticks” that have gone by, you can determine how much the wheel has rotated.
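On the real robot, a GPIO interrupt fires on each tick; the sketch below simulates that counting logic in plain Python. The class and method names are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch of encoder tick counting. On the actual robot the
# on_tick handler would be wired to a GPIO interrupt that fires each time
# a notch blocks or unblocks the light switch.

class EncoderCounter:
    def __init__(self):
        self.ticks = 0

    def on_tick(self):
        # Called once per notch transition on the encoder disk
        self.ticks += 1

    def sample_and_reset(self):
        # Read the ticks accumulated since the last sample
        # (e.g. sampled once per second) and restart the count
        n = self.ticks
        self.ticks = 0
        return n
```

Sampling and resetting the count at a fixed interval gives the per-period tick counts used in the distance equations below.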

By using the encoders, we can update our robot’s odometry — which is defined as the use of motion sensing data to update the pose, or the position and heading, of our robot.

Recall that pose is defined by the following:

x - position on the x-axis (ie in meters)

y - position on the y-axis (ie in meters)

φ - phi - angle of the unicycle counter clockwise from x-axis (ie in radians)

The positional and angular velocities of our Unicycle Model are defined by:

v - directional velocity

w - angular velocity

To update the odometry, we need to employ a couple of equations that help us compute the change in our position and heading from the distance traveled per right and left wheel of our differential drive robot.

How far did each wheel rotate?

The first is the equation that uses our encoder ticks to compute how far, in meters, the right and left wheel has turned

D_left = 2 * pi * R * (nTicksLeft / nTotalTicks)
D_right = 2 * pi * R * (nTicksRight / nTotalTicks)

  1. R → the radius of the wheel (in meters)
  2. nTicksLeft/Right → the number of ticks we count over the sample period — say, every second (giving units of 1 / second).
  3. nTotalTicks → the total number of ticks per revolution for the encoder disk

Since R is in meters, nTicksLeft/Right is in 1/second, then D_left and D_right are both in meters/second.
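The per-wheel distance equation above can be sketched as a small Python helper. The wheel radius and tick counts in the example are made-up values, not the ROSbots specs:

```python
import math

def wheel_distance(n_ticks, n_total_ticks, wheel_radius_m):
    """Distance (in meters) a wheel traveled given the ticks counted
    over the sample period.

    If n_ticks is sampled once per second, the result is effectively
    a speed in meters per second, as described above.
    """
    return 2.0 * math.pi * wheel_radius_m * (n_ticks / n_total_ticks)

# Illustrative values: a 3 cm wheel radius and a 20-tick encoder disk.
# One full revolution's worth of ticks covers one wheel circumference.
d_left = wheel_distance(n_ticks=20, n_total_ticks=20, wheel_radius_m=0.03)
```

With these assumed values, counting a full disk’s worth of ticks yields exactly one circumference, 2 * pi * 0.03 meters.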

What is the directional and angular velocity of our robot?

With D_left and D_right, we can compute the directional and angular velocity used to represent a Unicycle Robot’s dynamics — v and w respectively.

v = (D_right + D_left) / 2.0 (ie. in meters per second)
w = (D_right - D_left) / L (ie. in radians per second)

Recall, L → the wheelbase of our robot (ie in meters per radian)

Since in our example, we sampled the encoder ticks once per second, both D_right and D_left are in meters per second, so v is also in meters per second.

Since L is in meters per radian, w is in radians per second.
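These two equations translate directly into code. Here is a minimal sketch; the wheelbase value in the example is an assumption for illustration:

```python
def unicycle_velocities(d_right, d_left, wheelbase_l):
    """Convert per-wheel distances (sampled per second) into the
    Unicycle Model's directional velocity v and angular velocity w."""
    v = (d_right + d_left) / 2.0   # meters per second
    w = (d_right - d_left) / wheelbase_l  # radians per second
    return v, w

# Equal wheel distances mean the robot drives straight: w is zero.
v, w = unicycle_velocities(d_right=0.2, d_left=0.2, wheelbase_l=0.1)
```

Note that when the right wheel travels farther than the left, w is positive, matching the counterclockwise-positive convention for φ.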

What’s the new pose of our robot?

For a Unicycle Model, the change in pose is defined as the following:

dx/dt = v * cos(φ)
dy/dt = v * sin(φ)
dφ/dt = w

In our example, dt is 1 second — the sample rate of our encoder ticks. With the v and the w we computed in the section above, we can determine the new pose of our robot after a certain delta T:

x' = x + (dx/dt * delta_t)
y' = y + (dy/dt * delta_t)
φ' = φ + (dφ/dt * delta_t)

If delta_t = 1 second, then x’, y’, and φ’ will represent our robot’s new pose after 1 second.
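Putting the pose-update equations together, here is a sketch of the Euler integration step. The function name and example values are illustrative, not from the ROSbots code:

```python
import math

def update_pose(x, y, phi, v, w, delta_t):
    """One Euler integration step of the Unicycle Model pose update.

    x, y in meters, phi in radians (counterclockwise from the x-axis),
    v in meters/second, w in radians/second, delta_t in seconds.
    """
    x_new = x + v * math.cos(phi) * delta_t
    y_new = y + v * math.sin(phi) * delta_t
    phi_new = phi + w * delta_t
    return x_new, y_new, phi_new

# Facing along the x-axis (phi = 0) at 1 m/s with no turning,
# one second of motion moves the robot 1 meter along x.
pose = update_pose(x=0.0, y=0.0, phi=0.0, v=1.0, w=0.0, delta_t=1.0)
```

Calling this once per encoder sample period keeps the pose estimate current; smaller delta_t values reduce the integration error of this Euler step.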


In this part 3, we talked about:

  1. The need for feedback in a control system — specifically the system we use to compute pose for our ROSbots differential drive robot.
  2. How to use speed encoders to measure the movement of our differential drive robot.
  3. A recap of the dynamics for a Unicycle Model robot.
  4. How to compute the unicycle directional and angular velocities from the movement measurements taken by our differential drive robot’s speed encoders.
  5. How to compute the change in pose — position and heading/orientation — of our robot from the unicycle directional and angular velocities.

In the next part 4, we will look at some ROS code which implements these equations and use the implementation to drive our ROSbots robot to a specific location.

As usual, follow @rosbots on Medium for updates. Follow us on Instagram and Facebook too!

Don’t hesitate to reach out with questions, comments, general feedback, if you want to collaborate, or just to say hello.

And if you haven’t already done so, purchase your own ROSbots robot here to follow along.

Thanks!

Jack “the ROSbots Maker”