Welcome to part 3 of our multi-part Coursera’s Control of Mobile Robots (CMR) series. This series implements concepts learned from CMR with ROS and a ROSbots robot.
ROSbots is a ROS + OpenCV robot kit for Makers. Based off a Raspberry Pi and Arduino-compatible UNO board, the ROSbots robot kit caters to a Maker’s desire by being extremely hackable to implement any new robotics concepts you come across. All our code is open source on Github.
Back in part 2 of our Control of Mobile Robots series, we wrote about the convenience of using a Unicycle Model to intuitively represent robot dynamics. But since our ROSbots robot is a differential drive robot, we need to talk about how the Unicycle Model relates to the Differential Drive Model dynamics. We then walked through ROS code to “drive” our ROSbots robot in a systematic manner, via remote control (RC).
In this part 3, we will build upon the Differential Drive dynamics to:
Disclaimer: In this post, we actually won’t be showcasing any ROS code and will stick to talking about the equations and concepts behind feedback and odometry. But this sets us up for the next post which will have ROS code — promise!!
ROSbots’ wheel encoders used for odometry
Even though we successfully sent drive commands to our robot, we cannot guarantee that the robot actually executed them. The packet may have been dropped. A heavy payload may have stalled the motors.
In order to know if our robot has actually moved, we need sensors to return back some information. We need feedback.
In general, we define a couple of components in our feedback system:
Credit: Magnus Egerstedt, Control of Mobile Robots, Georgia Institute of Technology
The measurement y is fed back to the beginning of the system to help us tweak our input u. Without y and a feedback process, we cannot know whether we are tracking our reference, which prevents us from implementing an effective, stable controller to control our robot.
Like with the Quickbot robot and Khepera robot used in the Coursera course, our ROSbots robot comes equipped with wheel encoders that measure the rotational velocity of the wheel.
The notches on the encoder disk of our ROSbots robot break the beam of an optical switch mounted between the tips of the U-shaped arm on the speed sensor.
When the wheel turns, the notches alternate between blocking and unblocking the optical switch — ie a “tick”. By counting the number of “ticks” that have gone by, you can determine how much the wheel has rotated.
By using the encoders, we can update our robot’s odometry — which is defined as the use of motion sensing data to update the robot’s pose, or the position and heading, of our robot.
Recall that pose is defined by the following:
x - position on the x-axis (ie in meters)
y - position on the y-axis (ie in meters)
φ - phi - angle of the unicycle counter clockwise from x-axis (ie in radians)
The positional and angular velocities of our Unicycle Model are defined by:
v - directional velocity
w - angular velocity
To update the odometry, we need to employ a couple of equations that help us compute the change in our position and heading from the distance traveled per right and left wheel of our differential drive robot.
The first is the equation that uses our encoder ticks to compute how far, in meters, the right and left wheels have turned:
D_left = 2 * pi * R * (nTicksLeft / nTotalTicks)
D_right = 2 * pi * R * (nTicksRight / nTotalTicks)
Here R is the wheel radius in meters and nTotalTicks is the number of ticks per full wheel revolution. Since we count nTicksLeft/Right over a one-second interval, making them effectively ticks per second, D_left and D_right are both in meters per second.
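The tick-to-distance equation above can be sketched in a few lines of Python. The wheel radius and ticks-per-revolution values below are placeholders, not ROSbots specs — substitute the numbers for your own robot.

```python
import math

# Hypothetical values -- check your own robot's specs.
R = 0.032            # wheel radius in meters (assumed)
N_TOTAL_TICKS = 20   # encoder ticks per full wheel revolution (assumed)

def ticks_to_distance(n_ticks):
    """Convert a raw encoder tick count into meters traveled by that wheel."""
    return 2.0 * math.pi * R * (n_ticks / N_TOTAL_TICKS)

d_left = ticks_to_distance(10)   # half a revolution
d_right = ticks_to_distance(20)  # one full revolution
```

If the ticks are counted over a one-second window, the same numbers can be read as meters per second.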
With D_left and D_right, we can compute the directional and angular velocity used to represent a Unicycle Robot’s dynamics — v and w respectively.
v = (D_right + D_left) / 2.0 (ie. in meters per second)
w = (D_right - D_left) / L (ie. in radians per second)
Recall that L is the wheelbase of our robot (ie in meters per radian).
Since in our example, we sampled the encoder ticks once per second, both D_right and D_left are in meters per second, so v is also in meters per second.
Since L is in meters per radian, w is in radians per second.
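The mapping from per-wheel distances to unicycle v and w can be sketched as follows. The wheelbase value is a hypothetical placeholder; measure it on your own robot.

```python
# Hypothetical wheelbase -- measure the distance between your robot's wheels.
L = 0.1  # meters

def unicycle_from_wheels(d_right, d_left):
    """Map per-wheel distances over one sample interval to unicycle v and w."""
    v = (d_right + d_left) / 2.0   # directional velocity
    w = (d_right - d_left) / L     # angular velocity
    return v, w

# Right wheel traveled farther than the left -> robot curves counterclockwise.
v, w = unicycle_from_wheels(0.2, 0.1)
```

With a one-second sample interval, v comes out in meters per second and w in radians per second, matching the unit analysis above.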
For a Unicycle Model, the change in pose is defined as the following:
dx/dt = v * cos(φ)
dy/dt = v * sin(φ)
dφ/dt = w
In our example, dt is 1 second — the sample rate of our encoder ticks. With the v and the w we computed in the section above, we can determine the new pose of our robot after a certain delta T:
x' = x + (dx/dt * delta_t)
y' = y + (dy/dt * delta_t)
φ' = φ + (dφ/dt * delta_t)
If delta_t = 1 second, then x’, y’, and φ’ will represent our robot’s new pose after 1 second.
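Putting the three update equations together, the pose update is a simple Euler integration step. Here is a minimal sketch; the example v and w values are arbitrary, not measured from a real robot.

```python
import math

def update_pose(x, y, phi, v, w, delta_t=1.0):
    """Euler-integrate the unicycle model over one sample interval."""
    x_new = x + v * math.cos(phi) * delta_t
    y_new = y + v * math.sin(phi) * delta_t
    phi_new = phi + w * delta_t
    return x_new, y_new, phi_new

# Starting at the origin facing along the x-axis, with example v and w.
pose = update_pose(0.0, 0.0, 0.0, v=0.15, w=1.0)
```

Calling this once per encoder sample (delta_t = 1 second in our example) keeps the robot’s estimated pose up to date tick by tick.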
In this part 3, we talked about:
In the next part 4, we will look at some ROS code which implements these equations and use the implementation to drive our ROSbots robot to a specific location.
As usual, follow @rosbots on Medium for updates. Follow us on Instagram and Facebook too!
Don’t hesitate to reach out with questions, comments, general feedback, if you want to collaborate, or just to say hello.
And if you haven’t already done so, purchase your own ROSbots robot here to follow along.
Thanks!
Jack “the ROSbots Maker”