Forward Error Signal Feedback


Some Basics on $SE(2)$

By now you've been able to do orientation control. The current adventure asks that you add a little forward position control. What will be described here is yet another step towards full-fledged geometric control of the Turtlebot. The geometric part means that we will use the natural geometry of the Turtlebot's space to create feedback controls that look like what you are used to. There will be some nonlinear math going on, but we are going to hide it all so that things look like the standard error feedback that you are used to. That's what makes it geometric. Some systems require actual nonlinear controls that really look nonlinear. The Turtlebot moves in what is called the special Euclidean plane, written as $SE(2)$.

The special Euclidean plane consists of the translational position of the robot and the orientation of the robot. You've seen it packed into the pose as a quaternion plus a translation vector (in 3D). Both of those represent the full special Euclidean space, written as $SE(3)$. The Turtlebot can't (or rather shouldn't) move in this full 3D space, so we have been restricting it to the 2D (or planar) version. That's what we will do here. The nice thing about these geometric control laws is that if you understand them, then you will automatically understand how to do the same for more complex 3D robots too!

What we are going to do is to encode the translation and the orientation together as a complex 2×2 matrix. This will be done by ripping out the complex part of the quaternion and putting it into a complex number $r \in \mathbb{C}$, just as for the turn control case. Then, take out the translation part and dump it into a complex number $t \in \mathbb{C}$. Then pack them together into a complex 2×2 matrix, which we will label $g$, as follows: \begin{equation} g = \left[ \begin{matrix} r & t \\ 0 & 1 \end{matrix} \right] \end{equation} This matrix defines the position and orientation of the Turtlebot. One thing we did not discuss explicitly is that $r \in \mathbb{C}$ is actually unit length. As a rotation, it should be. Being unit length means that the inverse of $r$ is simply the complex conjugate, since $r r^* = 1$. This can be useful to know for speeding up certain operations.
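
As a preview of the Python section below, here is a minimal sketch of this packing. It assumes the pose comes from a standard ROS pose message (e.g., out of an Odometry message) whose rotation is purely about the $z$ axis; the variable `pose` and its field names are just illustrative:

> import numpy
> import cmath
> theta = 2 * numpy.arctan2(pose.orientation.z, pose.orientation.w)  # yaw from the quaternion (rotation about z only)
> r = cmath.rect(1, theta)                                           # unit-length rotation as a complex number
> t = complex(pose.position.x, pose.position.y)                      # planar translation as a complex number
> g = numpy.mat([[r, t], [0, 1]], dtype=complex)                     # the SE(2) matrix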

Computing the Error

The complex matrix form of $SE(2)$ lets us compute the error just like in the complex orientation case. Suppose that you have constructed the complex matrices $g_{des}$ and $g_{curr}$ properly; then the error is \begin{equation} g_{err} = g_{curr}^{-1} g_{des} = \left[ \begin{matrix} r_{err} & t_{err} \\ 0 & 1 \end{matrix} \right] \end{equation} Pulling out $r_{err}$ gives the orientation error as before, \begin{equation} \theta_{err} = \text{phase}(r_{err}), \end{equation} while the real part of $t_{err}$ gives the forward movement error, \begin{equation} x_{err} = \text{Re}(t_{err}). \end{equation} The imaginary part of $t_{err}$ gives the sideways error $y_{err} = \text{Im}(t_{err})$. Since the Turtlebot can't move sideways, we will ignore it for now. To actually modify $y_{err}$ through a control signal requires more advanced concepts from control theory. These will show up in a future Adventure.
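
In code, assuming you have already built gCurr and gDes as complex 2×2 matrices (construction details are in the Python section below), the error extraction is only a few lines:

> gErr = numpy.linalg.inv(gCurr) * gDes   # g_err = g_curr^{-1} g_des
> thetaErr = cmath.phase(gErr[0, 0])      # orientation error (radians)
> xErr = gErr[0, 1].real                  # forward error
> yErr = gErr[0, 1].imag                  # sideways error (unused for now)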

Generating the Control Signal

The orientation control signal gets computed just like before, \begin{equation} \omega = k_{\theta} \theta_{err}, \end{equation} while the forward drive control signal gets computed as \begin{equation} v = k_{drive} x_{err}, \end{equation} where $v$ specifies the forward velocity. As in the goforward and other related python code, the forward velocity is the first component of the twist; in ROS that would be the twist.linear.x field. As usual, you should be applying a saturation function to the controls so that they don't produce crazy big values. What would be a reasonable upper bound on the Turtlebot's speed?
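
A sketch of the two control laws with saturation might look like the following. The gains and saturation limits here are only placeholders to illustrate the structure; you will need to tune them for your own robot:

> from geometry_msgs.msg import Twist                  # standard ROS twist message
> kTheta, kDrive = 0.5, 0.3                            # illustrative gains; tune these
> wMax, vMax = 1.0, 0.3                                # illustrative saturation limits (rad/s, m/s)
> twist = Twist()
> twist.angular.z = max(-wMax, min(wMax, kTheta * thetaErr))   # saturated turn rate
> twist.linear.x = max(-vMax, min(vMax, kDrive * xErr))        # saturated forward velocity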

If your goal here is to go to a static Turtlebot pose, then you should have some kind of stop-control condition that checks whether the two errors are too small to bother fixing. I think I had used something like one half or one quarter of a degree for the orientation control back in the earlier adventure. For the forward error, I am not sure what I would use. I'd test things out by going to some older code that published the pose (hopefully you have such test code) and check how little I could move the Turtlebot and still see a change in its reported position. The limit should then be set a little higher than that, since that value is the resolution or granularity of the sensor. You won't get better than that. I had done something similar for the orientation control to arrive at my control limit.
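
Structurally, the stop condition is just a pair of threshold checks. The tolerance values below are placeholders; replace them with whatever your own resolution test from the paragraph above tells you:

> thetaTol = numpy.radians(0.5)                        # placeholder: half a degree
> xTol = 0.01                                          # placeholder: roughly a centimeter
> if abs(thetaErr) < thetaTol and abs(xErr) < xTol:
>     twist.angular.z = 0.0                            # close enough; stop commanding motion
>     twist.linear.x = 0.0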

Generating the Desired End Pose

Alright, the above is all fine and good, but the bigger question is: how do I create the desired pose signal to begin with? It actually works the opposite way from computing the error. Let's consider the “error” to be a difference denoted by $\Delta g$, \begin{equation} \Delta g = g_{curr}^{-1} g_{des}. \end{equation} Now, let's rearrange it to solve for $g_{des}$, \begin{equation} g_{des} = g_{curr} \Delta g. \end{equation} Ha! This final equation is exactly what you want to do. Take the current pose, as measured prior to actually performing any feedback, and multiply it on the right by the desired relative movement of the Turtlebot. For pure forward motion and for pure turning, this would be \begin{equation} \Delta g_{forward} = \left[ \begin{matrix} 1 & \Delta x + 0j \\ 0 & 1 \end{matrix} \right] \quad \text{and} \quad \Delta g_{turn} = \left[ \begin{matrix} e^{j \Delta \theta} & 0 \\ 0 & 1 \end{matrix} \right], \end{equation} where $\Delta x$ is the distance to drive forward and $\Delta \theta$ is the angle (in radians, most likely) to turn. In future instantiations, we'll probably do more complex relative pose changes which involve both moving to an arbitrary point relative to the robot (having both $x$ and $y$ offsets) plus an arbitrary angle.
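
As a sketch, building the desired pose for a half-meter forward move or a $30^\circ$ turn from the measured current pose gCurr could look like this (the particular distance and angle are just examples):

> dgForward = numpy.mat([[1, complex(0.5, 0)], [0, 1]], dtype=complex)           # drive 0.5 units forward
> dgTurn = numpy.mat([[cmath.rect(1, numpy.pi/6), 0], [0, 1]], dtype=complex)    # turn 30 degrees
> gDes = gCurr * dgForward                        # relative motion multiplies gCurr on the right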

This desired end pose will then be fixed until the proper terminating condition is achieved (e.g., the error is small enough to be negligible).

Complex Matrices in Python

OK, so you are thinking “Argh!!! How do I do this all in python????” Well, fear not, because I went through the same painful experience and have some tips on one way to solve things. First of all, you will definitely need numpy and quite possibly cmath (if it is not imported as part of numpy),

> import numpy
> import cmath

After that you will be happy because then complex matrices will be supported. Let's build the simplest such matrix, which is the identity matrix of $SE(2)$. It is:

> r = cmath.rect(1,0)
> t = complex(0,0)
> g = numpy.mat([[r, t], [0, 1]], dtype=complex)
> g

If I am not mistaken, you should now have the identity matrix! That's boring, you say; you want a real matrix. OK then, let's have the robot rotated by $30^\circ$ and translated by 5 units in $x$ and $-7$ in $y$:

> r = cmath.rect(1,numpy.pi/6)
> t = complex(5,-7)
> g = numpy.mat([[r, t], [0, 1]], dtype=complex)
> g

I recommend using cmath.rect for the rotation complex number because its arguments are the magnitude of the complex number (for rotations, it is always 1) and the phase of the complex number (which is the orientation we wish to achieve). Taking the inverse, you should see that the top-left entry of the matrix gets conjugated:

> gInverse = numpy.linalg.inv(g)
> gInverse

If you can get the above working, then that's pretty much all you will need. Matrix multiplication should work how you think it does:

> gErr = gCurrInv * gDes

or maybe even

> gErr = numpy.linalg.inv(gCurr) * gDes

There is actually a computationally faster way of doing it that exploits the fact that $r$ is a rotation, but that's for another day, or maybe even for you to explore on your own.

