This project is quite similar to the Planar Ducted Fan project; however, the control actuation is more direct since there are two counter-torquing (pitch) thrusters. I haven't fully analyzed the system, but I believe this one is not a non-minimum phase system, owing to the two counter-torquing thrusters (as opposed to the vectored-thrust characteristics of the ducted fan). The two control inputs are independent, whereas in the ducted fan the control forces are coupled through the two inputs. A nice experimental version of this project is the Quanser 3-DOF Helicopter, where you can see more directly the relationship of the physical instantiation to a planar ducted fan.
Defining $q = (x, y)^T$ to be the center of mass of the ducted fan, and $\theta$ to be its orientation, the most general form of the equations is \begin{equation} \begin{split} m \ddot q & = -d \dot q + R(\theta) \left[ \begin{matrix} 0 & 0 \\ 1 & 1 \end{matrix} \right] \vec f - m \vec g \\ J \ddot \theta & = r \left[ \begin{matrix} 1 & -1 \end{matrix} \right] \vec f \end{split} \end{equation} while a more specialized form is \begin{equation} \begin{split} m \ddot q & = -d \dot q + e_2(\theta) \left[ \begin{matrix} 1 & 1 \end{matrix} \right] \vec f - m \vec g \\ J \ddot \theta & = r \left[ \begin{matrix} 1 & -1 \end{matrix} \right] \vec f \end{split} \end{equation} where the force vector $\vec f$ is in the body frame of the ducted fan and is generated from the two fans' thrust. Each coordinate of $\vec f \in \mathbb{R}^2$ can be independently controlled (but can never really go negative due to the nature of the fan blades used). Regarding the functions of $\theta$ used in the equations of motion, $R$ is a planar rotation matrix, $e_1$ is its first column, and $e_2$ is its second column, \begin{equation} R(\theta) = \left[ \begin{matrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{matrix} \right] \quad \textrm{where} \quad e_1(\theta) = \left[ \begin{matrix} \cos(\theta) \\ \sin(\theta) \end{matrix} \right] \quad \text{and} \quad e_2(\theta) = \left[ \begin{matrix} -\sin(\theta) \\ \cos(\theta) \end{matrix} \right]. \end{equation}
Parameter | Value
---|---
$m$ | 6 kg
$d$ | 0.1 kg/sec
$g$ | 9.8 m/sec$^2$
$r$ | 0.25 m
$J$ | 0.1425 kg m$^2$
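As a sanity check on the equations of motion, the specialized form above can be simulated directly. The sketch below (the function and variable names are my own, not from the project code stubs) integrates the dynamics under the nominal parameters and verifies that hovering at the per-fan thrust $mg/2$ produces no motion:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nominal parameters from the table above.
m, d, g, r, J = 6.0, 0.1, 9.8, 0.25, 0.1425

def birotor_dynamics(t, x, f):
    """State x = (q, qdot, theta, thetadot) with q = (x, y).
    f = (f1, f2) are the two fan thrusts (body frame, specialized model)."""
    q, qd, th, thd = x[0:2], x[2:4], x[4], x[5]
    e2 = np.array([-np.sin(th), np.cos(th)])   # second column of R(theta)
    qdd = (-d * qd + e2 * (f[0] + f[1]) - m * np.array([0.0, g])) / m
    thdd = r * (f[0] - f[1]) / J
    return np.concatenate([qd, qdd, [thd, thdd]])

# Hover check: each fan carries half the weight, so nothing should accelerate.
f_hover = np.array([m * g / 2, m * g / 2])
x0 = np.zeros(6)
sol = solve_ivp(birotor_dynamics, (0.0, 1.0), x0, args=(f_hover,), max_step=0.01)
```

A quick check like this catches sign errors in the gravity and thrust terms before any controller is layered on top.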
As noted earlier, typical fans cannot produce negative thrust forces, thus the coordinates of $f$ are limited to non-negative values. When designing the baseline trajectories, try to have them result in forces not exceeding six times the baseline force needed to hover under ideal circumstances. The actual adaptive, closed-loop design might violate this upper limit, but hopefully not by too much (say, less than ten times). It is best to design trajectories that do not hit these limits in closed loop. More aggressive trajectories are best saved for future self-study.
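These limits are easy to check numerically along a planned trajectory. A minimal helper, assuming the hover baseline of $mg/2$ per fan (the function name and interface are illustrative, not part of the provided code):

```python
import numpy as np

m, g = 6.0, 9.8
f_hover = m * g / 2          # per-fan thrust needed to hover: 29.4 N

def check_thrusts(f_traj, factor=6.0):
    """f_traj: (N, 2) array of per-fan thrusts along a planned trajectory.
    Returns True when all thrusts stay non-negative and at or below
    factor * f_hover (factor = 6 for baseline plans, 10 for closed loop)."""
    f_traj = np.asarray(f_traj)
    return bool(np.all(f_traj >= 0.0) and np.all(f_traj <= factor * f_hover))
```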
Functional code stubs for the implementation are provided in the birotor zipfile. They implement a constant control signal that most definitely fails to do the job, but provide enough structure to complete the project. Comments in the code should help to realize the necessary improvements.
These activities sketch what should be done, but do not necessarily indicate what should be turned in. By now you should have seen enough solution postings and possibly also read enough papers on control that you should have an understanding of what should be turned in. This would include the mathematics or derivations pertinent to an adaptive control system augmentation, the synthesized controller, and sufficient plots to demonstrate that the task objective was met. Discussion of outcomes should be included.
Linearize the equations of motion about hover at $\theta = 0$ (radians), so that the linearized state and control inputs have an equilibrium at $\theta = 0$ with zero linearized control input. Establish performance specifications for the system and design a linear feedback controller that will stabilize the system and meet the performance specifications.
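One possible way to carry out the linearization and a baseline design is sketched below with an LQR synthesis. The state ordering, the weight matrices, and all names are my own choices under the nominal parameters, not a prescribed solution; tune the weights against your own specifications.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, d, g, r, J = 6.0, 0.1, 9.8, 0.25, 0.1425

# State z = (x, y, theta, xdot, ydot, thetadot); input u = thrust deviations
# (df1, df2) from the hover values f1 = f2 = m*g/2.
A = np.zeros((6, 6))
A[0:3, 3:6] = np.eye(3)
A[3, 2] = -g                # tilting by theta redirects the hover thrust sideways
A[3, 3] = -d / m
A[4, 4] = -d / m
B = np.zeros((6, 2))
B[4, :] = 1.0 / m           # total thrust deviation accelerates y
B[5, :] = [r / J, -r / J]   # differential thrust torques theta

# One choice of LQR weights; adjust against your performance specifications.
Q = np.diag([10.0, 10.0, 1.0, 1.0, 1.0, 1.0])
R = 0.01 * np.eye(2)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # u = -K z stabilizes the linearization
```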
Modify some of the model parameters of the system by 10–20% and compare the outcomes under a traditional linear controller. Incorporate linear adaptive control and show the resulting outcomes. Compare how well the adaptive system meets the performance specifications versus the static controller.
Here, you should consider two cases. One is the initial case, where tracking a particular reference signal will lead to an adaptation transient. Simulate as normal; however, pick a time post-transient and grab the adaptive gains from the output signals. Prepare a second simulation that starts with these gains. The second simulation would act like a second or subsequent deployment post-adaptation. Show that the system better meets the performance objectives. It is your responsibility to create appropriate signals to track. One should be a simple step response for point-to-point stabilization (e.g., translation from one hover point to another hover point), and the other should be some trajectory in space.
Considerations: When the model parameters are unknown, that influences the baseline control $u_0$. Since the baseline control is a constant, it is possible to adapt the parameter as a form of structured uncertainty. By including a constant bias term $1$ in the $\Phi(x)$ function, the baseline control $u_0$ can be adapted online to recover its value. Doing so will improve the performance of the linear DMRAC controller. It is highly recommended to add this term. Otherwise, the adaptive gains will increase to large values in order to compensate for the constant gravity term that was incorrectly “cancelled.” If this paragraph doesn't make sense, then implement both versions so you can see what happens yourself.
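A minimal sketch of what including the bias term can look like, assuming the standard MRAC gradient law $\dot\Theta = -\Gamma\,\Phi(x)\,e^T P B$ with a forward-Euler update (the names, shapes, and interface are illustrative only):

```python
import numpy as np

def phi(x):
    """Regressor with a constant bias term appended; the bias entry lets the
    adaptive law recover the constant (gravity-related) part of u_0."""
    return np.concatenate([x, [1.0]])

def adapt_step(Theta, x, e, PB, Gamma, dt):
    """One Euler step of the gradient law Theta_dot = -Gamma phi(x) (e^T P B).
    Theta: (n+1, m) adaptive gains; e: (n,) model-reference state error;
    PB: precomputed P @ B, shape (n, m); Gamma: (n+1, n+1) adaptation rate."""
    return Theta - dt * Gamma @ np.outer(phi(x), e @ PB)
```

Without the appended $1$ in `phi`, the same update has no channel through which a constant offset can be absorbed, which is what drives the remaining gains to large values.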
Likewise, for better tracking some feedforward term can be added that looks a lot like a reference signal. The feedforward term to add is the acceleration needed to track the desired height of the birotor. It is recovered from the second derivative of the desired height (the $y$ coordinate). Adding this term will remove some of the gain/phase differences between the desired trajectory and the model reference trajectory, which means that the birotor will better track the desired trajectory. This feedforward term can have an adaptive effectiveness gain that is tuned during online operation. It's not necessary, but you'll find that it enhances performance. Overall, these considerations start to expose implementation differences between vanilla D-MRAC and actual implementation on a robotic-control system with a trajectory tracking task.
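A hedged sketch of computing this feedforward term numerically, assuming the desired height is available as uniformly sampled values (an analytic second derivative is preferable when the trajectory is known in closed form; the function name is my own):

```python
import numpy as np

m = 6.0  # kg, from the parameter table

def height_feedforward(y_des, dt):
    """Second derivative of the desired height via repeated finite
    differencing (np.gradient), scaled by the mass to give the feedforward
    total-thrust term m * y_ddot. Values near the endpoints are less
    accurate because of the one-sided differences used there."""
    ydd = np.gradient(np.gradient(y_des, dt), dt)
    return m * ydd

# For y_des(t) = 0.5 * t**2 the exact feedforward is m * 1.0 = 6.0 N
# away from the trajectory endpoints.
t = np.arange(0.0, 2.0, 0.01)
ff = height_feedforward(0.5 * t**2, 0.01)
```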
There are several ways to work out this problem employing nonlinear methods that lead to a linear control structure for adaptation. What I mean here is that there are control laws that look a lot like the cancellable-nonlinearity form covered in class. Nothing too fancy needs to be done beyond knowing how to manipulate vector equalities using matrix algebra. You just have to work out one version or approach, either from the options below or based on an approach that is more natural to you (so long as it is correct). Get the baseline controller for it, then augment it as an adaptive system. Provide the expected plots for the different cases. Make sure to do a repeat run with learnt parameters to show improved (long-term) performance.
Though there are many solution approaches, working them out is best done in partitioned coordinates, much like the way the equations of motion are provided above. The final control laws and adaptive augmentation follow by combining the controllers for the partitioned sub-states. The one exception to this advice is when deriving a baseline linear controller, in which case looking at the full system is best. While the objective should be to have the extra terms appear in the span of the control, some solutions may not work out that way. There may be terms that cannot be cancelled. In that case, make sure to note as much and properly handle the parts that can be taken care of.
The first approach, and perhaps the most sensible, is to simply think of the problem as a traditional linear problem with nonlinear terms to be cancelled. Of course, the $B$ part of the matrix will have a rotation matrix in it. The good thing is that the structure of the rotation matrix is known, thus adaptation is only really needed for the other parts. Working out the algebra to get it in the appropriate structure is not too bad as long as you try to keep things in matrix form and don't overly complicate the formulation. Once in the proper form, the control law pops right out.
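One hedged way to see this structure under the specialized model (my notation; verify against your own derivation) is to change inputs to the total thrust and the torque, $F = f_1 + f_2$ and $\tau = r(f_1 - f_2)$, so that \begin{equation} m \ddot q = -d \dot q + e_2(\theta) F - m \vec g, \qquad J \ddot \theta = \tau. \end{equation} The translational input matrix is then $\frac{1}{m} e_2(\theta)$, whose $\theta$-dependence is known exactly; the parametric uncertainty sits only in $m$, $d$, $J$, $r$, and the gravity term, which is what makes the cancellable-nonlinearity structure work out.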
This system and many other engineered or man-made mobile vehicles have the property of differential flatness. There exists a transformation of state that will provide full control of the position variables if control over the orientation is relaxed. The transformation is to consider control of a virtual point somewhere ahead of the robot. The transformation of coordinates is \begin{equation} q' = q + \lambda e_2(\theta) \end{equation} for the bi-rotor system. This virtual point lies somewhere above the bi-rotor when viewed in the coordinate frame of the bi-rotor. The analog for wheeled vehicles is to consider tracking a point in front of the wheeled vehicle (as opposed to above). The constant value for $\lambda$ should be neither too large nor too small. Too large will limit the trajectories that can be tracked with sensible controls, while too small will lead to weird behavior for certain trajectories or set points. Usually tracking of the trajectory will implicitly define the $\theta$ trajectory through the tracking control equations. Note that the orientation is not transformed. In some instances it is not even controlled, whereas in others a weak or long-time-constant control is given.
Apply the transformation and derive the new equations of motion. You should see that the nonlinearities can be factored such that the control and the nonlinearities plus gravity are all multiplied by the same $\theta$-dependent $B$ matrix. It is possible to transform the inputs so that they enter more directly and the nice structure of the overall system is made apparent. Just make sure to transform the controls back to their original form after computing the control law. Take advantage of the relationship between $e_1(\theta)$ and $e_2(\theta)$ when computing the transformed equations of motion.
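Under the stated assumptions, the algebra can work out as follows (a sketch; verify the signs against your own derivation). Using $\dot e_1(\theta) = \dot\theta\, e_2(\theta)$ and $\dot e_2(\theta) = -\dot\theta\, e_1(\theta)$, differentiating $q' = q + \lambda e_2(\theta)$ twice gives \begin{equation} \ddot q' = \ddot q - \lambda \left( \ddot\theta\, e_1(\theta) + \dot\theta^2\, e_2(\theta) \right), \end{equation} and substituting the specialized equations of motion (with $J \ddot\theta = r(f_1 - f_2)$) yields \begin{equation} m \ddot q' = -d \dot q - m \vec g + R(\theta) \left[ \begin{matrix} -\frac{m \lambda r}{J} (f_1 - f_2) \\ (f_1 + f_2) - m \lambda \dot\theta^2 \end{matrix} \right], \end{equation} since $R(\theta) = \left[\, e_1(\theta) \;\; e_2(\theta) \,\right]$. The controls and the remaining nonlinearity now enter through the same $\theta$-dependent matrix, as claimed.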
Once the system has been derived and controlled, it should look like a linear system with nonlinear terms in the span of the input space. We have covered such a system. Augment the existing controller with a model-reference adaptive controller. Show that it behaves well under the same random change of parameters from Step #1. Demonstrate improved performance relative to the static nonlinear controller design. If possible, compare performance of linear vs nonlinear adaptive controllers (best to quantify).
Note: One thing to be careful about is the initial transient experienced by the adaptive controllers. When comparing, it is usually best to separate the transient time period from the non-transient period. Also, one could argue that the most important comparison is with the repeated run since that would be the normal use case for an adaptive controller. Thus, limiting comparison to the repeat run outcomes is acceptable and may even be preferred for simplicity of data collected and analyzed.
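A small helper for the kind of post-transient comparison described above (the function name and interface are my own, not from the project code):

```python
import numpy as np

def rms_error(t, err, t_transient):
    """RMS of the tracking error restricted to t >= t_transient, so the
    adaptation transient is excluded from the performance comparison."""
    mask = np.asarray(t) >= t_transient
    e = np.asarray(err)[mask]
    return float(np.sqrt(np.mean(e**2)))
```

The same function applied with `t_transient = 0` recovers the whole-run metric, so both comparisons come from one code path.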
Because there are more control signals relative to the typical homework assignments, there is a greater diversity of trajectories to follow. Make sure that you create reference trajectories reflecting this diversity. Trajectories applied should include regulation (moving to a new, feasible set point) and tracking.
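Two illustrative reference generators covering the cases above, one for regulation and one for tracking (the shapes, amplitudes, and names are arbitrary choices, not required trajectories):

```python
import numpy as np

def step_setpoint(t, q0, q1, t_step):
    """Regulation reference: hover at q0, then command a hover at q1 once
    t reaches t_step. Returns an (N, 2) array of position references."""
    t = np.atleast_1d(t)
    return np.where(t[:, None] < t_step, np.asarray(q0), np.asarray(q1))

def figure_eight(t, ax=1.0, ay=0.5, w=0.5):
    """Tracking reference: a Lissajous (figure-eight) path in (x, y)."""
    t = np.atleast_1d(t)
    return np.stack([ax * np.sin(w * t), ay * np.sin(2 * w * t)], axis=1)
```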
The report should include the appropriate controlled equations of motion for the different realizations (linear, nonlinear, transformed nonlinear if done, etc). It should cover the controller design and control synthesis for static and adaptive cases. If using adaptive controllers covered in class, then only their setup and final adaptive laws should be covered. This should be the case if following the Steps. If attempting an adaptive structure slightly different from what was covered in the lectures, then its derivation should be included; this most likely won't be the case unless you do not follow the Steps. Just like in homeworks, attention should be paid to highlighting how the static controller fails to perform under incorrect parameter estimates. Otherwise, the Final Deliverable assignment item should cover what's needed.
When possible, try to stick to canonical control forms. What's the simplest way of providing the equations? Writing it all out coordinate-wise is not sensible; in fact almost all of the time it is the worst thing that can be done since it will hide any underlying structure or geometry and not necessarily be any more informative. As noted earlier, combining everything into one big simplified full-state system may not be the best either. Doing something in between with partitioned sub-systems might provide the best description and highlight cleanly the structure of the control.
There are some references below whose equations might differ from the ones above; there are a few models for this fan, and the model chosen depends on what the authors wish to demonstrate.