Planar Bi-Rotor Helicopter

This project is quite similar to the Planar Ducted Fan project; however, the control actuation is a bit more direct since there are two counter-torquing (pitch) thrusters. I haven't fully analyzed the system, but I believe this one is not non-minimum phase, thanks to the two counter-torquing thrusters (as opposed to the vectored thrust of the ducted fan). The two control inputs act independently, whereas in the ducted fan the control forces are coupled through the two inputs. A nice experimental version of this project is the Quanser 3DoF Helicopter, where you can see more directly the relationship of the physical instantiation to a planar ducted fan.

Equations of Motion

Defining $q = (x, y)^T$ to be the center of mass of the bi-rotor, and $\theta$ to be its orientation, \begin{equation} \begin{split} m \ddot q & = -d \dot q + e_2(\theta) \left[ \begin{matrix} 1 & 1 \end{matrix} \right] \vec f - m \vec g \\ J \ddot \theta & = r \left[ \begin{matrix} 1 & -1 \end{matrix} \right] \vec f \end{split} \end{equation} where the force vector $\vec f$ is expressed in the body frame of the bi-rotor and is generated by the two fans' thrust. Each coordinate of $\vec f \in \mathbb{R}^2$ can be independently controlled (but can never really go negative due to the nature of the fan blades used). Though not used in the equation above, $R$ is a planar rotation matrix, while $e_2$ is the vector given by the second column of the rotation matrix, \begin{equation} R(\theta) = \left[ \begin{matrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{matrix} \right] \quad \text{and} \quad e_2(\theta) = \left[ \begin{matrix} -\sin(\theta) \\ \cos(\theta) \end{matrix} \right]. \end{equation}
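For simulation purposes, the dynamics above translate almost directly into code. Below is a minimal Python sketch of the nonlinear model; the function name, state ordering, and default parameter values (taken from the table in the next section) are illustrative choices, not requirements.

<code python>
import numpy as np

def birotor_dynamics(t, state, f, m=6.0, d=0.1, g=9.8, r=0.25, J=0.1425):
    """Nonlinear planar bi-rotor dynamics.

    state = [x, y, theta, xdot, ydot, thetadot]
    f     = [f1, f2], the two (non-negative) fan thrusts in the body frame.
    """
    x, y, theta, xdot, ydot, thetadot = state
    e2 = np.array([-np.sin(theta), np.cos(theta)])   # second column of R(theta)

    # Translational dynamics: m qddot = -d qdot + e2 (f1 + f2) - m g e_y
    qddot = (-d * np.array([xdot, ydot]) + e2 * (f[0] + f[1])
             - m * np.array([0.0, g])) / m

    # Rotational dynamics: J thetaddot = r (f1 - f2)
    thetaddot = r * (f[0] - f[1]) / J

    return np.array([xdot, ydot, thetadot, qddot[0], qddot[1], thetaddot])
</code>

Fed to a standard ODE integrator (e.g., scipy.integrate.solve_ivp), constant thrusts of $mg/2$ per fan at $\theta = 0$ should hold a hover, which is a quick sanity check on the implementation.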

Parameters and Limits

Parameter   Value
$m$         6 kg
$d$         0.1 kg/sec
$g$         9.8 m/sec$^2$
$r$         0.25 m
$J$         0.1425 kg m$^2$

As noted earlier, typical fans cannot produce negative thrust, thus the coordinates of $\vec f$ are limited to non-negative values. When designing the baseline trajectories, try to have them result in forces not exceeding six times the baseline force needed to hover under ideal circumstances. The actual adaptive, closed-loop design might violate this upper limit, but hopefully not by too much (say, less than ten times). It is best to design trajectories that do not hit these limits in the closed loop. More aggressive trajectories are best saved for future self-study.
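Interpreting the baseline as the per-fan thrust needed to hover at $\theta = 0$, the numbers work out as in the short sketch below (the variable names are illustrative).

<code python>
import numpy as np

m, g = 6.0, 9.8
f_hover = m * g / 2.0        # per-fan hover thrust at theta = 0: 29.4 N
f_plan_limit = 6 * f_hover   # design trajectories to stay under ~176 N per fan
f_hard_limit = 10 * f_hover  # closed-loop excursions should stay under ~294 N

def saturate(f):
    """Clamp commanded thrusts to the non-negative, bounded range."""
    return np.clip(f, 0.0, f_hard_limit)
</code>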

Activities


These activities sketch what should be done, but do not necessarily indicate what should be turned in. By now you should have seen enough solution postings and possibly also read enough papers on control that you should have an understanding of what should be turned in. This would include the mathematics or derivations, the synthesized controller, and sufficient plots to demonstrate that the task objective was met. Discussion of outcomes should be included.

Step 1: Linear Adaptive Control

Linear Control Model

Linearize the equations of motion about hover at $\theta = 0$ (radians), so that the linearized state and control inputs have an equilibrium at $\theta = 0$ with zero linearized control input. Establish performance specifications for the system and design a linear feedback controller that stabilizes the system and meets the performance specifications.
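One possible route (a sketch, not the required design) is to form the linearized $(A, B)$ pair about hover and use LQR for the baseline gains; the weights below are placeholders to be tuned against your own performance specifications.

<code python>
import numpy as np
from scipy.linalg import solve_continuous_are

m, d, g, r, J = 6.0, 0.1, 9.8, 0.25, 0.1425

# Linearization about hover (theta = 0, f1 = f2 = m*g/2), with state
# z = [x, y, theta, xdot, ydot, thetadot] and input u = [df1, df2]
# measured as deviations from the hover thrusts.
A = np.array([[0, 0,  0,    1,    0, 0],
              [0, 0,  0,    0,    1, 0],
              [0, 0,  0,    0,    0, 1],
              [0, 0, -g, -d/m,    0, 0],
              [0, 0,  0,    0, -d/m, 0],
              [0, 0,  0,    0,    0, 0]])
B = np.array([[0,    0],
              [0,    0],
              [0,    0],
              [0,    0],
              [1/m,  1/m],
              [r/J, -r/J]])

# Placeholder LQR weights; tune to meet your performance specifications.
Qw = np.diag([10, 10, 5, 1, 1, 1])
Rw = 0.01 * np.eye(2)
P = solve_continuous_are(A, B, Qw, Rw)
K = np.linalg.solve(Rw, B.T @ P)       # u = -K z stabilizes the hover
</code>

The $-g$ entry couples pitch into horizontal acceleration; the equilibrium thrusts $f_1 = f_2 = mg/2$ have already been absorbed, so $u$ here is the deviation from that baseline.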

Model Mismatch and Adaptive Control

Modify some of the model parameters of the system by 10-20% and compare the outcomes under a traditional linear controller. Incorporate linear adaptive control and show the resulting outcomes. Confirm how well the adaptive system meets the performance specifications relative to the static controller.

Here, you should consider two cases. One is the initial case, where tracking a particular reference signal will lead to an adaptation transient. Simulate as normal; then pick a time post-transient and grab the adaptive gains from the output signals. Prepare a second simulation that starts with these gains. The second simulation acts like a second or subsequent deployment post-adaptation. Show that the system better meets the performance objectives. It is your responsibility to create appropriate signals to track. One should be a simple step response for point-to-point stabilization (e.g., translation from one hover point to another hover point), and the other should be some trajectory in space.
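One way to organize the two runs, assuming your simulation logs the adaptive gains over time (the `simulate` helper and its `t` / `adaptive_gains` fields below are hypothetical placeholders for your own code):

<code python>
import numpy as np

# First deployment: adapt starting from the nominal initial gains.
sol1 = simulate(initial_gains, t_final=40.0)          # hypothetical helper

# Grab the adaptive gains at a time well past the transient, e.g. t = 30 s.
idx = np.searchsorted(sol1.t, 30.0)
learned_gains = sol1.adaptive_gains[idx]

# Second (post-adaptation) deployment: warm-start from the learned gains and
# compare the tracking error against the first run.
sol2 = simulate(learned_gains, t_final=40.0)
</code>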

Considerations: When the model parameters are unknown, that uncertainty influences the baseline control $u_0$. Since the baseline control is a constant, it is possible to adapt it as a form of structured uncertainty. By including a constant bias term of $1$ in the $\Phi(x)$ regressor, the baseline control $u_0$ can be adapted online to recover its value. Doing so will improve the performance of the linear DMRAC controller. It is highly recommended to add this term; otherwise, the adaptive gains will grow to large values in order to compensate for the constant gravity term that was incorrectly “cancelled.”
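A minimal sketch of this idea, reusing the $A$, $B$, $K$ from the linearization sketch above and one common MRAC sign convention (the adaptation rate and Lyapunov weight are illustrative):

<code python>
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Reference model from the baseline design; identity Lyapunov weight as a placeholder.
Am = A - B @ K
P_lyap = solve_continuous_lyapunov(Am.T, -np.eye(6))   # Am^T P + P Am = -I

Gamma = 10.0 * np.eye(7)                                # adaptation rate (tune)

def regressor(z):
    """State regressor with a constant bias term appended, so the constant
    baseline/gravity offset can be adapted as structured uncertainty."""
    return np.append(z, 1.0)                            # Phi(z) in R^7

def theta_hat_dot(z, z_ref):
    """MRAC update dTheta/dt = Gamma Phi(z) e^T P B, for the convention
    u = -K z - Theta_hat^T Phi(z) with matched uncertainty + Theta*^T Phi(z)."""
    e = z - z_ref
    return Gamma @ np.outer(regressor(z), e @ P_lyap @ B)
</code>

With the bias term in place, the last row of $\hat\Theta$ converges toward the constant correction to $u_0$, rather than the state-feedback gains inflating to fight the gravity mismatch.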

Likewise, for better tracking, a feedforward term can be added that looks a lot like a reference signal. The feedforward term to add is the acceleration needed to track the desired height of the bi-rotor; it is recovered from the second derivative of the desired height (the $y$ coordinate). Adding this term will remove some of the gain/phase differences between the desired trajectory and the model reference trajectory, which means the bi-rotor will better track the desired trajectory. This feedforward term can have an adaptive effectiveness gain that is tuned during online operation. It's not necessary, but you'll find that it enhances performance.
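As a concrete (hypothetical) example, if the desired height is a sinusoid, the feedforward acceleration is available in closed form and splits roughly evenly across the two fans near hover:

<code python>
import numpy as np

y0, amp, omega = 1.0, 0.5, 0.4            # hypothetical desired-height profile

def y_des(t):
    return y0 + amp * np.sin(omega * t)

def ydd_des(t):
    """Feedforward acceleration: second derivative of the desired height."""
    return -amp * omega**2 * np.sin(omega * t)

# Near hover, the per-fan feedforward thrust increment is roughly m * ydd_des(t) / 2,
# optionally scaled by an adaptive effectiveness gain.
</code>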

Step 2: Nonlinear Controller

There are several ways to work out this problem employing nonlinear methods that lead to a linear control structure for adaptation. What I mean here is that there are control laws that look a lot like the cancellable-nonlinearity form covered in class. Nothing too fancy needs to be done beyond knowing how to manipulate vector equalities using matrix algebra. You just have to work out one version or approach. Get the baseline controller for it, then augment it as an adaptive system. Provide the expected plots for the different cases. Make sure to do a repeat run with the learned parameters to show improved long-term performance.

Version #1: Traditional with Nonlinearities in Span of Control

The first approach, and perhaps the most sensible, is to simply think of the problem as a traditional linear problem with nonlinear terms to be cancelled. Of course, the $B$ matrix will have a rotation matrix in it. The good thing is that the structure of the rotation matrix is known. Working out the algebra to get it into the appropriate structure is not too bad as long as you keep things in matrix form and don't overly complicate the formulation. Once in the proper form, the control law pops right out.

Version #2: Virtual Point Approach Exploiting Differential Flatness

This system, like many other engineered mobile vehicles, has the property of differential flatness. There exists a transformation of state that provides full control of the position variables if control over the orientation is relaxed. The transformation is to consider control of a virtual point somewhere ahead of the robot. The transformation of coordinates is \begin{equation} q' = q + \lambda e_2(\theta) \end{equation} for the bi-rotor system. This virtual point lies somewhere above the bi-rotor when viewed in the coordinate frame of the bi-rotor. The analog for wheeled vehicles is to consider tracking a point in front of the wheeled vehicle (as opposed to above). The constant value for $\lambda$ should be neither too large nor too small: too large will limit the trajectories that can be tracked with sensible controls, while too small will lead to weird behavior for certain trajectories or set points. Usually tracking of the trajectory will implicitly define the $\theta$ trajectory through the tracking control equations. Note that the orientation is not transformed. In some instances it is not even controlled, whereas in others a weak or long-time-constant control is given.

Apply the transformation and derive the new equations of motion. You should see that the nonlinearities can be factored such that the control and the nonlinearities plus gravity are all multiplied by the same $\theta$-dependent $B$ matrix. It is possible to transform the inputs so that they enter more directly and the nice structure of the overall system is made apparent. Just make sure to transform the controls back to their original form after computing the control law. Take advantage of the relationship between $e_1(\theta)$ and $e_2(\theta)$ when computing the transformed equations of motion.
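As a hedged sketch of the intermediate algebra (using only the definitions above, together with $\dot e_2(\theta) = -e_1(\theta)\,\dot\theta$ and $\dot e_1(\theta) = e_2(\theta)\,\dot\theta$), \begin{equation} \begin{split} \dot q' &= \dot q - \lambda\, e_1(\theta)\,\dot\theta, \\ \ddot q' &= \ddot q - \lambda\, e_2(\theta)\,\dot\theta^2 - \lambda\, e_1(\theta)\,\ddot\theta \\ &= -\tfrac{d}{m}\,\dot q - \vec g - \lambda\, e_2(\theta)\,\dot\theta^2 + \left[ \begin{matrix} \tfrac{1}{m} e_2(\theta) - \tfrac{\lambda r}{J} e_1(\theta) & \tfrac{1}{m} e_2(\theta) + \tfrac{\lambda r}{J} e_1(\theta) \end{matrix} \right] \vec f, \end{split} \end{equation} where the $2\times 2$ input matrix, whose columns are built from $e_1(\theta)$ and $e_2(\theta)$, is invertible whenever $\lambda \neq 0$. That invertibility is what makes the virtual point fully actuated in position.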

Adaptive Controller

Once the system has been derived and controlled, it should look like a linear system with nonlinear terms in the span of the input space. We have covered such a system. Augment the existing controller with a model-reference adaptive controller. Show that it behaves well under the same random change of parameters from Step #1.

Step 3: Nonlinear Control Lyapunov Approach [Not Done]

The PWMN Controller

Performance Reference Adaptive Control

Report Considerations


Because there are more control signals relative to the typical homework assignments, there is a greater diversity of trajectories to follow. Make sure that you create reference trajectories reflecting this diversity. Trajectories applied should include regulation (moving to a new, feasible set point) and tracking.

The report should include the appropriate controlled equations of motion for the different realizations (linear, nonlinear, transformed nonlinear if done, etc.). It should cover the controller design and control synthesis for the static and adaptive cases. If using adaptive controllers covered in class, then only their setup and final adaptive laws need to be covered; this should be the case if following the Steps. If attempting an adaptive structure slightly different from what was covered in the lectures, then its derivation should be included; this most likely won't be the case unless you do not follow the Steps. Just as in the homeworks, attention should be paid to highlighting how the static controller fails to perform under incorrect parameter estimates. Otherwise, the Final Deliverable assignment item should cover what's needed.

When possible, try to stick to canonical control forms. What's the simplest way of presenting the equations? Writing it all out coordinate-wise is not sensible; in fact, almost all of the time it is the worst thing that can be done, since it hides any underlying structure or geometry and is not necessarily any more informative.

References


There are some references below whose equations might differ from the ones above. There are a few models for this system; the model chosen depends on what the authors wish to demonstrate.

  • The Quanser site for the 2-DoF helicopter has useful material.
  • R. Olfati-Saber, “Near-Identity Diffeomorphisms and Exponential $\epsilon$-Tracking and $\epsilon$-Stabilization of First-Order Nonholonomic $SE(2)$ Vehicles,” American Control Conference, pp. 4690-4695, 2002.
  • R. Olfati-Saber, “Exponential $\epsilon$-Tracking and $\epsilon$-Stabilization of Second-Order Nonholonomic $SE(2)$ Vehicles Using Dynamic State Feedback,” American Control Conference, pp. 3961-3967, 2002.

