  
==== Equations of Motion ====
Defining $q = (x, y)^T$ to be the center of mass of the ducted fan, and $\theta$ to be the orientation of the ducted fan, the most general form of the equations is
\begin{equation}
\begin{split}
  m \ddot q & = -d \dot q + R(\theta) \left[ \begin{matrix} 0 & 0 \\ 1 & 1 \end{matrix} \right] \vec f - m \vec g \\
  J \ddot \theta & = r \left[ \begin{matrix} 1 & -1 \end{matrix} \right] \vec f
\end{split}
\end{equation}
while a more specialized form is
\begin{equation}
\begin{split}
\end{equation}
where the force vector $f$ is in the body frame of the ducted fan, generated from the two fans' thrust.  Each coordinate of $f \in \mathbb{R}^2$ can be independently controlled (but can never really go negative due to the nature of the fan blades used).
Regarding the functions of $\theta$ used in the equations of motion, $R$ is a planar rotation matrix and $e_2$ is the vector generated from its second column,
\begin{equation}
  R(\theta) = \left[ \begin{matrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{matrix} \right]
\end{equation}
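For reference, the general form above maps directly to a state-space function. The following is a minimal Python sketch, not the provided course code; the mass, damping, inertia, and moment-arm values are placeholders rather than the project's parameters.

<code python>
import numpy as np

def birotor_dynamics(t, x, f, m=1.0, J=0.05, d=0.1, r=0.25, g=9.81):
    """Planar birotor dynamics in the general form above.

    State x = [qx, qy, theta, qx_dot, qy_dot, theta_dot];
    f = [f1, f2] are the two (non-negative) fan thrusts.
    Parameter values here are placeholders, not the course values.
    """
    theta = x[2]
    qdot = x[3:5]
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    B = np.array([[0.0, 0.0],
                  [1.0, 1.0]])            # both thrusts act along the body e_2 axis
    qddot = (-d * qdot + R @ B @ np.asarray(f)) / m - np.array([0.0, g])
    thetaddot = (r / J) * (f[0] - f[1])   # differential thrust produces torque
    return np.concatenate([x[3:6], qddot, [thetaddot]])
</code>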
As noted earlier, typical fans cannot produce negative thrust forces; thus the coordinates of $f$ are limited to non-negative values. When designing the baseline trajectories, try to have them result in forces not exceeding six times the baseline force needed to hover under ideal circumstances.  The actual adaptive, closed-loop design might violate this upper limit, but hopefully not by too much (say, less than ten times). It is best to design trajectories that do not hit these limits in the closed loop. More aggressive trajectories are best saved for future self-study.
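As a quick sanity check on these limits: at a level hover the two thrusts must together cancel gravity, so each fan supplies roughly $m g / 2$. A small Python sketch with placeholder values (not the project's parameters):

<code python>
m, g = 1.0, 9.81              # placeholder mass and gravity, not the course values

f_hover = m * g / 2.0         # per-fan thrust at a level hover (f1 = f2, theta = 0)
f_traj_limit = 6 * f_hover    # aim to keep baseline-trajectory forces below this
f_loop_limit = 10 * f_hover   # closed-loop excursions should stay under this

print(f"hover thrust per fan: {f_hover:.2f}")
print(f"trajectory design limit: {f_traj_limit:.2f}, closed-loop bound: {f_loop_limit:.2f}")
</code>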
  
===== Implementation =====
  
Functional code stubs for the implementation are provided in the {{ ECE6554:projects:birotor.zip | birotor zipfile}}.  They implement a constant control signal that most definitely fails to do the job, but they provide enough structure to complete the project.  Comments in the code should help you realize the necessary improvements.
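For orientation only, here is what a constant-control simulation looks like in the same spirit as the stubs. This is an illustrative Python sketch that reuses the `birotor_dynamics` function from the sketch in the Equations of Motion section; it is not the code in the zipfile.

<code python>
import numpy as np
from scipy.integrate import solve_ivp

m, g = 1.0, 9.81                            # placeholder parameters
f_const = np.array([m * g / 2, m * g / 2])  # constant thrust at the nominal hover value

x0 = np.zeros(6)                            # at rest, level, at the origin
sol = solve_ivp(lambda t, x: birotor_dynamics(t, x, f_const),
                (0.0, 10.0), x0, max_step=0.01)

# With exact parameters and no disturbance this holds a hover; any mismatch
# drifts away, which is why a constant control signal fails to do the job.
print(sol.y[:3, -1])                        # final (x, y, theta)
</code>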
====== Activities ======
-----------------------
Here, you should consider two cases.  One is the initial case, where tracking a particular reference signal will lead to an adaptation transient.  Simulate as normal; however, pick a time post-transient and grab the adaptive gains from the output signals.  Prepare a second simulation that starts with these gains.  The second simulation would act like a second or subsequent deployment post-adaptation.  Show that the system better meets the performance objectives. It is your responsibility to create appropriate signals to track. One should be a simple step response of point-to-point stabilization (e.g., translation from one hover point to another hover point), and the other should be some trajectory in space.
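The two-case workflow (a cold start, then a warm start that reuses the post-transient gains) can be prototyped on a toy problem before wiring it into the birotor simulation. The sketch below uses a scalar direct MRAC example, not the birotor model or the provided stubs, purely to illustrate sampling the adapted gains and restarting from them.

<code python>
import numpy as np

def simulate_mrac(kx0, kr0, T=40.0, dt=0.001, gamma=2.0):
    """Scalar direct MRAC toy example: plant xdot = a*x + b*u with a, b
    unknown to the controller (sign of b known).  Illustrates the cold-start
    vs. warm-start comparison only; it is not the birotor dynamics."""
    a, b = 1.0, 3.0                      # true (unknown) plant parameters
    am, bm = -4.0, 4.0                   # reference model: xm_dot = am*xm + bm*r
    n = int(T / dt)
    t = np.arange(n) * dt
    x, xm, kx, kr = 0.0, 0.0, kx0, kr0
    e_hist = np.zeros(n)
    kx_hist = np.zeros(n)
    kr_hist = np.zeros(n)
    for i in range(n):
        r = 1.0 if (t[i] % 20.0) < 10.0 else -1.0   # square-wave reference
        e = x - xm                                   # tracking error
        u = kx * x + kr * r
        kx += dt * (-gamma * e * x)                  # adaptive laws, sign(b) = +1
        kr += dt * (-gamma * e * r)
        x  += dt * (a * x + b * u)                   # forward-Euler integration
        xm += dt * (am * xm + bm * r)
        e_hist[i], kx_hist[i], kr_hist[i] = e, kx, kr
    return t, e_hist, kx_hist, kr_hist

# Case 1: cold start with zero gains; expect an adaptation transient.
t, e1, kxh, krh = simulate_mrac(kx0=0.0, kr0=0.0)
i_post = np.searchsorted(t, 30.0)        # pick a post-transient sample time
# Case 2: warm start with the learned gains, emulating a repeat deployment.
_, e2, _, _ = simulate_mrac(kx0=kxh[i_post], kr0=krh[i_post])
print("cold-start RMS error:", np.sqrt(np.mean(e1 ** 2)))
print("warm-start RMS error:", np.sqrt(np.mean(e2 ** 2)))
</code>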
  
**Considerations:** When the model parameters are unknown, that influences the baseline control $u_0$.  Since the baseline control is a constant, it is possible to adapt this parameter as a form of structured uncertainty. Treating the $\Phi(x)$ function to have the constant bias term $1$, the baseline control $u_0$ can be adapted online to recover its value.  Doing so will improve the performance of the linear DMRAC controller. It is highly recommended to add this term. Otherwise, the adaptive gains will increase to large values in order to compensate for the constant gravity term that was incorrectly "cancelled." If this paragraph doesn't make sense, then implement both versions so you can see what happens yourself.
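In code, the bias term just means appending a constant $1$ to the regressor so the matched constant mismatch (the imperfectly cancelled gravity/baseline term) gets its own adaptive weight. Below is a minimal sketch using the standard matched-uncertainty MRAC update structure; the names and shapes are illustrative, not those of the provided stubs.

<code python>
import numpy as np

def regressor(x):
    """Regressor Phi(x) with a constant bias entry appended, so the
    adaptation can absorb the constant mismatch in the baseline control u0
    instead of pushing the state-feedback gains to large values."""
    return np.concatenate([np.asarray(x), [1.0]])

def update_weights(W, x, e, P, B, Gamma, dt):
    """One Euler step of the usual matched-uncertainty update
    W_dot = Gamma * Phi(x) * e^T * P * B, with u_ad = -W^T Phi(x).
    P solves the reference-model Lyapunov equation and Gamma is the
    adaptation-gain matrix.  Shapes: W is (p, m), Phi is (p,), e is (n,)."""
    Phi = regressor(x)
    W_dot = Gamma @ np.outer(Phi, e) @ P @ B
    return W + dt * W_dot
</code>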
  
Likewise, for better tracking, a feedforward term can be added that looks a lot like a reference signal. The feedforward term to add is the acceleration needed to track the desired height of the birotor. It is recovered from the second derivative of the desired height (the $y$ coordinate). Adding this term will remove some of the gain/phase differences between the desired trajectory and the model reference trajectory, which means that the birotor will better track the desired trajectory.  This feedforward term can have an adaptive effectiveness gain that is tuned during online operation. It's not necessary, but you'll find that it enhances performance. Overall, these considerations start to expose the differences between vanilla D-MRAC and its actual implementation on a robotic control system with a trajectory-tracking task.
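One way to realize this is to differentiate the desired height analytically and inject the resulting acceleration through the thrust channel, scaled by an effectiveness gain that may be fixed or adapted. A small illustrative sketch follows; the trajectory, names, and parameter values are placeholders.

<code python>
import numpy as np

def y_desired(t):
    """Placeholder desired height trajectory y_d(t)."""
    return 1.0 + 0.25 * np.sin(0.5 * t)

def y_desired_ddot(t):
    """Second derivative of y_d(t), computed analytically here."""
    return -0.25 * 0.5 ** 2 * np.sin(0.5 * t)

def feedforward_thrust(t, m=1.0, kappa=1.0):
    """Feedforward thrust increment for tracking the desired height.

    kappa is the effectiveness gain (fix it at 1, or adapt it online);
    splitting it evenly across the two fans assumes near-level flight
    (theta ~ 0).  Parameter values are placeholders."""
    df = kappa * m * y_desired_ddot(t)
    return np.array([df / 2.0, df / 2.0])
</code>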
  
===== Step 2: Nonlinear Controller =====
Apply the transformation and derive the new equations of motion.  You should see that the nonlinearities can be factored such that the control and the nonlinearities plus gravity are all multiplied by the same $\theta$-dependent $B$ matrix.  It is possible to transform the inputs so that they enter more directly and the nice structure of the overall system is made apparent. Just make sure to transform the controls back to their original form after computing the control law. Take advantage of the relationship between $e_1(\theta)$ and $e_2(\theta)$ when computing the transformed equations of motion.
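The bookkeeping for the input transformation can be kept in one small helper: compute the control law in the transformed input, then invert the map to recover the two fan thrusts before applying them. The map below (total thrust and torque) is only an example that is consistent with the general equations of motion; substitute whatever $\theta$-dependent transformation falls out of your derivation.

<code python>
import numpy as np

def to_fan_thrusts(v, r=0.25):
    """Map a transformed control v = [total thrust, torque] back to the two
    fan thrusts f = [f1, f2].  This particular map is only an example; use
    the input transformation produced by your own derivation.  The moment
    arm r is a placeholder value."""
    M = np.array([[1.0,  1.0],      # f1 + f2      -> total thrust
                  [  r,   -r]])     # r*(f1 - f2)  -> torque
    f = np.linalg.solve(M, np.asarray(v, dtype=float))
    return np.clip(f, 0.0, None)    # fans cannot produce negative thrust
</code>

The clipping at the end is a practical safeguard; if it activates often, the trajectory is too aggressive for the thrust limits discussed earlier.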
  
===== Step 3: Adaptive Controller =====
  
Once the system has been derived and controlled, it should look like a linear system with nonlinear terms in the span of the input space.  We have covered such a system. Augment the existing controller with a model-reference adaptive controller. Show that it behaves well under the same random change of parameters from Step #1. Demonstrate improved performance relative to the static nonlinear controller design.  If possible, compare the performance of the linear vs. nonlinear adaptive controllers (it is best to quantify).
  
//Note:// One thing to be careful about is the initial transient experienced by the adaptive controllers.  When comparing, it is usually best to separate the transient time period from the non-transient period. Also, one could argue that the most important comparison is with the repeated run, since that would be the normal use case for an adaptive controller. Thus, limiting the comparison to the repeat-run outcomes is acceptable and may even be preferred for simplicity of the data collected and analyzed.
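For the quantitative comparison, a helper that evaluates tracking-error metrics only over a chosen window (post-transient, or the repeat run only) keeps the analysis consistent across controllers. An illustrative sketch:

<code python>
import numpy as np

def tracking_metrics(t, err, t_start=0.0):
    """RMS and peak tracking error over t >= t_start.

    t is the simulation time vector, err the tracking-error history with one
    row per error component, and t_start the time after which to evaluate
    (e.g., past the adaptation transient).  Apply the same window to every
    controller being compared."""
    keep = np.asarray(t) >= t_start
    e = np.atleast_2d(err)[:, keep]
    return {"rms": np.sqrt(np.mean(e ** 2, axis=1)),
            "peak": np.max(np.abs(e), axis=1)}
</code>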
  
  
====== Report Considerations ======