My units might be off for the friction, but the values are fine.

===== Implementation =====

Functional code stubs for the implementation are provided in the {{ ECE6554:projects:invpendcart.zip | invpendcart zipfile}}. They implement a constant control signal that most definitely fails to do the job, but provide enough structure to complete the project. Missing is a reasonable implementation of a single-layer Gaussian radial basis function neural network. That's left to be coded up as a class and properly used as part of the Inverted Pendulum on a Cart neuro-adaptive controller.
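As a starting point, a minimal single-layer GRBF network class might look like the sketch below. It is written in Python/NumPy purely for illustration (the project stubs are Matlab), and the class name, constructor arguments, and shared-bandwidth choice are assumptions, not the zipfile's actual interface.

```python
import numpy as np

class GRBFNetwork:
    """Minimal single-layer Gaussian RBF network (illustrative sketch only).

    The name and interface are assumptions; the zipfile stubs define their own.
    """

    def __init__(self, centers, bandwidth):
        # Store centers as (n_neurons, n_inputs); a 1-D array of centers
        # is treated as n_neurons scalar-input neurons.
        self.centers = np.asarray(centers, dtype=float).reshape(len(centers), -1)
        self.bandwidth = float(bandwidth)            # shared Gaussian width
        self.weights = np.zeros(len(self.centers))   # output weights w

    def phi(self, x):
        """Vector of Gaussian activations Phi(x) for one input point x."""
        x = np.atleast_1d(np.asarray(x, dtype=float))
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.bandwidth ** 2))

    def evaluate(self, x):
        """Network output w^T Phi(x)."""
        return self.weights @ self.phi(x)
```

The weights are the part the adaptive law will update online; the centers and bandwidth are fixed design choices.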
====== Activities ======
----------------------
===== Step 2: Implement Single-Layer GRBF Neural Network for Function Approximation =====

To be fleshed out, but the main idea is to first approximate a real function like $\sin(x)$ or $\mathrm{sinc}(x)$ or some polynomial function over a fixed interval. The second is to approximate a function of two variables, which can be a thin plate spline or some made-up function of two variables with a non-trivial but not too complex surface plot. Once you get the hang of that, you should be able to implement the approximation for arbitrary dimensions; it just gets a bit more complex. Then you'll be ready for the adaptive control version.
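The first exercise can be sketched as a batch least-squares fit of the output weights. This NumPy sketch (the project itself uses Matlab) fits $\sin(x)$ over $[-\pi, \pi]$; the neuron count and bandwidth are arbitrary illustrative picks, not prescribed values.

```python
import numpy as np

# Centers spread over the approximation interval; bandwidth on the order
# of the center spacing. Both are tuning knobs -- these values are guesses.
centers = np.linspace(-np.pi, np.pi, 25)
sigma = 0.3

def phi(x):
    # (n_samples, n_neurons) matrix of Gaussian activations
    return np.exp(-(np.atleast_1d(x)[:, None] - centers[None, :]) ** 2
                  / (2.0 * sigma ** 2))

# Batch least-squares fit of the output weights on training samples.
x_train = np.linspace(-np.pi, np.pi, 200)
y_train = np.sin(x_train)
w, *_ = np.linalg.lstsq(phi(x_train), y_train, rcond=None)

# Check the fit at points the fit never saw.
x_test = np.linspace(-np.pi, np.pi, 101)
err = np.max(np.abs(phi(x_test) @ w - np.sin(x_test)))
```

The same structure carries over to two-variable targets; only the center layout and the distance computation inside `phi` change.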
For completeness, implement a neuro-adaptive system for a scalar system as noted in //Tip 1//. Follow through on the exploration of //Tip 2//. If this is your first time digging deep into function approximation with Matlab, then you'll probably have to check out //Tip 3//.
===== Step 3: Neuro-Adaptive Controller =====

==== Tip 1: Neuro-Adaptive Updates ====

When implementing new things, it is important to start from a position of strength. Do not try to do it all in one step given that there are many smaller steps that you most likely have never done. Break the system down into smaller pieces. From the first step, you should already have functioning adaptive controllers on only the linear system, plus running on the true nonlinear dynamics (but with a linear reference model). Naturally, this means you also have the non-adaptive versions working, as those should have been an initial testing step prior to incorporating adaptation. This is what //Step 1// aims to achieve.

Assuming that you know how to implement a neural network, the new part is the neuro-adaptive component. Otherwise you've got two new parts: the neural network and the neuro-adaptive controller. Continuing, rather than try to implement the more complex version of a neuro-adaptive controller for a multi-state system, it is better to implement neuro-adaptive estimation on a simpler system as the only adaptive component. In fact, it is best to first try out the neuro-adaptive controller on a first-order scalar system. The one below is a great option:
\begin{equation}
\dot x = f(x) + u(x; \alpha) = f(x) + k x - \alpha^T \Phi(x)
\end{equation}
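A closed-loop run of this scalar system can be sketched in a few lines. This is not the project's prescribed design: the update law $\dot\alpha = \Gamma\,\Phi(x)\,x$ is one standard gradient-type choice, and the "unknown" dynamics $f(x)=\sin(x)$, the gain $k=-2$ (negative so that the $kx$ term is stabilizing), and plain Euler integration are all illustrative assumptions. NumPy stands in for Matlab.

```python
import numpy as np

# Scalar neuro-adaptive loop for  xdot = f(x) + k*x - alpha^T Phi(x).
f = np.sin                         # "unknown" dynamics the network must learn
k = -2.0                           # stabilizing feedback gain (k < 0 here)
centers = np.linspace(-2.0, 2.0, 11)
sigma = 0.5                        # Gaussian bandwidth (illustrative)
Gamma = 5.0                        # adaptation gain (illustrative)

def Phi(x):
    return np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2))

dt, T = 1e-3, 10.0
x = 1.0                            # initial condition away from the origin
alpha = np.zeros_like(centers)     # adaptive weights start at zero
for _ in range(int(T / dt)):
    u = k * x - alpha @ Phi(x)                 # control with NN cancellation
    x = x + dt * (f(x) + u)                    # Euler step of the plant
    alpha = alpha + dt * Gamma * Phi(x) * x    # gradient-type update law
```

Running this, the state settles toward the origin while the weights stay bounded; getting this loop working first makes the cart-pendulum version far less mysterious.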
This is just one tip. Overall, if you are not doing it already, you need to learn how to break down a problem into digestible bits. The Step 1 and Step 2 breakdown does that, but you can and should go even further yourself when resolving this project.

==== Tip 2: Neural Network Approximation ====

Suppose that you haven't ever implemented a neural network. Well, then before even starting the neuro-adaptive part, it is mission critical to understand how to construct approximations of functions using a neural network outside of an adaptive controller. That means taking arbitrary nonlinear functions that you like and building neural networks that approximate the functions well. Play around with the neural network parameters: the number of neurons, the bandwidth of the neurons, etc. Go from functions of a real variable to vector input functions, one dimension at a time. See how the increase in dimension increases the complexity of the network (or rather the number of neurons).
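One way to play with these parameters is to fit the same target with increasing neuron counts and watch the error shrink. The target function, interval, and bandwidth rule (width tied to center spacing) in this NumPy sketch are arbitrary choices made for illustration.

```python
import numpy as np

# Fit one target with increasing neuron counts and record the fit error.
target = lambda x: np.sin(3 * x) * np.exp(-x ** 2)   # made-up smooth target
x_train = np.linspace(-2.0, 2.0, 400)

def fit_error(n_neurons):
    centers = np.linspace(-2.0, 2.0, n_neurons)
    sigma = 1.5 * (centers[1] - centers[0])          # width tied to spacing
    A = np.exp(-(x_train[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(A, target(x_train), rcond=None)
    return np.max(np.abs(A @ w - target(x_train)))

errors = [fit_error(n) for n in (5, 10, 20, 40)]
```

Repeating the sweep while also varying the bandwidth rule makes the trade-off between neuron count, width, and conditioning very visible.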

Overall, there is an exploration path that you need to follow if you are going to succeed at this.

==== Tip 3: Implementing the Function Approximator ====

One difficult thing for many newbies to Matlab is writing compact and efficient code. Efficient usually means avoiding ''for'' loops as much as possible. Chances are your neural network will have on the order of 100 to 10,000 //neurons//, so you really want to leverage Matlab's built-in functions for iterating over matrices. You want to do the same for the center creation. Some Matlab function shout outs are:
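The original list of Matlab functions is not reproduced here. The vectorization idea itself looks like the following NumPy sketch: build the center grid once, then evaluate every activation at every sample point with one broadcasted expression instead of nested ''for'' loops (''meshgrid'' has a direct Matlab counterpart). Sizes and bandwidth are arbitrary.

```python
import numpy as np

sigma = 0.4   # shared Gaussian bandwidth (illustrative)

# Centers on a regular 2-D grid: cartesian product of per-axis vectors.
g = np.linspace(-1.0, 1.0, 15)
cx, cy = np.meshgrid(g, g)
centers = np.column_stack([cx.ravel(), cy.ravel()])   # (225, 2)

# Evaluate Phi for a whole batch of inputs at once: broadcasting produces
# a (n_samples, n_neurons) squared-distance matrix with no explicit loop.
X = np.random.default_rng(0).uniform(-1, 1, size=(500, 2))
d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
Phi = np.exp(-d2 / (2.0 * sigma ** 2))                # (500, 225)
```

With 10,000 neurons the loop-free version is the difference between a simulation that runs in seconds and one that runs in minutes.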
----------------------

Even though the neuro-adaptive controller was done first, a sequencing that makes sense goes from no adaptation to matched adaptation and eventually to neuro-adaptation. I might be mistaken, but the neuro-adaptive part should have some ability to correct for the unmatched dynamics since those dynamics are influencing the error. To support each of these parts, the report should include the appropriate controlled equations of motion for the different realizations (linear, nonlinear, matched+unmatched, etc.). It should cover the controller design and control synthesis for the static and adaptive cases. If attempting an adaptive structure slightly different from what was covered in the lectures, then its derivation should be included. If using adaptive controllers covered in class, then only their setup and final adaptive laws should be covered. Trajectories applied should include regulation (moving to a new, feasible set point) and tracking. Repeat runs are a must to see how the adaptation influences future performance (use the adaptive parameters from the previous run as the initial conditions for the next run). Just like in the homeworks, attention should be paid to highlighting how the static controller fails to perform under incorrect parameter estimates. Otherwise, the //Final Deliverable// assignment item should cover what's needed.

====== References ======
----------------------

No references given. This project is self-contained given the lecture notes and the tips. The Hovakimyan and the Lavretsky and Wise references also work.

------