Learning Notes on High-Order Model-Free Adaptive Iterative Learning Control

Author: fx2h | Published 2020-10-08 18:20

Reference: "High-Order Model-Free Adaptive Iterative Learning Control of Pneumatic Artificial Muscle With Enhanced Convergence," IEEE Transactions on Industrial Electronics.

The control law is built on a pseudo partial derivative (PPD) and a criterion function, with the plant linearized by compact-form dynamic linearization (CFDL). The PPD is estimated with a high-order scheme, and its initial value is set to 10. The PPD update and the control law are as follows:
$$
\hat{\phi}_{k}(t)=
\begin{cases}
\hat{\phi}_{k-1}(t)+\dfrac{\eta_{k,t}\,\Delta u_{k-1}(t)}{\mu+\left|\Delta u_{k-1}(t)\right|^{2}}\left(\Delta y_{k-1}(t+1)-\hat{\phi}_{k-1}(t)\,\Delta u_{k-1}(t)\right), & 2 \leq k<m \\[2ex]
\dfrac{\Delta u_{k-1}(t)\,\Delta y_{k-1}(t+1)}{\mu+\left|\Delta u_{k-1}(t)\right|^{2}}+\dfrac{\mu\,\eta_{k,t}}{\mu+\left|\Delta u_{k-1}(t)\right|^{2}} \sum_{i=1}^{m} \alpha_{i}\,\hat{\phi}_{k-i}(t), & k \geq m
\end{cases}
$$
$$
\hat{\phi}_{k}(t)=\hat{\phi}_{0}(t), \quad \text{if } \left|\hat{\phi}_{k}(t)\right| \leq \varepsilon \text{ or } \left|\Delta u_{k}(t)\right| \leq \varepsilon
$$
$$
u_{k}(t)=u_{k-1}(t)+\frac{\rho_{k,t}\,\hat{\phi}_{k}(t)}{\lambda+\left|\hat{\phi}_{k}(t)\right|^{2}}\, e_{k-1}(t+1)
$$
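Read literally, the update laws above can be sketched in Python (the snippets in this post are MATLAB; this is only my re-expression, with the function names chosen for illustration and the defaults taken from the parameter values listed further down):

```python
def update_ppd(phi_hist, du_prev, dy_prev, k, m=3,
               eta=0.6, mu=1.0, alpha=(0.4, 0.4, 0.2)):
    """High-order PPD estimate at one time step t.

    phi_hist holds previous iterations' estimates at this t:
    phi_hist[-i] is iteration k-i. For k >= m it must contain
    at least m entries.
    """
    denom = mu + du_prev ** 2
    if k < m:
        # projection-type update used for the first few iterations
        return phi_hist[-1] + eta * du_prev / denom * (
            dy_prev - phi_hist[-1] * du_prev)
    # high-order update blending the last m estimates with weights alpha_i
    return (du_prev * dy_prev / denom
            + mu * eta / denom
            * sum(a * phi_hist[-i] for i, a in enumerate(alpha, 1)))

def reset_ppd(phi, du, eps=0.01, phi0=10.0):
    # resetting mechanism: fall back to the initial value when the
    # estimate or the input increment becomes too small
    return phi0 if abs(phi) <= eps or abs(du) <= eps else phi

def control_update(u_prev, phi, e_prev, rho=0.85, lam=1.0):
    # learning control law driven by the previous iteration's error
    return u_prev + rho * phi / (lam + phi ** 2) * e_prev
```

Note that the step-size factors η and ρ carry subscripts (k, t) in the paper; here they are treated as constants, matching the parameter list below.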
The simulation system given in the paper is:
$$
y(t+1)=
\begin{cases}
\dfrac{y(t)}{1+y^{2}(t)}+u^{3}(t), & 0 \leq t \leq 50 \\[1.5ex]
\dfrac{y(t)\,y(t-1)\,y(t-2)\,u(t-1)\left(y(t-2)-1\right)+\alpha(t)\,u(t)}{1+y^{2}(t-1)+y^{2}(t-2)}, & 50<t \leq 100
\end{cases}
$$
The desired trajectory is:
$$
y_{d}(t+1)=
\begin{cases}
0.5 \times(-1)^{\operatorname{round}(t/10)}, & 0 \leq t \leq 30 \\
0.5 \sin(t\pi/10)+0.3 \cos(t\pi/10), & 30<t \leq 70 \\
0.5 \times(-1)^{\operatorname{round}(t/10)}, & 70<t \leq 100
\end{cases}
$$
The controller parameters are set as:

epsilon = 0.01;
lambda = 1; %0.6
rho = 0.85;  %1
mu = 1;    %1
eta = 0.6;  %0.6
alpha_1 = 0.4;
alpha_2 = 0.4;
alpha_3 = 0.2;

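Putting the pieces together, a minimal self-contained Python re-implementation of the whole simulation might look as follows (the original code is MATLAB; α(t) in the plant is not specified in the post, so it is taken as 1 here, and all initial conditions are assumed zero):

```python
import numpy as np

# controller parameters from the list above
eps, lam, rho, mu, eta = 0.01, 1.0, 0.85, 1.0, 0.6
alpha_w = [0.4, 0.4, 0.2]   # high-order weights alpha_1..alpha_3
m = 3                        # estimator order
phi0 = 10.0                  # PPD initial value
T = 100                      # time steps per iteration
K = 60                       # number of ILC iterations

def plant(y0, y1, y2, u0, u1, t):
    # simulation system from the paper; alpha(t) assumed = 1 here
    if t <= 50:
        return y0 / (1.0 + y0 ** 2) + u0 ** 3
    return (y0 * y1 * y2 * u1 * (y2 - 1.0) + 1.0 * u0) / (1.0 + y1 ** 2 + y2 ** 2)

def desired(t):
    # desired trajectory y_d(t+1); round half up to mimic MATLAB's round
    if t <= 30 or t > 70:
        return 0.5 * (-1.0) ** int(t / 10 + 0.5)
    return 0.5 * np.sin(t * np.pi / 10) + 0.3 * np.cos(t * np.pi / 10)

yd = np.array([desired(t) for t in range(T)])
u = np.zeros((K, T))
y = np.zeros((K, T + 1))          # y[k, t+1] is the output at time t+1
phi = np.full((K, T), phi0)

for k in range(1, K):
    for t in range(T):
        du = u[k-1, t] - u[k-2, t] if k >= 2 else 0.0
        dy = y[k-1, t+1] - y[k-2, t+1] if k >= 2 else 0.0
        denom = mu + du ** 2
        if k < m:                 # low-order PPD update
            phi[k, t] = phi[k-1, t] + eta * du / denom * (dy - phi[k-1, t] * du)
        else:                     # high-order PPD update
            phi[k, t] = (du * dy / denom
                         + mu * eta / denom
                         * sum(a * phi[k-i, t] for i, a in enumerate(alpha_w, 1)))
        if abs(phi[k, t]) <= eps or abs(du) <= eps:
            phi[k, t] = phi0      # resetting mechanism
        e_prev = yd[t] - y[k-1, t+1]
        u[k, t] = u[k-1, t] + rho * phi[k, t] / (lam + phi[k, t] ** 2) * e_prev
    # run the plant for one full pass with the updated input profile
    for t in range(T):
        y1 = y[k, t-1] if t >= 1 else 0.0
        y2 = y[k, t-2] if t >= 2 else 0.0
        u1 = u[k, t-1] if t >= 1 else 0.0
        y[k, t+1] = plant(y[k, t], y1, y2, u[k, t], u1, t)

print(np.max(np.abs(yd - y[K-1, 1:])))  # max tracking error at the last iteration
```

This is only a sketch under the stated assumptions, intended to make the iteration structure explicit: the PPD and input are updated along the time axis using the previous iteration's data, then the plant is simulated for the whole pass.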
The simulation results are shown below. With the iteration count set to only a few dozen, my results do not look as good as those in the paper.
After 60 iterations:

60.jpg

After 300 iterations:

300.jpg

After 600 iterations:

600.jpg

After 1000 iterations:

1000.jpg
The code is available in my GitHub repository.

Original post: https://www.haomeiwen.com/subject/avyhpktx.html