Adaptive Control + Robust Control - Does it work?


























I have a curious question! Is it possible to design a robust controller for a system by using algorithms and system identification, i.e., adaptive control + robust control?



I know there is a lot of math involved, but is it possible? For example, I create an algorithm which identifies the system and then produces a transfer function. With that transfer function, the algorithm designs an $H_{\infty}$ controller with integral action. It would be like a PI controller with guaranteed stability margins and autotuning.
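To make the idea concrete, here is a rough sketch of that pipeline in Python, assuming a hypothetical discrete-time first-order plant; batch least squares stands in for the identification step, and a simple IMC-style PI rule stands in for the $H_{\infty}$ synthesis with integral action (all numbers are illustrative choices, not a prescribed method):

```python
import numpy as np

# Hypothetical discrete-time first-order plant y[k] = a*y[k-1] + b*u[k-1];
# a and b are unknown to the algorithm and recovered by batch least squares.
a_true, b_true, Ts = 0.9, 0.1, 0.1
rng = np.random.default_rng(0)

# --- Step 1: collect input/output data and identify the model ---
N = 500
u = rng.standard_normal(N)                       # excitation signal
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k-1] + b_true * u[k-1] + 1e-3 * rng.standard_normal()
Phi = np.column_stack([y[:-1], u[:-1]])          # regressors [y[k-1], u[k-1]]
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

# --- Step 2: design the controller from the identified model ---
# Stand-in for the H-infinity-with-integral-action step: an IMC-style PI rule
# for the equivalent continuous first-order plant, with closed-loop time
# constant tau_c chosen by the designer.
tau = -Ts / np.log(a_hat)            # plant time constant
Kdc = b_hat / (1.0 - a_hat)          # plant dc gain
tau_c = 0.5
Kc, Ti = tau / (Kdc * tau_c), tau    # PI gains (autotuned from the model)
print(f"a={a_hat:.3f}, b={b_hat:.3f} -> PI: Kc={Kc:.2f}, Ti={Ti:.2f}")
```

In a real design, the PI rule above would be replaced by a proper $H_{\infty}$ synthesis on the identified model.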










  • Neural network control is used when you don't want to explicitly model nonlinearities in the system. Instead, you have the network learn those terms over time as it also controls the system. You often have to combine this with robust control for it to work appropriately. A good paper is "Robust neural-network control of rigid-link electrically driven robots" by C. Kwan, F.L. Lewis, and D.M. Dawson. Most other papers by F.L. Lewis in this area are very good as well.
    – Preston Roy
    Aug 27 '17 at 15:27












  • OK. I assume that it is possible to create a robust controller with autotuning. Thank you for the answer.
    – Daniel Mårtensson
    Aug 27 '17 at 16:29












  • You can look into Retrospective Cost Adaptive Control, which circumvents the system-identification problem by requiring very little model information. In most cases it tends asymptotically to an $H_{\infty}$-optimal LQG controller.
    – SZN
    Aug 28 '17 at 2:25


















Tags: algorithms, control-theory, optimal-control, linear-control, system-identification






asked Aug 26 '17 at 20:48 by Daniel Mårtensson






























2 Answers
A very effective implementation of this is a combined sliding mode control (SMC) and adaptive control law. This combines the resistance to uncertainties that SMC offers with the reduction in uncertainties that results from adaptive control. Keep in mind, though, that this is of limited practical use. A quick example is detailed as follows:



Consider the system of the form $$ x^{(n)} = \sum_{i = 1}^{m}\gamma_i\phi_i(\textbf{x}) + bu\tag{1}$$



where $\textbf{x} = \begin{bmatrix}x & \dot{x} & \cdots & x^{(n-1)}\end{bmatrix}^T$ is the state vector, $\phi_i(\textbf{x})$ are some known functions, and $\gamma_i, b$ are constant unknown parameters.



It is assumed that the sign of $b$ is known and, here, positive. Now consider, as in all SMC formulations, a Hurwitz-stable linear combination of the system error. Here I will take the combination suggested in Slotine and Li, $$s = \left(\dfrac{d}{dt} + \lambda\right)^{n - 1}e$$



where $e = x - x_d$, given some suitably differentiable desired trajectory $x_d$.



Now if we take the derivative of $s$, $$\dot{s} = \sum_{i=1}^{m}\gamma_i\phi_i(\textbf{x}) + bu - v\tag{2}$$
where $v = x_d^{(n)} - \lambda x^{(n-1)} - \dots$



Using parameter estimates, let $u = \hat{b}^{-1}(v - K\,\text{sgn}(s)) - \sum_{i=1}^{m}\hat{\gamma_i}\phi_i$, where $K > 0$.



Substituting this in (2) and rearranging, we get
$$\dot{s} = \sum_{i=1}^{m}(\gamma_i - b\hat{\gamma_i})\phi_i + (b\hat{b}^{-1} -1)(v - K\,\text{sgn}(s)) - K\,\text{sgn}(s)\tag{3}$$



Consider the Lyapunov function $ V = \frac{1}{2}\left(s^2 + b^{-1}(b\hat{b}^{-1} -1)^2 + b^{-1}\sum_{i=1}^{m}(\gamma_i - b\hat{\gamma_i})^2\phi_i\right) $. Taking its derivative,



$$\begin{align}& \dot{V} = s\dot{s} + (b\hat{b}^{-1} -1)\dot{\hat{b}}^{-1} - \sum_{i=1}^{m}(\gamma_i - b\hat{\gamma_i})\dot{\hat{\gamma_i}} \\ & \dot{V} = (b\hat{b}^{-1} - 1)(vs - K|s| + \dot{\hat{b}}^{-1})
+ \sum_{i=1}^{m}(s\phi_i - \dot{\hat{\gamma_i}})(\gamma_i - b\hat{\gamma_i}) - K|s|\tag{from (3)}\end{align}$$



Choosing adaptation laws as
$$\begin{align}&\dot{\hat{b}}^{-1} = K|s| - vs \\ & \dot{\hat{\gamma_i}} = s\phi_i\end{align}$$



we are ensured that $\dot{V} < 0$ and hence we have global asymptotic stability.



As is obvious, to use the above formulation the dynamics of the system must be linear in the unknown parameters. If that is not the case, one can make use of adaptive fuzzy control (direct or indirect) to solve the problem. This again can be coupled with sliding mode control.
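As a rough illustration only, here is a minimal simulation sketch of this combined adaptive + sliding-mode idea for the scalar case $n = 1$, $m = 1$, simplified so that only $\gamma$ is unknown while $b$ is known (which avoids the $\hat{b}^{-1}$ update); the plant, the nonlinearity $\phi(x) = x^2$, and all gains are hypothetical choices, not part of the derivation above:

```python
import numpy as np

# Scalar adaptive sliding-mode sketch: plant x' = gamma*phi(x) + b*u with
# gamma unknown, b known and positive, phi(x) = x**2 a known nonlinearity.
# Sliding variable s = e = x - x_d (n = 1), adaptation gamma_hat' = eta*s*phi,
# control u = (x_d' - gamma_hat*phi - lam*s - K*sgn(s)) / b.
dt, T = 1e-3, 10.0
gamma_true, b = 2.0, 1.0          # gamma_true is hidden from the controller
lam, K, eta = 2.0, 1.0, 5.0       # sliding gain, switching gain, adaptation gain

x, gamma_hat = 0.5, 0.0
for k in range(int(T / dt)):
    t = k * dt
    xd, xd_dot = np.sin(t), np.cos(t)     # desired trajectory and its derivative
    phi = x**2
    s = x - xd
    u = (xd_dot - gamma_hat * phi - lam * s - K * np.sign(s)) / b
    gamma_hat += dt * eta * s * phi       # adaptation law
    x += dt * (gamma_true * phi + b * u)  # Euler step of the plant

print(f"tracking error at t={T}: {x - np.sin(T):.2e}, gamma_hat = {gamma_hat:.2f}")
```

With the switching gain $K$ present, the tracking error converges even while $\hat{\gamma}$ has not; in practice the $\text{sgn}$ term is usually smoothed (for example with a saturation or $\tanh$) to avoid chattering.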






answered Jun 27 at 10:13 by BabaYaga
Yes, it is.



The idea of indirect adaptive control, often called certainty equivalence, is to estimate parameters in real time and design a controller for the estimated plant model as if they were the real plant parameters. The control design method is left open: robust control is a possibility, as are many others. The resulting controller is unlikely to be a PI controller because 1) it is adaptive, thus nonlinear and time-varying; and 2) $H_\infty$ controllers are most often of high order.
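A minimal sketch of that certainty-equivalence loop, assuming a hypothetical scalar plant and a plain pole-placement design step in place of a full robust synthesis (every number here is an illustrative choice):

```python
import numpy as np

# Certainty-equivalence sketch: scalar plant y[k+1] = a*y[k] + b*u[k] with a, b
# unknown.  At every step the parameters are re-estimated by recursive least
# squares and the controller is re-derived by pole placement, treating the
# current estimates as if they were the true parameters.
a, b = 0.95, 0.5                  # true (unknown) plant parameters
p_des, r = 0.6, 1.0               # desired closed-loop pole and setpoint

theta = np.array([0.5, 1.0])      # initial guesses for [a, b]
P = 100.0 * np.eye(2)             # RLS covariance
y, u = 0.0, 0.0
rng = np.random.default_rng(1)
for k in range(200):
    y_next = a * y + b * u + 1e-3 * rng.standard_normal()   # plant + small noise
    phi = np.array([y, u])                                  # regressor
    g = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + g * (y_next - phi @ theta)              # RLS parameter update
    P = P - np.outer(g, phi @ P)
    a_hat, b_hat = theta
    b_safe = max(b_hat, 1e-2)     # b is assumed positive; guard the division
    # design as if (a_hat, b_hat) were exact: place the closed-loop pole at p_des
    u = ((p_des - a_hat) * y_next + (1.0 - p_des) * r) / b_safe
    y = y_next

print(f"a_hat={theta[0]:.3f}, b_hat={theta[1]:.3f}, y={y:.3f} (setpoint r={r})")
```

In this toy loop the estimates need not converge to the true $(a, b)$; the certainty-equivalence controller only needs them to be good enough for the design step, which is exactly the point, and the risk, that the caveat below describes.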



Caveat: adaptive controllers tend to be somewhat complex, and their performance in practice depends very much on the prior knowledge available about the plant. It is not realistic to expect good behavior if your initial estimates are far off from reality. More complicated methods such as adaptive neural networks and model-free controllers, as suggested in the comments, place even more stringent requirements on prior knowledge and controller training; otherwise their performance is even more pitiful.






answered Aug 28 '17 at 10:04 by Pait