
Mod-13 Lec-32 Lyapunov Theory -- II

By nptelhrd

Summary

## Key takeaways

- **Variable Gradient Method Bypasses V Guessing**: One method is the variable gradient method: instead of guessing a Lyapunov function V, you select its gradient g(X) with adjustable parameters, then integrate to obtain V. [01:18], [03:28]
- **Curl Condition Ensures Gradient Generates Scalar V**: The selected gradient vector must satisfy the curl condition: the matrix ∂g/∂X must be symmetric; otherwise no scalar V can produce that gradient. [05:09]
- **Krasovskii Uses V = f^T f as Lyapunov Candidate**: Krasovskii's theorem proposes the Lyapunov candidate V = f^T f, leading to Vdot = f^T (A + A^T) f, and asymptotic stability follows if the matrix F = A + A^T is negative definite. [24:13]
- **Generalized Krasovskii Introduces P and Q Matrices**: A generalized version introduces positive‑definite matrices P and Q such that A^T P + P A + Q ≤ 0, relaxing the need for F to be strictly negative definite. [32:19]
- **LaSalle Theorem Upgrades Semi‑Definiteness to Asymptotic Stability**: If V is positive definite, Vdot ≤ 0, and Vdot vanishes only at the equilibrium, the origin is asymptotically stable even when Vdot is only semi‑definite. [45:21]
- **Invariant Sets and Limit Sets Define System Convergence**: An invariant set contains all future trajectories starting from any point inside it, while a limit set is the set of points to which trajectories converge as time goes to infinity. [40:10], [43:12]

Topics Covered

  • Variable Gradient Method Reverses Lyapunov Construction Logic
  • Gradient Function Must Satisfy Curl Symmetry Condition
  • Domain of Attraction Is a Subset of Stability Region D
  • Krasovskii Method Uses A plus A Transpose for Stability
  • LaSalle Theorem Rescues Asymptotic Stability from Negative Semi Definiteness

Full Transcript

In the previous lecture we covered Lyapunov theory concepts, some basic definitions, a few direct theorems, and one or two examples. Now we will continue that discussion and look at some further concepts in Lyapunov theory. One of the major issues we saw there is: how do you find a Lyapunov function that will do the job for you? The second issue is: if V dot turns out to be only negative semi-definite, what do we do with that? Many times it is nicer to have negative definiteness, so that I can conclude more than I otherwise could. So we will try to address some of these issues and then see further concepts.

The first issue is the construction of a Lyapunov function. In some problems it is possible to construct a Lyapunov function by following a definite procedure, and some of those results we will see first in this particular lecture.

One such method is called the variable gradient method. Why is this important? Because, as we saw in the last class, all the theorems in Lyapunov analysis happen to be sufficiency conditions. That means I keep trying candidate functions, and if I am lucky one of them gives the result I want; in general there is no guarantee, and there is no telling how many times I will have to try. So instead of doing that, is there any systematic procedure for constructing Lyapunov functions, other than the very standard candidates we discussed?

The quadratic function X transpose X may not always work out, but it is a standard candidate we can try, and kinetic energy plus potential energy is another function we can always try. But suppose these two regular candidates both fail; then what are the other options? This is where we turn to these construction concepts, and the first method that comes into the picture is the variable gradient method.

The variable gradient method tells us: let us not worry about selecting a V, but let us select its gradient instead. Remember that V dot is del V by del X transpose times f of X. We discussed in the previous class that V dot equals del V by del X transpose times X dot, and since X dot is f of X, this gives del V by del X transpose times f of X. So if I have to conclude something about V dot, why start with V and then work out del V by del X, when we already know f of X? If I know f of X, I can be smart enough to select del V by del X directly and then check whether the V computed from it satisfies the required conditions. That is the whole idea: instead of starting with V and carrying out this algebra, I look at my f of X and select del V by del X accordingly.

But remember, I have selected only del V by del X; the two conditions on V still have to be satisfied. That means I solve this del V by del X expression to get V of X, and then check whether this V of X satisfies those conditions or not. So it is a little bit of a reverse idea, and that is what the variable gradient method talks about: let us select del V by del X as a function g of X that contains some adjustable parameters.
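The construction just described can be summarized compactly in the lecture's notation:

```latex
% Variable gradient method: choose the gradient, then recover V.
\dot V \;=\; \left(\frac{\partial V}{\partial X}\right)^{\!T}\dot X
       \;=\; \left(\frac{\partial V}{\partial X}\right)^{\!T} f(X).
% Select g(X) = \partial V/\partial X with adjustable parameters,
% subject to the curl (symmetry) condition
\frac{\partial g_i}{\partial x_j} \;=\; \frac{\partial g_j}{\partial x_i},
% then recover V by a line integral from the origin:
V(X) \;=\; \int_0^{X} g(\xi)^{T}\, d\xi, \qquad V(0) = 0.
```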

We then adjust those parameters depending on whatever f of X we have. Now, the differential d V is nothing but del V by del X transpose times d X. Remember that d X is a vector quantity while d V is a scalar quantity, so if I really have to solve for V of X, I have to do this integration slightly carefully. Nevertheless, if you start with this expression for d V and integrate both sides from 0 to X, the left-hand side turns out to be V of X minus V of 0, and most of the time V of 0 is 0 anyway; that is what we will select. So this expression gives us V of X as the integral of the gradient, but the integration has to be done component-wise; we will see the procedure as we go along.

Also remember that this del V by del X that we are selecting, this g of X, must satisfy the so-called curl condition: del g i by del x j must equal del g j by del x i. Here g is a vector, so take any component of that vector, take its partial derivative with respect to any component of the state vector, and the same value must result if you reverse the sequence. What does that mean? Since g is a vector, del g by del X is actually a matrix, and this matrix has to be symmetric; that is all the curl condition is telling you.

Also remember that the value of this integral depends only on the initial and final states and not on the path followed, and because of that the integration can be done component-wise. That means I travel along the x 1 axis first, then along the x 2 axis, then along the x 3 axis, and so on. I do not have to go radially from 0 to X; I can go component-wise, x 1 first, x 2 next, and that is how I will be able to integrate this.

So that is what is done here: to compute V of X, we first integrate along the x 1 axis, with all other coordinates 0, and whatever function results keeps x 1 in it. Then we vary x 2 from 0 to x 2, then x 3, and so on, with x 1 tilde, x 2 tilde, x 3 tilde as the integration variables, and we keep doing that until the last coordinate axis; that is the expression you see here, with the assumption that V of 0 equals 0.

So remember: I have to start with a gradient of the Lyapunov function, g of X, which contains adjustable parameters that I adjust later depending on my situation, and this g of X must satisfy the curl condition; that is, del g by del X must be a symmetric matrix, otherwise I will not get the solution I want. The procedure to get V of X out of this gradient vector is as described: once I have a gradient vector that I know will do my job, I compute V of X that way, and then I go back and check whether the required conditions are satisfied, essentially whether V is positive definite or not.
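The curl check and the component-wise integration can both be done mechanically. A small sympy sketch, using the parameterized gradient from the lecture's own example (k1, k2, k are the adjustable parameters):

```python
# Sketch of the variable gradient method with sympy. The gradient g(X)
# below is the lecture's example choice g = K X with symmetric K.
import sympy as sp

x1, x2, t, k1, k2, k = sp.symbols('x1 x2 t k1 k2 k')
X = sp.Matrix([x1, x2])
g = sp.Matrix([k1*x1 + k*x2, k*x1 + k2*x2])   # chosen gradient g(X)

# Curl condition: the Jacobian dg/dX must be symmetric.
J = g.jacobian(X)
assert J == J.T, "curl condition violated: no scalar V has this gradient"

# Recover V by component-wise integration along the axes:
# first along x1 (with x2 = 0), then along x2 (x1 held fixed).
V = (sp.integrate(g[0].subs([(x1, t), (x2, 0)]), (t, 0, x1))
     + sp.integrate(g[1].subs(x2, t), (t, 0, x2)))
V = sp.expand(V)
print(V)   # k1*x1**2/2 + k*x1*x2 + k2*x2**2/2, up to term ordering

# Sanity check: the gradient of the recovered V matches g.
grad_V = sp.Matrix([sp.diff(V, x1), sp.diff(V, x2)])
assert sp.simplify(grad_V - g) == sp.zeros(2, 1)
```

The same three steps (symmetry check, line integral, gradient check) apply in any dimension; only the number of axis-wise integrals grows.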

So let us see the theorem: the function g of X is the gradient of a scalar function V of X if and only if the matrix del g by del X is symmetric; that is what I just told you. In other words, the curl condition we are talking about is nothing but the condition that del g by del X is symmetric. Now, this is an if-and-only-if condition, so let us quickly see whether it is true; it is very straightforward, and the proof is there in Marquez's book, but I will also take you through it here. By del g by del X we mean the matrix with entries del g 1 by del x 1, del g 1 by del x 2, and so on, and this matrix needs to be symmetric.

How do we show the if-and-only-if? First, necessity, which is very easy: g of X is del V by del X by definition, so del g by del X is, by definition, del square V by del X square, the Hessian of V. That means del square V by del x i del x j equals del square V by del x j del x i, which is standard calculus: I can take the partial derivatives in any sequence. So the symmetry condition is automatically satisfied.

So if g of X happens to be the gradient of some V of X, then del g by del X is symmetric; that is what we have shown. Now, what about the other direction: if this matrix is symmetric, what happens? That is the sufficiency part. So assume the matrix is symmetric, that is, del g i by del x j equals del g j by del x i, and we need to show that del V by del x i equals the same g i of X, that is, that g is indeed the gradient vector.

For that, we evaluate V of X in the component-wise way we just discussed, and then consider, say, del V by del x 1: take the entire expression for V of X and apply del by del x 1 to it.

The first term contains only x 1, so it gives me g 1, with g 1 a function of x 1, and in all the other terms the partial derivative gets pushed inside the integral. So I take the partial derivative inside: del g 2 by del x 1, del g 3 by del x 1, and so on. So I have got g 1 plus a bunch of integrals to evaluate.

Then what happens? Evaluating those integrals, the first term is g 1 anyway, and the second becomes g 1 evaluated at x 2 tilde. Why is that? Because I invoke the symmetry condition: whenever I have del g 2 by del x 1, I replace it by del g 1 by del x 2, since the matrix is symmetric; that is the condition I have invoked here. So that integrand becomes del g 1 by del x 2 integrated over x 2, and similarly the last one is del g 1 by del x n integrated over x n. That is what simplifies the procedure; going from this expression to this one, I have invoked the symmetry condition.

Once I invoke that condition I can evaluate these integrals, and everything works out in terms of g 1 only: each integral is nothing but its integrand's antiderivative evaluated between the limits. If you look carefully, the g 1 terms cancel in pairs, one positive and one negative, and similarly the other terms cancel out, and you are left with only g 1. That means del V by del x 1 is nothing but g 1. Similarly, del V by del x 2 will be nothing but g 2, del V by del x 3 nothing but g 3, and so on; del V by del x i is nothing but g i.

So that is how we show both directions: first we showed that if g is the gradient vector, then taking the derivative gives a symmetric matrix, which is very straightforward; conversely, if the matrix is symmetric, then by carrying out the integration we showed that the g we started with is indeed the gradient vector of V. That is where the if-and-only-if condition comes from.

So, in summary: a function g of X is the gradient of a scalar function if and only if the matrix del g by del X is symmetric. That means you have to select g of X such that del g by del X is symmetric; that is all you have to do, and it will lead to a Lyapunov function candidate later.

Let us see this through an example: a small system with x 1 dot and x 2 dot as shown on the slide. Setting x 1 dot and x 2 dot equal to 0 gives the equilibrium point. If I put both equal to 0, the first equation gives x 1 equal to 0, and once x 1 is 0 the remaining term in the second equation gives x 2 equal to 0. So I get x 1 equal to x 2 equal to 0; I do not have to do any coordinate transformation, and X equal to 0 is the equilibrium point.

Now we apply the variable gradient method: we start with del V by del X, call it g of X, and we must select this g of X in such a way that del g by del X is a symmetric matrix. If I select something like g of X equal to A times X, then del g by del X is nothing but A, so if I take A symmetric, the curl condition is satisfied. So let g of X be k times X, where k is a symmetric matrix: I start with k 1 and k 2 on the diagonal and k off the diagonal, which is obviously symmetric. I have to select k 1, k 2, and k in such a way that when I solve for V of X, it satisfies the first two conditions and I can use the Lyapunov theorems. So del V by del X has first component k 1 times x 1 plus k times x 2, and second component k times x 1 plus k 2 times x 2; that is what we start with.

To simplify, I have already taken k equal to 0 here; since I have the liberty to select whatever I want, I select not merely a symmetric matrix but simply a diagonal matrix, which is symmetric anyway. With that, del V by del X is as shown, and I integrate it in the way we discussed before: first from 0 to x 1 along the x 1 axis, then from 0 to x 2 along the x 2 direction, keeping x 1 as a free variable. So g 1 is k 1 x 1, which I integrate first, and g 2 is nothing but k 2 x 2, which I integrate next.

Then it is very clear that V of X comes out as shown, and as long as I select k 1 and k 2 as positive constants I am done, because V of X is then positive definite: with k 1 and k 2 positive, V of X is positive for all X not equal to 0, and V of 0 is 0, which is the other condition the expression has to satisfy; if I plug in X equal to 0, I get V equal to 0. What we really got is a quadratic expression, which satisfies all the conditions anyway.

So choose k 1 and k 2 positive; then V of X is positive for all X not equal to 0, V of 0 equals 0, and certainly V of X is a Lyapunov function candidate.
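With the off-diagonal parameter set to zero, the recovered candidate is the weighted quadratic V(X) = (k1 x1^2 + k2 x2^2)/2. A minimal numeric sanity check of the two candidate conditions (the values k1 = k2 = 1 are illustrative positive choices, as in the lecture):

```python
# Candidate from the example: V(X) = (k1*x1**2 + k2*x2**2)/2 with k = 0.
def V(x1, x2, k1=1.0, k2=1.0):
    return 0.5 * (k1 * x1**2 + k2 * x2**2)

assert V(0.0, 0.0) == 0.0                    # V vanishes at the equilibrium
pts = [(a, b) for a in (-2, -1, 0, 1, 2)
              for b in (-2, -1, 0, 1, 2) if (a, b) != (0, 0)]
assert all(V(a, b) > 0 for a, b in pts)      # positive away from the origin
print("V(0)=0 and V>0 on the sampled grid: valid Lyapunov candidate")
```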

Next we still have to work out V dot of X. The way we selected g here does not give this impression, but in any practical problem we first keep an eye on the system dynamics we have, and based on that we select g, which must also satisfy the symmetry condition. So now we worry about V dot: we substitute the dynamics, put in k 1 and k 2, and end up with some expression. Nothing more can be said about it unless we know the constants a, b, k 1, k 2, but remember that V dot ultimately has to be a negative definite function. So now let us choose k 1 and k 2 equal to one; we end up with some expression, and still we cannot say anything more until we do further analysis. That further analysis is this: the expression for V dot can be rewritten as shown, taking a negative sign out so that both terms become negative, and then I can write it that way.

What does that actually mean? This rewritten expression further assumes that a certain condition is true; if it is not, you cannot write it that way. So, with the assumption that the constants a and b in my original equation satisfy a greater than 0 and b less than 0, I can write something like that. And what does it tell you now? For small x 1 and x 2, the expression in question is going to be positive, because it involves a positive number after all.

So I have got some small domain around the equilibrium at 0 in the (x 1, x 2) plane for which the modulus of b dominates: if I take any combination of x 1 and x 2 in that domain and multiply, then the modulus of b that I am talking about is greater than that value. So the expression you see in parentheses is a positive value, and since a is positive, negative a times that positive quantity is negative. So V dot is negative definite; that means V dot is less than 0 in some domain D, and that domain contains the equilibrium point, which is the more important thing.

But note that this domain D, whatever you conclude from the expression, must encircle the equilibrium point; if it does not contain the equilibrium, the argument fails, so remember that as well. So V dot of X is negative definite in D, and hence the system is locally asymptotically stable.

This is a small example after all, but notice what we did: we started with the symmetric matrix condition, along the way we assumed a diagonal matrix, and then we assumed k 1 equal to k 2 equal to one, which is as good as saying we started with del V by del X equal to (x 1, x 2). Then we carried out the algebra, obtained V of X, and verified the properties that V must have; but to conclude something about V dot of X we had to make further assumptions, namely k 1 equal to k 2 equal to one, a positive, and b negative.

Under those assumptions it turns out that V dot is negative in some domain D which contains the origin, and hence, in that domain at least, the system is locally asymptotically stable.

There is also another concept that we are going to discuss, in the next class or perhaps later in this class itself. V dot being negative in the domain D does not mean that a trajectory starting at any point in D will ultimately go to the equilibrium; that is not what it says. All it tells you is that V dot is negative definite in D; it does not necessarily follow that any starting point in D leads to the equilibrium point.

In other words, this region D is not necessarily the domain of attraction; there is a domain-of-attraction concept, which will be a subset of D, and we will talk about it later. The actual domain of attraction, meaning the set of initial conditions from which you ultimately reach the equilibrium, is some subset of D, and concluding what that domain is invokes LaSalle's theorem and is substantially harder.

So remember: the property you establish on some domain D tells you that the system is locally asymptotically stable, that is all; it does not mean that on the entire domain D the system trajectories are attracted to the equilibrium.

That is all about the variable gradient method; what about the other method? It is something called Krasovskii's method, and it is very straightforward; in fact we will not worry about the proof here, we just want to understand it. Assume the same system X dot equal to f of X, and construct the matrix A of X in the regular way: A of X is del f by del X, which is nothing but the Jacobian matrix. It is the same kind of construction as in linearization, but we are not really linearizing the system dynamics: we are simply evaluating the expression del f by del X, not evaluating it around the equilibrium point. So we end up with A of X, the Jacobian matrix.

The theorem tells us: construct this A of X, then construct F of X, another matrix, which is nothing but A plus A transpose; and if this F of X turns out to be a negative definite matrix for all X in D, then the equilibrium point is locally asymptotically stable. It comes as a theorem, but it also gives you a construction of the Lyapunov function, because the associated Lyapunov function turns out to be V of X equal to f transpose f, where f is the original system dynamics vector function. So instead of starting with X transpose X, this gives us the candidate f transpose f.

Remember f is a vector. But f transpose f is a Lyapunov function candidate only provided that F of X, which is nothing but A plus A transpose, is a negative definite matrix; that is the condition. So given any system dynamics I cannot just jump in and declare f transpose f a Lyapunov function; it is a candidate only provided this F is actually a negative definite matrix function, and that is where it is difficult. If f happens to be a linear function, then A of X will be a constant matrix, and for a constant matrix I can check negative definiteness once and for all; but the moment A happens to be a function of X, I have to be very careful in deciding whether negative definiteness really holds for all X in the domain. That is where the difficulty lies, but the theorem is still very handy in a way.

So what does it tell me? I evaluate the Jacobian matrix, construct A plus A transpose, and this matrix function, the big F of X, has to be negative definite for all X belonging to D. Then the equilibrium point is certainly locally asymptotically stable, and the corresponding Lyapunov function is f transpose f. I am not going to discuss the proof; you can see it in the textbooks, including the same text that I mentioned in my previous lecture.

If D happens to be the entire R n space and V of X happens to be radially unbounded, then the equilibrium point will be globally asymptotically stable. Remember there is no guarantee of radial unboundedness, because it is f transpose f we are talking about here; we are not selecting it ourselves. But if it does happen to be radially unbounded and D is all of R n, global asymptotic stability follows. So this is Krasovskii's method; it is a very powerful thing and a direct way of checking stability.
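Krasovskii's test is easy to probe numerically. The sketch below uses a hypothetical two-state system (an illustrative choice, not the one on the lecture slides), forms the Jacobian A(X) by hand, and samples F(X) = A + A^T for negative definiteness via its eigenvalues:

```python
# Krasovskii's method, sketched for the hypothetical system
#   x1' = -6*x1 + 2*x2,   x2' = 2*x1 - 6*x2 - 2*x2**3
import numpy as np

def jacobian(x):
    """A(X) = df/dX, written out by hand for this particular f."""
    x1, x2 = x
    return np.array([[-6.0, 2.0],
                     [2.0, -6.0 - 6.0 * x2**2]])

def F(x):
    """F(X) = A(X) + A(X)^T, the matrix Krasovskii's theorem examines."""
    A = jacobian(x)
    return A + A.T

# F(X) is symmetric, so negative definite <=> all eigenvalues < 0.
rng = np.random.default_rng(0)
for x in rng.uniform(-5.0, 5.0, size=(200, 2)):
    assert np.all(np.linalg.eigvalsh(F(x)) < 0)

print("F(X) negative definite at all sampled points;")
print("V(X) = f(X)^T f(X) then serves as the Lyapunov function")
```

Sampling cannot prove definiteness for all X, of course; for this particular F one can also check it analytically (negative trace, positive determinant for every x2).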

Let us see why this A plus A transpose comes into the picture; it is very straightforward. If my V of X is f transpose f, then V dot turns out to be f dot transpose f plus f transpose f dot. Now f dot is nothing but del f by del X times X dot; f is a function of X only, so f dot is del f by del X times X dot. That is what I do here, and the same thing in the transposed term. Then, substituting X dot equal to f of X (and X dot transpose equal to f transpose), I end up with this: starting from V equal to f transpose f, I get V dot equal to f transpose times big F times f, where big F is defined as A transpose plus A. So if big F is negative definite, then obviously V dot is also negative definite, because the expression is a quadratic form in f. If big F is a negative definite matrix function, then V dot is ultimately a negative definite scalar function.

So notice what is happening: we want to show negative definiteness of a scalar function, but we are taking the help of negative definiteness of a matrix function, which is actually an increase of complexity in a way. But anyway, the theorem tells you: if I start with V like this, I end up with an expression like that, and hence if this F of X is negative definite then V dot is also negative definite and I am done.
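The derivation just walked through, written out line by line:

```latex
% Krasovskii's candidate and its derivative:
V(X) = f^T f, \qquad
\dot f = \frac{\partial f}{\partial X}\,\dot X = A(X)\,f(X),
\\[4pt]
\dot V = \dot f^{\,T} f + f^T \dot f
       = f^T A^T f + f^T A f
       = f^T \underbrace{\left(A^T + A\right)}_{F(X)} f .
% If F(X) is negative definite, then \dot V < 0 whenever f(X) \neq 0.
```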

A thing to note here: the global asymptotic stability of the system is guaranteed by the global version of the Lyapunov direct method. While the usage of this result is fairly straightforward, its applicability is limited in practice, since for many systems this big F of X does not satisfy the negative definiteness property; that is what I told you in the beginning also. It is a nice thing to see in the mathematics, but it is a complexity amplification: you want to show negative definiteness of a scalar function, but you are taking the help of negative definiteness of a matrix function.

But if it so happens that your F of X contains only simple expressions of X, then maybe we will be able to do it.

Now, there is a generalized version of this Krasovskii's theorem, which gives us some sort of Lyapunov-equation-like expression. Let us see

that. What it tells me is: evaluate this A of X, which is del f by del X anyway, and one more sufficiency condition tells me that the origin is asymptotically stable if there

exist two positive definite matrices P and Q such that for all X not equal to 0 this matrix expression is negative semi definite; remember, in the Lyapunov equation this expression is equal to 0, but in general it is not. So, we have A of X, where A is a matrix function now,

and there exist two positive definite matrices P and Q such that this expression A transpose P plus P A plus Q is negative semi definite.

Then, in that neighbourhood D of the origin, the system is asymptotically stable; and if in addition this D is equal to R n and this V of X is radially unbounded and so on, then obviously it will lead to the globally asymptotically stable condition.

So, what does that mean? I am able to relax the negative definite condition that I am demanding here to a negative semi definite condition; that is the more important thing. And as we know, going from negative definiteness to semi definiteness is a lot of simplification.

So, all that it tells us is this: the earlier result was very straightforward, you just evaluate this A transpose plus A and if it happens to be negative definite, well and good, otherwise no. But here

it says, okay, wait a second, I can select these two positive definite matrices P and Q such that this expression, which now has an additional component Q and an additional P multiplied in, works. See, originally it was A transpose plus A, that is all; now you have A transpose P plus P A plus Q, where P and Q are supposed to be selected by us, so it gives us a lot of

flexibility. And it tells us F of X need not be negative definite; it needs to be only negative semi definite, which is another simplification.

So, that is Krasovskii's generalized theorem. And this V of X, which earlier was f transpose f, will now be f transpose P times f of X.

This is actually nice in a way, because think of a linear time invariant system, X dot equal to A X and all that. If this P is, let us say, the identity matrix, then V of X is nothing but X transpose X; but if P is a general positive definite matrix then it takes the form X transpose P X, and that is what we did for linear time invariant systems in the last class.

So, there it resulted in a Lyapunov equation, which is nothing but this expression set equal to 0; the particular expression that we are talking about here happened to be equal to 0 for LTI systems. So, all these are kind of

extensions of what you already know, in a way. And why is that so? Because for this linear time invariant system my f of X is A times X. So, what

is my del f by del X? Del f by del X is simply A, so all these concepts are nicely exploited on that side.

So, what is the proof part? If I take V of X equal to, let us say, f transpose P f, then V dot follows through this algebra, which is very straightforward anyway. We

have done it before, but this time the P matrix comes into the picture; then you substitute f dot, and again X dot is f, so we end up with this kind of a thing. Now, you add and subtract the Q matrix here; we are just doing additional algebra, so interpret it as one component coming from here and one component from there.

Now, remember what you are demanding: there exist two positive definite matrices P and Q. So, if my Q is positive definite, that

means minus f transpose Q f is negative definite. So, all that remains is to make sure that the other part of the expression, the

left part of it, is negative semi definite. So, that is what it tells you: this remaining expression has to be simply negative semi definite.
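For the LTI special case just mentioned, the condition A transpose P + P A + Q = 0 is exactly the Lyapunov equation. A minimal sketch with hypothetical numbers, solving it via the Kronecker-product identity and checking that the resulting P is symmetric positive definite:

```python
import numpy as np

# For an LTI system x_dot = A x, generalized Krasovskii reduces to the
# familiar Lyapunov equation A^T P + P A + Q = 0.  Pick Q = I and solve
# for P via vectorization (example numbers chosen here, not the lecture's).

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])      # Hurwitz: eigenvalues -1 and -3
Q = np.eye(2)

# vec(A^T P + P A) = (kron(A^T, I) + kron(I, A^T)) vec(P) for row-major vec
n = A.shape[0]
K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# Check: the equation holds and P is symmetric positive definite, so
# V(x) = x^T P x serves as a Lyapunov function for x_dot = A x.
residual = A.T @ P + P @ A + Q
print(np.allclose(residual, 0.0))                      # True
print(bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)))   # True: P > 0
```

For this A and Q the closed-form solution is P = [[1/2, 1/4], [1/4, 1/3]], which the linear solve reproduces to machine precision.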

Another example: let us analyze the stability behaviour of the following system; again it is a small example in two dimensions. So, del f by del X happens to be like this; however, this is no longer made of simple numbers. If I have a linear system I end up with numbers, but since this is a nonlinear system I have an expression

here, and hence this F, which is A plus A transpose, contains an expression as well. So, I have to analyze this F and conclude something about it; I can do that by carrying out the eigenvalue computation of F, but the eigenvalue itself is a function of x 2, because this matrix contains an expression in x 2, so the eigenvalues will also be functions of x 2.

So, let us write down that eigenvalue expression, I mean the characteristic equation, and try to analyze it: this will be lambda plus 12 whole square plus lambda plus 12 into that term, minus this 4 into 4, which is 16. Put it there, then expand that and collect the coefficients; lambda happens to be like this, and these

are functions of x 2 basically.

So, we analyze this expression now; try to simplify, the 2 cancels out, and do whatever simplification is possible. Then you say, wait a second, this square-root term is certainly going to lie between 0 and this expression 12 plus 6 x 2 square, because of what is under the root.

No matter what, 12 plus 6 x 2 square is going to be bigger than the square-root term, so that term is bounded between 0 and that value; and the term is taken with plus or minus anyway.

So, when I talk about lambda, lambda is certainly less than or equal to 0 for all x 2 in R. If the term is taken with the minus sign there is a further reduction; if it is taken with the plus sign there was a chance of making lambda positive, but

the square-root term is never going to overpower the other expression, because it is smaller anyway. So, even if I add that quantity I will still land on some negative quantity; this lambda is guaranteed to be a negative number. That means

F is negative definite in R square; I mean, no matter what that expression and that quantity are, this F is going to be negative definite.

So, that means I am done: it turns out that, using this Krasovskii's theorem, my F, which is A plus A transpose, turns out to be negative definite. And hence my Lyapunov function is given as f transpose f, which I can evaluate, and this f transpose f satisfies all the conditions: if I

evaluate it at X equal to 0, that means x 1 and x 2 both equal to 0, this V of X is 0; if they are nonzero it is a positive quantity, remember, some expression whole square plus some other expression whole square, so it is guaranteed to be a positive number, and hence this expression is positive definite. And it is also radially unbounded; that means, if

I increase this norm, if I go further and further away from the origin, then this expression grows more and more and goes to infinity.
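The coefficients 12 and 16 in the characteristic polynomial above suggest the lecture's system is the classic textbook example x1 dot = -6 x1 + 2 x2, x2 dot = 2 x1 - 6 x2 - 2 x2 cubed; treating that as an assumption, the eigenvalue bound can be checked numerically:

```python
import numpy as np

# Hypothetical reconstruction of the lecture's example (assumed from the
# "(lambda + 12)" and "4 x 4 = 16" terms in the characteristic equation):
#   x1_dot = -6*x1 + 2*x2
#   x2_dot =  2*x1 - 6*x2 - 2*x2**3
# Then A(x) = df/dx and F(x) = A + A^T = [[-12, 4], [4, -12 - 12*x2**2]].

def F(x2):
    return np.array([[-12.0, 4.0],
                     [4.0, -12.0 - 12.0 * x2**2]])

# Closed form from (lambda + 12)(lambda + 12 + 12*x2**2) = 16:
#   lambda = -12 - 6*x2**2 +/- sqrt(36*x2**4 + 16)
# The square root is always below 12 + 6*x2**2, so both roots are negative.
for x2 in np.linspace(-10, 10, 201):
    lam_max = -12 - 6 * x2**2 + np.sqrt(36 * x2**4 + 16)
    assert lam_max < 0                        # larger eigenvalue is negative
    assert np.all(np.linalg.eigvalsh(F(x2)) < 0)

# Krasovskii then gives the Lyapunov function V = f^T f:
def V(x1, x2):
    f = np.array([-6*x1 + 2*x2, 2*x1 - 6*x2 - 2*x2**3])
    return float(f @ f)

print(V(0, 0) == 0.0 and V(1.0, -2.0) > 0)    # True: positive definite
```

The loop is only a spot check on a grid; the closed-form bound in the comment is what actually proves negative definiteness for all x2.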

So, as the norm of X goes to infinity this expression also goes to infinity; that means it is also radially unbounded. So, in essence, I end up with some V of X which is positive definite and radially unbounded, and for which

my V dot of X is guaranteed to be negative definite. That means I have the globally asymptotically stable condition; X equal to 0 happens to be a globally asymptotically stable equilibrium point. So, that is how we use some of

these theorems. This was about the construction of Lyapunov functions; now we can see some further concepts. The other issue: what happens when we end up with a V dot that is only negative semi definite? What do we do about that? That is where we need

this concept of LaSalle's kind of theorems, before which we want to study what invariant sets, limit sets and things like that are.

So, let us see what these are. A set M is an invariant set with respect to a system X dot equal to f of X provided that, if my initial condition belongs to that set, then for all time my solution

also belongs to that set. It is obviously invariant, it does not go anywhere else. What is an example? Obviously an equilibrium point is an invariant set: if I start at the equilibrium I stay at the equilibrium always. A solution trajectory is also an invariant set: if I start at any point on the trajectory I stay on the trajectory, so if I take all the points on the trajectory and define a set,

then obviously that set is invariant: I start with any point on the trajectory and I keep staying on the trajectory. Then obviously a limit cycle is also an invariant set; a limit cycle is a closed curve in the state

space, so if I start at any point on the limit cycle then I keep moving on the limit cycle. And then there is another example, which says: define omega l such that V of X is less than or equal

to some positive number l, where V of X is a continuously differentiable function such that V dot of X is less than or equal to 0 along the solutions; then this set is also invariant. And why this definition? Because this definition is what we are going to use in LaSalle's theorem and in the domain of attraction. So,

those first examples are very intuitive, but this example is very useful. We define a set, X belongs to R n such that V of X is less than or equal to l; that means it defines some sort of level set. If V of X were equal to l then that would define

a set which is a level set, but we define V of X less than or equal to l, so it is an entire domain sort of thing; within that domain V of X is continuously differentiable such that V dot of X is less than or equal to 0 along the solutions. So, just

remember that that set is also an invariant set. What does this condition mean? If you start within that set, then since V dot is less than or equal to 0, I will never be able to come out of it: I started with some value at most l, and V keeps on decreasing,

so I will never be able to come out of that set; that is the meaning.
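The invariance of the sublevel set omega l can be seen in a few lines; the system below is a toy of my own choosing, not one from the lecture:

```python
import numpy as np

# Sketch of the sublevel-set example: if V_dot <= 0 along solutions, then
# Omega_l = {x : V(x) <= l} is an invariant set.  Toy system (chosen here):
#   x_dot = -x,  with  V(x) = x^T x,  so  V_dot = -2*V <= 0.

V = lambda x: float(x @ x)

l = 4.0                            # the level defining Omega_l = {V <= 4}
x = np.array([1.5, 1.0])           # starts inside Omega_l: V = 3.25 <= 4
dt = 1e-2

inside_always = True
for _ in range(1000):
    x = x + dt * (-x)              # Euler step of x_dot = -x
    inside_always = inside_always and (V(x) <= l)

print(inside_always)               # True: the trajectory never leaves Omega_l
```

Here V actually decreases at every step, which is stronger than needed; invariance only requires that V never climbs back above l.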

Now, what is a limit set? The definition, a big definition anyway: let X of t be a trajectory of the dynamical system X dot equal to f of X. Then the set N is called the limit set, or positive limit set, of X of

t if, for any p that belongs to this set N, there is a sequence of times t n, this curly bracket denotes a sequence, t 0, t 1, t 2, t 3 and so on,

such that, as t n goes to infinity, my X of t n approaches this point p.

So, roughly speaking, the limit set N of X of t is wherever this X of t finally goes; X of t will go somewhere, obviously, and wherever it goes, that is the limit set. It may go to an equilibrium point, or it may go to a limit cycle; if it is a limit cycle

it is actually infinitely many points on the limit cycle. So, roughly speaking, the limit set N is the set of points that X of t tends to in the limit as t goes to infinity.

And then an example: obviously, an asymptotically stable equilibrium point is the limit set of any solution starting from a close neighbourhood of the equilibrium point, by definition. If

it is an asymptotically stable equilibrium point, then the trajectory is going to that equilibrium point anyway, so in the limit the solution converges to it; hence the equilibrium point is a limit set.

But that equilibrium point must be an asymptotically stable equilibrium point; otherwise you cannot say that, because the solution may not converge to it. If it is an unstable equilibrium point it does not make any sense; probably if you consider only the equilibrium point itself as the starting set it is okay,

but any domain around it will not satisfy that. Also, as I told you, a stable limit cycle is also a limit set, because there is a concept of stability of a limit cycle too:

consider this limit cycle, with trajectories moving around it. If I take

some domain around that limit cycle, and if I start with some initial condition in that domain, then ultimately my solution converges to the limit cycle; that concept is called stability of the limit cycle. That is, in some neighbourhood of the limit cycle my solution will converge to the limit cycle. So, if that happens, a stable

limit cycle is also a limit set.
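A quick numerical illustration of a stable limit cycle acting as a limit set, using a hypothetical system that is easy to write in polar coordinates:

```python
import numpy as np

# Sketch illustrating "a stable limit cycle is a limit set".  Hypothetical
# system in polar coordinates:
#   r_dot = r * (1 - r^2),   theta_dot = 1
# Every trajectory with r(0) > 0 converges to the unit circle r = 1, so the
# circle is the positive limit set: for each point p on it there is a time
# sequence t_n with x(t_n) -> p, since theta keeps winding around forever.

dt = 1e-3
r, theta = 0.2, 0.0               # start well inside the cycle
for _ in range(20_000):           # 20 s of simulated time
    r = r + dt * r * (1.0 - r**2)
    theta = theta + dt

print(abs(r - 1.0) < 1e-3)        # True: the radius has settled on the cycle
```

The same computation started from r greater than 1 also converges to the circle, which is exactly the "neighbourhood of the limit cycle" picture in the transcript.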

So, here is a very useful theorem; it is actually a special case of what we will study as LaSalle's theorem. This theorem takes V of X positive definite, a regular condition like that, but V dot of X only negative semi definite, and that

is where the utility comes. Let V of X be a positive definite function in a domain D, and let V dot of X be a negative semi definite function in a bounded region R which is a subset of D.

And this V dot of X does not vanish, that means V dot of X is not equal to 0, along any trajectory in R other than the null solution X equal to 0; that is the critical condition. V dot being negative semi definite

is fine, but in addition V dot of X is not equal to 0 anywhere else other than at X equal to 0. So,

using that additional condition we will be able to show asymptotic stability in many cases, and that is where it becomes a powerful theorem. And if the above conditions hold for R equal to R n, the entire space, and this V

of X is radially unbounded, then obviously we end up with the globally asymptotically stable condition as well. So, this is a very powerful theorem, because many times, where you would otherwise be stuck with V dot only negative semi definite, you will be able to close the chapter and say: even though it is only negative semi definite,

V dot of X happens to be 0 only at X equal to 0, and hence, using this theorem, the system is still asymptotically stable.

Let us see the same example that we discussed last class, the pendulum with friction. We started with the Lyapunov function kinetic energy plus potential energy and landed

on the condition for which this V dot of X was negative semi definite. What next? Now you have to analyze what happens when V dot of X equals 0: whether it remains 0 only at the equilibrium point, or it happens anywhere else also.

So, for that let us study the condition V dot of X equal to 0 for all time, and ask when it will happen. Now, V

dot of X equal to 0 for all time means, if I look at this V dot expression, that this expression has to be equal to 0 for all time, which means x 2 equal to 0 for all time. Now,

if x 2 is equal to 0 for all time then x 2 dot is also equal to 0 for all time. Now I go back to the x 2 dot expression, which is this one, and using this condition I can say that this term has to be 0 for all time.

And since x 2 is 0 already, the expression reduces.

So, if x 2 is 0 already, then sin of x 1 must be 0, and sin of x 1 equal to 0 means x 1 is 0 in that

domain; the only solution is x 1 equal to 0 in that domain, as long as I start within the domain. Remember it is an open set, so the vertical inverted equilibrium condition is ruled out; if it were a closed set I would also end up with the vertically inverted pendulum equilibrium point, which I am not including here. But for any

other point around it, in the entire region the only solution for which V dot is equal to 0 for all time is the equilibrium point that I talked about; that means there exists a region R in which V dot is equal to 0 only at the equilibrium point and nowhere else.

So, using the theorem we just discussed, since V dot of X does not vanish along any trajectory other than the null solution X equal to 0, we are able to conclude that X equal to 0 is asymptotically stable. So, that is what we are able to do: we end up with V dot negative

semi definite, but we exploit that, analyze it a little further, and ask where V dot is really equal to 0; and if V dot remains 0 for all time, it does so only at the equilibrium point and nowhere else.

So, there is a region for which this condition holds good, and hence, using this theorem, which is a special case of LaSalle's theorem, we are able to show that the system is still asymptotically stable. And now we are happy, because we know that in reality the pendulum with friction

is supposed to go to 0 ultimately.

So, that is how we are able to show that. How about example 2? Let us talk about another example: x 1 dot is x 2, x 2 dot is that expression, and so on. So, V of X is alpha x 1 square plus x 2 square, where alpha is greater than 0; then V dot of X is del V by

del X transpose times f of X. Take it through this algebra: f 1 and f 2 are available, you put them in, and this is the resulting expression.

So, if I analyze V dot, it happens to be like that; again, unfortunately, this is negative semi definite, not negative definite. If this 1-plus term were not there I would be able to show negative definiteness, but this 1-plus expression creates a problem: it does not give negative definiteness, though the expression is certainly negative

semi definite. So, that is what I am telling you: many times we will be able to show

that it is negative semi definite, but not that it is negative definite. So, we consider the same condition, that V dot is equal to 0 for all time; then, from this V dot expression, since 1 plus some expression

is never 0, x 2 has to be 0 for all time. That means x 2 dot is also 0 for all time; x 2 dot being 0 means this x 2 dot expression has to be 0, but x 2 is already 0. So, wherever x 2 appears I take it out,

and that gives me: this term is 0 since x 2 is zero already, and this term is also 0; what is left is only that term, and that means x 1 is also 0. So, X equal to 0. We are able to show that V dot of X is 0 only at the equilibrium

point and nowhere else. So, that is how we conclude, using this theorem, that the equilibrium point we are talking about is still an asymptotically stable equilibrium point.

So, what we have shown is that V dot of X does not vanish along any trajectory other than X equal to 0, V dot is negative semi definite, and V of X happens to be radially unbounded as well; the V of X we are talking about is certainly radially unbounded, it is a quadratic function after all, for alpha greater than 0. So it is radially unbounded.
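The transcript does not fully pin down x 2 dot for this example, so the sketch below uses a hypothetical system of the same shape that reproduces the argument exactly, including the "1 plus" factor and V dot vanishing only on x 2 = 0:

```python
import numpy as np

# Hypothetical reconstruction matching the reasoning in the lecture:
#   x1_dot = x2
#   x2_dot = -x1 - x2*(1 + x2**2)
# With V = x1^2 + x2^2 (i.e. alpha = 1):
#   V_dot = 2*x1*x2 + 2*x2*(-x1 - x2*(1 + x2**2)) = -2*x2**2*(1 + x2**2)
# which is only negative SEMI-definite: it is zero on the whole x1-axis.

def vdot(x1, x2):
    return 2*x1*x2 + 2*x2*(-x1 - x2*(1 + x2**2))

# Check V_dot <= 0 on a grid, and that V_dot = 0 forces x2 = 0
# (since the factor 1 + x2**2 is never zero).
grid = np.arange(-30, 31) / 10.0          # includes x2 = 0.0 exactly
for x1 in grid:
    for x2 in grid:
        v = vdot(x1, x2)
        assert v <= 1e-12
        if abs(v) < 1e-12:
            assert x2 == 0.0              # V_dot vanishes only on x2 = 0

# LaSalle-style step: on the set {x2 = 0} we have x2_dot = -x1, so staying
# in that set forces x1 = 0 too; the only invariant subset is the origin.
print("V_dot vanishes only on x2 = 0; origin is asymptotically stable")
```

The grid check is only illustrative; the closed form of V dot in the comment is what carries the actual argument.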

So, certainly the origin is globally asymptotically stable. Now, finally, the full LaSalle's theorem, because the original LaSalle's theorem is much more than that. It defines some sorts of sets; let us see that in brief. Let V be a function from

D to R which is continuously differentiable. Remember, all that this V of X is required to be in LaSalle's theorem is continuously differentiable; a positive definite function happens to be continuously differentiable, but what we are saying here is that V need not be a Lyapunov function, it is just a function which is continuously

differentiable. And these three or four conditions hold: M, which is a subset of D, is a compact set, and a compact set by definition is closed and bounded; the set has to be closed as well as bounded. So, M is a compact set which is an invariant set with respect to the solutions. That means it is either

something like an equilibrium point, something like a limit cycle, something like a solution trajectory, or whatever we discussed before; it has to be one of those. Then

V dot is negative semi definite in that particular set, and then we define a set E for which this V dot is equal to 0. All the examples we have studied will satisfy this; if you want to see examples you can go back, look at them carefully and try to work out the meaning yourself. So,

that is what it tells us. So, V dot is negative semi definite in M, and then we define another set E in this way: X belongs to M such that V dot of X is equal to 0. That means E is the set of all points of M for which

V dot is equal to 0, that is all it means. Then you take another set N, which is the largest invariant set in E. So, we have talked about several definitions and notations here: we define M, a subset of D, which is a compact set, invariant with respect

to the solutions of the system, with V dot less than or equal to 0 in that set M.

Then we define a set E for which V dot is equal to 0, not less than or equal to 0 but exactly 0, so obviously E is a subset of M. And then you define another set N which is the largest invariant set in E; that means N is a subset of E. Ultimately the theorem tells us that every solution starting in the set M will

converge to N as t approaches infinity.

So, what it means is: if I start in any set M defined like that, then I will eventually go to N; it is subset within subset like that, and ultimately there is some subset to which the solution eventually goes. And the earlier theorem

that we discussed is a special case of this LaSalle invariance theorem; all the examples we studied will satisfy it. But that earlier theorem is with respect to an equilibrium point only, whereas this one is with respect to an invariant set, and hence it is much more general.

And the remarks about this generality are these: V of X is required to be only continuously differentiable, it need not be positive definite; and LaSalle's theorem applies not only to equilibrium points, it is very much more general and can be used to study, for example, the stability behaviour of limit cycles as well. So the earlier theorem

on asymptotic stability can be derived as a corollary of this theorem, a very straightforward corollary rather.

If you simply define these sets, N happens to be the equilibrium point ultimately, so it works out like that. I think we will stop here in this class; we will study this theorem once again, see some examples, and then proceed further to exploit this in something called the domain of attraction, with some examples, later in the next class. Thanks a lot.
