Observador Neuronal (Neural Observer)
-
Scheme for Trajectory Tracking with Constrained Inputs
-
Now let us consider the recurrent high order neural observer (RHONO)
$$\dot{\hat{x}} = A\hat{x} + W z(\hat{x}, y) + b\,v(t) + K\,(y - c^{T}\hat{x}), \qquad \hat{y} = c^{T}\hat{x} \qquad (4)$$
where $K$ is selected such that $A_{c} = A - Kc^{T}$ is Hurwitz and $v(t)$ is an additional stabilizing signal to be designed.
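For concreteness, a minimal Python sketch of the right-hand side of (4) follows; the matrices A, W, b, c, K, the regressor z, and the stabilizing signal v are placeholders supplied by the user (the signal v(t) is derived below).

```python
import numpy as np

def rhono_rhs(x_hat, y, A, W, b, c, K, z, v):
    """Right-hand side of the neural observer (4).

    x_hat : current state estimate, shape (n,)
    y     : measured plant output (scalar)
    z     : callable returning the high order regressor z(x_hat, y)
    v     : callable returning the stabilizing signal, fed with y_tilde here
    """
    y_hat = c @ x_hat            # estimated output  y_hat = c^T x_hat
    y_tilde = y - y_hat          # output estimation error
    return (A @ x_hat            # linear part
            + W @ z(x_hat, y)    # neural part  W z(x_hat, y), W of shape (n, dim z)
            + b * v(y_tilde)     # stabilizing term  b v(t)
            + K * y_tilde)       # output injection  K (y - c^T x_hat)
```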
Defining the state and output estimation errors as
$$\tilde{x}(t) = x(t) - \hat{x}(t), \qquad \tilde{y}(t) = y(t) - \hat{y}(t),$$
whose dynamics are given by
$$\dot{\tilde{x}}(t) = (A - Kc^{T})\,\tilde{x}(t) + W^{*} z(x, y) - W z(\hat{x}, y) - b\,v(t) \qquad (5)$$
-
Adding and subtracting the term $W^{*} z(\hat{x}, y)$, we obtain
$$\dot{\tilde{x}}(t) = A_{c}\,\tilde{x}(t) + \tilde{W} z(\hat{x}, y) + W^{*}\tilde{z}_{y} - b\,v(t) \qquad (6)$$
where
$$\tilde{W} = W^{*} - W, \qquad A_{c} = A - Kc^{T}, \qquad \tilde{z}_{y} = z(x, y) - z(\hat{x}, y).$$
The term $\tilde{z}_{y}$ is bounded according to
$$\|\tilde{z}_{y}\| = \|z(x, y) - z(\hat{x}, y)\| \le L_{z}\,\|\tilde{x}\|.$$
The stabilizing signal $v(t)$ is determined via the Lyapunov methodology.
-
Stability analysis
Consider the Lyapunov function candidate
$$V = \tfrac{1}{2}\,\tilde{x}^{T} P\,\tilde{x} + \tfrac{1}{2}\,\operatorname{tr}\{\tilde{W}^{T}\tilde{W}\} \qquad (7)$$
where $P > 0$ is a positive definite symmetric matrix which solves the Riccati equation
$$A_{c}^{T}P + PA_{c} = -Q, \qquad Pb = c,$$
with $Q$ a positive definite symmetric matrix.
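As a side note (not from the slides), the Lyapunov part of this Riccati condition can be solved numerically; the sketch below uses SciPy with illustrative placeholder matrices and does not enforce the additional constraint Pb = c, which restricts the admissible choices of Q.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative placeholders: a Hurwitz A_c and a positive definite Q.
A_c = np.array([[0.0, 1.0],
                [-400.0, -600.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so with a = A_c^T
# and q = -Q we obtain  A_c^T P + P A_c = -Q.
P = solve_continuous_lyapunov(A_c.T, -Q)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)  # P is positive definite
```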
The time derivative of $V$ is given by
$$\dot{V} = \tfrac{1}{2}\bigl(A_{c}\tilde{x} + f - b\,v(t)\bigr)^{T} P\,\tilde{x} + \tfrac{1}{2}\,\tilde{x}^{T} P\bigl(A_{c}\tilde{x} + f - b\,v(t)\bigr) + \operatorname{tr}\{\dot{\tilde{W}}^{T}\tilde{W}\},$$
where
$$f(\tilde{W}, \hat{x}, y) = \tilde{W} z(\hat{x}, y) + W^{*}\tilde{z}_{y}.$$
-
which can be simplified as
$$\dot{V} = -\tfrac{1}{2}\,\tilde{x}^{T} Q\,\tilde{x} + \tilde{x}^{T} P\,\tilde{W} z(\hat{x}, y) + \tilde{x}^{T} P\,W^{*}\tilde{z}_{y} - \tilde{x}^{T} P b\,v(t) + \operatorname{tr}\{\dot{\tilde{W}}^{T}\tilde{W}\},$$
where, since $Pb = c$ and $c^{T}\tilde{x} = \tilde{y}$, the last control term reduces to $-\tilde{y}\,v(t)$.
We now define the learning adaptation law as
$$\dot{W} = \tfrac{1}{2}\,\tilde{y}\,b\,z^{T}(\hat{x}, y),$$
which can be written term by term as
$$\dot{w}_{ij} = \tfrac{1}{2}\,\tilde{y}\,b_{i}\,z_{j}(\hat{x}, y).$$
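In a discrete-time simulation this adaptation law can be applied with a simple Euler step; the sketch below assumes W stored as an (n, dim z) array and uses an illustrative step size.

```python
import numpy as np

def update_weights(W, y_tilde, z_hat, b, dt=1e-3):
    """One Euler step of the adaptation law dW/dt = 0.5 * y_tilde * b z^T,
    i.e. dw_ij/dt = 0.5 * y_tilde * b_i * z_j(x_hat, y)."""
    return W + dt * 0.5 * y_tilde * np.outer(b, z_hat)
```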
Replacing the learning law in the Lyapunov time derivative,
we obtain
$$\dot{V} = -\tfrac{1}{2}\,\tilde{x}^{T} Q\,\tilde{x} + \tilde{y}\,b^{T} W^{*}\tilde{z}_{y} - \tilde{y}\,v(t) \qquad (10)$$
-
Applying the inequality
$$X^{T}Y \le \tfrac{k}{2}\,\|X\|^{2} + \tfrac{1}{2k}\,\|Y\|^{2}, \qquad k > 0,$$
which holds for any vectors $X, Y \in \mathbb{R}^{n}$, to the second term of (10), together with the bound on $\tilde{z}_{y}$, the Lyapunov time derivative can be bounded as
$$\dot{V} \le \bar{V}_{f} + \bar{V}_{g}\,v(t),$$
with
$$\bar{V}_{f} = -\tfrac{1}{2}\,\tilde{x}^{T} Q\,\tilde{x} + \tfrac{1}{2}\bigl(1 + \|b\|^{2}\|W\|^{2}\bigr)\,\tilde{y}^{2}, \qquad \bar{V}_{g} = -\tilde{y}.$$
-
Then, the observer stabilizing signal $v(t)$ which guarantees $\dot{V} < 0$ is chosen as
$$v(t) = \gamma\,\bigl(1 + \|b\|^{2}\|W\|^{2}\bigr)\,\tilde{y}, \qquad \gamma > 1, \qquad (12)$$
which corresponds to $v(t) = -\tfrac{1}{2}\,R^{-1}(\tilde{y}, W)\,\bar{V}_{g}$ with $R^{-1}(\tilde{y}, W) = 2\gamma\bigl(1 + \|b\|^{2}\|W\|^{2}\bigr)$.
Replacing (12) in the bound on $\dot{V}$, we obtain
$$\dot{V} \le -\tfrac{1}{2}\,\tilde{x}^{T} Q\,\tilde{x} - \bigl(\gamma - \tfrac{1}{2}\bigr)\bigl(1 + \|b\|^{2}\|W\|^{2}\bigr)\,\tilde{y}^{2} < 0. \qquad (13)$$
The complete neural observer is then
$$\dot{\hat{x}} = A\hat{x} + W z(\hat{x}, y) + b\,\gamma\bigl(1 + \|b\|^{2}\|W\|^{2}\bigr)\,\tilde{y} + K\,(y - c^{T}\hat{x}), \qquad \hat{y} = c^{T}\hat{x}.$$
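A small sketch of the stabilizing signal (12), with gamma, b, W, and the output error passed in as placeholder arguments:

```python
import numpy as np

def stabilizing_signal(y_tilde, W, b, gamma=2.0):
    """Stabilizing signal (12): v = gamma * (1 + ||b||^2 ||W||^2) * y_tilde, gamma > 1."""
    return gamma * (1.0 + np.linalg.norm(b) ** 2 * np.linalg.norm(W) ** 2) * y_tilde
```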
-
INVERSE OPTIMALITY ANALYSIS
Once the observer stabilizing signal is obtained, we proceed to analyze its optimality with respect to a cost functional defined by
$$J(v) = \lim_{t\to\infty}\Bigl[\,2\,V + \int_{0}^{t}\bigl(l(\tilde{x}, \tilde{W}) + v^{T} R(\tilde{x}, \tilde{W})\,v\bigr)\,d\tau\Bigr] \qquad (14)$$
The Lyapunov function $V$ solves the Hamilton-Jacobi-Bellman equation
$$l(\tilde{x}, \tilde{W}) + 2\,\bar{V}_{f} - \tfrac{1}{2}\,\bar{V}_{g}\,R^{-1}(\tilde{x}, \tilde{W})\,\bar{V}_{g}^{T} = 0,$$
and $J(v)$ is bounded when $V$ is bounded as $t \to \infty$.
We require $l(\tilde{x}, \tilde{W})$ to be positive definite and radially unbounded with respect to $\tilde{x}$ and $\tilde{W}$, and define
$$l(\tilde{x}, \tilde{W}) = -2\,\bar{V}_{f} + \tfrac{1}{2}\,\bar{V}_{g}\,R^{-1}(\tilde{x}, \tilde{W})\,\bar{V}_{g}^{T}. \qquad (15)$$
-
Substituting $\bar{V}_{f}$, $\bar{V}_{g}$, and $R$ into (15), and using the learning adaptation law, we obtain
$$l(\tilde{x}, \tilde{W}) = \tilde{x}^{T} Q\,\tilde{x} + (\gamma - 1)\bigl(1 + \|b\|^{2}\|W\|^{2}\bigr)\,\tilde{y}^{2},$$
which is positive definite and radially unbounded for $\gamma > 1$. Hence, (14) with $l$ as in (15) is a suitable cost functional, which is evaluated by replacing (15) into (14) and using (13) to obtain its optimal value. This optimal value is achieved by the stabilizing signal $v(t)$.
-
Simulation Example for Neural Observer
Consider the Van der Pol nonlinear oscillator dynamical system
$$\dot{x}_{1} = x_{2}$$
$$\dot{x}_{2} = 0.5\,(1 - x_{1}^{2})\,x_{2} - x_{1} + 0.5\cos(1.1\,t)$$
with output and initial condition
$$y = x_{1}, \qquad x(0) = \begin{bmatrix} 0 & 0.25 \end{bmatrix}^{T}.$$
We use the neural observer given by (4) with observer gains 400, 600, and 470, and with the initial estimate $\hat{x}(0) = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix}^{T}$.
We present one simulation without the high order terms and another simulation considering them. For the second case, we consider 10 high order terms
$$z_{i}(\hat{x}) = \tanh^{i}(k\,y), \qquad k = 0.65, \qquad i = 1, \dots, 10,$$
as in the simulation sketch below.
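A self-contained sketch of this simulation, assuming the forced Van der Pol plant above, the observer (4) with the stabilizing signal (12) and the weight adaptation law, and the tanh-based high order terms; the nominal matrix A, the gain K, gamma, the weight initialization, and the step size are illustrative placeholders rather than the values used in the slides.

```python
import numpy as np

# Forced Van der Pol plant: x1' = x2, x2' = 0.5(1 - x1^2)x2 - x1 + 0.5cos(1.1 t), y = x1
def plant(x, t):
    x1, x2 = x
    return np.array([x2, 0.5 * (1.0 - x1**2) * x2 - x1 + 0.5 * np.cos(1.1 * t)])

# High order regressor: z_i = tanh^i(k*y), i = 1..10, with k = 0.65
def regressor(y, n_terms=10, k=0.65):
    return np.array([np.tanh(k * y) ** i for i in range(1, n_terms + 1)])

# Observer parameters (illustrative placeholders)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz nominal matrix
c = np.array([1.0, 0.0])                   # y = c^T x (the output is x1)
b = np.array([0.0, 1.0])
K = np.array([400.0, 600.0])               # output-injection gain
gamma = 2.0                                # stabilizing-signal gain, gamma > 1

def observer_step(x_hat, W, y, dt):
    z = regressor(y)
    y_tilde = y - c @ x_hat
    v = gamma * (1.0 + np.linalg.norm(b)**2 * np.linalg.norm(W)**2) * y_tilde  # (12)
    dx_hat = A @ x_hat + W @ z + b * v + K * y_tilde                           # (4)
    dW = 0.5 * y_tilde * np.outer(b, z)                                        # adaptation law
    return x_hat + dt * dx_hat, W + dt * dW

# Forward-Euler simulation
dt, T = 1e-4, 20.0
x = np.array([0.0, 0.25])                  # plant initial condition (from the slides)
x_hat = np.array([0.5, 0.5])               # observer initial estimate (from the slides)
W = np.zeros((2, 10))
for step in range(int(T / dt)):
    t = step * dt
    y = x[0]
    x_hat, W = observer_step(x_hat, W, y, dt)
    x = x + dt * plant(x, t)

print("final estimation error:", x - x_hat)
```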
-
RHONO without high order terms
[Figures: time evolution of the estimated states; estimation error for x2]
-
RHONO with ten high order terms
[Figures: time evolution of the estimated states; estimation error for x2]
-
RHONO with ten high order terms