INVESTIGATION OF STABILIZATION CONDITIONS AND ROBUST STABILITY OF DISCRETE ALMOST CONSERVATIVE SYSTEMS

Conditions for the stabilizability of discrete almost conservative systems whose conservative-part coefficient matrix has no multiple eigenvalues are investigated. It is known that a controllable system is stabilized if the coefficient matrix of the closed system is asymptotically stable. The system stabilization algorithm is constructed on the basis of the solvability condition for the Lyapunov equation and the positive definiteness of the matrices $P_0$ and $Q_1$. Theorem 1 shows how to find the parameters of the controlled system under which it is asymptotically stable for sufficiently small values of the parameter $\varepsilon$ ($P>0$, $Q>0$). In addition, for the small parameter $\varepsilon$ that determines the almost conservatism of the system, an interval is found in which the stabilizability conditions are satisfied (Theorem 2).


Introduction
When developing modern navigation and gyroscopic devices for the aircraft and shipbuilding industries, controlled mathematical models belonging to the class of almost conservative systems are often used [1]. Recently, the search for analytical solutions in the construction of robustly stable systems has come to the forefront of research. Therefore, constructing a robust controller for discrete almost conservative systems in analytical form is a topical problem.
In [2], the problem of stabilization of continuous almost conservative systems with a small parameter $\varepsilon>0$ is considered. Let's apply this approach to the stabilization of discrete almost conservative systems. The basis is the fact [3] that, under certain restrictions on the matrix coefficients of almost conservative systems, there exists $\varepsilon_0>0$ such that a parameter $\varepsilon$ from the interval $(0,\varepsilon_0)$ does not affect the stability of these systems.

The aim and objectives of research
To investigate the stabilizability conditions of discrete almost conservative systems in which the coefficient matrix of the conservative part has no multiple eigenvalues, provided the parameter $\varepsilon$ is small enough.
To construct a robust controller for discrete almost conservative systems in analytical form.
To expand the limits of applicability of the method of [2, 3]. If necessary, to find additional conditions that ensure the symmetry of the matrix $P_0$ in the case of discrete almost conservative systems.
To find the interval for the parameter $\varepsilon$ of the closed system in which the stabilization conditions are satisfied.

Construction of a stabilizing controller in analytical form
Let's consider a discrete controlled almost conservative system

$$x_{k+1} = (F_0 + \varepsilon F_1)\,x_k + \varepsilon G u_k, \qquad (1)$$

where $x\in\mathbb{R}^n$ is the state vector, $F_0\in\mathbb{R}^{n\times n}$ is an orthogonal matrix ($F_0^T F_0 = F_0 F_0^T = I$), $F_1\in\mathbb{R}^{n\times n}$ is an arbitrary constant matrix, $u\in\mathbb{R}^m$ is the control vector, $G\in\mathbb{R}^{n\times m}$ is the control matrix, and $\varepsilon>0$ is a small parameter.

Let's assume that the control of the system (1) depends linearly on its state,

$$u = Hx, \qquad (2)$$

where $H\in\mathbb{R}^{m\times n}$ is an unknown constant matrix.
The closed system (1), (2) is stabilizable if its coefficient matrix $F_0+\varepsilon(F_1+GH)$ is asymptotically stable, that is, if it satisfies the Lyapunov matrix equation [4]

$$[F_0+\varepsilon(F_1+GH)]^T P\,[F_0+\varepsilon(F_1+GH)] - P = -2Q, \qquad (3)$$

where $P, Q\in\mathbb{R}^{n\times n}$ are positive definite matrices. Let's find the matrix $H$ on the basis of a positive definite solution pair $P$, $Q$ of this equation.
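As a numerical illustration of criterion (3), the following sketch (with illustrative placeholder matrices, not the paper's data) solves the discrete Lyapunov equation for a given closed-loop matrix and tests the solution for positive definiteness; the scaling $Q \to 2Q$ matches the right-hand side $-2Q$ of (3).

```python
# Numerical check of criterion (3): F is asymptotically stable iff the
# discrete Lyapunov equation F^T P F - P = -2Q has a solution P > 0 for
# some Q > 0. scipy solves A X A^H - X + Q0 = 0, so take A = F^T, Q0 = 2Q.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lyapunov_certificate(F, Q):
    """Solve F^T P F - P = -2Q and test P for positive definiteness."""
    P = solve_discrete_lyapunov(F.T, 2.0 * Q)
    return P, bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))

eps = 0.05
F0 = np.array([[np.cos(1.0), -np.sin(1.0)],
               [np.sin(1.0),  np.cos(1.0)]])   # orthogonal, distinct eigenvalues
W = -np.eye(2)                                  # stands in for F1 + G @ H
P, is_stable = lyapunov_certificate(F0 + eps * W, np.eye(2))
print("P > 0:", is_stable)                      # True for this small eps
```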
On the basis of the form of equation (3), let's seek symmetric solution matrices $P$ and $Q$ in the form of power series in the small parameter:

$$P = P_0 + \varepsilon P_1 + \varepsilon^2 P_2 + \dots, \qquad Q = Q_0 + \varepsilon Q_1 + \varepsilon^2 Q_2 + \dots \qquad (4)$$

Let's assume that the conditions for the convergence of the series (4) [5] are satisfied. Substituting (4) into (3) and equating the coefficients of like powers of $\varepsilon$ shows that the matrix equation (3) is equivalent to an infinite system of equations [3, 6]:

$$F_0^T P_0 F_0 - P_0 = 0, \qquad (5)$$

$$F_0^T P_i F_0 - P_i = D_i, \quad i = 1, 2, \dots, \qquad (6)$$

where, in particular, $D_1 = -2Q_1 - (F_1+GH)^T P_0 F_0 - F_0^T P_0 (F_1+GH)$.

Equation (5) expresses the commutativity of the matrices $F_0$, $P_0$. Let's assume that the orthogonal matrix $F_0$ does not have multiple eigenvalues. In this case, the matrix $P_0$ can be represented in the form [7]

$$P_0 = \alpha_0 I_n + \alpha_1 F_0 + \dots + \alpha_{n-1} F_0^{n-1}, \qquad (7)$$

where $\alpha_0, \alpha_1, \dots, \alpha_{n-1}$ are free parameters and $I_n$ is the identity matrix of dimension $n$. In contrast to the continuous case, the right-hand side of equation (7) is not symmetric, so the problem arises of ensuring not only its positive definiteness but also its symmetry, which can be done with the help of the free parameters.

Using a non-degenerate orthogonal transformation $U\in\mathbb{R}^{n\times n}$, the matrix $F_0$ can be written in the canonical block-diagonal form

$$\tilde F = U^T F_0 U, \qquad (8)$$

composed of $2\times 2$ rotation blocks (and, possibly, the entries $\pm 1$). Let's multiply the $i$-th equation of the system (6) from the left by $U^T$ and from the right by $U$ and obtain

$$\tilde F^T \tilde P_i \tilde F - \tilde P_i = \tilde D_i, \qquad (9)$$

where $\tilde P_i = U^T P_i U$ and $\tilde D_i = U^T D_i U$ is the transformed right-hand side of the initial equation. In (9) it is taken into account that $U^T U = U U^T = I$. In [8], the solution of the Lyapunov matrix equation for discrete almost conservative systems is given when the orthogonal matrix has the canonical form (8). It is shown there that the elements of the matrix $\tilde D_i$ must satisfy $r$ solvability conditions (10) for equations (9). Thus, the right-hand side of equation (9) must satisfy $r$ conditions, and the same number of free parameters remains in the matrix $\tilde P_i$ after its calculation from the $i$-th equation. Let's assume that the matrices $Q_i$, $i=1, 2, \dots$ are given in a specific or parametric form, but such that the resulting matrix $Q$ is positive definite.
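For concreteness, here is a minimal sketch of the first equation of the hierarchy (6): it assembles the right-hand side $D_1$ from $P_0$, $Q_1$ and $W = F_1 + GH$, exactly as obtained by collecting the $\varepsilon^1$ terms of (3). The helper name `first_order_rhs` is ours, not the paper's.

```python
# A minimal sketch of the eps^1 level of the hierarchy (6): the unknown
# P1 satisfies F0^T P1 F0 - P1 = D1 with the right-hand side below.
import numpy as np

def first_order_rhs(F0, F1, G, H, P0, Q1):
    """D1 = -2 Q1 - W^T P0 F0 - F0^T P0 W, where W = F1 + G @ H."""
    W = F1 + G @ H
    return -2.0 * Q1 - W.T @ P0 @ F0 - F0.T @ P0 @ W
```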
The transition to the canonical form is done with the help of a non-degenerate orthogonal transformation; therefore, the solvability of equations (6) likewise involves $r$ conditions on the right-hand sides of these equations. On the other hand, after solving such an equation, not all parameters of the matrix $P_i$ receive specific values: free parameters remain. They receive values at the $(i+1)$-th step, similarly to [9], or at subsequent steps when the solvability conditions are matched.
It is known [10] that a linear matrix equation has a solution when the rank of the coefficient matrix is equal to the rank of the extended (augmented) matrix. Let's find similar conditions that the right-hand sides of equations (6) must satisfy so that these equations have a solution. To do this, let's pass to an equivalent equation whose coefficient matrix has size $n^2$. This can be done through the direct (Kronecker) product [11].
The right-hand side of the $i$-th equation of system (6) is denoted by $D_i$, and the $l$-th rows of the matrices $D_i$, $P_i$ are denoted by $D_{i,l*}$, $P_{i,l*}$, respectively. Stacking these rows into the vectors $x_i$ and $q_i$, let's obtain the following equivalent system of equations:

$$(F_0^T \otimes F_0^T - I_{n^2})\,q_i = x_i, \qquad (11)$$

where $\otimes$ is the symbol of the direct product. The symmetry of the matrix $P_i$ recovered from the solution $q_i$ is achieved by means of the free parameters.
So, if the rank equalities (necessary and sufficient conditions)

$$\operatorname{rank}(F_0^T \otimes F_0^T - I_{n^2}) = \operatorname{rank}(F_0^T \otimes F_0^T - I_{n^2} \mid x_i) \qquad (12)$$

are satisfied, then equations (11) are solvable. In other words, the matrices of both sides of (12) have a common zero-space. The matrix $F_0^T \otimes F_0^T$ has the eigenvalues $\lambda_l \lambda_j$ [11], where $\lambda_l$, $\lambda_j$, $l, j \in \{1, \dots, n\}$, are the eigenvalues of the matrix $F_0$. Hence $n$ eigenvalues of the coefficient matrix $F_0^T \otimes F_0^T - I_{n^2}$ are zero, because $\lambda_l \bar\lambda_l = 1$ for every eigenvalue of the orthogonal matrix $F_0$, that is,

$$\operatorname{rank}(F_0^T \otimes F_0^T - I_{n^2}) = n^2 - n. \qquad (13)$$
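The rank count (13) is easy to check numerically. The sketch below (illustrative: $F_0$ is assembled from two plane rotations, so it is orthogonal with no multiple eigenvalues) builds the coefficient matrix of (11) and confirms rank $n^2 - n$.

```python
# Numerical confirmation of (13): for an orthogonal F0 without multiple
# eigenvalues, the coefficient matrix of (11) has rank n^2 - n.
import numpy as np

def rotation(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

n = 4
F0 = np.zeros((n, n))
F0[:2, :2] = rotation(0.7)     # eigenvalues exp(+-0.7i)
F0[2:, 2:] = rotation(1.9)     # eigenvalues exp(+-1.9i)

K = np.kron(F0.T, F0.T) - np.eye(n * n)   # coefficient matrix of (11)
print(np.linalg.matrix_rank(K))            # prints 12 = n^2 - n
```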

Thus, the free parameters of the matrix $P_{i-1}$, $i = 1, 2, \dots$, can be determined from the condition for the solvability of the $i$-th equation, namely

$$(g, x_i) = 0, \qquad (14)$$

where $g$ is the general left null vector of the coefficient matrix of (11), containing arbitrary constants. To fulfill (14) and calculate the free parameters of the matrix $P_{i-1}$, it is sufficient to equate to zero the coefficients of the arbitrary constants in the scalar product $(g, x_i)$.

Now let's calculate the matrix $P_i$ with free parameters. First, let's reduce the coefficient matrix $F_0^T \otimes F_0^T - I_{n^2}$ to upper triangular form by means of left elementary operations [10], which correspond to the matrices $S_1, S_2, \dots, S_l$. The transformation matrix $S = S_l S_{l-1} \cdots S_1$ thus allows passing to the simplified system of equations

$$S\,(F_0^T \otimes F_0^T - I_{n^2})\,q_i = S\,x_i. \qquad (15)$$

From equations (15), the vector $q_i$ is found quite simply by calculating the unknown elements from the bottom up. The symmetry of the matrix $P_i$ is then ensured by means of a part of its $n$ free parameters.
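A numerical version of the solvability test (14) can be written as follows. Instead of the elementary-operation triangularization used in the text, this sketch takes an orthonormal basis of the left null space via scipy, which serves the same purpose of exposing the $r$ conditions; `solvability_defect` is our helper name.

```python
# Numerical form of the solvability test (14) for the i-th equation of
# (11): x_i must be orthogonal to the left null space of the coefficient
# matrix.
import numpy as np
from scipy.linalg import null_space

def solvability_defect(F0, D):
    """Return max |(g, x)| over an orthonormal left-null basis g."""
    n = F0.shape[0]
    K = np.kron(F0.T, F0.T) - np.eye(n * n)
    G_basis = null_space(K.T)          # left null space of K
    x = D.reshape(-1)                  # rows of D stacked top to bottom
    return float(np.max(np.abs(G_basis.T @ x))) if G_basis.size else 0.0

# A value near machine precision means the equation is solvable for P_i.
```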
It should be noted that separating the procedure for finding the matrix $P_i$ with free parameters from the assignment of specific values to the free parameters of the matrix $P_{i-1}$ simplifies the calculations and reveals internal relationships of the equation that can be used to solve other problems, for example, the problem of system stabilization.
Let's describe the stabilization of the system (1), (2), based on condition (14) and a positive definite solution of the Lyapunov matrix equation (3).
Theorem 1. Suppose that the orthogonal matrix $F_0$ of general form of the system (1) does not have multiple eigenvalues and that $Q_1$ is a symmetric positive definite matrix. If the elements of the matrix $H$ and the expansion coefficients of (7) satisfy the conditions

$$(g, x_1) = 0, \qquad (16)$$

$$P_0^T = P_0, \qquad (17)$$

and one of the alternatives

$$\alpha_0 + v_{ll} > \sum_{j \ne l} |v_{lj}|, \quad l = 1, \dots, n, \qquad (18)$$

$$\alpha_0 > -\lambda_{\min}(V), \qquad (19)$$

where $V = \alpha_1 F_0 + \dots + \alpha_{n-1} F_0^{n-1} = (v_{lj})$ and $\lambda_{\min}(V)$ is the minimal eigenvalue of the matrix $V$, then the closed system (1), (2) is asymptotically stable for sufficiently small $\varepsilon > 0$.

Proof. By the hypothesis of the theorem, the orthogonal matrix $F_0$ does not have multiple eigenvalues; therefore, the zero approximation of the solution matrix can be written in the form of the polynomial (7) in the matrix $F_0$. Equality (17) expresses the symmetry of the matrix $P_0$, which is always achievable by means of the parameters $\alpha_1, \dots, \alpha_{n-1}$. A symmetric matrix $A = (a_{lj})$ with a dominant positive main diagonal,

$$a_{ll} > \sum_{j \ne l} |a_{lj}|, \quad l = 1, \dots, n, \qquad (20)$$

is positive definite. By analogy with (20), a sufficient condition for the matrix $P_0 = \alpha_0 I + V$ can be represented in terms of the dominance of the diagonal $\alpha_0 I$, namely, inequality (18).
To obtain a positive definite matrix $P_0$, the condition on the parameter $\alpha_0$ can be represented in another way. It is known [10] that the eigenvalues of a positive definite matrix are all positive, that is,

$$\lambda(P_0) = \alpha_0 + \lambda(V) > 0, \qquad (21)$$

where $\lambda(\cdot)$ is an arbitrary eigenvalue of the matrix. From inequality (21) one obtains the condition (19) for the positive definiteness of the matrix $P_0$, which defines the exact lower bound for the parameter $\alpha_0$, whereas condition (18) can give an overestimated one.
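In computational terms, condition (19) amounts to a single symmetric eigenvalue calculation; a minimal sketch, assuming $V$ has already been symmetrized via (17):

```python
# Condition (19) in computational form: P0 = alpha0*I + V is positive
# definite as soon as alpha0 exceeds -lambda_min(V). V is assumed to be
# the (already symmetric) matrix alpha1*F0 + ... + alpha_{n-1}*F0^(n-1).
import numpy as np

def alpha0_lower_bound(V):
    """Exact lower bound for alpha_0 given by (19) and (21)."""
    return -np.linalg.eigvalsh((V + V.T) / 2).min()

# Usage: pick any alpha0 strictly above the bound, e.g. with a margin:
# alpha0 = alpha0_lower_bound(V) + 0.1
# P0 = alpha0 * np.eye(V.shape[0]) + V
```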
From the above (equations (11)−(14)) it follows that condition (16) determines the solvability of the first equation of system (6). The vector $g$ contains arbitrary constants; therefore, to determine the required parameters, it is sufficient to equate the coefficients of these constants to zero, taking into account conditions (17)−(19). If the fulfillment of (16) can be achieved with the help of the parameters of the matrix $H$, then let's compute a concrete symmetric positive definite matrix $P_0$ from the conditions (17)−(19), and then find the matrix $H$ from (16).
Thus, the matrix $P_0>0$ is constructed and $Q_1>0$ is chosen; therefore, in accordance with the expansions (4), for sufficiently small $\varepsilon$ the matrices $P_i$, $Q_{i+1}$, $i=1, 2, \dots$ do not affect the positive definiteness of the matrices $P$, $Q$, respectively. So, if the matrix $H$ satisfies equality (16), then the coefficient matrix $F_0+\varepsilon(F_1+GH)$ of the closed system (1), (2) is asymptotically stable.
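The conclusion of Theorem 1 can also be probed empirically: for a controller $H$ obtained from (16)−(19), the spectral radius of $F_0 + \varepsilon(F_1 + GH)$ should stay below 1 on an interval of small $\varepsilon$. A sketch (the function name and the grid are ours, for illustration only):

```python
# Empirical probe of Theorem 1: with H from (16)-(19), the closed-loop
# matrix F0 + eps*(F1 + G H) should have spectral radius < 1 for all
# sufficiently small eps > 0.
import numpy as np

def stable_eps_values(F0, F1, G, H, eps_grid):
    """Subset of eps_grid for which the closed system is stable."""
    W = F1 + G @ H
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    return [eps for eps in eps_grid if rho(F0 + eps * W) < 1.0]

# Example call: stable_eps_values(F0, F1, G, H, np.linspace(0.01, 1.0, 100))
```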
It should be noted that formulas (16)−(19) define the stabilizing controller in analytical form.

It is necessary to stabilize the given discrete almost conservative system. The matrix $F_0$ has distinct eigenvalues; therefore, to stabilize the given system, let's apply Theorem 1. The zero approximation of the solution $P$ has the form $P_0 = \alpha_0 I + \alpha_1 F_0 + \alpha_2 F_0^2 + \alpha_3 F_0^3$.

The condition $P_0^T = P_0$ implies that $\alpha_2 = -\alpha_3$, $\alpha_1 = (1/2)\alpha_3$. Let $\alpha_3 = 1$; then, given the minimal eigenvalue of the matrix $V$, from (19) let's choose $\alpha_0 = 2 > -\lambda_{\min}(V)$ and obtain a positive definite matrix $P_0$. Since the equality $\operatorname{rank}(I_4 \otimes I_4 - F_0^T \otimes F_0^T) = 12$ holds, the vector $g$ has free parameters, four of them: $\nu_1$, $\nu_2$, $\nu_3$, $\nu_4$. Let's select the matrix $Q_1 = I$ and form the vector $x_1 \in \mathbb{R}^{16}$ from the elements of the matrix $D_1$ along the rows from top to bottom, as in (11). In equation (16) let's equate to zero the coefficients of the arbitrary constants of the vector $g$ and, from the resulting system of equations, two of which are independent, obtain the elements of the required matrix $H$ (22). The free parameters in (22) are set equal to zero. Thus, the asymptotically stable coefficient matrix $F = F_0 + \varepsilon(F_1 + GH)$ of the closed system (1), (2) is obtained, where the parameter $\varepsilon$ takes sufficiently small values.

Construction of robust stability interval
Theorem 1 shows how to find the parameters of the system (1), (2) under which it is asymptotically stable for sufficiently small values of the parameter $\varepsilon$. But it is expedient to find an interval for $\varepsilon$ in which the constructed closed system is asymptotically stable. For this, on the basis of Theorem 1, it is necessary to find a solution of the Lyapunov matrix equation (3) in the form of the expansion (4) and an interval for the parameter $\varepsilon$ in which the solution matrices $P$ and $Q$ are positive definite.
The following statement shows how to do this.

Theorem 2. Suppose that the orthogonal matrix $F_0$ of general form does not have multiple eigenvalues, the matrices $P_0>0$, $Q_1>0$, $H$ are defined by Theorem 1, and the symmetric matrices

$$P = P_0 + \varepsilon P_1, \qquad Q = \varepsilon Q_1 + \varepsilon^2 Q_2 + \varepsilon^3 Q_3 \qquad (23)$$

satisfy the Lyapunov matrix equation (3).
Also, let $\mu_{\max}$ be the maximal eigenvalue of the matrix pencil $\mu P_0 + P_1$, let $\delta_i$, $i = 1, \dots, 2n$, be the eigenvalues of the quadratic matrix pencil $\delta^2 Q_1 + \delta Q_2 + Q_3$, and let $\delta_{\min}$, $\delta_{\max}$ be, respectively, its minimal and maximal real eigenvalues.
Then the closed system (1), (2) is asymptotically stable for all values of the parameter $\varepsilon$ in the interval (24) determined by $\mu_{\max}$, $\delta_{\min}$ and $\delta_{\max}$.

Proof. Let's show that the solution matrices (23) exist.
The elements of the matrix $H$ and the coefficients $\alpha_0, \alpha_1, \dots, \alpha_{n-1}$ are calculated from conditions (16)−(19) under the assumption of positive definiteness of the chosen symmetric matrix $Q_1$. The matrix $P_0$ is found from the expansion (7), and conditions (17)−(19) ensure its symmetry and positive definiteness.
Next, let's calculate the matrix $P_1$ from the first equation of system (6) for the known right-hand side $D_1$; its solvability condition (16) is satisfied by Theorem 1. The free parameters of the found matrix $P_1$ can be set equal to zero or calculated; the latter fits the matrix better into the expansion (7) and may give a wider interval. To calculate the free parameters, let's choose a symmetric matrix $Q_2$ and equate to zero the coefficients of the arbitrary constants in the scalar product $(g, x_2)$. In this case, the matrix $Q_2$ is chosen only to calculate the values of the free parameters. Parameters that do not receive values are set to zero.
Let's set the matrices $P_i$, $Q_{i+2}$, $i = 2, 3, \dots$ equal to zero and calculate $Q_2$, $Q_3$ by the formulas

$$Q_2 = -\tfrac{1}{2}\big[(F_1+GH)^T P_0 (F_1+GH) + (F_1+GH)^T P_1 F_0 + F_0^T P_1 (F_1+GH)\big],$$

$$Q_3 = -\tfrac{1}{2}(F_1+GH)^T P_1 (F_1+GH),$$

obtained by equating the coefficients of $\varepsilon^2$ and $\varepsilon^3$ in (3). This way of determining the elements of the expansions (4) gives grounds to state that the matrices $P = P_0 + \varepsilon P_1$, $Q = \varepsilon Q_1 + \varepsilon^2 Q_2 + \varepsilon^3 Q_3$ satisfy the Lyapunov matrix equation (3).

Now let's find conditions on the parameter $\varepsilon$ under which the matrices $P$, $Q$ are positive definite. Instead of the pencil $P_0 + \varepsilon P_1$, let's consider the equivalent pencil of matrices $\mu P_0 + P_1$, which yields the same range of the parameter $\varepsilon = 1/\mu$. The matrices $P_0$, $P_1$ are symmetric; therefore, the eigenvalues of the pencil $P(\mu)$ are real numbers. Since $P_0 > 0$, for sufficiently large values of $\mu > 0$ the matrix $\mu P_0 + P_1$ is positive definite, which follows from the eigenvalues of the matrix $\mu P_0$ (they are large and positive) and the eigenvalues of a sum of Hermitian (symmetric) matrices [10]. The eigenvalues of a matrix depend continuously on its elements [12]; therefore, their signs do not change up to the first zero of $|\mu P_0 + P_1|$ encountered when moving right along the $\mu$-axis. Thus, for $\mu > \mu_{\max} > 0$, where $\mu_{\max}$ is the maximal eigenvalue of the pencil, $\mu P_0 + P_1 > 0$. If $\mu_{\max} \le 0$, then the interval $(0, \infty)$ is obtained for $\varepsilon$. The possible cases are described by the intervals (24).

Now let's consider the quadratic pencil of matrices $Q_1 + \varepsilon Q_2 + \varepsilon^2 Q_3$. The matrix $Q_1$ is symmetric positive definite; therefore, the given pencil of matrices is strictly equivalent to the pencil $\delta^2 Q_1 + \delta Q_2 + Q_3$ with $\delta = 1/\varepsilon$ [10].
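The quantities $\mu_{\max}$, $\delta_{\min}$, $\delta_{\max}$ of Theorem 2 can be computed numerically as follows. This sketch (with hypothetical helper names, matrices assumed given by the construction above) treats $\mu P_0 + P_1$ as a symmetric generalized eigenproblem and linearizes the quadratic pencil in the standard companion form.

```python
# Computing the quantities of Theorem 2. mu_max: largest eigenvalue of
# the symmetric pencil mu*P0 + P1 (P0 > 0, so the spectrum is real);
# delta_i: eigenvalues of delta^2*Q1 + delta*Q2 + Q3 via the standard
# companion linearization.
import numpy as np
from scipy.linalg import eig, eigh

def pencil_mu_max(P0, P1):
    """mu with det(mu*P0 + P1) = 0: solve (-P1) g = mu * P0 g."""
    return float(np.max(eigh(-P1, P0, eigvals_only=True)))

def quadratic_pencil_eigs(Q1, Q2, Q3):
    """delta with det(delta^2 Q1 + delta Q2 + Q3) = 0."""
    n = Q1.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-Q3, -Q2]])
    B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), Q1]])
    return eig(A, B, right=False)

# P = P0 + eps*P1 > 0 holds for eps in (0, 1/mu_max) when mu_max > 0,
# and for all eps > 0 when mu_max <= 0, matching the proof above.
```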