Code_Aster ® Version 2.3
Title: On the methods of decomposition of the GAUSS type
Date: 09/09/93
Author(s): D. SELIGMANN
Key: R6.02.01-A
Page: 1/24
Reference Manual -- Booklet R6.02: Direct methods -- HI-75/93/084/A
Organization(s): EDF/IMA/MN
Document: R6.02.01

On the methods of decomposition of the GAUSS type
Summary:

This document presents some aspects of the GAUSS decomposition method. After a brief presentation, we recall the main advantages and disadvantages of this direct method. We then detail the implementation of the LDLT algorithm in Code_Aster.

GAUSS (1777 - 1855) is at the origin of all the direct methods for the numerical solution of linear systems. Let him be thanked here for it.
Contents

1 General information on the methods of the GAUSS type ... 3
  1.1 Presentation of the method ... 3
  1.2 Concept of pivot ... 4
  1.3 Stability of the method ... 4
  1.4 Uniqueness of the decomposition ... 4
  1.5 An alternative: the CROUT factorization ... 4
  1.6 Case of symmetric matrices ... 6
2 Disadvantages of the methods of the GAUSS type ... 7
  2.1 The number of operations ... 7
  2.2 Filling of the matrix ... 8
  2.3 The loss of precision in the course of the computation ... 10
    2.3.1 Summary study of the loss of precision ... 10
    2.3.2 Estimate of the error on the solution ... 12
    2.3.3 Estimate of the number of significant digits of the solution ... 13
    2.3.4 A method to reduce the conditioning ... 13
    2.3.5 Example of a badly conditioned matrix ... 13
    2.3.6 A geometrical interpretation of bad conditioning ... 14
  2.4 Criteria of determination of a null pivot ... 15
3 The LDLT method by blocks implemented in Aster ... 16
  3.1 Implementation of the factorization ... 16
  3.2 Implementation of the resolution ... 19
  3.3 Scaling ... 20
  3.4 Tests on the pivot ... 20
  3.5 Factorization of complex matrices ... 20
Appendix 1: Conventional storage methods ... 21
Appendix 2: Variations on the algorithm of GAUSS ... 23
4 Bibliography ... 24
1 General information on the methods of the GAUSS type

1.1 Presentation of the method

Starting from the observation that it is easy to solve the system A.x = B when A is a lower or upper triangular matrix, one seeks to decompose the full initial matrix into a product of triangular matrices.

The guiding principle is to seek a regular matrix P, known as a permutation matrix, such that the product P.A is triangular, then to solve

    P.A.x = P.B

Note:
In practice, P is determined as a product of elementary permutation matrices, P = P(k) ... P(1). The matrices P(i) depend on the variant chosen, but one never computes the matrix P explicitly, only P.A and P.B.

The matrix A being factorized in the general form L.U (L lower triangular matrix, U upper triangular matrix), we are led to solve the two triangular linear systems:

    L.y = B
    U.x = y
Note:
In the method known as GAUSS elimination, the factorization of A and the resolution of L.y = B are carried out simultaneously.
The following algorithm carries out the GAUSS elimination and the resolution of L.y = B. At stage (p+1) we have:

    a_ij(p+1) = a_ij(p) - a_ip(p) . a_pj(p) / a_pp(p)    for p+1 <= i <= n,  p+1 <= j <= n+1

    a_ij(p+1) = a_ij(p)                                  for 1 <= i <= p,    1 <= j <= n+1

    a_ij(p+1) = 0                                        for p+1 <= i <= n,  1 <= j <= p

Note:
In this writing of the GAUSS elimination algorithm, the second member B is regarded as an additional column of the matrix, which is then treated as an n x (n+1) matrix.
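The stage formulas above can be sketched in a few lines of Python. This is an illustrative dense implementation (the function name and the list-of-lists storage are choices of this sketch, not of Code_Aster, which works on profile-stored matrices, see section 3); it assumes non-null pivots, as the text does at this point:

```python
def gauss_solve(a, b):
    """GAUSS elimination with the second member treated as an (n+1)-th column,
    followed by back-substitution on the resulting triangular system U.x = y."""
    n = len(a)
    # Augment the matrix with b, as in the note above.
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for p in range(n - 1):                 # stage p (0-based here)
        for i in range(p + 1, n):          # rows below the pivot
            f = m[i][p] / m[p][p]          # assumes a non-null pivot a_pp
            for j in range(p, n + 1):      # columns, including the second member
                m[i][j] -= f * m[p][j]
    # Back-substitution: U.x = y, where y occupies column n of m.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - s) / m[i][i]
    return x
```

The elimination loop is exactly the first stage formula; the two other formulas (rows already treated, zeros below the diagonal) are implicit in the loop bounds.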
1.2 Concept of pivot

The preceding implementation of the GAUSS elimination algorithm implicitly supposes that the diagonal terms, called pivots, do not become null in the course of the computation.

Case of null pivots: a regular matrix can have a null pivot.

Example: the matrix

    | 0  1 |
    | 2  1 |

To remedy this disadvantage, one can use a pivoting strategy:

· complete pivoting: this strategy consists in choosing as pivot the largest term in the remaining block, then carrying out a permutation of row and column. One then has the system

    P.A.Q (QT.x) = P.B

where P is the permutation matrix of the rows and Q that of the columns. The solution found is then y = QT.x, and it is thus necessary to keep the permutation matrix Q to recover the sought solution x = Q.y;

· partial pivoting: the pivot is sought as the term of maximum absolute value, among the not yet treated terms, in the current column (the k-th one at stage k); one then carries out a row permutation.
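The partial pivoting strategy can be sketched as follows (an illustrative Python variant of the previous sketch, not Aster code); note that it handles without difficulty the null-pivot matrix given above:

```python
def gauss_partial_pivot(a, b):
    """Elimination with partial pivoting: at stage k, the row whose entry in
    column k has maximum absolute value (among rows not yet treated) is
    swapped into the pivot position."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for k in range(n - 1):
        piv = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[piv] = m[piv], m[k]        # row permutation only
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - s) / m[i][i]
    return x

# The regular matrix with a null pivot from the example above:
x = gauss_partial_pivot([[0.0, 1.0], [2.0, 1.0]], [1.0, 3.0])
```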
1.3 Stability of the method

Definition: A numerical method for solving a linear system is said to be mathematically stable when "whatever the regular matrix A, the algorithm succeeds".

Theorem: The method of GAUSS with a pivoting strategy is mathematically stable for any regular matrix.

Corollary: If, during a GAUSS factorization with pivoting, a null pivot is detected, then the matrix is singular and the system does not have a unique solution.

Theorem: The method of GAUSS (without pivoting) is stable for real positive definite matrices.

For more details, one may consult the basic books [bib13] [bib14] [bib6].
1.4 Uniqueness of the decomposition

Proposition: The GAUSS decomposition is not unique, but if one prescribes the diagonal of L or of U, then the decomposition is unique.
1.5 An alternative: the CROUT factorization

The CROUT factorization method [1] is the same algorithm: it requires the same number of operations and produces the same filling of the matrix, but the calculations are carried out in a different order.

We place ourselves in the case where the matrix is factorizable, which is always true up to a permutation of the rows and columns since the matrix is regular; one thus has

    A = L.U
Then one proceeds by identification:

    a_ij = sum(k=1..i-1) l_ik . u_kj + u_ij     for i <= j

    a_ij = sum(k=1..j)   l_ik . u_kj            for i > j

(l_ij and u_ij are the elements of L and U), whence the values of u_ij and l_ij as functions of the a_ij:

    u_1j = a_1j                                              j = 1, ..., n

    l_i1 = a_i1 / u_11                                       i = 1, ..., n

    u_ij = a_ij - sum(k=1..i-1) l_ik . u_kj                  i <= j

    l_ij = ( a_ij - sum(k=1..j-1) l_ik . u_kj ) / u_jj       i > j

Note:
The order of the calculations is not arbitrary: one must already know the l_ik located to the left of, and the u_kj located above, each term to be calculated.

One then sees that at the k-th stage, all the former contributions are reported onto the k-th row, leaving the rows k+1 to n unchanged.
This CROUT alternative is also called GAUSS elimination by columns (or "column active"), and favours the dot product as elementary operation.
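The identification formulas above translate almost line for line into a minimal sketch (illustrative Python; dense storage, no pivoting, so it assumes the matrix is factorizable as stated above):

```python
def crout_lu(a):
    """Factorize A = L.U with unit lower-triangular L, by identification,
    proceeding column after column (CROUT variant of GAUSS elimination)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j + 1):             # i <= j : terms of U
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[j][j] = 1.0
        for i in range(j + 1, n):          # i > j : terms of L
            L[i][j] = (a[i][j] - sum(L[i][k] * U[k][j] for k in range(j))) / U[j][j]
    return L, U
```

Each term only uses l_ik to its left and u_kj above it, which is the ordering constraint of the note.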
1.6 Case of symmetric matrices

Proposition: The GAUSS decomposition respects symmetry.

It is enough to note that at each stage the terms a_ij and a_ji receive the same contribution. Indeed, supposing by recursion that the matrix is symmetric at stage k, one has:

    a_ij(k+1) = a_ij(k) - a_ik(k) . a_kj(k) / a_kk(k)

    a_ji(k+1) = a_ji(k) - a_jk(k) . a_ki(k) / a_kk(k)

and the two right-hand sides are equal by symmetry, whence the proposition.

Consequently, a symmetric matrix A can be factorized in the form

    A = L.D.LT

where D is a diagonal matrix and L a unit lower triangular matrix (i.e. with unit diagonal). This decomposition, unique since a diagonal has been fixed, applies to any nonsingular symmetric matrix.

If the matrix A is positive definite, then the terms of the diagonal are strictly positive and one can use the so-called CHOLESKY form

    A = (L.D^(1/2)) . (D^(1/2).LT)

Let us note that the CHOLESKY decomposition requires n extractions of square roots (which is an expensive operation in time).

In the case of an LDLT factorization of a symmetric matrix, the CROUT algorithm can be written in the following form:

    Loop on the columns ic = 2, n
        Loop on the rows il = 1, ic-1
            Loop on the contributions im = 1, il-1
                l(il,ic) = l(il,ic) - l(im,il) * l(im,ic)
            End loop
            l(il,ic) = l(il,ic) / l(il,il)
        End loop
        Loop on the contributions im = 1, ic-1
            l(ic,ic) = l(ic,ic) - l(im,ic) * l(im,ic)
        End loop
    End loop
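For readability, the same LDLT factorization can be sketched in Python with an explicit diagonal D, whereas the pseudocode above works in place on the stored triangle (`ldlt` is an illustrative name, and dense storage is an assumption of this sketch):

```python
def ldlt(a):
    """A = L.D.LT for a symmetric matrix: D diagonal (returned as the list d),
    L unit lower-triangular. No pivoting: assumes non-null pivots d[j]."""
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        # Diagonal term: remove the contributions of the previous columns.
        d[j] = a[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            # Off-diagonal terms of column j, normalized by the pivot d[j].
            L[i][j] = (a[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d
```

Only one triangle of A is ever read, which is what makes the profile storage of section 3 possible.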
2 Disadvantages of the methods of the GAUSS type

The disadvantages of the methods of the GAUSS type are essentially of three kinds:

1) a high number of operations,
2) a filling of the matrix,
3) a loss of precision in the course of the computation.

The first two points are often qualified as major defects, whereas the third is regarded as a minor defect.
2.1 The number of operations

For a full system of size n, at the p-th stage, computing the new coefficients of the matrix and of the second member requires:

· (n-p) divisions
· (n-p+1)(n-p) additions and multiplications

The number of operations is thus:

    sum(p=1..n-1) (n-p) = n(n-1)/2    divisions

    sum(p=1..n-1) (n-p+1)(n-p) = sum(q=1..n-1) q(q+1)
                               = n(n-1)(2n-1)/6 + n(n-1)/2 = n(n^2-1)/3

additions, and as many multiplications.

That is 1/3 n(n^2-1) + 1/2 n(n-1) multiplicative operations, to which it is advisable to add the n^2 operations of the resolution of the triangular systems.

In short:
For a large full system, the algorithm of GAUSS requires about 1/3 n^3 operations.

Note:
In the case of a matrix stored in band form, the number of operations is n.b^2, where b is the width of the band.
2.2 Filling of the matrix

Let us begin with a conventional example, the so-called "arrow" matrix, met for example in chemistry [bib9].

Let A be the matrix such that a(1,i) != 0, a(i,1) != 0, a(i,i) != 0, all its other terms being null; the matrix then takes the following form (illustrated for n = 4):

        | x x x x |
    A = | x x 0 0 |
        | x 0 x 0 |
        | x 0 0 x |

After the first stage of factorization (by the algorithm of GAUSS), the matrix is full, in the sense that it no longer has any theoretically null term.

More formally, let us look at the phenomenon of filling through the algorithm; for that, let us rewrite it:

    For k varying from 1 to n-1 do          % loop on the stages
        For i varying from k+1 to n do      % loop on the rows
            For j varying from k+1 to n do  % loop on the columns
                a(i,j) = a(i,j) - a(i,k) . a(k,j) / a(k,k)
            end do
        end do
    end do

which one can schematize graphically, at the k-th stage, for the computation of the term a(i,j), by:

    a(i,j) = a(i,j) - (1) * (2) / (3)

where (1) = a(i,k), (2) = a(k,j) and (3) = a(k,k).

The term a(i,j) is non-null at the end of the k-th stage:

· if it was non-null at the beginning of this k-th stage,
· or if the terms a(i,k) and a(k,j) are both non-null at the beginning of the k-th stage,

and this independently of the initial value of the term a(i,j).
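The two rules above can be checked mechanically by running the elimination symbolically on a boolean pattern. The sketch below (illustrative Python) counts the terms created on the "arrow" matrix of this paragraph, in its original numbering and with the unknowns renumbered in reverse order:

```python
def fill_in(pattern):
    """Symbolic elimination on a boolean non-null pattern: count the terms
    that become non-null during the stages (the filling)."""
    n = len(pattern)
    p = [row[:] for row in pattern]
    created = 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                if p[i][k] and p[k][j] and not p[i][j]:
                    p[i][j] = True     # a(i,j) becomes non-null
                    created += 1
    return created

n = 6
# "Arrow" matrix: first row, first column and diagonal non-null.
arrow = [[i == 0 or j == 0 or i == j for j in range(n)] for i in range(n)]
# Same matrix with the unknowns renumbered in reverse order.
reversed_arrow = [[i == n - 1 or j == n - 1 or i == j for j in range(n)]
                  for i in range(n)]
```

After the first stage the original numbering has filled the whole trailing block ((n-1)(n-2) = 20 new terms for n = 6), while the reverse numbering creates none.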
Moreover, one sees that the method of GAUSS fills the envelope profile of the matrix during the stages of factorization. In the example of the "arrow" matrix, the envelope profile is the full matrix, whence the observed result.

This example highlights the importance of the numbering of the unknowns of the matrix, since the matrix can be rewritten, after permutation of the unknowns, in the form:

        | x 0 0 x |
    A = | 0 x 0 x |
        | 0 0 x x |
        | x x x x |

whose envelope profile is the matrix itself (there is thus no filling).

We have just seen the importance of the numbering of the unknowns. One cannot insist too much on the fact that "optimal" renumbering algorithms must be used to minimize the filling of the matrix. These algorithms rest on heuristics and are specialized. Among the algorithms most usually used, let us quote:

    Algorithm                  Objective
    CUTHILL - Mc KEE           minimize the bandwidth
    Reverse CUTHILL - Mc KEE   minimize the profile
    Minimum degree             minimize the multiplications by 0

Intrinsic formulation of the filling

One can give an intrinsic formulation of the filling during the elimination of an unknown in terms of graphs [bib5].

Let A be a matrix with which we associate the graph G(X, E), where X is the set of nodes and E the set of (undirected) edges. The elimination of an unknown of the matrix is then equivalent to the elimination of a node of the graph.

Definition: Let x, y ∈ X; x and y are said to be adjacent if and only if {x, y} ∈ E.

If Y is a subset of nodes of X (Y ⊂ X), we can define the following sets:

· the set of the nodes adjacent to Y:

    Adj(Y) = { x ∈ X \ Y : there exists y ∈ Y such that {x, y} ∈ E }

· the set of the edges incident to Y:

    Inc(Y) = { {x, y} : y ∈ Y, x ∈ Adj(Y) }

· the deficiency set of a node x:

    Def(x) = { {y, z} : y, z ∈ Adj(x), y ≠ z and z ∉ Adj(y) }
[Figure: operation of elimination of y -- hatched: what is eliminated, the edges Inc(y); dotted: what is added, i.e. the filling Def(y).]

The elimination of y then consists in considering the subgraph

    Elim(y, G) = { X - {y}, (E - Inc(y)) ∪ Def(y) }

It is thus necessary to account for the filling, which is related to Def(y).

To minimize the filling, one can use the heuristic consisting in eliminating the node y of minimal degree, the degree of y being the cardinal of the set Adj(y). This is the governing idea of the use of the minimum degree algorithm before a factorization of the GAUSS type. This approach is used in the multifrontal method implemented in Aster [bib15].
2.3 The loss of precision in the course of the computation

The problem comes from the fact that, in the course of the algorithm, the pivots decrease, and that they are used as denominators in the following stages [bib13].

2.3.1 Summary study of the loss of precision

Let us denote by A(k) the matrix at stage k (i.e. after the elimination of the k-th variable), with by convention A(0) = A.

We can then write that A(1) satisfies

    L(1).A(1) = A(0)

by taking for L(1) a lower triangular matrix with unit diagonal, determined by identification (its first column carries the multipliers of the first stage).

Then, after (n-1) factorizations of this type, we obtain

    (L(n-1) . ... . L(1)) . U = L.U = A(0)

Because of the errors E made on the decomposition, it is advisable to write:

    L.U = A(0) + E
The coefficients of the error matrix E can be evaluated [bib12] by taking into account the error made on each floating-point operation, which we will denote fl:

    | e_ij | <= 3 . eps . m_ij . max(k) | a_ij(k) |

with:

    m_ij : number of terms such that a_ik(k) . a_kj(k) != 0
    eps  : relative error of the machine operations

Indeed, let us place ourselves in the case where the GAUSS elimination "succeeds" (for example: the matrix is positive definite, or a pivoting technique is used).

Let us denote the computed multiplier by:

    beta_ik = fl( a_ik(k-1) / a_kk(k-1) )
            = ( a_ik(k-1) / a_kk(k-1) ) . (1 + eps_1)       for i > k, k = 1, ..., n-1

with | eps_1 | < eps.

The term a_ij(k) is then evaluated by:

    a_ij(k) = fl( a_ij(k-1) - beta_ik . a_kj(k-1) )
            = ( a_ij(k-1) - beta_ik . a_kj(k-1) . (1 + eps_2) ) . (1 + eps_3)
                                                            for i, j > k, k = 1, ..., n-1

with | eps_2 |, | eps_3 | < eps.

The "disturbance" e_ij(k) undergone by a_ij(k), that is to say the difference between the computed term and the exact result a_ij(k-1) - beta_ik . a_kj(k-1) of the stage, can then be evaluated; from the evaluation of a_ij(k) above we deduce:

    e_ij(k) = a_ij(k) . ( 1 - 1/(1 + eps_3) ) - beta_ik . a_kj(k-1) . eps_2

whence, neglecting the terms of order eps^2:

    | e_ij(k) | <= 3 . eps . max(k') | a_ij(k') |

However, the decomposition L.U = A(0) + E indicates that e_ij is the sum of the errors e_ij(k); since at most m_ij stages bring a non-null contribution a_ik(k) . a_kj(k), we obtain the announced bound.
2.3.2 Estimate of the error on the solution

We actually solved the system

    L.U.x = A(0).x + E.x

where E.x is the error term induced by the rounding/truncation errors in the operations of the factorization, and A(0).x is the second member (in fact B).

The approximate solution x~ found, which approximates the true solution x, is:

    x~ = (A + E)^(-1) . B

One then shows that the evaluation of the error on x is related to the conditioning of A.

Let us pose the problem in the form:

    (A + dA).(x + dx) = B

Supposing the term dA.dx small, one has:

    dx = - A^(-1) . dA . x

whence, taking norms:

    ||dx|| / ||x|| <= Cond(A) . ||dA|| / ||A||

where Cond(A) = ||A|| . ||A^(-1)|| is the conditioning of the matrix A.

Note:
The error induced on the second member is weak, and disturbs the solution only through a bad conditioning of the matrix A. Indeed, if one considers the system

    A.(x + dx) = B + dB

then, because of the equalities dx = A^(-1).dB and A.x = B, one has

    ||dx|| <= ||A^(-1)|| . ||dB||    and    ||B|| <= ||A|| . ||x||

and thus

    ||dx|| / ||x|| <= ||A|| . ||A^(-1)|| . ||dB|| / ||B||

Note:
If one considers the variations on A, x and B at the same time, one has the following estimate [bib14]:

    ||dx|| / ||x|| <= Cond(A) / ( 1 - Cond(A) . ||dA||/||A|| ) . ( ||dA||/||A|| + ||dB||/||B|| )

Some remarks on the conditioning:

· the conditioning is defined only for a regular matrix,
· the conditioning depends on the norm chosen on IR^n,
· whatever the chosen norm, we have 1 <= Cond(A), and a matrix is all the better conditioned as its conditioning is close to 1.
If the Euclidean norm is chosen, then:

· the conditioning of an arbitrary matrix A is

    Cond(A) = mu_n / mu_1

  where mu_1 and mu_n are the extreme singular values of A (i.e. the square roots of the smallest and the largest of the eigenvalues of A*.A);

· if the matrix A is symmetric (or, more generally, normal), then

    Cond(A) = |lambda_n| / |lambda_1|

  where lambda_1 and lambda_n are the eigenvalues of minimal and maximal modulus of A.
2.3.3 Estimate of the number of significant digits of the solution

If one works with a precision of p significant (decimal) digits, one has:

    ||dA|| / ||A|| ~ 10^(-p)

If one wishes a precision of s significant (decimal) digits on the solution:

    ||dx|| / ||x|| ~ 10^(-s)

whence the estimate of the number of exact significant decimal digits of the solution:

    s >= p - log10( Cond(A) )
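The estimate above is a one-liner, shown here with illustrative figures (the value p = 15 is a typical double-precision assumption, not a value taken from this document):

```python
import math

def exact_digits(p, cond):
    """Estimate of the number of exact decimal significant digits of the
    solution, given p digits of machine precision and the conditioning."""
    return p - math.log10(cond)

# With p = 15 and Cond(A) ~ 3000 (the order of magnitude of the WILSON
# matrix of the next paragraph), roughly 3-4 digits are lost.
digits = exact_digits(15, 3000.0)
```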
2.3.4 A method to reduce the conditioning

The simplest method is scaling: one "passes" from A to D1.A.D2, with D1 and D2 diagonal matrices such that Cond(D1.A.D2) is better than Cond(A).

This is very theoretical, and there is no universal method to determine D1 and D2. Let us note that if the matrix A is symmetric and one wishes to preserve this property, it is then necessary to take D1 = D2.
2.3.5 Example of a badly conditioned matrix

This very significant and instructive example is due to R.S. WILSON.

Consider the system A.x = B with:

    A = | 10  7  8  7 |        B = | 32 |
        |  7  5  6  5 |            | 23 |
        |  8  6 10  9 |            | 33 |
        |  7  5  9 10 |            | 31 |

whose solution is x = (1, 1, 1, 1)T.
· If one disturbs the second member by about 0.5%, taking

    B~ = (32.1, 22.9, 33.1, 30.9)T

  then the solution is

    x~ = (9.2, -12.6, 4.5, -1.1)T

· If the matrix is disturbed by about 1%:

    A~ = | 10    7     8.1   7.2  |
         |  7.08 5.04  6     5    |
         |  8    5.98  9.89  9    |
         |  6.99 4.99  9     9.98 |

  then the solution is

    x~ = (-81, 137, -34, 22)T

Remarks on the properties of the matrix A:

· It is symmetric, positive definite, of determinant 1, and with a "friendly" inverse:

    A^(-1) = |  25  -41   10   -6 |
             | -41   68  -17   10 |
             |  10  -17    5   -3 |
             |  -6   10   -3    2 |

· Its conditioning in the sense of the Euclidean norm is:

    Cond(A) = lambda_4 / lambda_1 = 30.2887 / 0.01015 = 2984.11
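The WILSON example is easy to reproduce (illustrative Python with a small dense solver; the disturbed values are those of the text, and the results follow from the integer inverse given above):

```python
def solve(a, b):
    """Small dense solver (GAUSS with partial pivoting), enough for 4x4."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for k in range(n - 1):
        piv = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[piv] = m[piv], m[k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

A = [[10.0, 7.0, 8.0, 7.0],
     [7.0, 5.0, 6.0, 5.0],
     [8.0, 6.0, 10.0, 9.0],
     [7.0, 5.0, 9.0, 10.0]]
B = [32.0, 23.0, 33.0, 31.0]          # exact solution: (1, 1, 1, 1)
B_pert = [32.1, 22.9, 33.1, 30.9]     # second member disturbed by ~0.5%
```

A 0.5% disturbance of B thus changes the solution by an order of magnitude, as announced.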
2.3.6 A geometrical interpretation of bad conditioning

One can give a very simple interpretation of the bad conditioning of a linear system A.x = B in the particular case where A is normal (i.e. A*.A = A.A*).

Let lambda_1 be the eigenvalue of smallest modulus of the matrix A and lambda_n that of largest modulus, and let v_1 and v_n be associated eigenvectors.

For B = v_n and dB = eps.v_1, one has

    ||dx|| / ||x|| = cond(A) . ||dB|| / ||B||

since x = v_n / lambda_n and dx = eps.v_1 / lambda_1.
Whence, if cond(A) = |lambda_n| / |lambda_1| is large, a small disturbance dB of B involves a great variation of the solution, in the direction of v_1.
2.4 Criteria of determination of a null pivot

Definition: A numerically degenerate system is a system for which a pivot is null or no longer has any exact significant digit.

Let us note that a system can be degenerate numerically without being so mathematically. In these two cases it is advisable not to solve the system, whence the need to determine a stopping criterion as soon as one of the pivots no longer has any exact significant digit.

Let A be the matrix to be factorized and F the diagonal matrix resulting from the factorization.

Criterion 1: The simplest criterion is to consider that the pivot is null as soon as it is lower, in absolute value, than a given threshold:

    | f_ii | < eps_1

eps_1 is a "small number" below which values are regarded as arbitrarily null.

Criterion 2: This criterion bears on the number of significant digits still available. Noting that one cannot have more than p significant digits on a given machine, it will be checked that the decrease of the pivot does not take place in a ratio higher than 10^(-p):

    | f_ii / a_ii | <= eps_2 = 10^(-p)

Let us note that the ratio f_ii / a_ii is always lower than or equal to 1, because of the decrease of the pivots.

The basic cause of a bad numerical conditioning is the rounding error caused by the introduction of great numbers without physical significance.
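The two criteria can be sketched as a single test (illustrative Python; the threshold `eps1` and the digit count `p` are hypothetical default values, not those hard-wired in Aster):

```python
def pivot_ok(f_ii, a_ii, eps1=1.0e-30, p=15):
    """Return False when a pivot must be declared null:
    - criterion 1: |f_ii| falls below an absolute threshold eps1;
    - criterion 2: the decrease |f_ii / a_ii| with respect to the initial
      diagonal term exceeds the machine precision ratio 10^-p."""
    if abs(f_ii) < eps1:                       # criterion 1
        return False
    if abs(f_ii) / abs(a_ii) <= 10.0 ** (-p):  # criterion 2
        return False
    return True
```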
3 The LDLT method by blocks implemented in Aster

This paragraph details the implementation in Aster of the resolution of the linear system A.x = B by the LDLT factorization method for the symmetric matrix A.

The matrix A is stored in profile (or skyline) form, by blocks.

Guiding principle: a column of the matrix is contained entirely within a single block; the columns are never segmented.

The tables describing the matrix stored in profile form by blocks are:

· HCOL    heights of the columns of the matrix;
          HCOL(i) is the height of the i-th column

· ADIA    addresses of the diagonal terms within their blocks;
          ADIA(i) returns the address of the i-th diagonal term in its block

· ABLO    block pointer;
          ABLO(i+1) returns the number, in overall numbering, of the last
          equation contained in the i-th block

By convention ABLO(1) = 0, and the number of equations in the i-th block is given by the relation ABLO(i+1) - ABLO(i). The total number of equations is thus ABLO(nombre_de_bloc + 1).

It is also necessary to store the total number of blocks used to contain the coefficients of the matrix.

Note:
Formally, the table HCOL is useless because it can be deduced from the tables ADIA and ABLO, but it allows calculations to be carried out more quickly.
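A toy illustration (Python, with hypothetical numbers) of how the tables fit together, and of the remark that they are mutually redundant: here ADIA is recomputed from HCOL and ABLO, the converse of the note above.

```python
# A 5-equation profile matrix cut into 2 blocks; columns are never split.
# hcol[i-1]: height of column i; ablo: last equation of each block, ablo[0] = 0.
hcol = [1, 2, 2, 3, 2]
ablo = [0, 3, 5]          # block 1 holds columns 1..3, block 2 columns 4..5

def adia(i):
    """Address (1-based) of the i-th diagonal term inside its block: each
    column is stored from its top down to the diagonal, so the address is
    the cumulated height of the columns since the start of the block."""
    ib = next(b for b in range(len(ablo) - 1) if ablo[b] < i <= ablo[b + 1])
    start = ablo[ib]
    return sum(hcol[j - 1] for j in range(start + 1, i + 1))

def equations_in_block(ib):
    """Number of equations contained in block ib (1-based)."""
    return ablo[ib] - ablo[ib - 1]
```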
3.1 Implementation of the factorization

The main characteristics of the implementation of the GAUSS factorization, by the CROUT alternative in LDLT form, of a symmetric matrix stored in profile form by blocks are:

· the factorization is carried out in place, i.e. overwriting the initial matrix,
· the factorization may be partial,
· the criteria of detection of null pivots can be adapted to the factorization of quasi-singular matrices,
· in the event of detection of a null pivot, this pivot is replaced by a very great value (10^40), which amounts to introducing a blocking condition by penalization.

Note:
Two work tables are created:

· a table which will contain the diagonal of the factorized matrix (minimization of the number of accesses to the blocks),
· a table for the current column (minimization of the number of calculations carried out).
Algorithm BEGIN;
    creation of a work table for the diagonal
    creation of a work table for the current column
    FOR ibloc VARYING FROM 1 TO nombre_de_bloc DO
        · Determination of the first and last columns for the current block.
        · Search for the smallest equation connected to an equation contained in the
          current block. This search is done by exploiting the table HCOL.
        · Search for the block to which the equation found previously belongs.
          This search is done by exploiting the table ABLO.
        · Request in write mode of the i-th block.
        FOR jbloc VARYING FROM plus_petit_concerne TO ibloc-1 DO
            request in read mode of the j-th block
            FOR iequa CONTAINED IN the i-th block DO
                computation of the start address of the column in the block
                computation of the height of the column
                FOR jequa CONTAINED IN the j-th block DO
                    computation of the start address of the column in the block
                    computation of the length of the column
                    A(ibloc, jequa) = A(ibloc, jequa) - < A(ibloc, *), A(jbloc, *) >
                FIN_POUR
            FIN_POUR
            release of the j-th block, which was not modified
        FIN_POUR
        FOR iequa CONTAINED IN the i-th block DO
            computation of the start address of the column in the block
            computation of the length of the column
            FOR jequa CONTAINED IN the i-th block and < iequa DO
                computation of the start address of the column in the block
                computation of the height of the column
                A(ibloc, jequa) = A(ibloc, jequa) - < A(ibloc, *), A(ibloc, *) >
            FIN_FAIRE
            % use of the column iequa (computation of the pivot)
            computation of the start address of the column in the block
            computation of the height of the column
            save of the column: wk(i) <- A(ibloc, i)
            normalization of the column using the diagonal table:
                A(ibloc, *) <- A(ibloc, *) / diag(*)
            computation of the diagonal term and update of the work table:
                tabr8(iadia) = tabr8(iadia) - < A(ibloc, *), wk(*) >
            test of the pivot against the absolute threshold
            test of the pivot against the number of significant digits
        FIN_POUR
        release of the current block, which has just been modified
    FIN_POUR
    release of the work tables
END Algorithm;
Determination of the first and last columns for the current block

This phase is due to the concept of partial factorization.

    IF (last column of the block < beginning of factorization) THEN
        request in read mode of the i-th block
        fill the work table containing the diagonal
        release of the i-th block
        go to the next block
    ELSE_IF (first column of the block > end of factorization) THEN
        EXIT
    ELSE
        IF (first column of the block < beginning of factorization) THEN
            % complete the "diagonal" table
            request in read mode of the i-th block
            fill the work table containing the diagonal
        FIN_SI
        IF (last column of the block > end of factorization) THEN
            amendment of the last equation to be taken into account
        FIN_SI
    FIN_SI

Remark on memory requirements:
It is obligatory to be able to hold simultaneously in memory at least:

· two blocks of the matrix,
· two work vectors of size the number of equations of the system to be solved.
3.2
Implementation of the resolution
This section describes the implementation of the simultaneous resolution of N right-hand sides of the system A.X = B, where the matrix A is symmetric and has been factorized in the form L D L^T (the resolution is performed in place).
Note:
A work table is created to contain the diagonal of the factorized matrix, in order to minimize the number of accesses to the blocks of the matrix and thus to limit the reads for large matrices that cannot reside entirely in memory.
Algorithm BEGIN;
Creation of a table to store the diagonal, to avoid reads during the diagonal resolution step.
FOR ibloc VARIANT_DE 1 A nombre_de_bloc
    % forward substitution and
    % filling of the diagonal table
    request the i-th block in read mode
    FOR iequa contained IN the BLOCK
        computation of the height of the column and
        of the start address of the column in its block
        FOR each right-hand side FAIRE
            xsol(isol) = xsol(isol) - < X(isol), U >
        FIN_POUR
        save the diagonal term in the work table
    FIN_POUR
    release the i-th block
FIN_POUR
FOR each right-hand side FAIRE
    % diagonal resolution
    FOR all the equations FAIRE
        xsol(iequa, isol) = xsol(iequa, isol) / diag(iequa-1)
    FIN_POUR
FIN_POUR
FOR ibloc VARIANT_DE nombre_de_bloc A 1 PAR_PAS_DE -1
    % back substitution
    request the i-th block in read mode
    FOR iequa contained IN the BLOCK
        computation of the height of the column and
        of the start address of the column in its block
        FOR each right-hand side FAIRE
            xsol(ixx+i, isol) = xsol(ixx+i, isol) - xsol(isol) * L(ide+i)
        FIN_POUR
    FIN_POUR
    release the i-th block
FIN_POUR
Release of the working area (i.e. the diagonal table)
FIN Algorithm;
Remark on memory requirements:
It is necessary to be able to hold simultaneously in memory:
· one block of the matrix,
· one work vector whose size is the number of equations of the system to be solved,
· the N right-hand sides.
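The three phases of the resolution (forward substitution, diagonal resolution, back substitution) can be sketched on a dense matrix already factorized in L D L^T form, with several right-hand sides stored column-wise. This is an illustrative sketch with hypothetical names, not the block-oriented Aster routine.

```python
def ldlt_solve(F, B):
    """Solve A X = B in place for several right-hand sides, given F the
    LDL^T factors of A (strict lower triangle: L, diagonal: D).
    B is a list of rows, one column per right-hand side."""
    n, m = len(F), len(B[0])
    # forward substitution: L y = b
    for i in range(n):
        for s in range(m):
            B[i][s] -= sum(F[i][k] * B[k][s] for k in range(i))
    # diagonal resolution: D z = y
    for i in range(n):
        for s in range(m):
            B[i][s] /= F[i][i]
    # back substitution: L^T x = z (L^T is read from the lower triangle)
    for i in range(n - 1, -1, -1):
        for s in range(m):
            B[i][s] -= sum(F[k][i] * B[k][s] for k in range(i + 1, n))
    return B

# factors of [[4, 2], [2, 3]]: D = (4, 2), L(2,1) = 0.5
F = [[4.0, 2.0], [0.5, 2.0]]
B = [[6.0], [5.0]]
ldlt_solve(F, B)  # solution of A x = (6, 5): x = (1, 1)
```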
3.3
Scaling
It is possible to scale the matrix to be factorized; this scaling is performed so as to obtain a matrix whose diagonal terms are equal to 1.
The diagonal scaling matrix Δ is such that:

δ_ii = 1 / √(a_ii)   if a_ii ≠ 0
δ_ii = 1             if a_ii = 0

It should be noted that, after the resolution, one obtains the solution of the system only after "deconditioning". Indeed, the initial system is Ax = b; after multiplication on the left by Δ, one has: ΔAx = Δb. However, the system actually solved is (ΔAΔ) x̄ = Δb, whence the solution x̄ obtained needs to be "deconditioned": x = Δ x̄.
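A small numeric sketch of the scaling and deconditioning described above; the 2x2 Cramer solver is only there to keep the example self-contained and stands in for the factorization.

```python
import math

def scaling(A):
    """Diagonal scaling coefficients: 1/sqrt(a_ii) if a_ii != 0, else 1."""
    return [1.0 / math.sqrt(A[i][i]) if A[i][i] != 0 else 1.0
            for i in range(len(A))]

def solve2(A, b):
    """Direct 2x2 solve (Cramer), standing in for the real solver."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

A = [[4.0, 2.0], [2.0, 9.0]]
b = [8.0, 13.0]
d = scaling(A)                                        # d = (1/2, 1/3)
As = [[d[i] * A[i][j] * d[j] for j in range(2)] for i in range(2)]
bs = [d[i] * b[i] for i in range(2)]                  # scaled right-hand side
xbar = solve2(As, bs)                                 # solution of the scaled system
x = [d[i] * xbar[i] for i in range(2)]                # deconditioning: x = D xbar
```

The scaled matrix `As` has unit diagonal, and `x` coincides with the solution of the unscaled system.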
3.4
Tests on the pivot
Two criteria for detecting a null pivot are implemented:
· the test in absolute value: |a_ii| < ε, with ε given,
· the test in relative value on the number of exact significant digits.
Let us note that these tests can be reduced to their simplest expression by providing ε = 0 and by giving, by convention, a null number of exact significant digits.
This option is made necessary by the fact that algorithms such as the eigenvalue search algorithms [R5.01.01] [R5.01.02] seek to factorize quasi-singular matrices.
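The two criteria can be sketched as follows. The exact thresholds and formulas used by Aster are not detailed here, so this function is only a plausible illustration: the relative test is expressed as the number of decimal digits lost between the original diagonal term and the pivot.

```python
import math

def pivot_ok(pivot, diag0, eps=0.0, ndigits=0):
    """Sketch of the two null-pivot criteria: absolute test |pivot| < eps,
    and relative test on the number of exact significant digits kept with
    respect to the original diagonal term diag0. With eps = 0 and
    ndigits = 0 both tests are disabled, as needed by the eigenvalue
    algorithms that must factorize quasi-singular matrices."""
    if abs(pivot) < eps:
        return False                       # absolute test
    if ndigits > 0 and pivot != 0 and diag0 != 0:
        lost = math.log10(abs(diag0 / pivot))
        if lost > ndigits:
            return False                   # too many significant digits lost
    return True
```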
3.5
Factorization of complex matrices
The algorithm implemented in Aster also makes it possible to treat symmetric matrices with complex coefficients.
The implemented algorithm does not treat nonsymmetric matrices, although it would be theoretically possible.
Appendix 1 Conventional methods of storage
A1.1 Full matrices
A nonsymmetric full matrix of size n has n² coefficients.
If the matrix is symmetric, one can store only its lower or upper triangle, i.e. n(n + 1)/2 values.
No table describing the matrix is necessary.
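The counts above, and the usual index mapping for a triangle stored in a one-dimensional table, can be sketched as follows (0-based indices, row-by-row lower triangle; the convention is ours).

```python
def full_count(n):
    """Number of stored coefficients for a full nonsymmetric matrix."""
    return n * n

def sym_count(n):
    """Number of stored coefficients for one triangle of a symmetric matrix."""
    return n * (n + 1) // 2

def packed_index(i, j):
    """0-based position of entry (i, j), i >= j, when the lower triangle
    is stored row by row in a one-dimensional table."""
    return i * (i + 1) // 2 + j
```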
A1.2 Band matrices
[Figure: band matrix of half-bandwidth b; the bandwidth is 2b + 1.]
In this case one stores the band (sometimes called the rectified matrix) in a rectangular table of size n(2b + 1); this table then includes b(b + 1) zero values corresponding to the corner complements.
In the case of a symmetric matrix, one can store only n(b + 1) values, of which b(b + 1)/2 are (useless) zeros.
This method only requires knowing the bandwidth.
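The symmetric band scheme above can be sketched by an index mapping into the rectangular n x (b + 1) table; the layout chosen here (one row of the table per matrix row) is an illustrative convention, not necessarily the one used in practice.

```python
def sym_band_slot(i, j, b):
    """Map entry (i, j), i >= j, of a symmetric band matrix with
    half-bandwidth b to its slot (row, col) in the n x (b + 1)
    rectangular storage table; None outside the band."""
    if i - j > b:
        return None
    return (i, i - j)

def useless_zeros(b):
    """Zero values stored by the symmetric band scheme: b(b + 1)/2."""
    return b * (b + 1) // 2
```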
A1.3 Profile matrices, or skyline matrices
This technique consists in storing the terms of the matrix in columns and rows of variable length. The terms outside the "skyline", which is the envelope of the tops of the columns, are assumed to make no contribution to the computations and are not stored.
The profile of the i-th row (resp. column) is determined by:
min { j such that 1 ≤ j ≤ n and a_ij ≠ 0 }   (resp. min { j such that 1 ≤ j ≤ n and a_ji ≠ 0 })
If the profile is symmetric, one speaks of a matrix with symmetric profile.
This method of storage requires description tables, which we will detail in the case of a matrix with symmetric profile.
Classically, in this storage option, the matrix is arranged as a one-dimensional table, requiring a column-entry pointer table ADIA to traverse the matrix: entry is made through the diagonal terms, and the number of terms of a column is obtained as the difference of two successive entries: ADIA(i+1) - ADIA(i).
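The ADIA pointer table can be sketched from a list of column heights. The 0-based convention below is ours, chosen for clarity; the document's ADIA is 1-based and points at diagonal addresses, but the difference of two successive entries gives the column height in both cases.

```python
# column heights of a small symmetric-profile matrix (height = number of
# stored terms in the column, diagonal included)
heights = [1, 2, 1, 3]

# ADIA-like pointer table: adia[i + 1] - adia[i] = height of column i
adia = [0]
for h in heights:
    adia.append(adia[-1] + h)

def diag_address(i):
    """0-based address of the diagonal term of column i in the 1-D table
    (each column is stored from its profile top down to the diagonal)."""
    return adia[i + 1] - 1
```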
If the matrix is nonsymmetric but has a symmetric profile, it is necessary to store both the i-th row and the i-th column. Classically, they are stored "end to end", and the number of terms of the column or row is (ADIA(i+1) - ADIA(i))/2.
A1.4 Storage by blocks
The storage methods seen previously implicitly assume that the matrix can reside in main memory, which is not always the case.
Hence the concept of a matrix stored by blocks (or segmented on disc).
All the preceding storage schemes can be segmented, but we will describe only the case of the symmetric matrix stored by profile.
Profile matrix stored by blocks
We consider here only the case of symmetric matrices, which takes nothing away from the generality of the matter.
[Figure A1.4-a: a profile matrix of 7 columns split into blocks; maximum size of a block: 20 elements.]
In this example, we assume that all the blocks have the same size; to use blocks of variable size, an additional table containing the size of each block must be introduced.
We also consider that a column can belong to only one block: "columns are not split".
To manage the matrix, it is still necessary to know the addresses of the diagonal terms; but now each address is relative to the block to which the column belongs.
To this table must be added a table giving the equations contained in each block. This table, dimensioned to the number of blocks plus 1, contains the last equation of each block.
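The assignment of whole columns to blocks, and the "last equation of each block" table, can be sketched as follows. This is an illustrative sketch with our own names; it uses a greedy fill under a maximum block size, as in the example above.

```python
def last_equation_per_block(heights, max_size):
    """Assign whole columns (never split) to consecutive blocks of at most
    max_size stored terms, and return the last equation of each block."""
    last_eq, size = [], 0
    for col, h in enumerate(heights):
        if size + h > max_size and size > 0:
            last_eq.append(col - 1)   # close the current block
            size = 0
        size += h
    last_eq.append(len(heights) - 1)  # close the final block
    return last_eq
```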
Appendix 2 Variations on the GAUSS algorithm
As we saw with the CROUT variant, there are several implementations of the GAUSS algorithm: they consist in carrying out the computation of the coefficients in a different order.
Schematically, one considers three nested loops:
· loop i on the rows,
· loop j on the columns,
· loop k on the stages.
The standard algorithm is characterized by the ordering kij, but the 5 other permutations of the indices give rise to as many variants (or algorithms):
· the CROUT algorithm is characterized by the ordering jki,
· the algorithm corresponding to the ordering ikj, which works row by row, is known under the name of the "Doolittle" algorithm.
Let us give here a chart drawn from [bib10]. The original figure shows, for each family of variants, which parts of the matrix are already factorized (the computed parts of L and U) and which part is not yet modified:
· algorithms kij - kji: at stage k, the k-th column of L and the (k + 1)-th row of U are computed, and the remaining submatrix of A is updated;
· algorithms ikj - ijk: the i-th row of L and of U is computed;
· algorithms jki - jik: the j-th column of L and of U is computed.
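Two of these orderings can be sketched side by side on a dense matrix (without pivoting); both produce the same L and U factors, only the loop order changes. This is a minimal illustration, not a profile-aware implementation.

```python
import copy

def lu_kij(A):
    """Standard GAUSS elimination: kij ordering (in place, no pivoting)."""
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A

def lu_jki(A):
    """Column-oriented variant: jki ordering (the CROUT family)."""
    n = len(A)
    for j in range(n):
        for k in range(j):                 # apply previous columns to column j
            for i in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
        for i in range(j + 1, n):          # normalize the subdiagonal part
            A[i][j] /= A[j][j]
    return A

M = [[4.0, 2.0, 1.0], [2.0, 5.0, 3.0], [1.0, 3.0, 6.0]]
F1 = lu_kij(copy.deepcopy(M))
F2 = lu_jki(copy.deepcopy(M))   # both orderings produce the same L and U
```

The choice between orderings matters for memory access patterns (row-wise versus column-wise traversal), not for the result.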
4 Bibliography
[1] P.D. CROUT - "A short method for evaluating determinants and solving systems of linear equations with real or complex coefficients" - AIEE Trans. Vol. 60, 1941, p. 1235-1240
[2] E. CUTHILL & J. Mc KEE - "Reducing the bandwidth of sparse symmetric matrices" - Proc. 24th Nat. Conf. Assoc. Comput. Mach., ACM Publ. (1969)
[3] E. CUTHILL - "Several strategies for reducing the bandwidth of matrices" - in Sparse Matrices and their Applications, D.J. ROSE & R.A. WILLOUGHBY Editors, Plenum Press, New York (1972), p. 157-166
[4] G. Von FUCHS, J.R. ROY, E. SCHREM - "Hypermatrix solution of large sets of symmetric positive definite linear equations" - Computer Methods in Applied Mechanics and Engineering (1972)
[5] A. GEORGE, D.R. Mc INTYRE - "On the application of the minimum degree algorithm to finite element systems" - SIAM J. Num. Anal. Vol. 15 (1978), pp. 90-112
[6] G.H. GOLUB, C.F. Van LOAN - Matrix Computations - Johns Hopkins University Press, Baltimore (1983)
[7] B.M. IRONS - "Roundoff criteria in direct stiffness solutions" - AIAA Journal 6, n° 7, p. 1308-1312 (1968)
[8] B.M. IRONS - "A frontal solution program for finite element analysis" - Int. Journal Num. Meth. Eng., 2, 1970
[9] D.P. O'LEARY & G.W. STEWART - "Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices" - J. of Comp. Physics 90, 497-505 (1990)
[10] J.M. ORTEGA - Introduction to Parallel and Vector Solution of Linear Systems - Plenum Press (1988)
[11] G. RADICATI di BROZOLO, M. VITALETTI - "Sparse matrix-vector product and storage representations on the IBM 3090 with vector facility" - IBM-ECSEC Report G513-4098 (Rome), July 1986
[12] J.K. REID - "A note on the stability of Gaussian elimination" - J. Inst. Maths Applics, 1971, 8, p. 374-375
[13] J.H. WILKINSON - Rounding Errors in Algebraic Processes - Her Majesty's Stationery Office (1963)
[14] J.H. WILKINSON - The Algebraic Eigenvalue Problem - Clarendon Press, Oxford (1965)
[15] C. ROSE - "A multifrontal method for the direct resolution of linear systems" - Note EDF-DER HI-76/93/008 (1993)
[16] D. SELIGMANN - "Algorithms of resolution for the generalized eigenvalue problem" [R5.01.01] - Note EDF-DER HI-75/7815 (1992)
[17] D. SELIGMANN, R. MICHEL - "Algorithms of resolution for the quadratic eigenvalue problem" [R5.01.02] - Note EDF-DER HI-75/7816 (1992)