Code_Aster ®
Version 2.3
Title: On Gauss-type decomposition methods
Date: 09/09/93
Author(s): D. SELIGMANN
Key: R6.02.01-A
Page: 1/24
Organization(s): EDF/IMA/MN
Reference Manual
R6.02 booklet: Direct methods
Document: R6.02.01

On Gauss-type decomposition methods
Summary:

This document presents some aspects of the Gauss decomposition method.
After a brief presentation, we recall the principal advantages and disadvantages of this
direct method. We then detail the implementation of the LDLT algorithm used in Code_Aster.

GAUSS (1777 - 1855) is at the origin of all direct methods for the numerical solution of
linear systems. Let him be thanked for it here.
Reference Manual
R6.02 booklet: Direct methods
HI-75/93/084/A

Contents

1 General information on Gauss-type methods .......................................... 3
1.1 Presentation of the method ....................................................... 3
1.2 The concept of pivot ............................................................. 4
1.3 Stability of the method .......................................................... 4
1.4 Uniqueness of the decomposition .................................................. 4
1.5 An alternative: the Crout factorization .......................................... 4
1.6 The case of symmetric matrices ................................................... 6
2 Disadvantages of Gauss-type methods ................................................ 7
2.1 The number of operations ......................................................... 7
2.2 Fill-in of the matrix ............................................................ 8
2.3 The loss of precision in the course of the computation ........................... 10
2.3.1 Summary study of the loss of precision ......................................... 10
2.3.2 Estimate of the error on the solution .......................................... 12
2.3.3 Estimate of the number of significant digits of the solution .................. 13
2.3.4 A method to reduce the conditioning ............................................ 13
2.3.5 Example of an ill-conditioned matrix ........................................... 13
2.3.6 Geometric interpretation of ill-conditioning ................................... 14
2.4 Criteria for detecting a null pivot .............................................. 15
3 The LDLT method by blocks implemented in Aster ..................................... 16
3.1 Implementation of the factorization .............................................. 16
3.2 Implementation of the solution ................................................... 19
3.3 Scaling .......................................................................... 20
3.4 Tests on the pivot ............................................................... 20
3.5 Factorization of complex matrices ................................................ 20
Appendix 1: Traditional storage methods .............................................. 21
Appendix 2: Variations on the Gauss algorithm ........................................ 23
4 Bibliography ....................................................................... 24
1 General information on Gauss-type methods

1.1 Presentation of the method

Starting from the observation that it is easy to solve the system A.x = b when A is a lower
or upper triangular matrix, one seeks to decompose the full initial matrix through a
factorization into triangular matrices.
The guiding principle is to seek a nonsingular matrix P, called a permutation matrix, such
that the product P.A is triangular, and then to solve P.A.x = P.b
Note:
In practice, P is determined as a product of elementary permutation matrices,
P = P(k) … P(1).
The matrices P(i) depend on the variant chosen, but one never computes the matrix P
explicitly, only P.A and P.b.

The matrix A being factorized in the general form L.U (L a lower triangular matrix, U an
upper triangular matrix), we are led to solve the two linear systems:

    L.y = b
    U.x = y
Note:
In the method known as Gauss elimination, one carries out simultaneously the
factorization of A and the solution of L.y = b.

The following algorithm carries out the Gauss elimination and the solution of L.y = b.
At stage (p+1) we have:

    a_ij^(p+1) = a_ij^(p) - a_ip^(p) . (a_pp^(p))^-1 . a_pj^(p)
                                        for p+1 <= i <= n,  p+1 <= j <= n+1

    a_ij^(p+1) = a_ij^(p)               for 1 <= i <= p,    1 <= j <= n+1

    a_ij^(p+1) = 0                      for p+1 <= i <= n,  1 <= j <= p
Note:
In this writing of the Gauss elimination algorithm, the right-hand side b is regarded as an
additional column of the matrix, which is then treated as an n x (n+1) matrix.
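The stage-by-stage updates above can be sketched as follows (a minimal dense Python
illustration, not the Code_Aster implementation; the right-hand side b is carried as column
n+1, and no pivoting is performed):

```python
def gauss_eliminate(A, b):
    """Gauss elimination on the augmented n x (n+1) matrix; returns U and y = L^-1 b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # b treated as column n+1
    for p in range(n - 1):                             # stage p eliminates column p
        for i in range(p + 1, n):
            f = M[i][p] / M[p][p]                      # a_ip / a_pp (the pivot)
            for j in range(p, n + 1):
                M[i][j] -= f * M[p][j]
    return [row[:n] for row in M], [row[n] for row in M]

def back_substitute(U, y):
    """Solve the remaining upper triangular system U.x = y."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

For example, for A = ((2,1),(1,3)) and b = (3,5), the elimination leaves U = ((2,1),(0,2.5))
and y = (3, 3.5), and back substitution gives x = (0.8, 1.4).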
1.2 The concept of pivot

The preceding implementation of the Gauss elimination algorithm implicitly supposes that
the diagonal terms, called pivots, do not become null in the course of the computation.

Case of null pivots: a nonsingular matrix can have a null pivot.

Example: the matrix
    | 0  1 |
    | 2  1 |

To remedy this drawback, one can use a pivoting strategy:

· complete pivoting: this strategy consists in choosing as pivot the largest term in the
  remaining block, then carrying out a row and a column permutation.
  One then solves the system P.A.Q (QT.x) = P.b
  where P is the row permutation matrix and Q the column permutation matrix.
  The solution found is then y = QT.x, and it is therefore necessary to keep the
  permutation matrix Q in order to obtain the sought solution x = Q.y

· partial pivoting: the pivot is sought as the term of maximum absolute value, among the
  terms not yet treated, in the current column (the k-th one at stage k); then one carries
  out a row permutation.
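A minimal Python sketch of the partial pivoting strategy (illustrative only; complete
pivoting would additionally swap columns and keep Q):

```python
def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting: at stage p, swap in the row whose
    entry in column p has maximum absolute value among the untreated rows."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for p in range(n - 1):
        q = max(range(p, n), key=lambda i: abs(M[i][p]))  # choice of the pivot row
        M[p], M[q] = M[q], M[p]                           # row permutation only
        for i in range(p + 1, n):
            f = M[i][p] / M[p][p]
            for j in range(p, n + 1):
                M[i][j] -= f * M[p][j]
    x = [0.0] * n                                         # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

Applied to the example matrix ((0,1),(2,1)) above, the row swap at the first stage lets the
elimination succeed despite the null pivot in position (1,1).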
1.3 Stability of the method

Definition:
A numerical method for solving a linear system is said to be mathematically stable when
"for any nonsingular matrix A, the algorithm succeeds".

Theorem:
The Gauss method with a pivoting strategy is mathematically stable for any nonsingular
matrix.

Corollary:
If, during a Gauss factorization with pivoting, a null pivot is detected, then the matrix is
singular and the system does not have a unique solution.

Theorem:
The Gauss method (without pivoting) is stable for real positive definite matrices.

For more details, one may consult the basic books [bib13] [bib14] [bib6].
1.4 Uniqueness of the decomposition

Proposition: The Gauss decomposition is not unique, but if one fixes the diagonal of L or
of U then the decomposition is unique.
1.5 An alternative: the Crout factorization

The Crout factorization method [1] is the same algorithm, requiring the same number of
operations and producing the same fill-in of the matrix, but the computations are carried
out in a different order.
We place ourselves in the case where the matrix is factorizable, which is always the case
up to a permutation of the rows and columns since the matrix is nonsingular: one thus has
A = L.U
Then one proceeds by identification:

    for i <= j :   a_ij = u_ij + sum(k=1..i-1) l_ik . u_kj

    for i > j :    a_ij = sum(k=1..j) l_ik . u_kj

(l_ij and u_ij are the elements of L and U)

whence the values of u_ij and l_ij as functions of the a_ij:

    u_1j = a_1j                                              j = 1, …, n

    l_i1 = a_i1 / u_11                                       i = 1, …, n

    u_ij = a_ij - sum(k=1..i-1) l_ik . u_kj                  i <= j

    l_ij = (1/u_jj) . (a_ij - sum(k=1..j-1) l_ik . u_kj)     i > j

Note:
The order of the computations is not arbitrary: one must already know the l_ik located to
the left of, and the u_kj located above, each term to be computed.

One thus sees that at the k-th stage, one carries over onto the k-th row all the previous
contributions, leaving rows k+1 to n unchanged.
This Crout variant is also called Gauss elimination by columns (or "column active"), and
it favors the scalar product operation.
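The identification formulas above translate directly into code; here is a minimal
column-by-column sketch in Python (unit lower triangular L, no pivoting; an illustration,
not the Aster routine):

```python
def crout_lu(A):
    """LU factorization by identification, proceeding column by column
    (the "column active" order described in the text); L has a unit diagonal."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j + 1):       # i <= j : terms of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for i in range(j + 1, n):    # i > j : terms of L, needs the fresh u_jj
            L[i][j] = (A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))) / U[j][j]
    return L, U
```

For A = ((4,3),(6,3)) this yields L = ((1,0),(1.5,1)) and U = ((4,3),(0,-1.5)), and one
checks that L.U restores A.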
1.6 The case of symmetric matrices

Proposition: The Gauss decomposition preserves symmetry.

It is enough to note that at each stage the terms a_ij and a_ji receive the same
contribution. Indeed, supposing by recurrence that the matrix is symmetric at stage k, one
has:

    a_ij^(k+1) = a_ij^(k) - a_ik^(k) . a_kj^(k) / a_kk^(k)
    a_ji^(k+1) = a_ji^(k) - a_jk^(k) . a_ki^(k) / a_kk^(k)

whence the proposition.

Consequently, a symmetric matrix A may be factorized in the form A = L.D.LT
where D is a diagonal matrix and L a unit lower triangular matrix (i.e. with unit diagonal).
This decomposition, unique since a diagonal has been fixed, applies to any nonsingular
symmetric matrix.
If the matrix A is positive definite, then the terms of the diagonal are strictly positive and
one can use the so-called Cholesky form A = L.LT = (L.D^1/2).(D^1/2.LT).
Let us note that the Cholesky decomposition requires n square-root extractions (an
expensive operation in time).

In the case of an LDLT factorization for symmetric matrices, we can write the Crout
algorithm in the following form:
    Loop over the columns ic = 2, …, n
        Loop over the rows it = 1, …, ic - 1
            Loop over the contributions im = 1, …, it - 1
                L(it, ic) <- L(it, ic) - L(ic, im) * L(im, it)
            End loop
            L(it, ic) <- L(it, ic) / L(it, it)
        End loop
        Loop over the contributions im = 1, …, ic - 1
            L(ic, ic) <- L(ic, ic) - L(ic, im) * L(im, ic)
        End loop
    End loop
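For comparison, the same computation in executable form: a minimal dense LDL^T
factorization in Python (an illustrative sketch; the block storage and the pivot tests of the
Aster implementation are left out):

```python
def ldlt(A):
    """LDL^T factorization of a symmetric matrix A.
    Returns L (unit lower triangular) and the diagonal d of D."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):                         # one column (pivot) at a time
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d
```

For the positive definite matrix ((4,2),(2,3)) this yields d = (4, 2) and l_21 = 0.5; taking
the square roots of the d entries would give the Cholesky factor L.D^1/2.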
2 Disadvantages of Gauss-type methods

The disadvantages of Gauss-type methods are essentially of three kinds:
1) a high number of operations
2) fill-in of the matrix
3) a loss of precision in the course of the computation
The first two points are often qualified as major defects, whereas the third is regarded as a
minor defect.
2.1 The number of operations

For a full system of size n, at the p-th stage, the computation of the new coefficients of the
matrix and of the right-hand side requires:
· (n - p) divisions
· (n - p + 1)(n - p) additions and multiplications
The number of operations is thus:

    sum(p=1..n-1) (n - p) = n(n - 1)/2    divisions

    sum(p=1..n-1) (n - p + 1)(n - p) = sum(q=1..n-1) q^2 + sum(q=1..n-1) q
                                     = n(n - 1)(2n - 1)/6 + n(n - 1)/2

    additions, and as many multiplications.

That is to say (1/3) n(n - 1)(n + 5/2) multiplications and divisions, to which it is
advisable to add the n^2 operations of the triangular system solve.

In short:
For a large full system, the Gauss algorithm requires about n^3/3 operations.

Note:
For a band-stored matrix, the number of operations is n.b^2 where b is the width of the
band.
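The count can be checked numerically: the sketch below (illustrative Python) counts the
divisions and multiplications of the plain elimination loop and compares them with the
closed form (1/3) n(n-1)(n + 5/2) = n(n-1)(2n+5)/6.

```python
def gauss_op_count(n):
    """Divisions and multiplications performed by Gauss elimination on a full
    n x n system, right-hand side included (additions are as numerous as the
    multiplications and are not repeated here)."""
    div = mul = 0
    for p in range(1, n):                  # stages p = 1 .. n-1
        div += n - p                       # one division per row below the pivot
        mul += (n - p + 1) * (n - p)       # (rows) x (columns + RHS) updates
    return div + mul

def closed_form(n):
    return n * (n - 1) * (2 * n + 5) // 6  # = (1/3) n(n-1)(n + 5/2)
```

For n = 10 both give 375 operations; for large n the n^3/3 leading term dominates.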
2.2 Fill-in of the matrix

Let us begin with a traditional example, the so-called "arrow" matrix, which one meets for
example in chemistry [bib9].
Let A be the matrix such that a(1, i) != 0, a(i, 1) != 0, a(i, i) != 0 and all its other terms
are null; the matrix then takes the following form (illustrated here for n = 5):

        | x  x  x  x  x |
        | x  x  0  0  0 |
    A = | x  0  x  0  0 |
        | x  0  0  x  0 |
        | x  0  0  0  x |

After the first stage of factorization (by the Gauss algorithm), the matrix is full, in the
sense that it no longer contains any theoretically null terms.

More formally, let us look at the fill-in phenomenon in the algorithm; for that, let us
rewrite the algorithm:

    For k varying from 1 to n - 1 do            % loop over the stages
        For i varying from k + 1 to n do        % loop over the rows
            For j varying from k + 1 to n do    % loop over the columns
                a(i, j) = a(i, j) - a(i, k) . a(k, j) / a(k, k)
            end do
        end do
    end do

At the k-th stage, the computation of the term a(i, j) thus involves a(i, k) (1), a(k, j) (2)
and the pivot a(k, k) (3):

    a(i, j) = a(i, j) - (1) * (2) / (3)

The term a(i, j) is nonnull at the end of the k-th stage:
· if it was nonnull at the beginning of this k-th stage,
· or if the terms a(i, k) and a(k, j) are both nonnull at the beginning of the k-th stage,
and this independently of the initial value of the term a(i, j).
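The fill rule above can be simulated symbolically on the nonzero pattern alone; the
following Python sketch (illustrative) applies it to the "arrow" matrix with the dense row
and column placed first, and then placed last:

```python
def symbolic_fill(nonzeros, n):
    """Run Gauss elimination on a set of (i, j) nonzero positions (0-based) and
    count the fill-in: terms that become nonnull without having been so initially."""
    nz = set(nonzeros)
    fill = 0
    for k in range(n - 1):
        rows = [i for i in range(k + 1, n) if (i, k) in nz]
        cols = [j for j in range(k + 1, n) if (k, j) in nz]
        for i in rows:
            for j in cols:                 # a(i,k) and a(k,j) both nonnull ...
                if (i, j) not in nz:       # ... so a(i,j) becomes nonnull
                    nz.add((i, j))
                    fill += 1
    return fill

n = 5
spike_first = {(0, j) for j in range(n)} | {(i, 0) for i in range(n)} \
              | {(i, i) for i in range(n)}     # dense row/column first
spike_last = {(n - 1, j) for j in range(n)} | {(i, n - 1) for i in range(n)} \
             | {(i, i) for i in range(n)}      # dense row/column last
```

With the dense row and column first, the first stage fills the whole matrix (12 new terms
for n = 5); with the permuted ordering there is no fill-in at all.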
Moreover, one sees that the Gauss method fills the envelope profile of the matrix during
the stages of the factorization.
In the example of the "arrow" matrix, the envelope profile is the full matrix, whence the
result noted above.
This example highlights the importance of the numbering of the unknowns of the matrix,
since the matrix can be rewritten, after permutation of the unknowns, in the form:

        | x  0  0  0  x |
        | 0  x  0  0  x |
        | 0  0  x  0  x |
        | 0  0  0  x  x |
        | x  x  x  x  x |

whose envelope profile is the matrix itself (there is thus no fill-in).
We have just seen the importance of the numbering of the unknowns.
We cannot insist too much on the fact that "optimal" renumbering algorithms must be
used to minimize the fill-in of the matrix.
These algorithms rest on heuristics and are specialized.
Among the most usually used algorithms, let us quote:

    Algorithm                    Objective
    Cuthill-McKee                minimize the bandwidth
    Reverse Cuthill-McKee        minimize the profile
    Minimum degree               minimize the multiplications by 0

Intrinsic formulation of the fill-in

One can give an intrinsic formulation of the fill-in during the elimination of an unknown,
in terms of graphs [bib5].
With a matrix A we associate the graph G(X, E), where X is the set of nodes and E the set
of undirected edges.
The problem of eliminating an unknown of the matrix is then equivalent to eliminating a
node of the graph.

Definition: Let x ∈ X, y ∈ X; we say that x and y are adjacent if and only if {x, y} ∈ E

If Y is a subset of the nodes of X (Y ⊂ X), we can define the following sets:

the set of the nodes adjacent to Y:
    Adj(Y) = {x / x ∈ X, x ∉ Y and there exists y ∈ Y such that {x, y} ∈ E}
the set of the edges incident to Y:
    Inc(Y) = {{x, y} / y ∈ Y, x ∈ Adj(Y)}
the deficiency set of x:
    Def(x) = {{y, z} / y, z ∈ Adj(x), y ≠ z and z ∉ Adj(y)}
    [Figure: elimination of the node y. Hatched: what one eliminates (y and Inc(y));
    dotted: what one adds, i.e. the fill-in Def(y).]

The elimination of y then consists in considering the subset

    Elim(y, G) = {X - {y}, (E - Inc(y)) ∪ Def(y)}

It is thus necessary to take proper account of the fill-in, which is related to Def(y).
To minimize the fill-in, one can use a heuristic consisting in eliminating the y of minimal
degree, the degree of {y} being the cardinal of the set Adj(y). This is the governing idea
behind the use of the minimum degree algorithm before a Gauss-type factorization.
This approach is used in the multi-frontal method implemented in Aster [bib15].
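The sets Adj, Inc and Def make the elimination step easy to express on a graph; below is a
small Python sketch (an illustration of the formalism, far from a production minimum
degree code):

```python
def eliminate(nodes, edges, y):
    """One step Elim(y, G): drop y and its incident edges Inc(y), then add the
    deficiency Def(y), i.e. the fill edges between not-yet-adjacent neighbours."""
    adj = {x for e in edges if y in e for x in e if x != y}          # Adj({y})
    deficiency = {frozenset((u, v)) for u in adj for v in adj
                  if u != v and frozenset((u, v)) not in edges}      # Def(y)
    remaining = {e for e in edges if y not in e} | deficiency
    return nodes - {y}, remaining, len(deficiency)

# star graph: node 0 plays the role of the dense "arrow" unknown
nodes = {0, 1, 2, 3}
edges = {frozenset((0, 1)), frozenset((0, 2)), frozenset((0, 3))}
```

Eliminating node 0 first (degree 3) creates 3 fill edges, whereas eliminating a leaf (degree
1, the minimum degree choice) creates none; this is exactly the arrow-matrix observation
of the previous page.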
2.3 The loss of precision in the course of the computation

The problem comes from the fact that in the course of the algorithm the pivots decrease,
and that they are used as denominators in the following stages [bib13].

2.3.1 Summary study of the loss of precision

Let us denote by A(k) the matrix at stage k (i.e. after the elimination of the k-th variable),
with by convention A(0) = A.
We can then write that A(1) satisfies L(1).A(1) = A(0),
taking for L(1) the unit lower triangular matrix whose first column is determined by
identification (only its first column differs from the identity).
Then after (n-1) factorizations of this type we obtain

    (L(n-1) . … . L(1)) . U = L.U = A(0)

Because of the errors made during the decomposition, it is advisable to write:
    L.U = A(0) + E
The coefficients of the error matrix E can be evaluated [bib12], taking into account the
error made on each floating-point operation (which we will denote fl):

    |e_ij| <= 3 . eps . m_ij . max(k) |a_ij^(k)|

with
    m_ij : the number of terms a_ik^(k) . a_kj^(k) that are nonnull
    eps  : the relative error of the machine operations
Indeed, let us place ourselves in the case where the Gauss elimination "succeeds" (for
example: the matrix is positive definite, or a pivoting technique is used).

Let us write

    beta_ik = fl(a_ik^(k-1) / a_kk^(k-1)) = (a_ik^(k-1) / a_kk^(k-1)) . (1 + eps_1)
                                            for i > k, k = 1, …, n - 1
with |eps_1| < eps.

The term a_ij^(k) is then evaluated by:

    a_ij^(k) = fl(a_ij^(k-1) - beta_ik . a_kj^(k-1))    for i, j > k, k = 1, …, n - 1

that is to say

    a_ij^(k) = (a_ij^(k-1) - beta_ik . a_kj^(k-1) . (1 + eps_3)) . (1 + eps_2)

with |eps_2|, |eps_3| < eps.
The "disturbance" e_ij^(k) undergone by a_ij^(k) can then be evaluated; it is defined by
the relation

    a_ij^(k) = a_ij^(k-1) - beta_ik . a_kj^(k-1) + e_ij^(k)    for i, j > k, k = 1, …, n - 1

From the evaluation of a_ij^(k) above we deduce:

    beta_ik . a_kj^(k-1) . (1 + eps_3) = a_ij^(k-1) - a_ij^(k) / (1 + eps_2)

and finally

    e_ij^(k) = a_ij^(k) . (1 - 1/(1 + eps_2)) - eps_3 . beta_ik . a_kj^(k-1)

whence, to first order,

    |e_ij^(k)| <= 3 . eps . max(k) |a_ij^(k)|

Now the decomposition L.U = A(0) + E indicates that e_ij is the sum of the errors
e_ij^(k), whence the factor m_ij in the bound given above.
2.3.2 Estimate of the error on the solution

We have in fact solved the system

    L.U.x = A(0).x + E.x

where E.x is the error term induced by the rounding/truncation errors in the operations of
the factorization, and A(0).x is the right-hand side b.
The approximate solution x~ found, which approximates the true solution x, is:

    x~ = (A + E)^-1 . b

One then shows that the evaluation of the error on x is related to the conditioning of A.

Let us pose the problem in the form: (A + dA).(x + dx) = b
Supposing dA.dx small, one has: dx = -A^-1 . dA . x
whence, taking norms:

    ||dx|| / ||x|| <= Cond(A) . ||dA|| / ||A||

where Cond(A) = ||A|| . ||A^-1|| is the condition number of the matrix A.

Note:
It is noted that an error induced on the right-hand side is weak and disturbs the solution
only through a bad conditioning of the matrix A.
Indeed, if one considers the system A.(x + dx) = b + db,
because of the equalities dx = A^-1 . db and A.x = b,
one has ||dx|| <= ||A^-1|| . ||db|| and ||b|| <= ||A|| . ||x||,
and thus

    ||dx|| / ||x|| <= ( ||A|| . ||A^-1|| ) . ||db|| / ||b||

Note:
If one considers variations on A, x and b simultaneously, one has the following estimate
[bib14]:

    ||dx|| / ||x|| <= Cond(A) / (1 - ||A^-1||.||dA||) . ( ||dA|| / ||A|| + ||db|| / ||b|| )

Some remarks on conditioning:
· the condition number is defined only for a nonsingular matrix,
· the condition number depends on the norm chosen on IR^n,
· whatever the norm chosen, we have 1 <= Cond(A), and a matrix is all the better
  conditioned as its condition number is close to 1.
If the Euclidean norm is chosen, then:
· the condition number of an arbitrary matrix A is

      Cond(A) = µn / µ1

  where µ1 and µn are the extreme singular values of A (i.e. the smallest and the largest
  of the square roots of the eigenvalues of A*.A);
· if the matrix A is symmetric (or, more generally, normal) then

      Cond(A) = |λn| / |λ1|

  where λ1 and λn are the eigenvalues of A of minimal and maximal modulus.

2.3.3 Estimate of the number of significant digits of the solution

If one computes with a precision of p significant (decimal) digits, one has:

    ||dA|| / ||A|| ≈ 10^-p

If one wishes a precision of s significant (decimal) digits on the solution:

    ||dx|| / ||x|| ≈ 10^-s

whence the estimate of the number of exact significant decimal digits of the solution:

    s ≈ p - log10(Cond(A))

2.3.4 A method to reduce the conditioning

The simplest method is scaling: one "passes" from A to D1.A.D2, with D1 and D2
diagonal matrices such that Cond(D1.A.D2) is better than Cond(A).
This is very theoretical, and there is no universal method for determining D1 and D2.
Let us note that if the matrix A is symmetric and one wishes to preserve this property, it is
then necessary to take D1 = D2.

2.3.5 Example of an ill-conditioned matrix

This example, very significant and instructive, is due to R.S. WILSON.
Consider the system A.x = b with:

        | 10  7  8  7 |         | 32 |                               | 1 |
    A = |  7  5  6  5 |     b = | 23 |     and whose solution is x = | 1 |
        |  8  6 10  9 |         | 33 |                               | 1 |
        |  7  5  9 10 |         | 31 |                               | 1 |
· If one perturbs the right-hand side by about 0.5 %, taking:

      b~ = (32.1, 22.9, 33.1, 30.9)

  then the solution is:

      x~ = (9.2, -12.6, 4.5, -1.1)

· If the matrix is perturbed by about 1 %:

           | 10    7    8.1  7.2  |
      A~ = | 7.08  5.04 6    5    |
           | 8     5.98 9.89 9    |
           | 6.99  4.99 9    9.98 |

  then the solution is:

      x~ = (-81, 137, -34, 22)

Remarks on the properties of the matrix A:
· It is symmetric, positive definite, of determinant 1, and its inverse is "friendly":

             |  25  -41   10   -6 |
      A^-1 = | -41   68  -17   10 |
             |  10  -17    5   -3 |
             |  -6   10   -3    2 |

· Its condition number in the sense of the Euclidean norm is:

      Cond(A) = λ4 / λ1 = 30.2887 / 0.01015 = 2984.11
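This behaviour is easy to reproduce; the sketch below (plain Python, for illustration) uses
the exact inverse given above to solve with the nominal and the perturbed right-hand
sides:

```python
# Wilson's matrix: its exact inverse, as given in the text
A_inv = [[ 25, -41,  10,  -6],
         [-41,  68, -17,  10],
         [ 10, -17,   5,  -3],
         [ -6,  10,  -3,   2]]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

x = matvec(A_inv, [32, 23, 33, 31])                 # nominal right-hand side
x_pert = matvec(A_inv, [32.1, 22.9, 33.1, 30.9])    # ~0.5 % perturbation
```

x comes out as (1, 1, 1, 1) while x_pert is (9.2, -12.6, 4.5, -1.1): a 0.5 % change in b
shifts the solution by over 1000 %, as the condition number of about 2984 allows.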
2.3.6 Geometric interpretation of ill-conditioning

One can give a very simple interpretation of the ill-conditioning of a linear system
A.x = b in the particular case where A is normal (i.e. A*.A = A.A*).
Let λ1 be the eigenvalue of the matrix A of smallest modulus and λn that of largest
modulus, and let v1 and vn be the associated eigenvectors.
For b = vn and db = eps.v1, one has:

    ||dx|| / ||x|| = cond(A) . ||db|| / ||b||

whence: if cond(A) = λn/λ1 is large, a small disturbance db of b involves a great variation
dx of the solution x.
2.4 Criteria for detecting a null pivot

Definition:
A numerically degenerate system is a system for which a pivot is null or no longer has
any exact significant digit.

Let us note that a system can be numerically degenerate without being so mathematically.
In both cases, it is advisable not to solve the system, whence the need for a stopping
criterion as soon as one of the pivots no longer has any exact significant digit.
Let A be the matrix to be factorized and F the diagonal matrix resulting from the
factorization.

Criterion 1: The simplest criterion is to consider that the pivot is null as soon as it is
lower, in absolute value, than a given threshold:

    |f_ii| < eps1

where eps1 is a "small number" below which values are regarded as null.

Criterion 2: This criterion bears on the number of significant digits still available. Noting
that one cannot have more than p significant digits on a given machine, one checks that
the decrease of the pivot has not taken place in a ratio greater than 10^-p:

    |f_ii / a_ii| <= eps2 = 10^-p

Let us note that the ratio f_ii / a_ii is always lower than or equal to 1 because of the
decrease of the pivots.

The basic cause of bad numerical conditioning is the round-off error caused by the
introduction of large numbers without physical significance.
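Both criteria are simple to state in code; the sketch below is an illustration only (the
threshold eps1 and the digit count p are placeholder values, not the ones used by Aster):

```python
def check_pivot(a_ii, f_ii, eps1=1e-15, p=12):
    """Criterion 1: |f_ii| below an absolute threshold eps1.
    Criterion 2: the pivot has decreased in a ratio worse than 10^-p,
    i.e. no exact significant digit remains."""
    if abs(f_ii) < eps1:
        return "null pivot"
    if abs(f_ii / a_ii) <= 10.0 ** (-p):
        return "no significant digit left"
    return "ok"
```

A pivot that started at 1.0 and dropped to 1e-13 passes the absolute test but fails the
significant-digit test with p = 12, which is precisely the quasi-singular case criterion 2 is
meant to catch.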
3 The LDLT method by blocks implemented in Aster

This paragraph details the implementation in Aster of the solution of the linear system
A.x = b by the LDLT factorization method, for a symmetric matrix A.
The matrix A is stored in profile (or skyline) form, by blocks.

Guiding principle:
A column of the matrix is contained entirely within a single block: we do not split
columns.

The tables describing the matrix stored in profile form by blocks are:
· HCOL, the heights of the columns of the matrix:
  HCOL(i) is the height of the i-th column
· ADIA, the addresses of the diagonal terms within their block:
  ADIA(i) returns the address of the i-th diagonal term in its block
· ABLO, the block pointer:
  ABLO(i+1) returns the number, in overall numbering, of the last equation contained in
  the i-th block.
  By convention ABLO(1) = 0, so the equations of the i-th block run from ABLO(i) + 1
  to ABLO(i+1).
  The total number of equations is thus ABLO(nombre_de_bloc + 1).
It is also necessary to store the total number of blocks used to contain the coefficients of
the matrix.

Note:
Formally, the table HCOL is redundant, since it can be deduced from the tables ADIA and
ABLO, but it allows calculations to be carried out more quickly.
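A toy model of these tables (the names mirror the text; the exact Aster layout is an
assumption) shows how ADIA and ABLO can be derived from the column heights,
columns being stored one after another inside each block with the diagonal term last in its
column:

```python
def build_tables(hcol, cols_per_block):
    """hcol[i] = height of column i (diagonal included);
    cols_per_block[b] = number of columns stored in block b."""
    ablo = [0]                                # ABLO(1) = 0 by convention
    for nc in cols_per_block:
        ablo.append(ablo[-1] + nc)            # last equation of each block
    adia = []
    for b in range(len(cols_per_block)):
        addr = 0                              # addresses restart in each block
        for i in range(ablo[b], ablo[b + 1]):
            addr += hcol[i]                   # the diagonal term closes column i
            adia.append(addr)
    return ablo, adia
```

For example, build_tables([1, 2, 3, 2], [2, 2]) gives ABLO = (0, 2, 4) and
ADIA = (1, 3, 3, 5); as the note says, HCOL is recoverable from ADIA and ABLO.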
3.1 Implementation of the factorization

The principal characteristics of the implementation of the Gauss factorization, by the
Crout variant in LDLT form, of a symmetric matrix stored in profile form by blocks are:
· the factorization is carried out in place, i.e. overwriting the initial matrix,
· the factorization may be partial,
· the null-pivot detection criteria can be adapted to the factorization of quasi-singular
  matrices,
· in the event of detection of a null pivot, this pivot is replaced by a very large value
  (10^40), which amounts to introducing a blocking condition by penalization.

Note:
Two work tables are created:
· a table which will contain the diagonal of the factorized matrix (minimizing the
  number of accesses to the blocks),
· a table for the current column (minimizing the number of computations carried out).
Algorithm BEGIN;
    creation of an intermediate table for the diagonal
    creation of an intermediate table for the current column
    FOR ibloc FROM 1 TO nombre_de_bloc DO
        · Determine the starting and ending columns for the current block.
        · Find the smallest equation connected to an equation contained in the current
          block. This search is done by exploiting the table HCOL.
        · Find the block to which the equation found above belongs. This search is done
          by exploiting the table ABLO.
        · Request the i-th block in write mode.
        FOR jbloc FROM plus_petit_concerne TO ibloc - 1 DO
            request the j-th block in read mode
            FOR iequa CONTAINED IN the i-th block DO
                compute the start address of the column in the block
                compute the height of the column
                FOR jequa CONTAINED IN the j-th block DO
                    compute the start address of the column in the block
                    compute the length of the column
                    A(ibloc, jequa) = A(ibloc, jequa) - < A(ibloc, *), A(jbloc, *) >
                END_FOR
            END_FOR
            release the j-th block, which was not modified
        END_FOR
        FOR iequa CONTAINED IN the i-th block DO
            compute the start address of the column in the block
            compute the length of the column
            FOR jequa CONTAINED IN the i-th block and < iequa DO
                compute the start address of the column in the block
                compute the height of the column
                A(ibloc, lm) = A(ibloc, lm) - < A(ibloc, *), A(jbloc, *) >
            END_FOR
            % use of the column iequa (computation of the pivot)
            compute the start address of the column in the block
            compute the height of the column
            save the column: wk(i) <- A(ibloc, i)
            normalize the column using the diagonal table:
                A(ibloc, *) <- A(ibloc, *) / diag(*)
            compute the diagonal term and update the work table:
                tabr8(iadia) = tabr8(iadia) - < A(ibloc, *), wk(*) >
            test of the pivot against the null threshold
            test of the pivot against the number of significant digits
        END_FOR
        release the current block, which has just been modified
    END_FOR
    release the work tables
END Algorithm;
Determination of the starting and ending columns for the current block.
This phase is due to the concept of partial factorization.

IF (last column of the block < beginning of factorization) THEN
    request the i-th block in read mode
    fill the work table containing the diagonal
    release the i-th block
    go to the next block
ELSE_IF (first column of the block > end of factorization) THEN
    EXIT
ELSE
    IF (first column of the block < beginning of factorization) THEN
        % complete the "diagonal" table
        request the i-th block in read mode
        fill the work table containing the diagonal
    END_IF
    IF (last column of the block > end of factorization) THEN
        modify the last equation to be taken into account
    END_IF
END_IF

Remark on memory requirements:
It is necessary that one can have at least simultaneously in memory:
· two blocks of the matrix,
· two work vectors of size the number of equations of the system to be solved.
3.2
Implementation of the resolution
We describe here the simultaneous resolution for N right-hand sides of the system A.X = B, where the matrix A is symmetric and has been factorized in the form L D LT (the resolution is performed in place).
Note:
A work table is created to hold the diagonal of the factorized matrix, in order to minimize the number of accesses to the blocks of the matrix and thus limit the reads for large matrices that cannot reside entirely in memory.
Algorithm BEGIN;
Create a table to store the diagonal, to avoid reads during the diagonal resolution step.
FOR ibloc FROM 1 TO nombre_de_bloc
% forward resolution and
% filling of the diagonal table
request the i-th block in read mode
FOR iequa contained IN the block
compute the height of the column and
the start address of the column in its block
FOR each right-hand side DO
xsol(isol) = xsol(isol) - < X(isol), U >
END_FOR
save the diagonal term in the work table
END_FOR
release the i-th block
END_FOR
FOR each right-hand side DO
% diagonal resolution
FOR all equations DO
xsol(iequa, isol) = xsol(iequa, isol) / diag(iequa-1)
END_FOR
END_FOR
FOR ibloc FROM nombre_de_bloc TO 1 STEP -1   % backward resolution
request the i-th block in read mode
FOR iequa contained IN the block
compute the height of the column and
the start address of the column in its block
FOR each right-hand side DO
xsol(ixx+i, isol) = xsol(ixx+i, isol) - xsol(isol) * L(ide+i)
END_FOR
END_FOR
release the i-th block
END_FOR
Release the work area (i.e. the diagonal table)
END Algorithm;
Remark on memory requirements:
It is necessary to be able to hold simultaneously in memory:
· one block of the matrix,
· one work vector whose size is the number of equations of the system to be solved,
· the N right-hand sides.
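The three resolution phases above (forward, diagonal, backward) can be sketched in dense form; the real code walks the skyline blocks instead of a dense L, so this is only an illustration of the arithmetic, with illustrative names.

```python
def ldlt_solve_inplace(L, d, B):
    """Solve (L D L^T) X = B for several right-hand sides, in place in B.
    L is unit lower triangular (list of rows), d is the diagonal of D,
    B is a list of rows, one column per right-hand side."""
    n = len(d)
    nrhs = len(B[0])
    for j in range(n):                    # forward resolution: L Y = B
        for i in range(j + 1, n):
            for s in range(nrhs):
                B[i][s] -= L[i][j] * B[j][s]
    for i in range(n):                    # diagonal resolution: D Z = Y
        for s in range(nrhs):
            B[i][s] /= d[i]
    for j in range(n - 1, -1, -1):        # backward resolution: L^T X = Z
        for i in range(j + 1, n):
            for s in range(nrhs):
                B[j][s] -= L[i][j] * B[i][s]
    return B

# A = L D L^T with L = [[1,0],[2,1]], D = diag(4, 9); solve for 2 RHS
L = [[1.0, 0.0], [2.0, 1.0]]
d = [4.0, 9.0]
B = [[4.0, 8.0], [8.0, 25.0]]             # columns are A.[1,0] and A.[0,1]
ldlt_solve_inplace(L, d, B)
assert B == [[1.0, 0.0], [0.0, 1.0]]
```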
3.3
Scaling
It is possible to scale the matrix to be factorized; this scaling is performed so as to obtain a matrix whose diagonal terms are equal to 1.
The diagonal scaling matrix Δ is defined by:
Δii = 1 / √aii   if aii ≠ 0
Δii = 1          if aii = 0
It should be noted that at resolution time the solution of the system is only obtained after "de-conditioning".
Indeed, the initial system is A x = b;
after multiplication on the left by Δ, one has:
Δ A x = Δ b
However, the system actually solved is:
(Δ A Δ) y = Δ b
whence a solution y that must be "de-conditioned": x = Δ y.
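The scaling and de-conditioning steps can be sketched as below: Δ is built from the diagonal of A, the system (ΔAΔ) y = Δb is the one handed to the factorization, and x = Δy recovers the original solution. Plain lists and illustrative names only; this is not the Code_Aster implementation.

```python
import math

def scale(A, b):
    """Return the scaled system (ΔAΔ, Δb) and the scaling diagonal Δ."""
    n = len(b)
    delta = [1.0 / math.sqrt(A[i][i]) if A[i][i] != 0 else 1.0
             for i in range(n)]
    As = [[delta[i] * A[i][j] * delta[j] for j in range(n)] for i in range(n)]
    bs = [delta[i] * b[i] for i in range(n)]
    return As, bs, delta

def decondition(y, delta):
    # x = Δ y : recover the solution of the original system
    return [delta[i] * y[i] for i in range(len(y))]

As, bs, delta = scale([[4.0, 2.0], [2.0, 9.0]], [4.0, 2.0])
# the scaled matrix has unit diagonal terms
assert abs(As[0][0] - 1.0) < 1e-12 and abs(As[1][1] - 1.0) < 1e-12
# de-conditioning maps the scaled solution back (here Δ = diag(1/2, 1/3))
assert decondition([2.0, 0.0], delta) == [1.0, 0.0]
```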
3.4
Tests on the pivot
Two criteria for detecting a null pivot are implemented:
· the test in absolute value: |aii| < ε, with ε given,
· the test in relative value on the number of exact significant digits.
Note that these tests can be reduced to their simplest expression by supplying ε = 0 and by giving, by convention, a null number of exact significant digits.
This option is made necessary by the fact that algorithms such as the eigenvalue search algorithms [R5.01.01] [R5.01.02] seek to factorize quasi-singular matrices.
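The two criteria can be sketched as follows. The significant-digit test is interpreted here as counting the decimal digits lost between the original diagonal term and the pivot produced by the factorization; the exact Code_Aster formula may differ, and all names are illustrative.

```python
import math

def pivot_ok(pivot, orig_diag, eps=0.0, ndigits=0):
    """Return False if the pivot is declared null by either criterion."""
    if abs(pivot) < eps:                 # test in absolute value
        return False
    if ndigits > 0 and orig_diag != 0:   # test on exact significant digits
        if pivot == 0:
            return False
        lost = math.log10(abs(orig_diag) / abs(pivot))
        if lost > ndigits:
            return False
    return True

# eps = 0 and ndigits = 0 disable both tests, as needed when factorizing
# quasi-singular matrices for the eigenvalue algorithms
assert pivot_ok(1e-12, 1.0, eps=1e-8) is False   # rejected in absolute value
assert pivot_ok(1e-12, 1.0) is True              # tests disabled
assert pivot_ok(1e-9, 1.0, ndigits=8) is False   # about 9 digits lost
```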
3.5
Factorization of complex matrices
The algorithm implemented in Aster also makes it possible to treat symmetric matrices with complex coefficients.
The implemented algorithm does not treat arbitrary (nonsymmetric) matrices, although this would be theoretically possible.
Appendix 1: Traditional storage methods
A1.1 Full matrix
A full nonsymmetric matrix of size N has N² coefficients.
If the matrix is symmetric, one can store only its lower or upper triangle, i.e. N(N + 1) / 2 values.
No description table of the matrix is necessary.
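The symmetric full storage above can be illustrated by packing the lower triangle row by row into a one-dimensional table of N(N + 1) / 2 values; the indexing formula below is the classical one, with illustrative names.

```python
def packed_index(i, j):
    """Index of a(i, j), i >= j, 0-based, lower triangle packed by rows."""
    return i * (i + 1) // 2 + j

def get(packed, i, j):
    # symmetry: a(i, j) = a(j, i)
    if j > i:
        i, j = j, i
    return packed[packed_index(i, j)]

# pack a 3 x 3 symmetric matrix: 3 * (3 + 1) / 2 = 6 values
A = [[4.0, 2.0, 0.0],
     [2.0, 9.0, 1.0],
     [0.0, 1.0, 5.0]]
packed = [A[i][j] for i in range(3) for j in range(i + 1)]
assert len(packed) == 6
assert get(packed, 1, 2) == get(packed, 2, 1) == 1.0
```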
A1.2 Band matrix
[Figure: a band matrix with b sub-diagonals and b super-diagonals; b + 1 = half-bandwidth]
In this case one stores the band (sometimes called the rectified matrix) in a rectangular table of size N × (2b + 1); one then includes b(b + 1) zero values corresponding to the corner fill.
In the case of a symmetric matrix, one can store only N(b + 1) values, of which b(b + 1) / 2 are (useless) zero values.
This method only requires knowing the bandwidth.
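The band storage above amounts to storing entry (i, j) of the matrix at column j - i + b of row i of an N × (2b + 1) table; a minimal sketch with illustrative names:

```python
def band_store(A, b):
    """Pack a band matrix with half-bandwidth b into an N x (2b + 1) table."""
    n = len(A)
    band = [[0.0] * (2 * b + 1) for _ in range(n)]
    for i in range(n):
        for j in range(max(0, i - b), min(n, i + b + 1)):
            band[i][j - i + b] = A[i][j]   # corner entries stay zero
    return band

def band_get(band, b, i, j):
    return band[i][j - i + b] if abs(i - j) <= b else 0.0

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
band = band_store(A, 1)                   # b = 1: tridiagonal, table 3 x 3
assert band_get(band, 1, 0, 1) == 1.0
assert band_get(band, 1, 0, 2) == 0.0     # outside the band
```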
A1.3 Profile matrix, or skyline matrix
This technique consists in storing the terms of the matrix by columns and lines of variable length. The terms outside the "skyline", which is the envelope of the first nonzero terms of the columns, are assumed to make no contribution to the computations and are not stored.
The profile of the i-th line (resp. column) is given by:
min { j such that 1 ≤ j ≤ N and aij ≠ 0 }
(resp. min { j such that 1 ≤ j ≤ N and aji ≠ 0 })
If the profile is symmetric, one speaks of a matrix with symmetric profile.
This storage method requires description tables, which we will detail in the case of a matrix with symmetric profile.
Classically, with this storage option, the matrix is arranged as a one-dimensional table, together with a table ADIA of column entry pointers used to traverse the matrix: the entry point of each column is its diagonal term, and the number of terms of a column is obtained as the difference of two successive entries: ADIA(i+1) - ADIA(i).
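A minimal sketch of this symmetric-profile storage, assuming (as one common convention) that each column is stored from its first nonzero term down to its diagonal, so that the diagonal is the last stored term of its column; indices are 0-based here and the names are illustrative.

```python
# 3 x 3 symmetric-profile example: a13 lies outside the profile
# column 0: [a11]   column 1: [a12, a22]   column 2: [a23, a33]
values = [4.0,
          2.0, 9.0,
          1.0, 5.0]
ADIA = [0, 1, 3, 5]     # column i occupies values[ADIA[i] : ADIA[i+1]]

def column_height(i):
    # number of stored terms of column i: ADIA(i+1) - ADIA(i)
    return ADIA[i + 1] - ADIA[i]

def diagonal(i):
    # the diagonal term is the last stored term of its column
    return values[ADIA[i + 1] - 1]

assert column_height(1) == 2
assert diagonal(0) == 4.0
assert diagonal(2) == 5.0
```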
If the matrix is nonsymmetric but has a symmetric profile, both the i-th line and the i-th column must be stored. Classically, they are stored "end to end", and the number of terms of the column or line is (ADIA(i+1) - ADIA(i)) / 2.
A1.4 Storage by blocks
The storage methods seen previously implicitly assume that the matrix can reside in main memory, which is not always the case.
Hence the concept of a matrix stored by blocks (or segmented on disk).
All the preceding storages can be segmented, but we will only describe the case of the profile-stored symmetric matrix.
Profile matrix stored by blocks
We consider here only the case of symmetric matrices, which does not detract from the generality of the discussion.
[Figure A1.4-a: a profile matrix divided into blocks 1 to 7; maximum size of a block: 20 elements]
In this example, we assume the blocks all have the same size; to use blocks of variable size, an additional table containing the size of each block must be introduced.
We also consider that a column can belong to only one block: "columns are not split across blocks".
To manage the matrix, it is still necessary to know the address of the diagonal terms; but now this address is relative to the block to which the column belongs.
To this table must be added a table giving the equations contained in each block. This table, dimensioned to the number of blocks plus 1, contains the last equation of each block.
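The "last equation per block" table above is enough to find the block owning a given column, since columns are never split across blocks; a sketch with illustrative names:

```python
import bisect

# block 1 holds equations 1-4, block 2: 5-8, block 3: 9-12;
# a leading 0 gives the table nombre_de_bloc + 1 entries, as in the text
last_eq = [0, 4, 8, 12]

def block_of(column):
    """Block (1-based) to which `column` belongs."""
    return bisect.bisect_left(last_eq, column)

assert block_of(4) == 1     # last equation of block 1
assert block_of(5) == 2     # first equation of block 2
assert block_of(12) == 3
```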
Appendix 2: Variations on the GAUSS algorithm
As we saw with the CROUT variant, there are several implementations of the GAUSS algorithm: they amount to carrying out the computation of the coefficients in a different order.
Schematically, one considers three nested loops:
· loop i on the lines,
· loop j on the columns,
· loop k on the elimination stages.
The standard algorithm is characterized by the sequence kij, but there are 5 other permutations of the indices, which give rise to as many variants (or algorithms).
· the CROUT algorithm is characterized by the sequence jki,
· the algorithm corresponding to the sequence ikj, which works by line, is known under the name of "Doolittle" algorithm.
Let us give here a diagram drawn from [bib10]; each picture partitions the matrix into the part of U already computed, the part of L already computed, and the part A not yet modified:
· algorithms kij - kji: the k-th column and the (k + 1)-th line are computed, and the submatrix A is updated,
· algorithms ikj - ijk: the i-th line of L and U is computed,
· algorithms jki - jik: the j-th column of L and U is computed.
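To make the permutations concrete, here is a minimal sketch (illustrative, without pivoting) of two of the orderings: the standard kij form, which updates the trailing submatrix at each stage k, and the jki form, which delays all updates of column j until that column is processed. Both perform the same operations in a different order and yield the same LU factors stored in place (L unit lower triangular).

```python
import copy

def lu_kij(A):
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                 # multiplier l(i, k)
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]   # update trailing submatrix
    return A

def lu_jki(A):
    n = len(A)
    for j in range(n):
        for k in range(j):                     # apply earlier columns to col j
            for i in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
        for i in range(j + 1, n):
            A[i][j] /= A[j][j]                 # finish column j of L
    return A

M = [[4.0, 2.0, 1.0],
     [2.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
R = lu_kij(copy.deepcopy(M))
assert R == lu_jki(copy.deepcopy(M))           # same factors, different order
assert R[1][0] == 0.5                          # l(2,1) = 2/4
```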
4 Bibliography
[1] P.D. CROUT "A short method for evaluating determinants and solving systems of linear equations with real or complex coefficients" - AIEE Trans. Vol 60, 1941, p 1235-1240
[2] E. CUTHILL & J. Mc KEE "Reducing the bandwidth of sparse symmetric matrices" - Proc. 24th Nat. Conf. Assoc. Comput. Mach., ACM Publ. (1969)
[3] E. CUTHILL "Several strategies for reducing the bandwidth of matrices" in Sparse Matrices and their Applications - D.J. ROSE & R.A. WILLOUGHBY Editors, Plenum Press, New York (1972) p 157-166
[4] G. Von FUCHS, J.R. ROY, E. SCHREM "Hypermatrix solution of large sets of symmetric positive definite linear equations" - Computer Methods in Applied Mechanics and Engineering (1972)
[5] A. GEORGE, D.R. Mc INTYRE "On the application of the minimum degree algorithm to finite element systems" - SIAM J. Num. Anal. Vol 15 (1978) pp 90-112
[6] G.H. GOLUB and C.F. Van LOAN "Matrix Computations" - Johns Hopkins University Press, Baltimore (1983)
[7] B.M. IRONS "Roundoff criteria in direct stiffness solutions" - AIAA Journal 6 n° 7 p 1308-1312 (1968)
[8] B.M. IRONS "A frontal solution program for finite element analysis" - Int. Journal Num. Meth. Eng., 2, 1970
[9] O'LEARY & STEWART "Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices" - J. of Comp. Physics 90, 497-505 (1990)
[10] J.M. ORTEGA "Introduction to Parallel and Vector Solution of Linear Systems" - Plenum Press (1988)
[11] G. RADICATI di BROZOLO, M. VITALETTI "Sparse matrix-vector product and storage representations on the IBM 3090 with vector facility" - IBM-ECSEC Report G513-4098 (Rome) July 1986
[12] J.K. REID "A note on the stability of Gaussian elimination" - J. Inst. Maths Applics, 1971, 8 p 374-375
[13] J.H. WILKINSON "Rounding Errors in Algebraic Processes" - Her Majesty's Stationery Office (1963)
[14] J.H. WILKINSON "The Algebraic Eigenvalue Problem" - Clarendon Press, Oxford (1965)
[15] C. ROSE "A multifrontal method for the direct resolution of linear systems" - Note EDF-DER HI-76/93/008 (1993)
[16] D. SELIGMANN "Resolution algorithms for the generalized eigenvalue problem" [R5.01.01] - Note EDF-DER HI-75/7815 (1992)
[17] D. SELIGMANN, R. MICHEL "Resolution algorithms for the quadratic eigenvalue problem" [R5.01.02] - Note EDF-DER HI-75/7816 (1992)